Message ID | alpine.LFD.2.20.1708160105470.17016@knanqh.ubzr (mailing list archive) |
---|---|
State | New, archived |
On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> > Yes, now I can boot with my rootfs being a XIP cramfs.
> >
> > However, like you said, libc is not XIP.
>
> I think I have it working now. Probably learned more about the memory
> management internals than I ever wanted to know. Please try the patch
> below on top of all the previous ones. If it works for you as well then
> I'll rebase and repost the whole thing.
>
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index 4c7f01fcd2..0b651f985c 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c

Yes, that worked. Very nice!

$ cat /proc/self/maps
00008000-000a1000 r-xp 1b005000 00:0c 18192 /bin/busybox
000a9000-000aa000 rw-p 00099000 00:0c 18192 /bin/busybox
000aa000-000ac000 rw-p 00000000 00:00 0 [heap]
b6e23000-b6efc000 r-xp 1b0bc000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6efc000-b6f04000 ---p 1b195000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6f04000-b6f06000 r--p 000d9000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6f06000-b6f07000 rw-p 000db000 00:0c 766540 /lib/libc-2.18-2013.10.so
b6f07000-b6f0a000 rw-p 00000000 00:00 0
b6f0a000-b6f21000 r-xp 1b0a4000 00:0c 670372 /lib/ld-2.18-2013.10.so
b6f24000-b6f25000 rw-p 00000000 00:00 0
b6f26000-b6f28000 rw-p 00000000 00:00 0
b6f28000-b6f29000 r--p 00016000 00:0c 670372 /lib/ld-2.18-2013.10.so
b6f29000-b6f2a000 rw-p 00017000 00:0c 670372 /lib/ld-2.18-2013.10.so
be877000-be898000 rw-p 00000000 00:00 0 [stack]
beba9000-bebaa000 r-xp 00000000 00:00 0 [sigpage]
ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]

Just FYI, I'm running an xipImage with all the RZ/A1 upstream drivers
enabled and only using about 4.5MB of total system RAM. That's pretty
good. Of course, for a real application you would trim off the drivers
and subsystems you don't plan on using, thus lowering your RAM usage.

> +/*
> + * It is possible for cramfs_physmem_mmap() to partially populate the mapping
> + * causing page faults in the unmapped area. When that happens, we need to
> + * split the vma so that the unmapped area gets its own vma that can be backed
> + * with actual memory pages and loaded normally. This is necessary because
> + * remap_pfn_range() overwrites vma->vm_pgoff with the pfn and filemap_fault()
> + * no longer works with it. Furthermore this makes /proc/x/maps right.
> + * Q: is there a way to do split vma at mmap() time?
> + */

So if I understand correctly, the issue is that sometimes you only have
a partial PAGE worth of data that you need to map. Correct?

For the AXFS file system, the XIP mapping decision was made per page,
not per file. The mkfs.axfs tool would only mark a page as XIP if the
entire section fit in a complete PAGE. If, for example, you had a
partial page at the end of a multi-page code segment, it would put that
partial page in a separate portion of the AXFS image and mark it
'copy to RAM' instead of 'map as XIP'. So in the AXFS case, it was a
combination of the creation tool and file system driver features that
fixed the partial page issue.

Not sure if any of this info is relevant, but I thought I would mention
it anyway.

Thank you for your efforts on adding XIP to cramfs!

Chris
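To make the per-page policy described above concrete, here is a minimal
sketch of the decision rule. It is illustrative only: classify_page and
the other names are hypothetical and not taken from mkfs.axfs. A page is
marked XIP only when a full PAGE_SIZE worth of data can be mapped
directly; a trailing partial page is marked 'copy to RAM' so the driver
loads it through the page cache instead.

/* Hypothetical sketch of an AXFS-style per-page XIP decision. */
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

enum page_disposition {
        PAGE_MAP_AS_XIP,        /* map flash directly into the process */
        PAGE_COPY_TO_RAM,       /* copy/decompress into RAM on fault */
};

/* file_size: size of the file; page_index: which page of the file */
static enum page_disposition classify_page(size_t file_size, size_t page_index)
{
        size_t start = page_index * PAGE_SIZE;
        size_t remaining = (start < file_size) ? file_size - start : 0;

        /* Only a full, uncompressed, aligned page is worth mapping XIP. */
        return (remaining >= PAGE_SIZE) ? PAGE_MAP_AS_XIP : PAGE_COPY_TO_RAM;
}

int main(void)
{
        /* A 10000-byte file: pages 0 and 1 are full, page 2 is partial. */
        size_t i;
        for (i = 0; i < 3; i++)
                printf("page %zu: %s\n", i,
                       classify_page(10000, i) == PAGE_MAP_AS_XIP ?
                       "map as XIP" : "copy to RAM");
        return 0;
}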
On Wed, 16 Aug 2017, Chris Brandt wrote:

> On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> > > Yes, now I can boot with my rootfs being a XIP cramfs.
> > >
> > > However, like you said, libc is not XIP.
> >
> > I think I have it working now. Probably learned more about the memory
> > management internals than I ever wanted to know. Please try the patch
> > below on top of all the previous ones. If it works for you as well then
> > I'll rebase and repost the whole thing.
> >
> > diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> > index 4c7f01fcd2..0b651f985c 100644
> > --- a/fs/cramfs/inode.c
> > +++ b/fs/cramfs/inode.c
>
> Yes, that worked. Very nice!

Good.

> Just FYI, I'm running an xipImage with all the RZ/A1 upstream drivers
> enabled and only using about 4.5MB of total system RAM. That's pretty
> good. Of course, for a real application you would trim off the drivers
> and subsystems you don't plan on using, thus lowering your RAM usage.

On my MMU-less test target I'm going under the 1MB mark now.

> > +/*
> > + * It is possible for cramfs_physmem_mmap() to partially populate the mapping
> > + * causing page faults in the unmapped area. When that happens, we need to
> > + * split the vma so that the unmapped area gets its own vma that can be backed
> > + * with actual memory pages and loaded normally. This is necessary because
> > + * remap_pfn_range() overwrites vma->vm_pgoff with the pfn and filemap_fault()
> > + * no longer works with it. Furthermore this makes /proc/x/maps right.
> > + * Q: is there a way to do split vma at mmap() time?
> > + */
>
> So if I understand correctly, the issue is that sometimes you only have
> a partial PAGE worth of data that you need to map. Correct?

Yes, or the page is stored in its compressed form in the filesystem, or
it is misaligned, or any combination of those.

> For the AXFS file system, the XIP mapping decision was made per page,
> not per file. The mkfs.axfs tool would only mark a page as XIP if the
> entire section fit in a complete PAGE. If, for example, you had a
> partial page at the end of a multi-page code segment, it would put that
> partial page in a separate portion of the AXFS image and mark it
> 'copy to RAM' instead of 'map as XIP'. So in the AXFS case, it was a
> combination of the creation tool and file system driver features that
> fixed the partial page issue.
>
> Not sure if any of this info is relevant, but I thought I would mention
> it anyway.

The same applies here. The XIP decision is no longer a per-file thing.
This is why mkcramfs puts loadable, read-only ELF segments into
uncompressed and aligned blocks while still packing the rest of the
file. The partial page issue could be "fixed" within mkcramfs if
considered worth it. To incur the page alignment overhead only once,
all the uncompressed blocks could be located together, away from their
file block tables, etc. The extended format implemented in this series
allows for all the layout flexibility the fs creation tool may want to
exploit. The current restriction in the fs driver is that XIP blocks
must be contiguous in the filesystem; that is a hard requirement in the
non-MMU case anyway.

Given that I also applied the device table patch to mkcramfs (which
allows for the creation of device nodes and arbitrary
user/group/permission settings without being root), it would be
possible to extend this mechanism to implement other XIP patterns, such
as for uncompressible media files for example.

> Thank you for your efforts on adding XIP to cramfs!
Thank you for testing.


Nicolas
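Nicolas's point that mkcramfs now places loadable, read-only ELF
segments into uncompressed, page-aligned blocks comes down to scanning
each file's program headers. Below is a hedged userspace sketch of such
a scan; the function name, the 32-bit-only handling and the output
format are assumptions for illustration, not the actual mkcramfs code.

/* Illustrative only: list the non-writable PT_LOAD segments of an ELF file. */
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void list_xip_candidates(const unsigned char *image)
{
        const Elf32_Ehdr *eh = (const Elf32_Ehdr *)image;
        const Elf32_Phdr *ph;
        int i;

        /* Only handle 32-bit ELF here; a real tool would do more checks. */
        if (memcmp(eh->e_ident, ELFMAG, SELFMAG) != 0 ||
            eh->e_ident[EI_CLASS] != ELFCLASS32)
                return;

        ph = (const Elf32_Phdr *)(image + eh->e_phoff);
        for (i = 0; i < eh->e_phnum; i++) {
                /* Skip non-loadable or writable segments: those get copied. */
                if (ph[i].p_type != PT_LOAD || (ph[i].p_flags & PF_W))
                        continue;
                printf("XIP candidate: file offset 0x%x, %u bytes, align %u\n",
                       (unsigned)ph[i].p_offset, (unsigned)ph[i].p_filesz,
                       (unsigned)ph[i].p_align);
        }
}

int main(int argc, char **argv)
{
        int fd;
        struct stat st;
        void *map;

        if (argc != 2)
                return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0 || fstat(fd, &st) < 0)
                return 1;
        map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED)
                return 1;
        list_xip_candidates(map);
        munmap(map, st.st_size);
        close(fd);
        return 0;
}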
On Wednesday, August 16, 2017, Nicolas Pitre wrote:
> > Just FYI, I'm running an xipImage with all the RZ/A1 upstream drivers
> > enabled and only using about 4.5MB of total system RAM. That's pretty
> > good. Of course, for a real application you would trim off the drivers
> > and subsystems you don't plan on using, thus lowering your RAM usage.
>
> On my MMU-less test target I'm going under the 1MB mark now.

Show off ;)

> Given that I also applied the device table patch to mkcramfs (which
> allows for the creation of device nodes and arbitrary
> user/group/permission settings without being root), it would be
> possible to extend this mechanism to implement other XIP patterns, such
> as for uncompressible media files for example.

Good, I was going to ask about that.

I once made an example where all the graphics were RAW, uncompressed and
marked as XIP in AXFS. The result was a large saving of RAM because the
graphics framework (DirectFB) would copy directly from flash whenever it
needed to do a background erase or an image redraw (button press
animations).

Same went for playing MP3 files. The MP3 files were XIP in flash, so
mpg123 pulled from flash directly.

Chris
On Wed, 16 Aug 2017, Chris Brandt wrote:

> I once made an example where all the graphics were RAW, uncompressed and
> marked as XIP in AXFS. The result was a large saving of RAM because the
> graphics framework (DirectFB) would copy directly from flash whenever it
> needed to do a background erase or an image redraw (button press
> animations).
>
> Same went for playing MP3 files. The MP3 files were XIP in flash, so
> mpg123 pulled from flash directly.

I wouldn't have expected mpg123 to mmap() its input files, though. If
you use read() and not mmap() then you don't get the full benefit.


Nicolas
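A small userspace illustration of that distinction, as a hedged sketch
("some_file" is a placeholder path and error handling is trimmed): with
read() the data always lands in an allocated RAM buffer, while a
read-only mmap() of a file on an XIP-capable filesystem can be satisfied
by mapping the flash directly, with no RAM copy.

/* Sketch: read() copies into RAM, mmap() can be served XIP from flash. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        int fd = open("some_file", O_RDONLY);
        struct stat st;

        if (fd < 0 || fstat(fd, &st) < 0)
                return 1;

        /* read(): consumes st.st_size bytes of RAM for the buffer */
        char *buf = malloc(st.st_size);
        read(fd, buf, st.st_size);

        /* mmap(): an XIP-capable fs can map flash directly, no RAM pages */
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map != MAP_FAILED) {
                printf("first byte: %d\n", map[0]);
                munmap(map, st.st_size);
        }

        free(buf);
        close(fd);
        return 0;
}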
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 4c7f01fcd2..0b651f985c 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -321,6 +321,86 @@ static u32 cramfs_get_block_range(struct inode *inode, u32 pgoff, u32 *pages)
         return blockaddr << 2;
 }
 
+/*
+ * It is possible for cramfs_physmem_mmap() to partially populate the mapping
+ * causing page faults in the unmapped area. When that happens, we need to
+ * split the vma so that the unmapped area gets its own vma that can be backed
+ * with actual memory pages and loaded normally. This is necessary because
+ * remap_pfn_range() overwrites vma->vm_pgoff with the pfn and filemap_fault()
+ * no longer works with it. Furthermore this makes /proc/x/maps right.
+ * Q: is there a way to do split vma at mmap() time?
+ */
+static const struct vm_operations_struct cramfs_vmasplit_ops;
+static int cramfs_vmasplit_fault(struct vm_fault *vmf)
+{
+        struct mm_struct *mm = vmf->vma->vm_mm;
+        struct vm_area_struct *vma, *new_vma;
+        unsigned long split_val, split_addr;
+        unsigned int split_pgoff, split_page;
+        int ret;
+
+        /* Retrieve the vma split address and validate it */
+        vma = vmf->vma;
+        split_val = (unsigned long)vma->vm_private_data;
+        split_pgoff = split_val & 0xffff;
+        split_page = split_val >> 16;
+        split_addr = vma->vm_start + split_page * PAGE_SIZE;
+        pr_debug("fault: addr=%#lx vma=%#lx-%#lx split=%#lx\n",
+                 vmf->address, vma->vm_start, vma->vm_end, split_addr);
+        if (!split_val || split_addr >= vma->vm_end || vmf->address < split_addr)
+                return VM_FAULT_SIGSEGV;
+
+        /* We have some vma surgery to do and need the write lock. */
+        up_read(&mm->mmap_sem);
+        if (down_write_killable(&mm->mmap_sem))
+                return VM_FAULT_RETRY;
+
+        /* Make sure the vma didn't change between the locks */
+        vma = find_vma(mm, vmf->address);
+        if (vma->vm_ops != &cramfs_vmasplit_ops) {
+                /*
+                 * Someone else raced with us and could have handled the fault.
+                 * Let it go back to user space and fault again if necessary.
+                 */
+                downgrade_write(&mm->mmap_sem);
+                return VM_FAULT_NOPAGE;
+        }
+
+        /* Split the vma between the directly mapped area and the rest */
+        ret = split_vma(mm, vma, split_addr, 0);
+        if (ret) {
+                downgrade_write(&mm->mmap_sem);
+                return VM_FAULT_OOM;
+        }
+
+        /* The direct vma should no longer ever fault */
+        vma->vm_ops = NULL;
+
+        /* Retrieve the new vma covering the unmapped area */
+        new_vma = find_vma(mm, split_addr);
+        BUG_ON(new_vma == vma);
+        if (!new_vma) {
+                downgrade_write(&mm->mmap_sem);
+                return VM_FAULT_SIGSEGV;
+        }
+
+        /*
+         * Readjust the new vma with the actual file based pgoff and
+         * process the fault normally on it.
+         */
+        new_vma->vm_pgoff = split_pgoff;
+        new_vma->vm_ops = &generic_file_vm_ops;
+        vmf->vma = new_vma;
+        vmf->pgoff = split_pgoff;
+        vmf->pgoff += (vmf->address - new_vma->vm_start) >> PAGE_SHIFT;
+        downgrade_write(&mm->mmap_sem);
+        return filemap_fault(vmf);
+}
+
+static const struct vm_operations_struct cramfs_vmasplit_ops = {
+        .fault = cramfs_vmasplit_fault,
+};
+
 static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 {
         struct inode *inode = file_inode(file);
@@ -337,6 +417,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 
         if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_MAYWRITE))
                 return -EINVAL;
+        /* Could COW work here? */
         fail_reason = "vma is writable";
         if (vma->vm_flags & VM_WRITE)
                 goto fail;
@@ -364,7 +445,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
                 unsigned int partial = offset_in_page(inode->i_size);
                 if (partial) {
                         char *data = sbi->linear_virt_addr + offset;
-                        data += (pages - 1) * PAGE_SIZE + partial;
+                        data += (max_pages - 1) * PAGE_SIZE + partial;
                         while ((unsigned long)data & 7)
                                 if (*data++ != 0)
                                         goto nonzero;
@@ -383,35 +464,42 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 
         if (pages) {
                 /*
-                 * Split the vma if we can't map it all so normal paging
-                 * will take care of the rest through cramfs_readpage().
+                 * If we can't map it all, page faults will occur if the
+                 * unmapped area is accessed. Let's handle them to split the
+                 * vma and let the normal paging machinery take care of the
+                 * rest through cramfs_readpage(). Because remap_pfn_range()
+                 * repurposes vma->vm_pgoff, we have to save it somewhere.
+                 * Let's use vma->vm_private_data to hold both the pgoff and the actual address split point.
+                 * Maximum file size is 16MB so we can pack both together.
                  */
                 if (pages != vma_pages) {
-                        if (1) {
-                                fail_reason = "fix me";
-                                goto fail;
-                        }
-                        ret = split_vma(vma->vm_mm, vma,
-                                        vma->vm_start + pages * PAGE_SIZE, 0);
-                        if (ret)
-                                return ret;
+                        unsigned int split_pgoff = vma->vm_pgoff + pages;
+                        unsigned long split_val = split_pgoff + (pages << 16);
+                        vma->vm_private_data = (void *)split_val;
+                        vma->vm_ops = &cramfs_vmasplit_ops;
+                        /* to keep remap_pfn_range() happy */
+                        vma->vm_end = vma->vm_start + pages * PAGE_SIZE;
                 }
                 ret = remap_pfn_range(vma, vma->vm_start, address >> PAGE_SHIFT,
                                       pages * PAGE_SIZE, vma->vm_page_prot);
+                /* restore vm_end in case we cheated it above */
+                vma->vm_end = vma->vm_start + vma_pages * PAGE_SIZE;
                 if (ret)
                         return ret;
+                pr_debug("mapped %s at 0x%08lx, %u/%u pages to vma 0x%08lx, "
+                         "page_prot 0x%llx\n", file_dentry(file)->d_name.name,
+                         address, pages, vma_pages, vma->vm_start,
+                         (unsigned long long)pgprot_val(vma->vm_page_prot));
+                return 0;
         }
-
-        pr_debug("mapped %s at 0x%08lx, %u/%u pages to vma 0x%08lx, "
-                 "page_prot 0x%llx\n", file_dentry(file)->d_name.name,
-                 address, pages, vma_pages, vma->vm_start,
-                 (unsigned long long)pgprot_val(vma->vm_page_prot));
-        return 0;
+        fail_reason = "no suitable block remaining";
 
 fail:
         pr_debug("%s: direct mmap failed: %s\n",
                  file_dentry(file)->d_name.name, fail_reason);
+
+        /* We failed to do a direct map, but normal paging will do it */
         vma->vm_ops = &generic_file_vm_ops;
         return 0;
 }
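For reference, here is a standalone illustration of the vm_private_data
packing the patch relies on (not kernel code; the example values are
arbitrary). The low 16 bits carry the file pgoff at the split point and
the upper bits carry the number of directly mapped pages; both fit
because, as the patch comment notes, a cramfs file is at most 16MB,
i.e. 4096 pages.

/* Illustration of the split_val pack/unpack arithmetic used in the patch. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
        unsigned long vm_pgoff = 3;     /* example: mapping starts at file page 3 */
        unsigned long pages = 25;       /* example: 25 pages mapped directly */

        /* pack, as in cramfs_physmem_mmap() */
        unsigned long split_pgoff = vm_pgoff + pages;
        unsigned long split_val = split_pgoff + (pages << 16);

        /* unpack, as in cramfs_vmasplit_fault() */
        unsigned long pgoff_out = split_val & 0xffff;
        unsigned long split_page = split_val >> 16;

        assert(pgoff_out == split_pgoff && split_page == pages);
        printf("split at vma page %lu, file pgoff %lu\n", split_page, pgoff_out);
        return 0;
}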