Message ID | 20250110154659.95464-1-kalyazin@amazon.com |
---|---|
Series | mm: filemap: add filemap_grab_folios |
On 10.01.25 16:46, Nikita Kalyazin wrote:
> Based on David's suggestion for speeding up guest_memfd memory
> population [1] made at the guest_memfd upstream call on 5 Dec 2024 [2],
> this adds `filemap_grab_folios` that grabs multiple folios at a time.
>

Hi,

> Motivation
>
> When profiling guest_memfd population and comparing the results with
> population of anonymous memory via UFFDIO_COPY, I observed that the
> former was up to 20% slower, mainly due to adding newly allocated pages
> to the pagecache. As far as I can see, the two main contributors to it
> are pagecache locking and tree traversals needed for every folio. The
> RFC attempts to partially mitigate those by adding multiple folios at a
> time to the pagecache.
>
> Testing
>
> With the change applied, I was able to observe a 10.3% (708 to 635 ms)
> speedup in a selftest that populated 3GiB guest_memfd and a 9.5% (990 to
> 904 ms) speedup when restoring a 3GiB guest_memfd VM snapshot using a
> custom Firecracker version, both on Intel Ice Lake.

Does that mean that it's still 10% slower (based on the 20% above), or
were the 20% from a different micro-benchmark?

>
> Limitations
>
> While `filemap_grab_folios` handles THP/large folios internally and
> deals with reclaim artifacts in the pagecache (shadows), for simplicity
> reasons, the RFC does not support those as it demonstrates the
> optimisation applied to guest_memfd, which only uses small folios and
> does not support reclaim at the moment.

It might be worth pointing out that, while support for larger folios is
in the works, there will be scenarios where small folios are unavoidable
in the future (mixture of shared and private memory).

How hard would it be to just naturally support large folios as well?

We do have memfd_pin_folios() that can deal with that and provides a
slightly similar interface (struct folio **folios).

For reference, the interface is:

long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
                      struct folio **folios, unsigned int max_folios,
                      pgoff_t *offset)

Maybe what you propose could even be used to further improve
memfd_pin_folios() internally? However, it must do this FOLL_PIN thingy,
so it must process each and every folio it processed.
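For readers who have not used it, here is a minimal, untested sketch of how a caller might drive the memfd_pin_folios() interface quoted above. The helper name, the inclusive end-offset convention, and the unpin_folios() cleanup are assumptions for illustration; they are not taken from this thread.

```c
#include <linux/memfd.h>	/* memfd_pin_folios() declaration (assumed location) */
#include <linux/mm.h>		/* unpin_folios() */
#include <linux/pagemap.h>

/*
 * Hypothetical example: pin the folios backing the first @len bytes of
 * @memfd and hand them to some consumer. Error handling is minimal and
 * the inclusive "end" convention is an assumption.
 */
static long pin_first_range(struct file *memfd, loff_t len,
			    struct folio **folios, unsigned int max_folios)
{
	pgoff_t first_offset;	/* offset into the first returned folio */
	long nr;

	nr = memfd_pin_folios(memfd, 0, len - 1, folios, max_folios,
			      &first_offset);
	if (nr < 0)
		return nr;

	/* ... use folios[0 .. nr - 1] ... */

	unpin_folios(folios, nr);
	return nr;
}
```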
On 10/01/2025 17:01, David Hildenbrand wrote:
> On 10.01.25 16:46, Nikita Kalyazin wrote:
>> Based on David's suggestion for speeding up guest_memfd memory
>> population [1] made at the guest_memfd upstream call on 5 Dec 2024 [2],
>> this adds `filemap_grab_folios` that grabs multiple folios at a time.
>>
>
> Hi,

Hi :)

>
>> Motivation
>>
>> When profiling guest_memfd population and comparing the results with
>> population of anonymous memory via UFFDIO_COPY, I observed that the
>> former was up to 20% slower, mainly due to adding newly allocated pages
>> to the pagecache. As far as I can see, the two main contributors to it
>> are pagecache locking and tree traversals needed for every folio. The
>> RFC attempts to partially mitigate those by adding multiple folios at a
>> time to the pagecache.
>>
>> Testing
>>
>> With the change applied, I was able to observe a 10.3% (708 to 635 ms)
>> speedup in a selftest that populated 3GiB guest_memfd and a 9.5% (990 to
>> 904 ms) speedup when restoring a 3GiB guest_memfd VM snapshot using a
>> custom Firecracker version, both on Intel Ice Lake.
>
> Does that mean that it's still 10% slower (based on the 20% above), or
> were the 20% from a different micro-benchmark?

Yes, it is still slower:
 - isolated/selftest: 2.3%
 - Firecracker setup: 8.9%

Not sure why the values are so different though. I'll try to find an
explanation.

>>
>> Limitations
>>
>> While `filemap_grab_folios` handles THP/large folios internally and
>> deals with reclaim artifacts in the pagecache (shadows), for simplicity
>> reasons, the RFC does not support those as it demonstrates the
>> optimisation applied to guest_memfd, which only uses small folios and
>> does not support reclaim at the moment.
>
> It might be worth pointing out that, while support for larger folios is
> in the works, there will be scenarios where small folios are unavoidable
> in the future (mixture of shared and private memory).
>
> How hard would it be to just naturally support large folios as well?

I don't think it's going to be impossible. It's just one more dimension
that needs to be handled. `__filemap_add_folio` logic is already rather
complex, and processing multiple folios while also splitting when
necessary correctly looks substantially convoluted to me. So my idea
was to discuss/validate the multi-folio approach first before rolling
the sleeves up.

> We do have memfd_pin_folios() that can deal with that and provides a
> slightly similar interface (struct folio **folios).
>
> For reference, the interface is:
>
> long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
>                       struct folio **folios, unsigned int max_folios,
>                       pgoff_t *offset)
>
> Maybe what you propose could even be used to further improve
> memfd_pin_folios() internally? However, it must do this FOLL_PIN thingy,
> so it must process each and every folio it processed.

Thanks for the pointer. Yeah, I see what you mean. I guess, it can
potentially allocate/add folios in a batch and then pin them? Although
swap/readahead logic may make it more difficult to implement.

> --
> Cheers,
>
> David / dhildenb
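To make the batching idea under discussion concrete, the per-folio baseline that guest_memfd population pays today looks roughly like the loop below, built on the existing filemap_grab_folio() helper. The filemap_grab_folios() prototype at the end is purely a hypothetical illustration of a batched interface, not the signature proposed in the RFC.

```c
#include <linux/err.h>
#include <linux/pagemap.h>

/*
 * Baseline: one pagecache tree walk and lock round-trip per folio via
 * the existing filemap_grab_folio() helper (lock + create if missing).
 */
static int populate_one_by_one(struct address_space *mapping,
			       pgoff_t start, unsigned long nr)
{
	unsigned long i;

	for (i = 0; i < nr; i++) {
		struct folio *folio = filemap_grab_folio(mapping, start + i);

		if (IS_ERR(folio))
			return PTR_ERR(folio);

		/* ... fill the folio with the payload here ... */

		folio_mark_uptodate(folio);
		folio_unlock(folio);
		folio_put(folio);
	}
	return 0;
}

/*
 * Hypothetical batched variant: grab up to @nr folios starting at
 * @start with a single pagecache walk and one round of locking.
 * This signature is an assumption for illustration only.
 */
long filemap_grab_folios(struct address_space *mapping, pgoff_t start,
			 struct folio **folios, unsigned int nr);
```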