Message ID: 20250107094347.l37isnk3w2nmpx2i@AALNPWDAGOMEZ1.aal.scsc.local (mailing list archive)
State: New
Series: Swap Min Order
On 07.01.25 10:43, Daniel Gomez wrote:
> Hi,

Hi,

> High-capacity SSDs require writes to be aligned with the drive's indirection unit (IU), which is typically >4 KiB, to avoid RMW. To support swap on these devices, we need to ensure that writes do not cross IU boundaries. So, I think this may require increasing the minimum allocation size for swap users.

How would we handle swapout/swapin when we have smaller pages (just imagine someone does a mmap(4KiB))?

Could this be something that gets abstracted/handled by the swap implementation? (i.e., multiple small folios get added to the swapcache but get written out / read in as a single unit?).

I recall that we have been talking about a better swap abstraction for years :)

Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
On Tue, Jan 07, 2025 at 11:31:05AM +0100, David Hildenbrand wrote:
> On 07.01.25 10:43, Daniel Gomez wrote:
> > Hi,
>
> Hi,
>
> > High-capacity SSDs require writes to be aligned with the drive's indirection unit (IU), which is typically >4 KiB, to avoid RMW. To support swap on these devices, we need to ensure that writes do not cross IU boundaries. So, I think this may require increasing the minimum allocation size for swap users.
>
> How would we handle swapout/swapin when we have smaller pages (just imagine someone does a mmap(4KiB))?

Swapout would need to be aligned to the IU. An mmap of 4 KiB would have to perform an IU-sized write, e.g. 16 KiB or 32 KiB, to avoid any potential RMW penalty. So, I think aligning the mmap allocation to the IU would guarantee a write of the required granularity and alignment. But let's also look at your suggestion below with the swapcache.

Swapin can still be performed at LBA format granularity (e.g. 4 KiB) without the same write penalty implications; performance is only affected if I/Os do not conform to these boundaries. So, reading at IU boundaries is preferred for optimal performance, but it is not a 'requirement'.

> Could this be something that gets abstracted/handled by the swap implementation? (i.e., multiple small folios get added to the swapcache but get written out / read in as a single unit?).

Do you mean merging like in the block layer? I'm not entirely sure this could deterministically guarantee the I/O boundaries the same way min order large folio allocations do in the page cache. But I guess it is worth exploring as an optimization.

> I recall that we have been talking about a better swap abstraction for years :)

Adding Chris Li to the cc list in case he has more input.

> Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).

Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.

Daniel

> --
> Cheers,
>
> David / dhildenb
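To pin down the write constraint Daniel describes, here is a minimal sketch of the conformance check, not from the thread itself; the helper name and the power-of-two IU assumption are illustrative:

#include <stdbool.h>
#include <stddef.h>

/*
 * A write of @len bytes at byte @offset avoids RMW on a drive with an
 * indirection unit of @iu bytes only if it both starts on an IU
 * boundary and covers whole IUs (assuming @iu is a power of two).
 */
static bool write_is_iu_conformant(size_t offset, size_t len, size_t iu)
{
	return (offset & (iu - 1)) == 0 && (len & (iu - 1)) == 0;
}

For example, with a 16 KiB IU, a 4 KiB write at offset 0 fails the check because it does not cover a whole IU; that is exactly the mmap(4KiB) case David raises.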
On 07.01.25 13:29, Daniel Gomez wrote:
> On Tue, Jan 07, 2025 at 11:31:05AM +0100, David Hildenbrand wrote:
>> On 07.01.25 10:43, Daniel Gomez wrote:
>>> Hi,
>>
>> Hi,
>>
>>> High-capacity SSDs require writes to be aligned with the drive's indirection unit (IU), which is typically >4 KiB, to avoid RMW. To support swap on these devices, we need to ensure that writes do not cross IU boundaries. So, I think this may require increasing the minimum allocation size for swap users.
>>
>> How would we handle swapout/swapin when we have smaller pages (just imagine someone does a mmap(4KiB))?
>
> Swapout would need to be aligned to the IU. An mmap of 4 KiB would have to perform an IU-sized write, e.g. 16 KiB or 32 KiB, to avoid any potential RMW penalty. So, I think aligning the mmap allocation to the IU would guarantee a write of the required granularity and alignment.

We must be prepared to handle any VMA layout, with single-page VMAs, single-page holes etc. ... :/ IMHO we should try to handle this transparently to the application.

> But let's also look at your suggestion below with the swapcache.
>
> Swapin can still be performed at LBA format granularity (e.g. 4 KiB) without the same write penalty implications; performance is only affected if I/Os do not conform to these boundaries. So, reading at IU boundaries is preferred for optimal performance, but it is not a 'requirement'.
>
>> Could this be something that gets abstracted/handled by the swap implementation? (i.e., multiple small folios get added to the swapcache but get written out / read in as a single unit?).
>
> Do you mean merging like in the block layer? I'm not entirely sure this could deterministically guarantee the I/O boundaries the same way min order large folio allocations do in the page cache. But I guess it is worth exploring as an optimization.

Maybe the swapcache could somehow abstract that? We currently have the swap slot allocator, which assigns slots to pages.

Assuming we have a 16 KiB BS but a 4 KiB page, we might have various options to explore.

For example, we could size swap slots at 16 KiB, and assign even 4 KiB pages a single slot. This would waste swap space with small folios; that waste would go away with large folios.

If we stick to 4 KiB swap slots, maybe pageout() could be taught to effectively writeback "everything" residing in the relevant swap slots that span a BS?

I recall there was a discussion about atomic writes involving multiple pages, and how it is hard. Maybe with swapping it is "easier"? Absolutely no expert on that, unfortunately. Hoping Chris has some ideas.

>> I recall that we have been talking about a better swap abstraction for years :)
>
> Adding Chris Li to the cc list in case he has more input.
>
>> Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
>
> Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.

Both work for me.
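As a rough sketch of the slot-sizing arithmetic in David's first option, assuming a 16 KiB slot and 4 KiB base pages (the constants and the helper name are illustrative, not existing kernel API):

#define PAGE_SIZE_4K	4096UL
#define SWAP_SLOT_SIZE	16384UL	/* slot sized to the device BS/IU */

/* How many 16 KiB slots a folio of the given order would consume. */
static unsigned long slots_needed(unsigned int order)
{
	unsigned long folio_bytes = PAGE_SIZE_4K << order;

	/* Round up: even an order-0 folio consumes one whole slot. */
	return (folio_bytes + SWAP_SLOT_SIZE - 1) / SWAP_SLOT_SIZE;
}

Under this scheme an order-0 folio gets one slot and leaves 12 KiB unused, an order-2 folio fills one slot exactly, and an order-3 folio takes two slots, matching the examples David gives later in the thread.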
On Tue, Jan 07, 2025 at 05:41:23PM +0100, David Hildenbrand wrote:
> On 07.01.25 13:29, Daniel Gomez wrote:
> > On Tue, Jan 07, 2025 at 11:31:05AM +0100, David Hildenbrand wrote:
> > > On 07.01.25 10:43, Daniel Gomez wrote:
> > > > Hi,
> > >
> > > Hi,
> > >
> > > > High-capacity SSDs require writes to be aligned with the drive's indirection unit (IU), which is typically >4 KiB, to avoid RMW. To support swap on these devices, we need to ensure that writes do not cross IU boundaries. So, I think this may require increasing the minimum allocation size for swap users.
> > >
> > > How would we handle swapout/swapin when we have smaller pages (just imagine someone does a mmap(4KiB))?
> >
> > Swapout would need to be aligned to the IU. An mmap of 4 KiB would have to perform an IU-sized write, e.g. 16 KiB or 32 KiB, to avoid any potential RMW penalty. So, I think aligning the mmap allocation to the IU would guarantee a write of the required granularity and alignment.
>
> We must be prepared to handle any VMA layout, with single-page VMAs, single-page holes etc. ... :/ IMHO we should try to handle this transparently to the application.
>
> > But let's also look at your suggestion below with the swapcache.
> >
> > Swapin can still be performed at LBA format granularity (e.g. 4 KiB) without the same write penalty implications; performance is only affected if I/Os do not conform to these boundaries. So, reading at IU boundaries is preferred for optimal performance, but it is not a 'requirement'.
> >
> > > Could this be something that gets abstracted/handled by the swap implementation? (i.e., multiple small folios get added to the swapcache but get written out / read in as a single unit?).
> >
> > Do you mean merging like in the block layer? I'm not entirely sure this could deterministically guarantee the I/O boundaries the same way min order large folio allocations do in the page cache. But I guess it is worth exploring as an optimization.
>
> Maybe the swapcache could somehow abstract that? We currently have the swap slot allocator, which assigns slots to pages.
>
> Assuming we have a 16 KiB BS but a 4 KiB page, we might have various options to explore.
>
> For example, we could size swap slots at 16 KiB, and assign even 4 KiB pages a single slot. This would waste swap space with small folios; that waste would go away with large folios.

So batching order-0 folios in bigger slots that match the FS BS (e.g. 16 KiB) to perform disk writes, right? Can we also assign different orders to the same slot? And can we batch folios while keeping alignment to the BS (IU)?

> If we stick to 4 KiB swap slots, maybe pageout() could be taught to effectively writeback "everything" residing in the relevant swap slots that span a BS?
>
> I recall there was a discussion about atomic writes involving multiple pages, and how it is hard. Maybe with swapping it is "easier"? Absolutely no expert on that, unfortunately. Hoping Chris has some ideas.

Not sure about that discussion, but I guess the main concerns for atomic writes and swapping are the alignment and the questions I raised above.

> > > I recall that we have been talking about a better swap abstraction for years :)
> >
> > Adding Chris Li to the cc list in case he has more input.
> >
> > > Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
> >
> > Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.
>
> Both work for me.

Can we start by scheduling this topic for the next available MM session? Would be great to get initial feedback/thoughts/concerns, etc., while we keep this thread going.

> --
> Cheers,
>
> David / dhildenb
>> Maybe the swapcache could somehow abstract that? We currently have the swap slot allocator, which assigns slots to pages.
>>
>> Assuming we have a 16 KiB BS but a 4 KiB page, we might have various options to explore.
>>
>> For example, we could size swap slots at 16 KiB, and assign even 4 KiB pages a single slot. This would waste swap space with small folios; that waste would go away with large folios.
>
> So batching order-0 folios in bigger slots that match the FS BS (e.g. 16 KiB) to perform disk writes, right?

Batching might be one idea, but the first idea I raised here would be that the swap slot size will match the BS (e.g., 16 KiB) and contain at most one folio.

So an order-0 folio would get a single slot assigned and effectively "waste" 12 KiB of disk space.

An order-2 folio would get a single slot assigned and not waste any disk space.

An order-3 folio would get two slots assigned, etc. (similar to how it is done today for non-order-0 folios).

So the penalty for using small folios would be more wasted disk space on such devices.

> Can we also assign different orders to the same slot?

I guess yes.

> And can we batch folios while keeping alignment to the BS (IU)?

I assume with "batching" you mean that we could actually have multiple folios inside a single BS, like up to 4 order-0 folios in a single 16 KiB block? That might be one way of doing it, although I suspect this can get a bit complicated.

IIUC, we can perform 4 KiB reads/writes, but we must only have a single write per block, because otherwise we might get the RMW problems, correct? Then, maybe a mechanism to guarantee that only a single swap writeback within a BS can happen at one point in time might also be an alternative.

>> If we stick to 4 KiB swap slots, maybe pageout() could be taught to effectively writeback "everything" residing in the relevant swap slots that span a BS?
>>
>> I recall there was a discussion about atomic writes involving multiple pages, and how it is hard. Maybe with swapping it is "easier"? Absolutely no expert on that, unfortunately. Hoping Chris has some ideas.
>
> Not sure about that discussion, but I guess the main concerns for atomic writes and swapping are the alignment and the questions I raised above.

Yes, I think that's similar.

>>>> I recall that we have been talking about a better swap abstraction for years :)
>>>
>>> Adding Chris Li to the cc list in case he has more input.
>>>
>>>> Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
>>>
>>> Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.
>>
>> Both work for me.
>
> Can we start by scheduling this topic for the next available MM session? Would be great to get initial feedback/thoughts/concerns, etc., while we keep this thread going.

Yeah, it would probably be great to present the problem and the exact constraints we have (e.g., the things stupid me asks above regarding the actual sizes in which we can perform reads and writes), so we can discuss possible solutions.

@David R., is the slot in two weeks already taken?
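A rough sketch of the "single writeback per BS at a time" mechanism David floats above; everything here (the bitmap, the names, the locking discipline) is an assumption for illustration, not a posted design: a per-swapfile bitmap with one bit per BS-sized block, taken before I/O is submitted for any 4 KiB slot inside that block.

#include <linux/bitops.h>
#include <linux/types.h>

/*
 * One bit per BS-sized block of the swapfile. A writer must own the
 * block's bit before submitting I/O for any 4 KiB slot inside it,
 * which serializes writes within one IU and avoids concurrent RMW.
 */
static bool swap_trylock_bs_block(unsigned long *bs_bitmap,
				  unsigned long slot,
				  unsigned int slots_per_bs)
{
	/* Returns true if we now own the block; retry or defer otherwise. */
	return !test_and_set_bit_lock(slot / slots_per_bs, bs_bitmap);
}

static void swap_unlock_bs_block(unsigned long *bs_bitmap,
				 unsigned long slot,
				 unsigned int slots_per_bs)
{
	clear_bit_unlock(slot / slots_per_bs, bs_bitmap);
}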
On Tue, Jan 7, 2025 at 4:29 AM Daniel Gomez <da.gomez@samsung.com> wrote:
>
> On Tue, Jan 07, 2025 at 11:31:05AM +0100, David Hildenbrand wrote:
> > On 07.01.25 10:43, Daniel Gomez wrote:
> > > Hi,
> >
> > Hi,
> >
> > > High-capacity SSDs require writes to be aligned with the drive's indirection unit (IU), which is typically >4 KiB, to avoid RMW. To support swap on these devices, we need to ensure that writes do not cross IU boundaries. So, I think this may require increasing the minimum allocation size for swap users.
> >
> > How would we handle swapout/swapin when we have smaller pages (just imagine someone does a mmap(4KiB))?
>
> Swapout would need to be aligned to the IU. An mmap of 4 KiB would have to perform an IU-sized write, e.g. 16 KiB or 32 KiB, to avoid any potential RMW penalty. So, I think aligning the mmap allocation to the IU would guarantee a write of the required granularity and alignment. But let's also look at your suggestion below with the swapcache.

I think only the writer needs to be grouped by IU size. Ideally the swap front end doesn't have to know about the IU size. There are many reasons why forcing the swap entry size on the swap cache would be tricky: e.g., if the folio is 4K, it is tricky to force it to be 16K; only one 4K page might be cold while the nearby pages are hot, etc.

> Swapin can still be performed at LBA format granularity (e.g. 4 KiB) without the same write penalty implications; performance is only affected if I/Os do not conform to these boundaries. So, reading at IU boundaries is preferred for optimal performance, but it is not a 'requirement'.
>
> > Could this be something that gets abstracted/handled by the swap implementation? (i.e., multiple small folios get added to the swapcache but get written out / read in as a single unit?).

Yes.

> Do you mean merging like in the block layer? I'm not entirely sure this could deterministically guarantee the I/O boundaries the same way min order large folio allocations do in the page cache. But I guess it is worth exploring as an optimization.
>
> > I recall that we have been talking about a better swap abstraction for years :)
>
> Adding Chris Li to the cc list in case he has more input.

Sorry I'm a bit late to the party. Yes, I do have some ideas I want to propose as LSF/MM topics, maybe early next week. Here are some highlights.

I think we need a separation of the swap cache and the backing IO of the swap file. I call it the "virtual swapfile". It is virtual in two aspects:

1) There is an up-front size at swapon, but no up-front allocation of the vmalloc array. The array grows as needed.

2) There is a virtual-to-physical swap entry mapping. The cost is 4 bytes per swap entry, but it will solve a lot of problems all together.

IU-size write grouping would be a good user of this virtual layer. Another use case: if we want to write a compressed zswap/zram entry into the SSD, we might actually encounter the size problem in the other direction, e.g. writing swap entries smaller than 4K.

I am still working on the write-up. More details will come.

Chris

> > Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
>
> Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.
>
> Daniel
>
> > --
> > Cheers,
> >
> > David / dhildenb
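To make the two "virtual" aspects concrete, here is a guessed-at sketch of a growable virtual-to-physical mapping at the stated 4 bytes per entry; the struct layout and names are my assumptions, not Chris's actual proposal:

#include <linux/types.h>

/*
 * Illustrative only: a virtual swapfile front end. Swap entries handed
 * to the rest of MM are virtual slots; the array below redirects each
 * one to wherever the IU-grouping writer actually placed it on disk.
 */
struct virt_swapfile {
	u32		*virt_to_phys;	/* 4 bytes per swap entry */
	unsigned long	nr_slots;	/* grows as needed, up to the swapon size */
};

/* Resolve a virtual slot to its physical slot, e.g. for swapin. */
static u32 virt_swap_to_phys(struct virt_swapfile *vs, unsigned long virt)
{
	return vs->virt_to_phys[virt];
}

The indirection is what would let the writer regroup entries into IU-sized units (or recompact them) without the swap cache ever seeing the physical layout change.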
On Tue, Jan 7, 2025 at 8:41 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 07.01.25 13:29, Daniel Gomez wrote:
> > On Tue, Jan 07, 2025 at 11:31:05AM +0100, David Hildenbrand wrote:
> >> On 07.01.25 10:43, Daniel Gomez wrote:
> >>> Hi,
> >>
> >> Hi,
> >>
> >>> High-capacity SSDs require writes to be aligned with the drive's indirection unit (IU), which is typically >4 KiB, to avoid RMW. To support swap on these devices, we need to ensure that writes do not cross IU boundaries. So, I think this may require increasing the minimum allocation size for swap users.
> >>
> >> How would we handle swapout/swapin when we have smaller pages (just imagine someone does a mmap(4KiB))?
> >
> > Swapout would need to be aligned to the IU. An mmap of 4 KiB would have to perform an IU-sized write, e.g. 16 KiB or 32 KiB, to avoid any potential RMW penalty. So, I think aligning the mmap allocation to the IU would guarantee a write of the required granularity and alignment.
>
> We must be prepared to handle any VMA layout, with single-page VMAs, single-page holes etc. ... :/ IMHO we should try to handle this transparently to the application.
>
> > But let's also look at your suggestion below with the swapcache.
> >
> > Swapin can still be performed at LBA format granularity (e.g. 4 KiB) without the same write penalty implications; performance is only affected if I/Os do not conform to these boundaries. So, reading at IU boundaries is preferred for optimal performance, but it is not a 'requirement'.
> >
> >> Could this be something that gets abstracted/handled by the swap implementation? (i.e., multiple small folios get added to the swapcache but get written out / read in as a single unit?).
> >
> > Do you mean merging like in the block layer? I'm not entirely sure this could deterministically guarantee the I/O boundaries the same way min order large folio allocations do in the page cache. But I guess it is worth exploring as an optimization.
>
> Maybe the swapcache could somehow abstract that? We currently have the swap slot allocator, which assigns slots to pages.
>
> Assuming we have a 16 KiB BS but a 4 KiB page, we might have various options to explore.
>
> For example, we could size swap slots at 16 KiB, and assign even 4 KiB pages a single slot. This would waste swap space with small folios; that waste would go away with large folios.

We can group multiple 4K swap entries into one 16K write unit. There will be no waste of SSD space.

> If we stick to 4 KiB swap slots, maybe pageout() could be taught to effectively writeback "everything" residing in the relevant swap slots that span a BS?
>
> I recall there was a discussion about atomic writes involving multiple pages, and how it is hard. Maybe with swapping it is "easier"? Absolutely no expert on that, unfortunately. Hoping Chris has some ideas.

Yes, see my other email about the "virtual swapfile" idea. A more detailed write-up is coming next week.

Chris

> >> I recall that we have been talking about a better swap abstraction for years :)
> >
> > Adding Chris Li to the cc list in case he has more input.
> >
> >> Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
> >
> > Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.
>
> Both work for me.
>
> --
> Cheers,
>
> David / dhildenb
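As a sketch of the grouping Chris describes, assuming a 16 KiB IU over 4 KiB entries (the structure and names are illustrative, not part of any posted patch):

#include <linux/mm_types.h>
#include <linux/types.h>

#define SWAP_IU_SIZE	16384
#define ENTRIES_PER_IU	(SWAP_IU_SIZE / 4096)

/*
 * Staging area for one IU-sized write unit: up to four 4K folios are
 * collected, then submitted as a single IU-aligned write, so no slot
 * inside the unit is ever rewritten individually and no disk space
 * is left unused.
 */
struct iu_write_batch {
	struct folio	*folios[ENTRIES_PER_IU];
	unsigned int	nr;		/* submit when nr == ENTRIES_PER_IU */
	sector_t	iu_start;	/* IU-aligned start sector on disk */
};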
On Wed, Jan 8, 2025 at 12:36 PM David Hildenbrand <david@redhat.com> wrote:
>
> >> Maybe the swapcache could somehow abstract that? We currently have the swap slot allocator, which assigns slots to pages.
> >>
> >> Assuming we have a 16 KiB BS but a 4 KiB page, we might have various options to explore.
> >>
> >> For example, we could size swap slots at 16 KiB, and assign even 4 KiB pages a single slot. This would waste swap space with small folios; that waste would go away with large folios.
> >
> > So batching order-0 folios in bigger slots that match the FS BS (e.g. 16 KiB) to perform disk writes, right?
>
> Batching might be one idea, but the first idea I raised here would be that the swap slot size will match the BS (e.g., 16 KiB) and contain at most one folio.
>
> So an order-0 folio would get a single slot assigned and effectively "waste" 12 KiB of disk space.

I prefer not to "waste" that. It will be wasted on write amplification as well.

> An order-2 folio would get a single slot assigned and not waste any disk space.
>
> An order-3 folio would get two slots assigned, etc. (similar to how it is done today for non-order-0 folios).
>
> So the penalty for using small folios would be more wasted disk space on such devices.
>
> > Can we also assign different orders to the same slot?
>
> I guess yes.
>
> > And can we batch folios while keeping alignment to the BS (IU)?
>
> I assume with "batching" you mean that we could actually have multiple folios inside a single BS, like up to 4 order-0 folios in a single 16 KiB block? That might be one way of doing it, although I suspect this can get a bit complicated.

That would be my preference. BTW, another use case: if we want to write compressed swap entries into the SSD (to reduce wear on the SSD), we will also end up with a similar situation where we want to combine multiple swap entries into one write unit.

> IIUC, we can perform 4 KiB reads/writes, but we must only have a single write per block, because otherwise we might get the RMW problems, correct? Then, maybe a mechanism to guarantee that only a single swap writeback within a BS can happen at one point in time might also be an alternative.

Yes, I do see that batching and grouping writes of swap entries is necessary and useful.

> >> If we stick to 4 KiB swap slots, maybe pageout() could be taught to effectively writeback "everything" residing in the relevant swap slots that span a BS?
> >>
> >> I recall there was a discussion about atomic writes involving multiple pages, and how it is hard. Maybe with swapping it is "easier"? Absolutely no expert on that, unfortunately. Hoping Chris has some ideas.
> >
> > Not sure about that discussion, but I guess the main concerns for atomic writes and swapping are the alignment and the questions I raised above.
>
> Yes, I think that's similar.

Agree, it is very much similar. It can share a single solution, the "virtual swapfile". That is my proposal.

> >>>> I recall that we have been talking about a better swap abstraction for years :)
> >>>
> >>> Adding Chris Li to the cc list in case he has more input.
> >>>
> >>>> Might be a good topic for LSF/MM (might or might not be a better place than the MM alignment session).
> >>>
> >>> Both options work for me. LSF/MM is in 12 weeks, so having an earlier session would be great.
> >>
> >> Both work for me.
> >
> > Can we start by scheduling this topic for the next available MM session? Would be great to get initial feedback/thoughts/concerns, etc., while we keep this thread going.
>
> Yeah, it would probably be great to present the problem and the exact constraints we have (e.g., the things stupid me asks above regarding the actual sizes in which we can perform reads and writes), so we can discuss possible solutions.
>
> @David R., is the slot in two weeks already taken?

Hopefully I can send out the "virtual swapfile" proposal in time, and we can discuss that as one of the possible approaches.

Chris

> --
> Cheers,
>
> David / dhildenb
On 08.01.25 22:19, Chris Li wrote:
> On Wed, Jan 8, 2025 at 12:36 PM David Hildenbrand <david@redhat.com> wrote:
>>
>>>> Maybe the swapcache could somehow abstract that? We currently have the swap slot allocator, which assigns slots to pages.
>>>>
>>>> Assuming we have a 16 KiB BS but a 4 KiB page, we might have various options to explore.
>>>>
>>>> For example, we could size swap slots at 16 KiB, and assign even 4 KiB pages a single slot. This would waste swap space with small folios; that waste would go away with large folios.
>>>
>>> So batching order-0 folios in bigger slots that match the FS BS (e.g. 16 KiB) to perform disk writes, right?
>>
>> Batching might be one idea, but the first idea I raised here would be that the swap slot size will match the BS (e.g., 16 KiB) and contain at most one folio.
>>
>> So an order-0 folio would get a single slot assigned and effectively "waste" 12 KiB of disk space.
>
> I prefer not to "waste" that. It will be wasted on write amplification as well.

If it can be implemented fairly easily, sure! :)

Looking forward to hearing about the proposal!
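The patchwork entry for the series also carries the one-hunk diff below. Read at face value (this interpretation is mine, not stated in the thread), it makes claim_swapfile() set SWP_BLKDEV for S_ISREG swapfiles too, presumably so regular-file-backed swap takes the same block-device I/O paths as swap partitions: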
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b0a9071cfe1d..80a9dbe9645a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3128,6 +3128,7 @@ static int claim_swapfile(struct swap_info_struct *si, struct inode *inode)
 		si->flags |= SWP_BLKDEV;
 	} else if (S_ISREG(inode->i_mode)) {
 		si->bdev = inode->i_sb->s_bdev;
+		si->flags |= SWP_BLKDEV;
 	}
 
 	return 0;