Message ID: 20240614100902.3469724-2-usamaarif642@gmail.com (mailing list archive)
State: New
Series: mm: store zero pages to be swapped out in a bitmap
On 2024/6/14 18:07, Usama Arif wrote:
> Approximately 10-20% of pages to be swapped out are zero pages [1].
> Rather than reading/writing these pages to flash resulting
> in increased I/O and flash wear, a bitmap can be used to mark these
> pages as zero at write time, and the pages can be filled at
> read time if the bit corresponding to the page is set.
> With this patch, NVMe writes in Meta server fleet decreased
> by almost 10% with conventional swap setup (zswap disabled).
>
> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Looks good to me, only some small nits below.

Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

> ---
>  include/linux/swap.h |   1 +
>  mm/page_io.c         | 113 ++++++++++++++++++++++++++++++++++++++++++-
>  mm/swapfile.c        |  15 ++++++
>  3 files changed, 128 insertions(+), 1 deletion(-)
>
[...]
> +
> +static void swap_zeromap_folio_set(struct folio *folio)
> +{
> +        struct swap_info_struct *sis = swp_swap_info(folio->swap);
> +        swp_entry_t entry;
> +        unsigned int i;
> +
> +        for (i = 0; i < folio_nr_pages(folio); i++) {
> +                entry = page_swap_entry(folio_page(folio, i));

It seems simpler to use:

        swp_entry_t entry = folio->swap;

        for (i = 0; i < folio_nr_pages(folio); i++, entry.val++)

The current code is good too, no objection.

> +                set_bit(swp_offset(entry), sis->zeromap);
> +        }
> +}
> +
[...]
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 9c6d8e557c0f..0b8270359bcf 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -747,6 +747,14 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
>          unsigned long begin = offset;
>          unsigned long end = offset + nr_entries - 1;
>          void (*swap_slot_free_notify)(struct block_device *, unsigned long);
> +        unsigned int i;
> +
> +        /*
> +         * Use atomic clear_bit operations only on zeromap instead of non-atomic
> +         * bitmap_clear to prevent adjacent bits corruption due to simultaneous writes.
> +         */
> +        for (i = 0; i < nr_entries; i++)
> +                clear_bit(offset + i, si->zeromap);

I'm wondering if we need to clear bits at all? Since the current locked
folio is the owner of these bits, we always update them correctly in
swap_writepage(). So if these swap entries are freed and reused by
another folio, we won't load from the backend until that other folio has
gone through swap_writepage(), which updates these bits correctly.

Maybe I missed something? Anyway, it should be no harm to clear here too.

Thanks.

>
>          if (offset < si->lowest_bit)
>                  si->lowest_bit = offset;
> @@ -2635,6 +2643,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>          free_percpu(p->cluster_next_cpu);
>          p->cluster_next_cpu = NULL;
>          vfree(swap_map);
> +        bitmap_free(p->zeromap);
>          kvfree(cluster_info);
>          /* Destroy swap account information */
>          swap_cgroup_swapoff(p->type);
> @@ -3161,6 +3170,12 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
>                  goto bad_swap_unlock_inode;
>          }
>
> +        p->zeromap = bitmap_zalloc(maxpages, GFP_KERNEL);
> +        if (!p->zeromap) {
> +                error = -ENOMEM;
> +                goto bad_swap_unlock_inode;
> +        }
> +
>          if (p->bdev && bdev_stable_writes(p->bdev))
>                  p->flags |= SWP_STABLE_WRITES;
>
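[For illustration, a minimal sketch of what swap_zeromap_folio_set() might
look like with the simplification Chengming suggests applied; this is an
untested illustration, not code from the series:]

/*
 * Hypothetical sketch: the swap entries backing a folio are
 * contiguous, so the entry can be advanced with entry.val++ rather
 * than recomputed per subpage via page_swap_entry().
 */
static void swap_zeromap_folio_set(struct folio *folio)
{
        struct swap_info_struct *sis = swp_swap_info(folio->swap);
        swp_entry_t entry = folio->swap;
        unsigned int i;

        for (i = 0; i < folio_nr_pages(folio); i++, entry.val++)
                set_bit(swp_offset(entry), sis->zeromap);
}

[The same shape would apply to swap_zeromap_folio_clear(), with clear_bit()
in place of set_bit().]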
Usama Arif <usamaarif642@gmail.com> writes:

> Approximately 10-20% of pages to be swapped out are zero pages [1].
> Rather than reading/writing these pages to flash resulting
> in increased I/O and flash wear, a bitmap can be used to mark these
> pages as zero at write time, and the pages can be filled at
> read time if the bit corresponding to the page is set.
> With this patch, NVMe writes in Meta server fleet decreased
> by almost 10% with conventional swap setup (zswap disabled).
>
> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/

But how much did the CPU time increase? Surely the new loop is not free?

-Andi
On 14/06/2024 15:45, Andi Kleen wrote:
> Usama Arif <usamaarif642@gmail.com> writes:
>
>> Approximately 10-20% of pages to be swapped out are zero pages [1].
>> Rather than reading/writing these pages to flash resulting
>> in increased I/O and flash wear, a bitmap can be used to mark these
>> pages as zero at write time, and the pages can be filled at
>> read time if the bit corresponding to the page is set.
>> With this patch, NVMe writes in Meta server fleet decreased
>> by almost 10% with conventional swap setup (zswap disabled).
>>
>> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>
> But how much did the CPU time increase? Surely the new loop is not free?
>
> -Andi

It is negligible. For a zero-filled page, without the zero-fill
optimization, the CPU would have to do page compression in zswap or
dispatch a write to disk, so this optimization is just replacing the CPU
usage for those tasks with the CPU usage for checking if a page is
zero-filled. This is the reason the same-filled optimization was there in
zswap. Zswap should focus on actual compression, and this series is just
moving the optimization to swap.

For a non-zero-filled page, the loop checks the last word first and quits
the first instance it sees non-zero data, so it is likely going to quit
very early on in the loop.
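[To make the cost argument concrete, here is a minimal userspace sketch of
the check Usama describes; the in-kernel version in the patch below differs
(it maps the page with kmap_local_folio()), so treat this only as an
illustration of the early-exit behaviour:]

#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

static bool page_is_zero_filled(const unsigned long *data)
{
        const size_t nwords = PAGE_SIZE / sizeof(*data);
        size_t pos;

        /* Non-zero pages often carry data near the end, so check it first. */
        if (data[nwords - 1])
                return false;

        for (pos = 0; pos < nwords; pos++) {
                if (data[pos])
                        return false;   /* quit at the first non-zero word */
        }
        return true;
}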
On Fri, Jun 14, 2024 at 3:09 AM Usama Arif <usamaarif642@gmail.com> wrote:
>
> Approximately 10-20% of pages to be swapped out are zero pages [1].
> Rather than reading/writing these pages to flash resulting
> in increased I/O and flash wear, a bitmap can be used to mark these
> pages as zero at write time, and the pages can be filled at
> read time if the bit corresponding to the page is set.
> With this patch, NVMe writes in Meta server fleet decreased
> by almost 10% with conventional swap setup (zswap disabled).
>
> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
On Fri, Jun 14, 2024 at 5:06 AM Chengming Zhou <chengming.zhou@linux.dev> wrote:
>
> On 2024/6/14 18:07, Usama Arif wrote:
> > Approximately 10-20% of pages to be swapped out are zero pages [1].
> > Rather than reading/writing these pages to flash resulting
> > in increased I/O and flash wear, a bitmap can be used to mark these
> > pages as zero at write time, and the pages can be filled at
> > read time if the bit corresponding to the page is set.
> > With this patch, NVMe writes in Meta server fleet decreased
> > by almost 10% with conventional swap setup (zswap disabled).
> >
> > [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
> >
> > Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>
> Looks good to me, only some small nits below.
>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
>
> > ---
> >  include/linux/swap.h |   1 +
> >  mm/page_io.c         | 113 ++++++++++++++++++++++++++++++++++++++++++-
> >  mm/swapfile.c        |  15 ++++++
> >  3 files changed, 128 insertions(+), 1 deletion(-)
> >
> [...]
> > +
> > +static void swap_zeromap_folio_set(struct folio *folio)
> > +{
> > +        struct swap_info_struct *sis = swp_swap_info(folio->swap);
> > +        swp_entry_t entry;
> > +        unsigned int i;
> > +
> > +        for (i = 0; i < folio_nr_pages(folio); i++) {
> > +                entry = page_swap_entry(folio_page(folio, i));
>
> It seems simpler to use:
>
>         swp_entry_t entry = folio->swap;
>
>         for (i = 0; i < folio_nr_pages(folio); i++, entry.val++)

I was actually thinking we could introduce folio_swap_entry(folio, i)
after the series. Multiple callers of page_swap_entry() have a folio
already. It would save some compound_head() calls.

Alternatively, for this patch we can introduce
zeromap_update_range(zeromap, offset, size, value). Then we can use it in
swap_zeromap_folio_set/clear() as well as swap_range_free(). It would
also be a good place to park the comment about using atomic operations
(set_bit() and clear_bit()).
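[A sketch of the zeromap_update_range() helper Yosry floats above; the name
and signature come from his suggestion, and the body is a guess at what it
might contain, not code from the series:]

/*
 * Hypothetical helper per Yosry's suggestion. Atomic set_bit() and
 * clear_bit() are used instead of the non-atomic bitmap_set() and
 * bitmap_clear() so that concurrent updates to bits sharing a word
 * cannot corrupt each other.
 */
static void zeromap_update_range(unsigned long *zeromap,
                                 unsigned long offset,
                                 unsigned long size, bool value)
{
        unsigned long i;

        for (i = 0; i < size; i++) {
                if (value)
                        set_bit(offset + i, zeromap);
                else
                        clear_bit(offset + i, zeromap);
        }
}

[swap_zeromap_folio_set() and swap_zeromap_folio_clear() could then become
one-line wrappers, and swap_range_free() could call it with value == false.]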
On Fri, Jun 14, 2024 at 3:09 AM Usama Arif <usamaarif642@gmail.com> wrote:
>
> Approximately 10-20% of pages to be swapped out are zero pages [1].
> Rather than reading/writing these pages to flash resulting
> in increased I/O and flash wear, a bitmap can be used to mark these
> pages as zero at write time, and the pages can be filled at
> read time if the bit corresponding to the page is set.
> With this patch, NVMe writes in Meta server fleet decreased
> by almost 10% with conventional swap setup (zswap disabled).
>
> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

I like this version a lot :) Really clean.

Reviewed-by: Nhat Pham <nphamcs@gmail.com>
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 3df75d62a835..ed03d421febd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -299,6 +299,7 @@ struct swap_info_struct {
         signed char type;               /* strange name for an index */
         unsigned int max;               /* extent of the swap_map */
         unsigned char *swap_map;        /* vmalloc'ed array of usage counts */
+        unsigned long *zeromap;         /* vmalloc'ed bitmap to track zero pages */
         struct swap_cluster_info *cluster_info; /* cluster info. Only for SSD */
         struct swap_cluster_list free_clusters; /* free clusters list */
         unsigned int lowest_bit;        /* index of first free in swap_map */
diff --git a/mm/page_io.c b/mm/page_io.c
index 6c1c1828bb88..480b8f221d90 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -172,6 +172,88 @@ int generic_swapfile_activate(struct swap_info_struct *sis,
                 goto out;
 }
 
+static bool is_folio_page_zero_filled(struct folio *folio, int i)
+{
+        unsigned long *data;
+        unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
+        bool ret = false;
+
+        data = kmap_local_folio(folio, i * PAGE_SIZE);
+        if (data[last_pos])
+                goto out;
+        for (pos = 0; pos < PAGE_SIZE / sizeof(*data); pos++) {
+                if (data[pos])
+                        goto out;
+        }
+        ret = true;
+out:
+        kunmap_local(data);
+        return ret;
+}
+
+static bool is_folio_zero_filled(struct folio *folio)
+{
+        unsigned int i;
+
+        for (i = 0; i < folio_nr_pages(folio); i++) {
+                if (!is_folio_page_zero_filled(folio, i))
+                        return false;
+        }
+        return true;
+}
+
+static void folio_zero_fill(struct folio *folio)
+{
+        unsigned int i;
+
+        for (i = 0; i < folio_nr_pages(folio); i++)
+                clear_highpage(folio_page(folio, i));
+}
+
+static void swap_zeromap_folio_set(struct folio *folio)
+{
+        struct swap_info_struct *sis = swp_swap_info(folio->swap);
+        swp_entry_t entry;
+        unsigned int i;
+
+        for (i = 0; i < folio_nr_pages(folio); i++) {
+                entry = page_swap_entry(folio_page(folio, i));
+                set_bit(swp_offset(entry), sis->zeromap);
+        }
+}
+
+static void swap_zeromap_folio_clear(struct folio *folio)
+{
+        struct swap_info_struct *sis = swp_swap_info(folio->swap);
+        swp_entry_t entry;
+        unsigned int i;
+
+        for (i = 0; i < folio_nr_pages(folio); i++) {
+                entry = page_swap_entry(folio_page(folio, i));
+                clear_bit(swp_offset(entry), sis->zeromap);
+        }
+}
+
+/*
+ * Return the index of the first subpage which is not zero-filled
+ * according to swap_info_struct->zeromap.
+ * If all pages are zero-filled according to zeromap, it will return
+ * folio_nr_pages(folio).
+ */
+static unsigned int swap_zeromap_folio_test(struct folio *folio)
+{
+        struct swap_info_struct *sis = swp_swap_info(folio->swap);
+        swp_entry_t entry;
+        unsigned int i;
+
+        for (i = 0; i < folio_nr_pages(folio); i++) {
+                entry = page_swap_entry(folio_page(folio, i));
+                if (!test_bit(swp_offset(entry), sis->zeromap))
+                        return i;
+        }
+        return i;
+}
+
 /*
  * We may have stale swap cache pages in memory: notice
  * them here and get rid of the unnecessary final write.
@@ -195,6 +277,13 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
                 folio_unlock(folio);
                 return ret;
         }
+
+        if (is_folio_zero_filled(folio)) {
+                swap_zeromap_folio_set(folio);
+                folio_unlock(folio);
+                return 0;
+        }
+        swap_zeromap_folio_clear(folio);
         if (zswap_store(folio)) {
                 folio_unlock(folio);
                 return 0;
@@ -424,6 +513,26 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
         mempool_free(sio, sio_pool);
 }
 
+static bool swap_read_folio_zeromap(struct folio *folio)
+{
+        unsigned int idx = swap_zeromap_folio_test(folio);
+
+        if (idx == 0)
+                return false;
+
+        /*
+         * Swapping in a large folio that is partially in the zeromap is not
+         * currently handled. Return true without marking the folio uptodate so
+         * that an IO error is emitted (e.g. do_swap_page() will sigbus).
+         */
+        if (WARN_ON_ONCE(idx < folio_nr_pages(folio)))
+                return true;
+
+        folio_zero_fill(folio);
+        folio_mark_uptodate(folio);
+        return true;
+}
+
 static void swap_read_folio_fs(struct folio *folio, struct swap_iocb **plug)
 {
         struct swap_info_struct *sis = swp_swap_info(folio->swap);
@@ -514,7 +623,9 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
         }
         delayacct_swapin_start();
 
-        if (zswap_load(folio)) {
+        if (swap_read_folio_zeromap(folio)) {
+                folio_unlock(folio);
+        } else if (zswap_load(folio)) {
                 folio_unlock(folio);
         } else if (data_race(sis->flags & SWP_FS_OPS)) {
                 swap_read_folio_fs(folio, plug);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 9c6d8e557c0f..0b8270359bcf 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -747,6 +747,14 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
         unsigned long begin = offset;
         unsigned long end = offset + nr_entries - 1;
         void (*swap_slot_free_notify)(struct block_device *, unsigned long);
+        unsigned int i;
+
+        /*
+         * Use atomic clear_bit operations only on zeromap instead of non-atomic
+         * bitmap_clear to prevent adjacent bits corruption due to simultaneous writes.
+         */
+        for (i = 0; i < nr_entries; i++)
+                clear_bit(offset + i, si->zeromap);
 
         if (offset < si->lowest_bit)
                 si->lowest_bit = offset;
@@ -2635,6 +2643,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
         free_percpu(p->cluster_next_cpu);
         p->cluster_next_cpu = NULL;
         vfree(swap_map);
+        bitmap_free(p->zeromap);
         kvfree(cluster_info);
         /* Destroy swap account information */
         swap_cgroup_swapoff(p->type);
@@ -3161,6 +3170,12 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
                         goto bad_swap_unlock_inode;
         }
 
+        p->zeromap = bitmap_zalloc(maxpages, GFP_KERNEL);
+        if (!p->zeromap) {
+                error = -ENOMEM;
+                goto bad_swap_unlock_inode;
+        }
+
         if (p->bdev && bdev_stable_writes(p->bdev))
                 p->flags |= SWP_STABLE_WRITES;
 
Approximately 10-20% of pages to be swapped out are zero pages [1].
Rather than reading/writing these pages to flash resulting
in increased I/O and flash wear, a bitmap can be used to mark these
pages as zero at write time, and the pages can be filled at
read time if the bit corresponding to the page is set.
With this patch, NVMe writes in Meta server fleet decreased
by almost 10% with conventional swap setup (zswap disabled).

[1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/

Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
 include/linux/swap.h |   1 +
 mm/page_io.c         | 113 ++++++++++++++++++++++++++++++++++++++++++-
 mm/swapfile.c        |  15 ++++++
 3 files changed, 128 insertions(+), 1 deletion(-)