
[v3,00/20] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool

Message ID 20240708063344.1096626-1-alexs@kernel.org (mailing list archive)

Message

alexs@kernel.org July 8, 2024, 6:33 a.m. UTC
From: Alex Shi (Tencent) <alexs@kernel.org>

According to Matthew's plan, the page descriptor will eventually be
replaced by an 8-byte mem_desc as the end goal:
https://lore.kernel.org/lkml/YvV1KTyzZ+Jrtj9x@casper.infradead.org/

This series introduces 'zpdesc', a memory descriptor for zsmalloc that
replaces direct use of the page descriptor. For now zpdesc still overlays
struct page, but it is a step toward that end goal.

The struct is named zpdesc rather than zsdesc because zswap currently has
three zpool backends: zbud, z3fold and zsmalloc (z3fold may be removed
soon), and the descriptor can easily be extended to the other zswap zpools
if needed.

All zswap zpools use only order-0 pages, since they are typically used
under memory pressure. Converting the code to folio-based helpers is
therefore preferable to the page-based ones, as it avoids repeated
compound_head() checks.
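
A minimal sketch of that idea, not necessarily the exact helpers in this
series (the names zpdesc_folio()/zpdesc_lock() are illustrative): because
zpool pages are always order-0, a zpdesc always describes a head page, so
the folio can be obtained with a plain cast and the lock helpers become
thin wrappers around the folio API:

    /* Illustrative sketch only; not the exact series code. */
    static inline struct folio *zpdesc_folio(struct zpdesc *zpdesc)
    {
        /* order-0 only, so the descriptor is always a head page */
        return (struct folio *)zpdesc;
    }

    static inline void zpdesc_lock(struct zpdesc *zpdesc)
    {
        folio_lock(zpdesc_folio(zpdesc));
    }

    static inline void zpdesc_unlock(struct zpdesc *zpdesc)
    {
        folio_unlock(zpdesc_folio(zpdesc));
    }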

For now, all zpools use a handful of struct page members: page->flags for
PG_private/PG_locked, the list_head lru, and page->mapping for page
migration.
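
As a rough sketch (the field names below are assumptions for illustration,
not necessarily those in mm/zpdesc.h), zpdesc only needs to mirror the
struct page members that zsmalloc actually touches, while staying
layout-compatible with struct page:

    /* Illustrative sketch only; field names are assumed, not exact. */
    struct zpdesc {
        unsigned long flags;            /* PG_private, PG_locked, ... */
        struct list_head lru;           /* kept for struct page layout */
        struct address_space *mapping;  /* movable ops for migration */
        union {
            unsigned long next;         /* links the pages of a zspage */
            unsigned long handle;       /* first object handle (huge class) */
        };
        struct zspage *zspage;          /* owning zspage */
        unsigned int first_obj_offset;
        atomic_t _refcount;
    };

In practice such an overlay would also assert layout compatibility with
struct page (as the folio and slab descriptors do with offsetof() checks),
so that casting between the two stays valid.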

This patchset does not increase the descriptor size nor introduce any
functional changes, and it saves about 122 Kbytes of zsmalloc.o size.

Thanks
Alex

---
v2->v3:
- Fix LKP-reported build issue
- Update the usage of struct zpdesc fields.
- Rebase onto latest mm-unstable commit 2073cda629a4

v1->v2: 
- Take Yosry's and Yoo's suggestions to add more members to zpdesc,
- Rebase on latest mm-unstable commit 31334cf98dbd
---

Alex Shi (Tencent) (9):
  mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
  mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage
  mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
  mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it
  mm/zsmalloc: convert SetZsPageMovable and remove unused funcs
  mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc
  mm/zsmalloc: introduce __zpdesc_clear_movable
  mm/zsmalloc: introduce __zpdesc_clear_zsmalloc
  mm/zsmalloc: introduce __zpdesc_set_zsmalloc()

Hyeonggon Yoo (11):
  mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc
  mm/zsmalloc: add and use pfn/zpdesc seeking funcs
  mm/zsmalloc: convert obj_malloc() to use zpdesc
  mm/zsmalloc: convert obj_allocated() and related helpers to use zpdesc
  mm/zsmalloc: convert init_zspage() to use zpdesc
  mm/zsmalloc: convert obj_to_page() and zs_free() to use zpdesc
  mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for
    zs_page_migrate
  mm/zsmalloc: convert __free_zspage() to use zpdesc
  mm/zsmalloc: convert location_to_obj() to take zpdesc
  mm/zsmalloc: convert migrate_zspage() to use zpdesc
  mm/zsmalloc: convert get_zspage() to take zpdesc

 mm/zpdesc.h   | 146 ++++++++++++++++
 mm/zsmalloc.c | 460 +++++++++++++++++++++++++++-----------------------
 2 files changed, 398 insertions(+), 208 deletions(-)
 create mode 100644 mm/zpdesc.h

Comments

Alex Shi July 15, 2024, 1:33 a.m. UTC | #1
On 7/8/24 2:33 PM, alexs@kernel.org wrote:
> From: Alex Shi (Tencent) <alexs@kernel.org>
> 
> According to Matthew's plan, the page descriptor will eventually be
> replaced by an 8-byte mem_desc as the end goal:
> https://lore.kernel.org/lkml/YvV1KTyzZ+Jrtj9x@casper.infradead.org/
> 
> This series introduces 'zpdesc', a memory descriptor for zsmalloc that
> replaces direct use of the page descriptor. For now zpdesc still overlays
> struct page, but it is a step toward that end goal.
> 
> The struct is named zpdesc rather than zsdesc because zswap currently has
> three zpool backends: zbud, z3fold and zsmalloc (z3fold may be removed
> soon), and the descriptor can easily be extended to the other zswap zpools
> if needed.
> 
> All zswap zpools use only order-0 pages, since they are typically used
> under memory pressure. Converting the code to folio-based helpers is
> therefore preferable to the page-based ones, as it avoids repeated
> compound_head() checks.
> 
> For now, all zpools use a handful of struct page members: page->flags for
> PG_private/PG_locked, the list_head lru, and page->mapping for page
> migration.
> 
> This patchset does not increase the descriptor size nor introduce any
> functional changes, and it saves about 122 Kbytes of zsmalloc.o size.
> 
> Thanks
> Alex
> 

Any comments for this patchset?

Thanks
Alex

> ---
> v2->v3:
> - Fix LKP-reported build issue
> - Update the usage of struct zpdesc fields.
> - Rebase onto latest mm-unstable commit 2073cda629a4
> 
> v1->v2: 
> - Take Yosry's and Yoo's suggestions to add more members to zpdesc,
> - Rebase on latest mm-unstable commit 31334cf98dbd
> ---
> 
> Alex Shi (Tencent) (9):
>   mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool
>   mm/zsmalloc: use zpdesc in trylock_zspage/lock_zspage
>   mm/zsmalloc: convert create_page_chain() and its users to use zpdesc
>   mm/zsmalloc: rename reset_page to reset_zpdesc and use zpdesc in it
>   mm/zsmalloc: convert SetZsPageMovable and remove unused funcs
>   mm/zsmalloc: convert get/set_first_obj_offset() to take zpdesc
>   mm/zsmalloc: introduce __zpdesc_clear_movable
>   mm/zsmalloc: introduce __zpdesc_clear_zsmalloc
>   mm/zsmalloc: introduce __zpdesc_set_zsmalloc()
> 
> Hyeonggon Yoo (11):
>   mm/zsmalloc: convert __zs_map_object/__zs_unmap_object to use zpdesc
>   mm/zsmalloc: add and use pfn/zpdesc seeking funcs
>   mm/zsmalloc: convert obj_malloc() to use zpdesc
>   mm/zsmalloc: convert obj_allocated() and related helpers to use zpdesc
>   mm/zsmalloc: convert init_zspage() to use zpdesc
>   mm/zsmalloc: convert obj_to_page() and zs_free() to use zpdesc
>   mm/zsmalloc: add zpdesc_is_isolated/zpdesc_zone helper for
>     zs_page_migrate
>   mm/zsmalloc: convert __free_zspage() to use zpdesc
>   mm/zsmalloc: convert location_to_obj() to take zpdesc
>   mm/zsmalloc: convert migrate_zspage() to use zpdesc
>   mm/zsmalloc: convert get_zspage() to take zpdesc
> 
>  mm/zpdesc.h   | 146 ++++++++++++++++
>  mm/zsmalloc.c | 460 +++++++++++++++++++++++++++-----------------------
>  2 files changed, 398 insertions(+), 208 deletions(-)
>  create mode 100644 mm/zpdesc.h
>
David Hildenbrand July 15, 2024, 4:21 p.m. UTC | #2
On 15.07.24 03:33, Alex Shi wrote:
> 
> 
> On 7/8/24 2:33 PM, alexs@kernel.org wrote:
>> From: Alex Shi (Tencent) <alexs@kernel.org>
>>
>> According to Matthew's plan, the page descriptor will eventually be
>> replaced by an 8-byte mem_desc as the end goal:
>> https://lore.kernel.org/lkml/YvV1KTyzZ+Jrtj9x@casper.infradead.org/
>>
>> This series introduces 'zpdesc', a memory descriptor for zsmalloc that
>> replaces direct use of the page descriptor. For now zpdesc still overlays
>> struct page, but it is a step toward that end goal.
>>
>> The struct is named zpdesc rather than zsdesc because zswap currently has
>> three zpool backends: zbud, z3fold and zsmalloc (z3fold may be removed
>> soon), and the descriptor can easily be extended to the other zswap zpools
>> if needed.
>>
>> All zswap zpools use only order-0 pages, since they are typically used
>> under memory pressure. Converting the code to folio-based helpers is
>> therefore preferable to the page-based ones, as it avoids repeated
>> compound_head() checks.
>>
>> For now, all zpools use a handful of struct page members: page->flags for
>> PG_private/PG_locked, the list_head lru, and page->mapping for page
>> migration.
>>
>> This patchset does not increase the descriptor size nor introduce any
>> functional changes, and it saves about 122 Kbytes of zsmalloc.o size.
>>
>> Thanks
>> Alex
>>
> 
> Any comments for this patchset?

Planning on taking a peek soon (busy traveling), but I'm hoping that 
people more familiar with the code can provide feedback.