Message ID | 20250402191145.2841864-1-nphamcs@gmail.com |
---|---|
State | New |
Series | zswap/zsmalloc: prefer the original page's node for compressed data |
On Wed, Apr 02, 2025 at 12:11:45PM -0700, Nhat Pham wrote:
> Currently, zsmalloc, zswap's backend memory allocator, does not enforce
> any policy for the allocation of memory for the compressed data,
> instead just adopting the memory policy of the task entering reclaim,
> or the default policy (prefer local node) if no such policy is
> specified. This can lead to several pathological behaviors in
> multi-node NUMA systems:
>
> 1. Systems with CXL-based memory tiering can encounter the following
>    inversion with zswap: the coldest pages demoted to the CXL tier
>    can return to the high tier when they are zswapped out, creating
>    memory pressure on the high tier.
>
> 2. Consider a direct reclaimer scanning nodes in order of allocation
>    preference. If it ventures into remote nodes, the memory it
>    compresses there should stay there. Trying to shift those contents
>    over to the reclaiming thread's preferred node further *increases*
>    its local pressure, provoking more spills. The remote node is
>    also the most likely to refault this data again. This undesirable
>    behavior was pointed out by Johannes Weiner in [1].
>
> 3. For zswap writeback, the zswap entries are organized in
>    node-specific LRUs, based on the node placement of the original
>    pages, allowing for targeted zswap writeback for specific nodes.
>
>    However, the compressed data of a zswap entry can be placed on a
>    different node from the LRU it is placed on. This means that reclaim
>    targeted at one node might not free up memory used for zswap entries
>    in that node, but instead reclaim memory in a different node.
>
> All of these issues will be resolved if the compressed data go to the
> same node as the original page. This patch encourages this behavior by
> having zswap pass the node of the original page to zsmalloc, and having
> zsmalloc prefer the specified node if we need to allocate new (zs)pages
> for the compressed data.
>
> Note that we are not strictly binding the allocation to the preferred
> node. We still allow the allocation to fall back to other nodes when
> the preferred node is full, or if we have zspages with slots available
> on a different node. This is OK, and still a strict improvement over
> the status quo:
>
> 1. On a system with demotion enabled, we will generally prefer
>    demotions over zswapping, and only zswap when pages have
>    already gone to the lowest tier. This patch should achieve the
>    desired effect for the most part.
>
> 2. If the preferred node is out of memory, letting the compressed data
>    go to other nodes can be better than the alternative (OOMs,
>    keeping cold memory unreclaimed, disk swapping, etc.).
>
> 3. If the allocation goes to a separate node because we have a zspage
>    with slots available, at least we're not creating extra immediate
>    memory pressure (since the space is already allocated).
>
> 4. While there can be mixing, we generally reclaim pages in
>    same-node batches, which encourages zspage grouping that is more
>    likely to go to the right node.
>
> 5. A strict binding would require partitioning zsmalloc by node, which
>    is more complicated, and more prone to regression, since it reduces
>    the storage density of zsmalloc. We need to evaluate the tradeoff
>    and benchmark carefully before adopting such an involved solution.
>
> This patch does not fix zram, leaving its memory allocation behavior
> unchanged. We leave this effort to future work.

zram's zs_malloc() calls all have page context. It seems a lot easier
to just fix the bug for them as well than to have two allocation APIs
and verbose commentary?

> -static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
> +static inline struct zpdesc *alloc_zpdesc(gfp_t gfp, const int *nid)
>  {
> -	struct page *page = alloc_page(gfp);
> +	struct page *page;
> +
> +	if (nid)
> +		page = alloc_pages_node(*nid, gfp, 0);
> +	else {
> +		/*
> +		 * XXX: this is the zram path. We should consider fixing zram to also
> +		 * use alloc_pages_node() and prefer the same node as the original page.
> +		 *
> +		 * Note that alloc_pages_node(NUMA_NO_NODE, gfp, 0) is not equivalent
> +		 * to alloc_page(gfp). The former will prefer the local/closest node,
> +		 * whereas the latter will try to follow the memory policy of the current
> +		 * process.
> +		 */
> +		page = alloc_page(gfp);
> +	}
>
>  	return page_zpdesc(page);
>  }
> @@ -461,10 +476,13 @@ static void zs_zpool_destroy(void *pool)
>  	zs_destroy_pool(pool);
>  }
>
> +static unsigned long zs_malloc_node(struct zs_pool *pool, size_t size,
> +				    gfp_t gfp, const int *nid);
> +
>  static int zs_zpool_malloc(void *pool, size_t size, gfp_t gfp,
> -			   unsigned long *handle)
> +			   unsigned long *handle, const int nid)
>  {
> -	*handle = zs_malloc(pool, size, gfp);
> +	*handle = zs_malloc_node(pool, size, gfp, &nid);
>
>  	if (IS_ERR_VALUE(*handle))
>  		return PTR_ERR((void *)*handle);
>  }
>
> -/**
> - * zs_malloc - Allocate block of given size from pool.
> - * @pool: pool to allocate from
> - * @size: size of block to allocate
> - * @gfp: gfp flags when allocating object
> - *
> - * On success, handle to the allocated object is returned,
> - * otherwise an ERR_PTR().
> - * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
> - */
> -unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
> +static unsigned long zs_malloc_node(struct zs_pool *pool, size_t size,
> +				    gfp_t gfp, const int *nid)
>  {
>  	unsigned long handle;
>  	struct size_class *class;
> @@ -1397,6 +1406,21 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>
>  	return handle;
>  }
> +
> +/**
> + * zs_malloc - Allocate block of given size from pool.
> + * @pool: pool to allocate from
> + * @size: size of block to allocate
> + * @gfp: gfp flags when allocating object
> + *
> + * On success, handle to the allocated object is returned,
> + * otherwise an ERR_PTR().
> + * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
> + */
> +unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
> +{
> +	return zs_malloc_node(pool, size, gfp, NULL);
> +}
>  EXPORT_SYMBOL_GPL(zs_malloc);
>
>  static void obj_free(int class_size, unsigned long obj)
On Wed, Apr 2, 2025 at 12:57 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Wed, Apr 02, 2025 at 12:11:45PM -0700, Nhat Pham wrote:
> > [...]
> >
> > This patch does not fix zram, leaving its memory allocation behavior
> > unchanged. We leave this effort to future work.
>
> zram's zs_malloc() calls all have page context. It seems a lot easier
> to just fix the bug for them as well than to have two allocation APIs
> and verbose commentary?

I think the recompress path doesn't quite have the context at the callsite:

static int recompress_slot(struct zram *zram, u32 index, struct page *page,
			   u64 *num_recomp_pages, u32 threshold, u32 prio,
			   u32 prio_max)

Note that the "page" argument here is allocated by zram internally,
and not the original page. We can get the original page's node by
asking zsmalloc to return it when it returns the compressed data, but
that's quite involved, and potentially requires further zsmalloc API
changes.
On Wed, Apr 02, 2025 at 01:09:29PM -0700, Nhat Pham wrote:
> On Wed, Apr 2, 2025 at 12:57 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > [...]
> >
> > zram's zs_malloc() calls all have page context. It seems a lot easier
> > to just fix the bug for them as well than to have two allocation APIs
> > and verbose commentary?
>
> I think the recompress path doesn't quite have the context at the callsite:
>
> static int recompress_slot(struct zram *zram, u32 index, struct page *page,
>			    u64 *num_recomp_pages, u32 threshold, u32 prio,
>			    u32 prio_max)
>
> Note that the "page" argument here is allocated by zram internally,
> and not the original page. We can get the original page's node by
> asking zsmalloc to return it when it returns the compressed data, but
> that's quite involved, and potentially requires further zsmalloc API
> changes.

Yeah, that path currently allocates the target page on the node of
whoever is writing to the "recompress" file.

I think it's fine to use page_to_nid() on that one. It's no worse than
the current behavior. Add an /* XXX */ to recompress_store() and,
should somebody care to make that path generally NUMA-aware, they can
do so without having to garbage-collect dependencies in zsmalloc code.
On Wed, Apr 2, 2025 at 1:24 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Wed, Apr 02, 2025 at 01:09:29PM -0700, Nhat Pham wrote:
> > [...]
> >
> > Note that the "page" argument here is allocated by zram internally,
> > and not the original page. [...]
>
> Yeah, that path currently allocates the target page on the node of
> whoever is writing to the "recompress" file.
>
> I think it's fine to use page_to_nid() on that one. It's no worse than
> the current behavior. Add an /* XXX */ to recompress_store() and,
> should somebody care to make that path generally NUMA-aware, they can
> do so without having to garbage-collect dependencies in zsmalloc code.

SGTM. I'll fix that.
diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index 52f30e526607..697525cb00bd 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -22,7 +22,7 @@ const char *zpool_get_type(struct zpool *pool);
 void zpool_destroy_pool(struct zpool *pool);
 
 int zpool_malloc(struct zpool *pool, size_t size, gfp_t gfp,
-		unsigned long *handle);
+		unsigned long *handle, const int nid);
 
 void zpool_free(struct zpool *pool, unsigned long handle);
 
@@ -64,7 +64,7 @@ struct zpool_driver {
 	void (*destroy)(void *pool);
 
 	int (*malloc)(void *pool, size_t size, gfp_t gfp,
-				unsigned long *handle);
+				unsigned long *handle, const int nid);
 	void (*free)(void *pool, unsigned long handle);
 
 	void *(*obj_read_begin)(void *pool, unsigned long handle,
diff --git a/mm/zpool.c b/mm/zpool.c
index 6d6d88930932..b99a7c03e735 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -226,20 +226,22 @@ const char *zpool_get_type(struct zpool *zpool)
  * @size: The amount of memory to allocate.
  * @gfp: The GFP flags to use when allocating memory.
  * @handle: Pointer to the handle to set
+ * @nid: The preferred node id.
  *
  * This allocates the requested amount of memory from the pool.
  * The gfp flags will be used when allocating memory, if the
  * implementation supports it. The provided @handle will be
- * set to the allocated object handle.
+ * set to the allocated object handle. The allocation will
+ * prefer the NUMA node specified by @nid.
  *
  * Implementations must guarantee this to be thread-safe.
  *
  * Returns: 0 on success, negative value on error.
  */
 int zpool_malloc(struct zpool *zpool, size_t size, gfp_t gfp,
-		unsigned long *handle)
+		unsigned long *handle, const int nid)
 {
-	return zpool->driver->malloc(zpool->pool, size, gfp, handle);
+	return zpool->driver->malloc(zpool->pool, size, gfp, handle, nid);
 }
 
 /**
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 961b270f023c..0b8a8c445fc2 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -243,9 +243,24 @@ static inline void zpdesc_dec_zone_page_state(struct zpdesc *zpdesc)
 	dec_zone_page_state(zpdesc_page(zpdesc), NR_ZSPAGES);
 }
 
-static inline struct zpdesc *alloc_zpdesc(gfp_t gfp)
+static inline struct zpdesc *alloc_zpdesc(gfp_t gfp, const int *nid)
 {
-	struct page *page = alloc_page(gfp);
+	struct page *page;
+
+	if (nid)
+		page = alloc_pages_node(*nid, gfp, 0);
+	else {
+		/*
+		 * XXX: this is the zram path. We should consider fixing zram to also
+		 * use alloc_pages_node() and prefer the same node as the original page.
+		 *
+		 * Note that alloc_pages_node(NUMA_NO_NODE, gfp, 0) is not equivalent
+		 * to alloc_page(gfp). The former will prefer the local/closest node,
+		 * whereas the latter will try to follow the memory policy of the current
+		 * process.
+		 */
+		page = alloc_page(gfp);
+	}
 
 	return page_zpdesc(page);
 }
@@ -461,10 +476,13 @@ static void zs_zpool_destroy(void *pool)
 	zs_destroy_pool(pool);
 }
 
+static unsigned long zs_malloc_node(struct zs_pool *pool, size_t size,
+				    gfp_t gfp, const int *nid);
+
 static int zs_zpool_malloc(void *pool, size_t size, gfp_t gfp,
-			unsigned long *handle)
+			unsigned long *handle, const int nid)
 {
-	*handle = zs_malloc(pool, size, gfp);
+	*handle = zs_malloc_node(pool, size, gfp, &nid);
 
 	if (IS_ERR_VALUE(*handle))
 		return PTR_ERR((void *)*handle);
@@ -1044,7 +1062,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
  */
 static struct zspage *alloc_zspage(struct zs_pool *pool,
 					struct size_class *class,
-					gfp_t gfp)
+					gfp_t gfp, const int *nid)
 {
 	int i;
 	struct zpdesc *zpdescs[ZS_MAX_PAGES_PER_ZSPAGE];
@@ -1061,7 +1079,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
 
-		zpdesc = alloc_zpdesc(gfp);
+		zpdesc = alloc_zpdesc(gfp, nid);
 		if (!zpdesc) {
 			while (--i >= 0) {
 				zpdesc_dec_zone_page_state(zpdescs[i]);
@@ -1331,17 +1349,8 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 }
 
-/**
- * zs_malloc - Allocate block of given size from pool.
- * @pool: pool to allocate from
- * @size: size of block to allocate
- * @gfp: gfp flags when allocating object
- *
- * On success, handle to the allocated object is returned,
- * otherwise an ERR_PTR().
- * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
- */
-unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
+static unsigned long zs_malloc_node(struct zs_pool *pool, size_t size,
+				    gfp_t gfp, const int *nid)
 {
 	unsigned long handle;
 	struct size_class *class;
@@ -1376,7 +1385,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 
 	spin_unlock(&class->lock);
 
-	zspage = alloc_zspage(pool, class, gfp);
+	zspage = alloc_zspage(pool, class, gfp, nid);
 	if (!zspage) {
 		cache_free_handle(pool, handle);
 		return (unsigned long)ERR_PTR(-ENOMEM);
@@ -1397,6 +1406,21 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 
 	return handle;
 }
+
+/**
+ * zs_malloc - Allocate block of given size from pool.
+ * @pool: pool to allocate from
+ * @size: size of block to allocate
+ * @gfp: gfp flags when allocating object
+ *
+ * On success, handle to the allocated object is returned,
+ * otherwise an ERR_PTR().
+ * Allocation requests with size > ZS_MAX_ALLOC_SIZE will fail.
+ */
+unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
+{
+	return zs_malloc_node(pool, size, gfp, NULL);
+}
 EXPORT_SYMBOL_GPL(zs_malloc);
 
 static void obj_free(int class_size, unsigned long obj)
diff --git a/mm/zswap.c b/mm/zswap.c
index 204fb59da33c..455e9425c5f5 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -981,7 +981,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	zpool = pool->zpool;
 	gfp = GFP_NOWAIT | __GFP_NORETRY | __GFP_HIGHMEM | __GFP_MOVABLE;
 
-	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle);
+	alloc_ret = zpool_malloc(zpool, dlen, gfp, &handle, page_to_nid(page));
 	if (alloc_ret)
 		goto unlock;
Currently, zsmalloc, zswap's backend memory allocator, does not enforce
any policy for the allocation of memory for the compressed data,
instead just adopting the memory policy of the task entering reclaim,
or the default policy (prefer local node) if no such policy is
specified. This can lead to several pathological behaviors in
multi-node NUMA systems:

1. Systems with CXL-based memory tiering can encounter the following
   inversion with zswap: the coldest pages demoted to the CXL tier
   can return to the high tier when they are zswapped out, creating
   memory pressure on the high tier.

2. Consider a direct reclaimer scanning nodes in order of allocation
   preference. If it ventures into remote nodes, the memory it
   compresses there should stay there. Trying to shift those contents
   over to the reclaiming thread's preferred node further *increases*
   its local pressure, provoking more spills. The remote node is
   also the most likely to refault this data again. This undesirable
   behavior was pointed out by Johannes Weiner in [1].

3. For zswap writeback, the zswap entries are organized in
   node-specific LRUs, based on the node placement of the original
   pages, allowing for targeted zswap writeback for specific nodes.

   However, the compressed data of a zswap entry can be placed on a
   different node from the LRU it is placed on. This means that reclaim
   targeted at one node might not free up memory used for zswap entries
   in that node, but instead reclaim memory in a different node.

All of these issues will be resolved if the compressed data go to the
same node as the original page. This patch encourages this behavior by
having zswap pass the node of the original page to zsmalloc, and having
zsmalloc prefer the specified node if we need to allocate new (zs)pages
for the compressed data.

Note that we are not strictly binding the allocation to the preferred
node. We still allow the allocation to fall back to other nodes when
the preferred node is full, or if we have zspages with slots available
on a different node. This is OK, and still a strict improvement over
the status quo:

1. On a system with demotion enabled, we will generally prefer
   demotions over zswapping, and only zswap when pages have
   already gone to the lowest tier. This patch should achieve the
   desired effect for the most part.

2. If the preferred node is out of memory, letting the compressed data
   go to other nodes can be better than the alternative (OOMs,
   keeping cold memory unreclaimed, disk swapping, etc.).

3. If the allocation goes to a separate node because we have a zspage
   with slots available, at least we're not creating extra immediate
   memory pressure (since the space is already allocated).

4. While there can be mixing, we generally reclaim pages in
   same-node batches, which encourages zspage grouping that is more
   likely to go to the right node.

5. A strict binding would require partitioning zsmalloc by node, which
   is more complicated, and more prone to regression, since it reduces
   the storage density of zsmalloc. We need to evaluate the tradeoff
   and benchmark carefully before adopting such an involved solution.

This patch does not fix zram, leaving its memory allocation behavior
unchanged. We leave this effort to future work.

[1]: https://lore.kernel.org/linux-mm/20250331165306.GC2110528@cmpxchg.org/

Suggested-by: Gregory Price <gourry@gourry.net>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/zpool.h |  4 +--
 mm/zpool.c            |  8 +++---
 mm/zsmalloc.c         | 60 ++++++++++++++++++++++++++++++-------------
 mm/zswap.c            |  2 +-
 4 files changed, 50 insertions(+), 24 deletions(-)

base-commit: 8c65b3b82efb3b2f0d1b6e3b3e73c6f0fd367fb5