
[RFC,1/2] page_pool: fix timing for checking and disabling napi_local

Message ID 20240912124514.2329991-2-linyunsheng@huawei.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series: fix two bugs related to page_pool

Checks

Context Check Description
netdev/series_format warning Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 16 this patch: 16
netdev/build_tools success No tools touched, skip
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/build_clang success Errors and warnings before: 16 this patch: 16
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 16 this patch: 16
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 65 lines checked
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 3 this patch: 3
netdev/source_inline success Was 0 now: 0

Commit Message

Yunsheng Lin Sept. 12, 2024, 12:45 p.m. UTC
A page_pool page may be freed by skb_defer_free_flush() in
softirq context, which can cause concurrent access to the
pool->alloc cache due to the time window shown below: CPU0
and CPU1 may access the pool->alloc cache concurrently in
page_pool_empty_alloc_cache_once() and
page_pool_recycle_in_cache():

          CPU 0                           CPU1
    page_pool_destroy()          skb_defer_free_flush()
           .                               .
           .                   page_pool_put_unrefed_page()
           .                               .
           .               allow_direct = page_pool_napi_local()
           .                               .
page_pool_disable_direct_recycling()       .
           .                               .
page_pool_empty_alloc_cache_once() page_pool_recycle_in_cache()
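
The window exists because page_pool_napi_local() samples NAPI/cpuid
state that page_pool_disable_direct_recycling() clears afterwards, so
the freeing side can keep acting on a stale decision. A simplified
sketch of the race (illustrative only, not the literal kernel code):

    /* CPU1, softirq: skb_defer_free_flush() -> page_pool_put_unrefed_page() */
    allow_direct = page_pool_napi_local(pool);  /* pool->p.napi still set: true */

    /* CPU0: page_pool_destroy() */
    page_pool_disable_direct_recycling(pool);   /* clears pool->cpuid / pool->p.napi */
    page_pool_empty_alloc_cache_once(pool);     /* drains pool->alloc */

    /* CPU1, continuing with the stale allow_direct decision */
    page_pool_recycle_in_cache(netmem, pool);   /* concurrent write to pool->alloc */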

Use the RCU mechanism to avoid the above concurrent access problem.
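
A minimal sketch of the pattern this change applies (simplified; the
actual hunks are below):

    /* Freeing side: the whole recycle attempt runs under rcu_read_lock(),
     * so the decision made by page_pool_napi_local() stays valid until
     * rcu_read_unlock().
     */
    rcu_read_lock();
    if (page_pool_napi_local(pool))
            page_pool_recycle_in_cache(netmem, pool);  /* touches pool->alloc */
    rcu_read_unlock();

    /* Destroying side: after disabling direct recycling, wait for all
     * in-flight readers above to finish before draining pool->alloc.
     */
    page_pool_disable_direct_recycling(pool);
    synchronize_rcu();
    page_pool_empty_alloc_cache_once(pool);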

Note: the above was found while reviewing how to fix the
problem in [1].

1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
CC: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 net/core/page_pool.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

Patch

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a813d30d2135..18645907cec2 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -818,6 +818,8 @@  static bool page_pool_napi_local(const struct page_pool *pool)
 void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
 				  unsigned int dma_sync_size, bool allow_direct)
 {
+	rcu_read_lock();
+
 	if (!allow_direct)
 		allow_direct = page_pool_napi_local(pool);
 
@@ -828,6 +830,8 @@  void page_pool_put_unrefed_netmem(struct page_pool *pool, netmem_ref netmem,
 		recycle_stat_inc(pool, ring_full);
 		page_pool_return_page(pool, netmem);
 	}
+
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(page_pool_put_unrefed_netmem);
 
@@ -861,6 +865,7 @@  void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	bool allow_direct;
 	bool in_softirq;
 
+	rcu_read_lock();
 	allow_direct = page_pool_napi_local(pool);
 
 	for (i = 0; i < count; i++) {
@@ -876,8 +881,10 @@  void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 			data[bulk_len++] = (__force void *)netmem;
 	}
 
-	if (!bulk_len)
+	if (!bulk_len) {
+		rcu_read_unlock();
 		return;
+	}
 
 	/* Bulk producer into ptr_ring page_pool cache */
 	in_softirq = page_pool_producer_lock(pool);
@@ -892,14 +899,18 @@  void page_pool_put_page_bulk(struct page_pool *pool, void **data,
 	page_pool_producer_unlock(pool, in_softirq);
 
 	/* Hopefully all pages was return into ptr_ring */
-	if (likely(i == bulk_len))
+	if (likely(i == bulk_len)) {
+		rcu_read_unlock();
 		return;
+	}
 
 	/* ptr_ring cache full, free remaining pages outside producer lock
 	 * since put_page() with refcnt == 1 can be an expensive operation
 	 */
 	for (; i < bulk_len; i++)
 		page_pool_return_page(pool, (__force netmem_ref)data[i]);
+
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(page_pool_put_page_bulk);
 
@@ -1121,6 +1132,12 @@  void page_pool_destroy(struct page_pool *pool)
 		return;
 
 	page_pool_disable_direct_recycling(pool);
+
+	/* Wait for the freeing side to see that direct recycling has been
+	 * disabled, to avoid concurrent access to the pool->alloc cache.
+	 */
+	synchronize_rcu();
+
 	page_pool_free_frag(pool);
 
 	if (!page_pool_release(pool))