Message ID: 20250328-page-pool-track-dma-v5-2-55002af683ad@redhat.com (mailing list archive)
State: New
Series: Fix late DMA unmap crash for page pool
From: Toke Høiland-Jørgensen <toke@redhat.com>
Date: Fri, 28 Mar 2025 13:19:09 +0100

> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
> they are released from the pool, to avoid the overhead of re-mapping the
> pages every time they are used. This causes resource leaks and/or
> crashes when there are pages still outstanding while the device is torn
> down, because page_pool will attempt an unmap through a non-existent DMA
> device on the subsequent page return.

[...]

> @@ -173,10 +212,10 @@ struct page_pool {
>  	int cpuid;
>  	u32 pages_state_hold_cnt;
>
> -	bool has_init_callback:1;	/* slow::init_callback is set */
> +	bool dma_sync;			/* Perform DMA sync for device */

Have you seen my comment under v3 (sorry but I missed that there was v4
already)? Can't we just test the bit atomically?

>  	bool dma_map:1;			/* Perform DMA mapping */
> -	bool dma_sync:1;		/* Perform DMA sync for device */
>  	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
> +	bool has_init_callback:1;	/* slow::init_callback is set */

Thanks,
Olek
On 2025/3/31 18:35, Alexander Lobakin wrote:
> From: Toke Høiland-Jørgensen <toke@redhat.com>
> Date: Fri, 28 Mar 2025 13:19:09 +0100
>
>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>> they are released from the pool, to avoid the overhead of re-mapping the
>> pages every time they are used. This causes resource leaks and/or
>> crashes when there are pages still outstanding while the device is torn
>> down, because page_pool will attempt an unmap through a non-existent DMA
>> device on the subsequent page return.
>
> [...]
>
>> @@ -173,10 +212,10 @@ struct page_pool {
>>  	int cpuid;
>>  	u32 pages_state_hold_cnt;
>> -	bool has_init_callback:1;	/* slow::init_callback is set */
>> +	bool dma_sync;			/* Perform DMA sync for device */
>
> Have you seen my comment under v3 (sorry but I missed that there was v4
> already)? Can't we just test the bit atomically?

Perhaps the test_bit family of functions can test the bit atomically.
There may be better options, but test_bit should be able to handle that
task.

Zhu Yanjun

>
>>  	bool dma_map:1;			/* Perform DMA mapping */
>> -	bool dma_sync:1;		/* Perform DMA sync for device */
>>  	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
>> +	bool has_init_callback:1;	/* slow::init_callback is set */
>
> Thanks,
> Olek
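A minimal sketch of what this test_bit() suggestion would imply is shown below. It assumes the flags are moved out of the existing bool bitfields into an unsigned long flags word, since the atomic bitops operate on whole words; the bit numbers, struct, and helper names are invented for illustration and are not part of the patch.

/* Hypothetical sketch, not from the patch: test_bit()/clear_bit() need an
 * unsigned long to operate on, so the bool ...:1 bitfields would have to
 * be consolidated into a flags word first. All names below are made up.
 */
#include <linux/bitops.h>

enum pp_flag_bits {
	PP_BIT_DMA_MAP,
	PP_BIT_DMA_SYNC,
	PP_BIT_DMA_SYNC_FOR_CPU,
	PP_BIT_HAS_INIT_CALLBACK,
};

struct pp_flags {
	unsigned long bits;		/* replaces the bool ...:1 fields */
};

static inline bool pp_dma_sync_enabled(const struct pp_flags *f)
{
	return test_bit(PP_BIT_DMA_SYNC, &f->bits);	/* atomic single-bit read */
}

static inline void pp_disable_dma_sync(struct pp_flags *f)
{
	clear_bit(PP_BIT_DMA_SYNC, &f->bits);		/* atomic RMW on the word */
}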
On 3/28/25 1:19 PM, Toke Høiland-Jørgensen wrote:
> @@ -463,13 +462,21 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
>  			      netmem_ref netmem,
>  			      u32 dma_sync_size)
>  {
> -	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
> -		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
> +	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev)) {

Lacking a READ_ONCE() here, I think it's within the compiler's rights to
do some unexpected optimization between this read and the next one.
Adding it will also make the double read more explicit.

Thanks,

Paolo
On 3/31/25 6:35 PM, Alexander Lobakin wrote:
> From: Toke Høiland-Jørgensen <toke@redhat.com>
> Date: Fri, 28 Mar 2025 13:19:09 +0100
>
>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>> they are released from the pool, to avoid the overhead of re-mapping the
>> pages every time they are used. This causes resource leaks and/or
>> crashes when there are pages still outstanding while the device is torn
>> down, because page_pool will attempt an unmap through a non-existent DMA
>> device on the subsequent page return.
>
> [...]
>
>> @@ -173,10 +212,10 @@ struct page_pool {
>>  	int cpuid;
>>  	u32 pages_state_hold_cnt;
>>
>> -	bool has_init_callback:1;	/* slow::init_callback is set */
>> +	bool dma_sync;			/* Perform DMA sync for device */
>
> Have you seen my comment under v3 (sorry but I missed that there was v4
> already)? Can't we just test the bit atomically?

My understanding is that to make such an operation really atomic, we will
need to access all the other bits within the same bitfield with atomic
bit ops, leading to significant code churn (and possibly some overhead).

I think that using a full bool field is a better option.

Thanks,

Paolo
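To make the trade-off concrete, here is a small illustration (not from the patch, type names invented) of the hazard Paolo is describing: adjacent one-bit bitfields share a single storage word, so a plain write to one flag is a read-modify-write of the whole word, whereas a full bool gets its own storage unit.

/* Illustration only. Both 1-bit flags live in the same storage word, so
 * the two plain writes below are read-modify-writes of that word and can
 * race with each other, potentially losing one of the updates.
 */
struct bitfield_flags {
	bool a:1;
	bool b:1;
};

void set_a(struct bitfield_flags *f) { f->a = true; }	/* RMW of the shared word */
void set_b(struct bitfield_flags *f) { f->b = true; }	/* can race with set_a() */

/* Giving one flag a full bool (as the patch does for dma_sync) gives it
 * its own storage unit, so plain or READ_ONCE()/WRITE_ONCE() accesses to
 * it no longer touch the remaining bitfield bits.
 */
struct mixed_flags {
	bool a;			/* independent loads/stores */
	bool b:1;
};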
Paolo Abeni <pabeni@redhat.com> writes:

> On 3/28/25 1:19 PM, Toke Høiland-Jørgensen wrote:
>> @@ -463,13 +462,21 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
>>  			      netmem_ref netmem,
>>  			      u32 dma_sync_size)
>>  {
>> -	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
>> -		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
>> +	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev)) {
>
> Lacking a READ_ONCE() here, I think it's within the compiler's rights to
> do some unexpected optimization between this read and the next one.
> Adding it will also make the double read more explicit.

Right, good point; will respin!

-Toke
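For reference, a sketch of what the respin could look like, under the assumption that it simply wraps the outer check in READ_ONCE() as Paolo suggests; the actual next revision may differ.

static __always_inline void
page_pool_dma_sync_for_device(const struct page_pool *pool,
			      netmem_ref netmem, u32 dma_sync_size)
{
	/* READ_ONCE() on the lockless pre-check keeps the compiler from
	 * fusing or reordering it with the re-check done under the RCU
	 * read lock below.
	 */
	if (READ_ONCE(pool->dma_sync) && dma_dev_need_sync(pool->p.dev)) {
		rcu_read_lock();
		/* re-check under rcu_read_lock() to sync with page_pool_scrub() */
		if (READ_ONCE(pool->dma_sync))
			__page_pool_dma_sync_for_device(pool, netmem,
							dma_sync_size);
		rcu_read_unlock();
	}
}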
On 2025/4/1 1:27, Zhu Yanjun wrote:
> On 2025/3/31 18:35, Alexander Lobakin wrote:
>> From: Toke Høiland-Jørgensen <toke@redhat.com>
>> Date: Fri, 28 Mar 2025 13:19:09 +0100
>>
>>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>>> they are released from the pool, to avoid the overhead of re-mapping the
>>> pages every time they are used. This causes resource leaks and/or
>>> crashes when there are pages still outstanding while the device is torn
>>> down, because page_pool will attempt an unmap through a non-existent DMA
>>> device on the subsequent page return.
>>
>> [...]
>>
>>> @@ -173,10 +212,10 @@ struct page_pool {
>>>  	int cpuid;
>>>  	u32 pages_state_hold_cnt;
>>> -	bool has_init_callback:1;	/* slow::init_callback is set */
>>> +	bool dma_sync;			/* Perform DMA sync for device */
>>
>> Have you seen my comment under v3 (sorry but I missed that there was v4
>> already)? Can't we just test the bit atomically?
>
> Perhaps the test_bit family of functions can test the bit atomically.
> There may be better options, but test_bit should be able to handle that
> task.

There are two readings of dma_sync in this patch: the first is not under
the RCU read lock and is done without READ_ONCE(); the second is under
the RCU read lock and is done with READ_ONCE().

The first one seems to be an optimization to avoid taking the RCU read
lock, which might need READ_ONCE() to keep KCSAN happy, if we do care
about keeping KCSAN happy.

The second one does not seem to need the atomicity of READ_ONCE(), as it
is always under the RCU read lock (implicit or explicit), and there is an
RCU sync after the clearing of that bit.
On 4/1/25 09:56, Paolo Abeni wrote:
> On 3/31/25 6:35 PM, Alexander Lobakin wrote:
>> From: Toke Høiland-Jørgensen <toke@redhat.com>
>> Date: Fri, 28 Mar 2025 13:19:09 +0100
>>
>>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>>> they are released from the pool, to avoid the overhead of re-mapping the
>>> pages every time they are used. This causes resource leaks and/or
>>> crashes when there are pages still outstanding while the device is torn
>>> down, because page_pool will attempt an unmap through a non-existent DMA
>>> device on the subsequent page return.
>>
>> [...]
>>
>>> @@ -173,10 +212,10 @@ struct page_pool {
>>>  	int cpuid;
>>>  	u32 pages_state_hold_cnt;
>>>
>>> -	bool has_init_callback:1;	/* slow::init_callback is set */
>>> +	bool dma_sync;			/* Perform DMA sync for device */
>>
>> Have you seen my comment under v3 (sorry but I missed that there was v4
>> already)? Can't we just test the bit atomically?
>
> My understanding is that to make such an operation really atomic, we will
> need to access all the other bits within the same bitfield with atomic
> bit ops, leading to significant code churn (and possibly some overhead).
>
> I think that using a full bool field is a better option.

I agree, it's better not to overcomplicate a fix, and we can always
return to it later.
From: Yunsheng Lin <linyunsheng@huawei.com>
Date: Tue, 1 Apr 2025 17:24:43 +0800

> On 2025/4/1 1:27, Zhu Yanjun wrote:
>> On 2025/3/31 18:35, Alexander Lobakin wrote:
>>> From: Toke Høiland-Jørgensen <toke@redhat.com>
>>> Date: Fri, 28 Mar 2025 13:19:09 +0100
>>>
>>>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>>>> they are released from the pool, to avoid the overhead of re-mapping the
>>>> pages every time they are used. This causes resource leaks and/or
>>>> crashes when there are pages still outstanding while the device is torn
>>>> down, because page_pool will attempt an unmap through a non-existent DMA
>>>> device on the subsequent page return.
>>>
>>> [...]
>>>
>>>> @@ -173,10 +212,10 @@ struct page_pool {
>>>>  	int cpuid;
>>>>  	u32 pages_state_hold_cnt;
>>>> -	bool has_init_callback:1;	/* slow::init_callback is set */
>>>> +	bool dma_sync;			/* Perform DMA sync for device */
>>>
>>> Have you seen my comment under v3 (sorry but I missed that there was v4
>>> already)? Can't we just test the bit atomically?
>>
>> Perhaps the test_bit family of functions can test the bit atomically.
>> There may be better options, but test_bit should be able to handle that
>> task.
>
> There are two readings of dma_sync in this patch: the first is not under
> the RCU read lock and is done without READ_ONCE(); the second is under
> the RCU read lock and is done with READ_ONCE().
>
> The first one seems to be an optimization to avoid taking the RCU read
> lock, which might need READ_ONCE() to keep KCSAN happy, if we do care
> about keeping KCSAN happy.
>
> The second one does not seem to need the atomicity of READ_ONCE(), as it
> is always under the RCU read lock (implicit or explicit), and there is
> an RCU sync after the clearing of that bit.

IOW, are you saying this change is not needed at all?

Thanks,
Olek
On 01/04/2025 11.51, Pavel Begunkov wrote:
> On 4/1/25 09:56, Paolo Abeni wrote:
>> On 3/31/25 6:35 PM, Alexander Lobakin wrote:
>>> From: Toke Høiland-Jørgensen <toke@redhat.com>
>>> Date: Fri, 28 Mar 2025 13:19:09 +0100
>>>
>>>> When enabling DMA mapping in page_pool, pages are kept DMA mapped until
>>>> they are released from the pool, to avoid the overhead of re-mapping the
>>>> pages every time they are used. This causes resource leaks and/or
>>>> crashes when there are pages still outstanding while the device is torn
>>>> down, because page_pool will attempt an unmap through a non-existent DMA
>>>> device on the subsequent page return.
>>>
>>> [...]
>>>
>>>> @@ -173,10 +212,10 @@ struct page_pool {
>>>>  	int cpuid;
>>>>  	u32 pages_state_hold_cnt;
>>>> -	bool has_init_callback:1;	/* slow::init_callback is set */
>>>> +	bool dma_sync;			/* Perform DMA sync for device */
>>>
>>> Have you seen my comment under v3 (sorry but I missed that there was v4
>>> already)? Can't we just test the bit atomically?
>>
>> My understanding is that to make such an operation really atomic, we will
>> need to access all the other bits within the same bitfield with atomic
>> bit ops, leading to significant code churn (and possibly some overhead).
>>
>> I think that using a full bool field is a better option.
>
> I agree, it's better not to overcomplicate a fix, and we can always
> return to it later.

I also agree. There is no need for atomic bit operations here; let's not
complicate the code just because we can. I prefer keeping the code
readable.

--Jesper
diff --git a/include/linux/poison.h b/include/linux/poison.h
index 331a9a996fa8746626afa63ea462b85ca3e5938b..5351efd710d5e21cc341f7bb533b1aeea4a0808a 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -70,6 +70,10 @@
 #define KEY_DESTROY		0xbd
 
 /********** net/core/page_pool.c **********/
+/*
+ * page_pool uses additional free bits within this value to store data, see the
+ * definition of PP_DMA_INDEX_MASK in include/net/page_pool/types.h
+ */
 #define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)
 
 /********** net/core/skbuff.c **********/
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index df0d3c1608929605224feb26173135ff37951ef8..5835d359ecd0ac75dd737736926914ef7dd60646 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -6,6 +6,7 @@
 #include <linux/dma-direction.h>
 #include <linux/ptr_ring.h>
 #include <linux/types.h>
+#include <linux/xarray.h>
 #include <net/netmem.h>
 
 #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
@@ -54,13 +55,51 @@ struct pp_alloc_cache {
 	netmem_ref cache[PP_ALLOC_CACHE_SIZE];
 };
 
+/*
+ * DMA mapping IDs
+ *
+ * When DMA-mapping a page, we allocate an ID (from an xarray) and stash this in
+ * the upper bits of page->pp_magic. We always want to be able to unambiguously
+ * identify page pool pages (using page_pool_page_is_pp()). Non-PP pages can
+ * have arbitrary kernel pointers stored in the same field as pp_magic (since it
+ * overlaps with page->lru.next), so we must ensure that we cannot mistake a
+ * valid kernel pointer with any of the values we write into this field.
+ *
+ * On architectures that set POISON_POINTER_DELTA, this is already ensured,
+ * since this value becomes part of PP_SIGNATURE; meaning we can just use the
+ * space between the PP_SIGNATURE value (without POISON_POINTER_DELTA), and the
+ * lowest bits of POISON_POINTER_DELTA. On arches where POISON_POINTER_DELTA is
+ * 0, we make sure that we leave the two topmost bits empty, as that guarantees
+ * we won't mistake a valid kernel pointer for a value we set, regardless of the
+ * VMSPLIT setting.
+ *
+ * Altogether, this means that the number of bits available is constrained by
+ * the size of an unsigned long (at the upper end, subtracting two bits per the
+ * above), and the definition of PP_SIGNATURE (with or without
+ * POISON_POINTER_DELTA).
+ */
+#define PP_DMA_INDEX_SHIFT (1 + __fls(PP_SIGNATURE - POISON_POINTER_DELTA))
+#if POISON_POINTER_DELTA > 0
+/* PP_SIGNATURE includes POISON_POINTER_DELTA, so limit the size of the DMA
+ * index to not overlap with that if set
+ */
+#define PP_DMA_INDEX_BITS MIN(32, __ffs(POISON_POINTER_DELTA) - PP_DMA_INDEX_SHIFT)
+#else
+/* Always leave out the topmost two; see above. */
+#define PP_DMA_INDEX_BITS MIN(32, BITS_PER_LONG - PP_DMA_INDEX_SHIFT - 2)
+#endif
+
+#define PP_DMA_INDEX_MASK GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
+				  PP_DMA_INDEX_SHIFT)
+#define PP_DMA_INDEX_LIMIT XA_LIMIT(1, BIT(PP_DMA_INDEX_BITS) - 1)
+
 /* Mask used for checking in page_pool_page_is_pp() below. page->pp_magic is
  * OR'ed with PP_SIGNATURE after the allocation in order to preserve bit 0 for
- * the head page of compound page and bit 1 for pfmemalloc page.
- * page_is_pfmemalloc() is checked in __page_pool_put_page() to avoid recycling
- * the pfmemalloc page.
+ * the head page of compound page and bit 1 for pfmemalloc page, as well as the
+ * bits used for the DMA index. page_is_pfmemalloc() is checked in
+ * __page_pool_put_page() to avoid recycling the pfmemalloc page.
  */
-#define PP_MAGIC_MASK ~0x3UL
+#define PP_MAGIC_MASK ~(PP_DMA_INDEX_MASK | 0x3UL)
 
 /**
  * struct page_pool_params - page pool parameters
@@ -173,10 +212,10 @@ struct page_pool {
 	int cpuid;
 	u32 pages_state_hold_cnt;
 
-	bool has_init_callback:1;	/* slow::init_callback is set */
+	bool dma_sync;			/* Perform DMA sync for device */
 	bool dma_map:1;			/* Perform DMA mapping */
-	bool dma_sync:1;		/* Perform DMA sync for device */
 	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
+	bool has_init_callback:1;	/* slow::init_callback is set */
 #ifdef CONFIG_PAGE_POOL_STATS
 	bool system:1;			/* This is a global percpu pool */
 #endif
@@ -229,6 +268,8 @@ struct page_pool {
 	void *mp_priv;
 	const struct memory_provider_ops *mp_ops;
 
+	struct xarray dma_mapped;
+
 #ifdef CONFIG_PAGE_POOL_STATS
 	/* recycle stats are per-cpu to avoid locking */
 	struct page_pool_recycle_stats __percpu *recycle_stats;
diff --git a/net/core/netmem_priv.h b/net/core/netmem_priv.h
index f33162fd281c23e109273ba09950c5d0a2829bc9..cd95394399b40c3604934ba7898eeeeacb8aee99 100644
--- a/net/core/netmem_priv.h
+++ b/net/core/netmem_priv.h
@@ -5,7 +5,7 @@
 static inline unsigned long netmem_get_pp_magic(netmem_ref netmem)
 {
-	return __netmem_clear_lsb(netmem)->pp_magic;
+	return __netmem_clear_lsb(netmem)->pp_magic & ~PP_DMA_INDEX_MASK;
 }
 
 static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
@@ -15,6 +15,8 @@ static inline void netmem_or_pp_magic(netmem_ref netmem, unsigned long pp_magic)
 
 static inline void netmem_clear_pp_magic(netmem_ref netmem)
 {
+	WARN_ON_ONCE(__netmem_clear_lsb(netmem)->pp_magic & PP_DMA_INDEX_MASK);
+
 	__netmem_clear_lsb(netmem)->pp_magic = 0;
 }
 
@@ -33,4 +35,28 @@ static inline void netmem_set_dma_addr(netmem_ref netmem,
 {
 	__netmem_clear_lsb(netmem)->dma_addr = dma_addr;
 }
+
+static inline unsigned long netmem_get_dma_index(netmem_ref netmem)
+{
+	unsigned long magic;
+
+	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
+		return 0;
+
+	magic = __netmem_clear_lsb(netmem)->pp_magic;
+
+	return (magic & PP_DMA_INDEX_MASK) >> PP_DMA_INDEX_SHIFT;
+}
+
+static inline void netmem_set_dma_index(netmem_ref netmem,
+					unsigned long id)
+{
+	unsigned long magic;
+
+	if (WARN_ON_ONCE(netmem_is_net_iov(netmem)))
+		return;
+
+	magic = netmem_get_pp_magic(netmem) | (id << PP_DMA_INDEX_SHIFT);
+	__netmem_clear_lsb(netmem)->pp_magic = magic;
+}
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7745ad924ae2d801580a6760eba9393e1cf67b01..79cc90ea1d3b5cfb9e7b200ae6b36bc40835386f 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -227,6 +227,8 @@ static int page_pool_init(struct page_pool *pool,
 			return -EINVAL;
 
 		pool->dma_map = true;
+
+		xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC1);
 	}
 
 	if (pool->slow.flags & PP_FLAG_DMA_SYNC_DEV) {
@@ -276,9 +278,6 @@ static int page_pool_init(struct page_pool *pool,
 	/* Driver calling page_pool_create() also call page_pool_destroy() */
 	refcount_set(&pool->user_cnt, 1);
 
-	if (pool->dma_map)
-		get_device(pool->p.dev);
-
 	if (pool->slow.flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) {
 		netdev_assert_locked(pool->slow.netdev);
 		rxq = __netif_get_rx_queue(pool->slow.netdev,
@@ -322,7 +321,7 @@ static void page_pool_uninit(struct page_pool *pool)
 	ptr_ring_cleanup(&pool->ring, NULL);
 
 	if (pool->dma_map)
-		put_device(pool->p.dev);
+		xa_destroy(&pool->dma_mapped);
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	if (!pool->system)
@@ -463,13 +462,21 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
 			      netmem_ref netmem,
 			      u32 dma_sync_size)
 {
-	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
-		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
+	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev)) {
+		rcu_read_lock();
+		/* re-check under rcu_read_lock() to sync with page_pool_scrub() */
+		if (READ_ONCE(pool->dma_sync))
+			__page_pool_dma_sync_for_device(pool, netmem,
+							dma_sync_size);
+		rcu_read_unlock();
+	}
 }
 
-static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
+static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t gfp)
 {
 	dma_addr_t dma;
+	int err;
+	u32 id;
 
 	/* Setup DMA mapping: use 'struct page' area for storing DMA-addr
 	 * since dma_addr_t can be either 32 or 64 bits and does not always fit
@@ -483,15 +490,28 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
 	if (dma_mapping_error(pool->p.dev, dma))
 		return false;
 
-	if (page_pool_set_dma_addr_netmem(netmem, dma))
+	if (in_softirq())
+		err = xa_alloc(&pool->dma_mapped, &id, netmem_to_page(netmem),
+			       PP_DMA_INDEX_LIMIT, gfp);
+	else
+		err = xa_alloc_bh(&pool->dma_mapped, &id, netmem_to_page(netmem),
+				  PP_DMA_INDEX_LIMIT, gfp);
+	if (err) {
+		WARN_ONCE(1, "couldn't track DMA mapping, please report to netdev@");
 		goto unmap_failed;
+	}
 
+	if (page_pool_set_dma_addr_netmem(netmem, dma)) {
+		WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
+		goto unmap_failed;
+	}
+
+	netmem_set_dma_index(netmem, id);
 	page_pool_dma_sync_for_device(pool, netmem, pool->p.max_len);
 
 	return true;
 
 unmap_failed:
-	WARN_ONCE(1, "unexpected DMA address, please report to netdev@");
 	dma_unmap_page_attrs(pool->p.dev, dma,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
@@ -508,7 +528,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 	if (unlikely(!page))
 		return NULL;
 
-	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page)))) {
+	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page_to_netmem(page), gfp))) {
 		put_page(page);
 		return NULL;
 	}
@@ -554,7 +574,7 @@ static noinline netmem_ref __page_pool_alloc_pages_slow(struct page_pool *pool,
 	 */
 	for (i = 0; i < nr_pages; i++) {
 		netmem = pool->alloc.cache[i];
-		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem))) {
+		if (dma_map && unlikely(!page_pool_dma_map(pool, netmem, gfp))) {
 			put_page(netmem_to_page(netmem));
 			continue;
 		}
@@ -656,6 +676,8 @@ void page_pool_clear_pp_info(netmem_ref netmem)
 static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
 							  netmem_ref netmem)
 {
+	struct page *old, *page = netmem_to_page(netmem);
+	unsigned long id;
 	dma_addr_t dma;
 
 	if (!pool->dma_map)
@@ -664,6 +686,17 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
 		 */
 		return;
 
+	id = netmem_get_dma_index(netmem);
+	if (!id)
+		return;
+
+	if (in_softirq())
+		old = xa_cmpxchg(&pool->dma_mapped, id, page, NULL, 0);
+	else
+		old = xa_cmpxchg_bh(&pool->dma_mapped, id, page, NULL, 0);
+	if (old != page)
+		return;
+
 	dma = page_pool_get_dma_addr_netmem(netmem);
 
 	/* When page is unmapped, it cannot be returned to our pool */
@@ -671,6 +704,7 @@ static __always_inline void __page_pool_release_page_dma(struct page_pool *pool,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
 			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
 	page_pool_set_dma_addr_netmem(netmem, 0);
+	netmem_set_dma_index(netmem, 0);
 }
 
 /* Disconnects a page (from a page_pool).  API users can have a need
@@ -1080,8 +1114,32 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
 
 static void page_pool_scrub(struct page_pool *pool)
 {
+	unsigned long id;
+	void *ptr;
+
 	page_pool_empty_alloc_cache_once(pool);
-	pool->destroy_cnt++;
+	if (!pool->destroy_cnt++ && pool->dma_map) {
+		if (pool->dma_sync) {
+			/* paired with READ_ONCE in
+			 * page_pool_dma_sync_for_device() and
+			 * __page_pool_dma_sync_for_cpu()
+			 */
+			WRITE_ONCE(pool->dma_sync, false);
+
+			/* Make sure all concurrent returns that may see the old
+			 * value of dma_sync (and thus perform a sync) have
+			 * finished before doing the unmapping below. Skip the
+			 * wait if the device doesn't actually need syncing, or
+			 * if there are no outstanding mapped pages.
+			 */
+			if (dma_dev_need_sync(pool->p.dev) &&
+			    !xa_empty(&pool->dma_mapped))
+				synchronize_net();
+		}
+
+		xa_for_each(&pool->dma_mapped, id, ptr)
+			__page_pool_release_page_dma(pool, page_to_netmem(ptr));
+	}
 
 	/* No more consumers should exist, but producers could still
 	 * be in-flight.
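As a standalone demonstration of the PP_DMA_INDEX_* arithmetic in the types.h hunk above, the snippet below re-implements the kernel helpers with compiler builtins so it builds in userspace, and assumes x86_64's POISON_POINTER_DELTA value as the example input; both simplifications are assumptions made for illustration, not part of the patch.

/* Userspace demo of the DMA-index bit layout. With PP_SIGNATURE = 0x40 +
 * POISON_POINTER_DELTA and the x86_64 poison value, the index occupies
 * bits 7..38 of page->pp_magic: shift=7, bits=32, mask=0x7fffffff80.
 */
#include <stdio.h>

#define BITS_PER_LONG		64
#define POISON_POINTER_DELTA	0xdead000000000000UL	/* x86_64 example value */
#define PP_SIGNATURE		(0x40 + POISON_POINTER_DELTA)

/* Simplified stand-ins for the kernel helpers used by the patch */
#define __fls(x)	(BITS_PER_LONG - 1 - __builtin_clzl(x))
#define __ffs(x)	__builtin_ctzl(x)
#define MIN(a, b)	((a) < (b) ? (a) : (b))
#define GENMASK(h, l)	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

#define PP_DMA_INDEX_SHIFT (1 + __fls(PP_SIGNATURE - POISON_POINTER_DELTA))
#define PP_DMA_INDEX_BITS  MIN(32, __ffs(POISON_POINTER_DELTA) - PP_DMA_INDEX_SHIFT)
#define PP_DMA_INDEX_MASK  GENMASK(PP_DMA_INDEX_BITS + PP_DMA_INDEX_SHIFT - 1, \
				   PP_DMA_INDEX_SHIFT)

int main(void)
{
	/* Prints: shift=7 bits=32 mask=0x7fffffff80 */
	printf("shift=%d bits=%d mask=%#lx\n",
	       (int)PP_DMA_INDEX_SHIFT, (int)PP_DMA_INDEX_BITS,
	       PP_DMA_INDEX_MASK);
	return 0;
}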