From patchwork Wed Dec 11 17:26:38 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13903995
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E.
Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 01/12] page_pool: allow mixing PPs within one bulk Date: Wed, 11 Dec 2024 18:26:38 +0100 Message-ID: <20241211172649.761483-2-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org The main reason for this change was to allow mixing pages from different &page_pools within one &xdp_buff/&xdp_frame. Why not? With stuff like devmem and io_uring zerocopy Rx, it's required to have separate PPs for header buffers and payload buffers. Adjust xdp_return_frame_bulk() and page_pool_put_netmem_bulk(), so that they won't be tied to a particular pool. Let the latter create a separate bulk of pages which's PP is different from the first netmem of the bulk and process it after the main loop. This greatly optimizes xdp_return_frame_bulk(): no more hashtable lookups and forced flushes on PP mismatch. Also make xdp_flush_frame_bulk() inline, as it's just one if + function call + one u32 read, not worth extending the call ladder. Co-developed-by: Toke Høiland-Jørgensen # iterative Signed-off-by: Toke Høiland-Jørgensen Suggested-by: Jakub Kicinski # while (count) Signed-off-by: Alexander Lobakin --- include/net/page_pool/types.h | 6 +- include/net/xdp.h | 16 +++-- net/core/page_pool.c | 109 ++++++++++++++++++++++------------ net/core/xdp.c | 29 +-------- 4 files changed, 87 insertions(+), 73 deletions(-) diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 1ea16b0e9c79..05a864031271 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -259,8 +259,7 @@ void page_pool_disable_direct_recycling(struct page_pool *pool); void page_pool_destroy(struct page_pool *pool); void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *), const struct xdp_mem_info *mem); -void page_pool_put_netmem_bulk(struct page_pool *pool, netmem_ref *data, - u32 count); +void page_pool_put_netmem_bulk(netmem_ref *data, u32 count); #else static inline void page_pool_destroy(struct page_pool *pool) { @@ -272,8 +271,7 @@ static inline void page_pool_use_xdp_mem(struct page_pool *pool, { } -static inline void page_pool_put_netmem_bulk(struct page_pool *pool, - netmem_ref *data, u32 count) +static inline void page_pool_put_netmem_bulk(netmem_ref *data, u32 count) { } #endif diff --git a/include/net/xdp.h b/include/net/xdp.h index f4020b29122f..9e7eb8223513 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -11,6 +11,8 @@ #include #include /* skb_shared_info */ +#include + /** * DOC: XDP RX-queue information * @@ -193,14 +195,12 @@ xdp_frame_is_frag_pfmemalloc(const struct xdp_frame *frame) #define XDP_BULK_QUEUE_SIZE 16 struct xdp_frame_bulk { int count; - void *xa; netmem_ref q[XDP_BULK_QUEUE_SIZE]; }; static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq) { - /* bq->count will be zero'ed when bq->xa gets updated */ - bq->xa = NULL; + bq->count = 0; } static inline struct skb_shared_info * @@ -317,10 +317,18 @@ void __xdp_return(void 
*data, struct xdp_mem_info *mem, bool napi_direct, void xdp_return_frame(struct xdp_frame *xdpf); void xdp_return_frame_rx_napi(struct xdp_frame *xdpf); void xdp_return_buff(struct xdp_buff *xdp); -void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq); void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq); +static inline void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq) +{ + if (unlikely(!bq->count)) + return; + + page_pool_put_netmem_bulk(bq->q, bq->count); + bq->count = 0; +} + static __always_inline unsigned int xdp_get_frame_len(const struct xdp_frame *xdpf) { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index 4c85b77cfdac..10cef95f12e3 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -839,9 +839,41 @@ void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page, } EXPORT_SYMBOL(page_pool_put_unrefed_page); +static void page_pool_recycle_ring_bulk(struct page_pool *pool, + netmem_ref *bulk, + u32 bulk_len) +{ + bool in_softirq; + u32 i; + + /* Bulk produce into ptr_ring page_pool cache */ + in_softirq = page_pool_producer_lock(pool); + + for (i = 0; i < bulk_len; i++) { + if (__ptr_ring_produce(&pool->ring, (__force void *)bulk[i])) { + /* ring full */ + recycle_stat_inc(pool, ring_full); + break; + } + } + + page_pool_producer_unlock(pool, in_softirq); + recycle_stat_add(pool, ring, i); + + /* Hopefully all pages were returned into ptr_ring */ + if (likely(i == bulk_len)) + return; + + /* + * ptr_ring cache is full, free remaining pages outside producer lock + * since put_page() with refcnt == 1 can be an expensive operation. + */ + for (; i < bulk_len; i++) + page_pool_return_page(pool, bulk[i]); +} + /** * page_pool_put_netmem_bulk() - release references on multiple netmems - * @pool: pool from which pages were allocated * @data: array holding netmem references * @count: number of entries in @data * @@ -854,52 +886,55 @@ EXPORT_SYMBOL(page_pool_put_unrefed_page); * Please note the caller must not use data area after running * page_pool_put_netmem_bulk(), as this function overwrites it. 
*/ -void page_pool_put_netmem_bulk(struct page_pool *pool, netmem_ref *data, - u32 count) +void page_pool_put_netmem_bulk(netmem_ref *data, u32 count) { - int i, bulk_len = 0; - bool allow_direct; - bool in_softirq; - - allow_direct = page_pool_napi_local(pool); + u32 bulk_len = 0; - for (i = 0; i < count; i++) { + for (u32 i = 0; i < count; i++) { netmem_ref netmem = netmem_compound_head(data[i]); - /* It is not the last user for the page frag case */ - if (!page_pool_is_last_ref(netmem)) - continue; - - netmem = __page_pool_put_page(pool, netmem, -1, allow_direct); - /* Approved for bulk recycling in ptr_ring cache */ - if (netmem) + if (page_pool_is_last_ref(netmem)) data[bulk_len++] = netmem; } - if (!bulk_len) - return; - - /* Bulk producer into ptr_ring page_pool cache */ - in_softirq = page_pool_producer_lock(pool); - for (i = 0; i < bulk_len; i++) { - if (__ptr_ring_produce(&pool->ring, (__force void *)data[i])) { - /* ring full */ - recycle_stat_inc(pool, ring_full); - break; + count = bulk_len; + while (count) { + netmem_ref bulk[XDP_BULK_QUEUE_SIZE]; + struct page_pool *pool = NULL; + bool allow_direct; + u32 foreign = 0; + + bulk_len = 0; + + for (u32 i = 0; i < count; i++) { + struct page_pool *netmem_pp; + netmem_ref netmem = data[i]; + + netmem_pp = netmem_get_pp(netmem); + if (unlikely(!pool)) { + pool = netmem_pp; + allow_direct = page_pool_napi_local(pool); + } else if (netmem_pp != pool) { + /* + * If the netmem belongs to a different + * page_pool, save it for another round. + */ + data[foreign++] = netmem; + continue; + } + + netmem = __page_pool_put_page(pool, netmem, -1, + allow_direct); + /* Approved for bulk recycling in ptr_ring cache */ + if (netmem) + bulk[bulk_len++] = netmem; } - } - recycle_stat_add(pool, ring, i); - page_pool_producer_unlock(pool, in_softirq); - /* Hopefully all pages was return into ptr_ring */ - if (likely(i == bulk_len)) - return; + if (bulk_len) + page_pool_recycle_ring_bulk(pool, bulk, bulk_len); - /* ptr_ring cache full, free remaining pages outside producer lock - * since put_page() with refcnt == 1 can be an expensive operation - */ - for (; i < bulk_len; i++) - page_pool_return_page(pool, data[i]); + count = foreign; + } } EXPORT_SYMBOL(page_pool_put_netmem_bulk); diff --git a/net/core/xdp.c b/net/core/xdp.c index 938ad15c9857..56127e8ec85f 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -511,46 +511,19 @@ EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); * xdp_frame_bulk is usually stored/allocated on the function * call-stack to avoid locking penalties. 
*/ -void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq) -{ - struct xdp_mem_allocator *xa = bq->xa; - - if (unlikely(!xa || !bq->count)) - return; - - page_pool_put_netmem_bulk(xa->page_pool, bq->q, bq->count); - /* bq->xa is not cleared to save lookup, if mem.id same in next bulk */ - bq->count = 0; -} -EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk); /* Must be called with rcu_read_lock held */ void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq) { - struct xdp_mem_info *mem = &xdpf->mem; - struct xdp_mem_allocator *xa; - - if (mem->type != MEM_TYPE_PAGE_POOL) { + if (xdpf->mem.type != MEM_TYPE_PAGE_POOL) { xdp_return_frame(xdpf); return; } - xa = bq->xa; - if (unlikely(!xa)) { - xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params); - bq->count = 0; - bq->xa = xa; - } - if (bq->count == XDP_BULK_QUEUE_SIZE) xdp_flush_frame_bulk(bq); - if (unlikely(mem->id != xa->mem.id)) { - xdp_flush_frame_bulk(bq); - bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params); - } - if (unlikely(xdp_frame_has_frags(xdpf))) { struct skb_shared_info *sinfo; int i; From patchwork Wed Dec 11 17:26:39 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13903996 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 172111FF61D; Wed, 11 Dec 2024 17:28:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938128; cv=none; b=i+X7WmbttVARAbI/nIqxAHGs0JPLfBDHadmQPGKqmsEv+nlmmARlTsqHruKdpY0enS76bkpd1OQoREwKxqG7Y4icfM8uJmA6wXKPG8jeoKxro9XBGIWapDQoaIooM6hNcbLQ985mOCmplsYgDU/nJ0Wy8lKT5+1/M3afod3xiZM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938128; c=relaxed/simple; bh=sNYSchVDPrcoOeYeVbMs01y1bKIAIisYYh9FOHDl+nE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=ukKCm928a/Yaku52RQFWh94oD4+5au/QZKB02js7eXl9O/NSHYxf7YQCeGJ8yT1zvSf9zNY14jJ0rw+AjCoPODA4JnirsUhfZx9kurwrU5QxGxGf+5v9QE5Wo7xpyZEXM+XS9ku7wV2B4ZhQ4I1T7YH0E7yvtKco/AmqMbShwcI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Ce7Kq6ry; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Ce7Kq6ry" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938126; x=1765474126; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sNYSchVDPrcoOeYeVbMs01y1bKIAIisYYh9FOHDl+nE=; b=Ce7Kq6ryuJ65PpNxMMlgQkjTLXuDS2ZN1Qt/yb/ZvqWoyU5+QZEMW/uT 4dbqyEAkA2TLFfs3pJb2DTkToqbnXrdN4oBQsFy3nazbtWRx7r/u/iFQc tu5prMc+0KvppN/UdwLnrHdEN8iAu6GU8QYYXpeh1h3YKDAxVH4vY1ml6 T52Rh2GaabgYGl+Op2xNNWSDjL785zdKN3tE/4BrVUyUM4JbiG0pm1uY3 pd+diVY6bAg8GYlONYJiYU35gHsw72pawm9T6DG1aeIi8cTKlJHFUtuSz 
AMMrUwHQcPm6z1f1meFWmn8dqn9v0Fsy4ZF/7wGhtfF2LhRq5XbYNu8fc A==; X-CSE-ConnectionGUID: oSPznYTuQEG6y/plcg4/CA== X-CSE-MsgGUID: gAyDYVLXRmSwY5EgaFpLuw== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859478" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859478" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:28:46 -0800 X-CSE-ConnectionGUID: Y4wSIdTITBSlUW6lVCZjKw== X-CSE-MsgGUID: t+KeKOUmT2S+dNrsQnbVJg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122107" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:28:40 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 02/12] xdp: get rid of xdp_frame::mem.id Date: Wed, 11 Dec 2024 18:26:39 +0100 Message-ID: <20241211172649.761483-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Initially, xdp_frame::mem.id was used to search for the corresponding &page_pool to return the page correctly. However, after that struct page was extended to have a direct pointer to its PP (netmem has it as well), further keeping of this field makes no sense. xdp_return_frame_bulk() still used it to do a lookup, and this leftover is now removed. Remove xdp_frame::mem and replace it with ::mem_type, as only memory type still matters and we need to know it to be able to free the frame correctly. As a cute side effect, we can now make every scalar field in &xdp_frame of 4 byte width, speeding up accesses to them. Reviewed-by: Toke Høiland-Jørgensen Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 14 +++++----- .../net/ethernet/freescale/dpaa/dpaa_eth.c | 2 +- drivers/net/veth.c | 4 +-- kernel/bpf/cpumap.c | 2 +- net/bpf/test_run.c | 4 +-- net/core/filter.c | 12 ++++---- net/core/xdp.c | 28 +++++++++---------- 7 files changed, 33 insertions(+), 33 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 9e7eb8223513..1c260869a353 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -169,13 +169,13 @@ xdp_get_buff_len(const struct xdp_buff *xdp) struct xdp_frame { void *data; - u16 len; - u16 headroom; + u32 len; + u32 headroom; u32 metasize; /* uses lower 8-bits */ /* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time, - * while mem info is valid on remote CPU. + * while mem_type is valid on remote CPU. 
*/ - struct xdp_mem_info mem; + enum xdp_mem_type mem_type:32; struct net_device *dev_rx; /* used by cpumap */ u32 frame_sz; u32 flags; /* supported values defined in xdp_buff_flags */ @@ -306,13 +306,13 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp) if (unlikely(xdp_update_frame_from_buff(xdp, xdp_frame) < 0)) return NULL; - /* rxq only valid until napi_schedule ends, convert to xdp_mem_info */ - xdp_frame->mem = xdp->rxq->mem; + /* rxq only valid until napi_schedule ends, convert to xdp_mem_type */ + xdp_frame->mem_type = xdp->rxq->mem.type; return xdp_frame; } -void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, +void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, struct xdp_buff *xdp); void xdp_return_frame(struct xdp_frame *xdpf); void xdp_return_frame_rx_napi(struct xdp_frame *xdpf); diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index bf5baef5c3e0..4948b4906584 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2281,7 +2281,7 @@ static int dpaa_a050385_wa_xdpf(struct dpaa_priv *priv, new_xdpf->len = xdpf->len; new_xdpf->headroom = priv->tx_headroom; new_xdpf->frame_sz = DPAA_BP_RAW_SIZE; - new_xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; + new_xdpf->mem_type = MEM_TYPE_PAGE_ORDER0; /* Release the initial buffer */ xdp_return_frame_rx_napi(xdpf); diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 07ebb800edf1..01251868a9c2 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -634,7 +634,7 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq, break; case XDP_TX: orig_frame = *frame; - xdp->rxq->mem = frame->mem; + xdp->rxq->mem.type = frame->mem_type; if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) { trace_xdp_exception(rq->dev, xdp_prog, act); frame = &orig_frame; @@ -646,7 +646,7 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq, goto xdp_xmit; case XDP_REDIRECT: orig_frame = *frame; - xdp->rxq->mem = frame->mem; + xdp->rxq->mem.type = frame->mem_type; if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) { frame = &orig_frame; stats->rx_drops++; diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index a2f46785ac3b..774accbd4a22 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -190,7 +190,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu, int err; rxq.dev = xdpf->dev_rx; - rxq.mem = xdpf->mem; + rxq.mem.type = xdpf->mem_type; /* TODO: report queue_index to xdp_rxq_info */ xdp_convert_frame_to_buff(xdpf, &xdp); diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 501ec4249fed..9ae2a7f1738b 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -153,7 +153,7 @@ static void xdp_test_run_init_page(netmem_ref netmem, void *arg) new_ctx->data = new_ctx->data_meta + meta_len; xdp_update_frame_from_buff(new_ctx, frm); - frm->mem = new_ctx->rxq->mem; + frm->mem_type = new_ctx->rxq->mem.type; memcpy(&head->orig_ctx, new_ctx, sizeof(head->orig_ctx)); } @@ -246,7 +246,7 @@ static void reset_ctx(struct xdp_page_head *head) head->ctx.data_meta = head->orig_ctx.data_meta; head->ctx.data_end = head->orig_ctx.data_end; xdp_update_frame_from_buff(&head->ctx, head->frame); - head->frame->mem = head->orig_ctx.rxq->mem; + head->frame->mem_type = head->orig_ctx.rxq->mem.type; } static int xdp_recv_frames(struct xdp_frame **frames, int nframes, diff --git a/net/core/filter.c b/net/core/filter.c index fac245065b0a..6c036708634b 100644 --- 
a/net/core/filter.c +++ b/net/core/filter.c @@ -4119,13 +4119,13 @@ static int bpf_xdp_frags_increase_tail(struct xdp_buff *xdp, int offset) } static void bpf_xdp_shrink_data_zc(struct xdp_buff *xdp, int shrink, - struct xdp_mem_info *mem_info, bool release) + enum xdp_mem_type mem_type, bool release) { struct xdp_buff *zc_frag = xsk_buff_get_tail(xdp); if (release) { xsk_buff_del_tail(zc_frag); - __xdp_return(NULL, mem_info, false, zc_frag); + __xdp_return(NULL, mem_type, false, zc_frag); } else { zc_frag->data_end -= shrink; } @@ -4134,18 +4134,18 @@ static void bpf_xdp_shrink_data_zc(struct xdp_buff *xdp, int shrink, static bool bpf_xdp_shrink_data(struct xdp_buff *xdp, skb_frag_t *frag, int shrink) { - struct xdp_mem_info *mem_info = &xdp->rxq->mem; + enum xdp_mem_type mem_type = xdp->rxq->mem.type; bool release = skb_frag_size(frag) == shrink; - if (mem_info->type == MEM_TYPE_XSK_BUFF_POOL) { - bpf_xdp_shrink_data_zc(xdp, shrink, mem_info, release); + if (mem_type == MEM_TYPE_XSK_BUFF_POOL) { + bpf_xdp_shrink_data_zc(xdp, shrink, mem_type, release); goto out; } if (release) { struct page *page = skb_frag_page(frag); - __xdp_return(page_address(page), mem_info, false, NULL); + __xdp_return(page_address(page), mem_type, false, NULL); } out: diff --git a/net/core/xdp.c b/net/core/xdp.c index 56127e8ec85f..d367571c5838 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -430,12 +430,12 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_attach_page_pool); * is used for those calls sites. Thus, allowing for faster recycling * of xdp_frames/pages in those cases. */ -void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, +void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, struct xdp_buff *xdp) { struct page *page; - switch (mem->type) { + switch (mem_type) { case MEM_TYPE_PAGE_POOL: page = virt_to_head_page(data); if (napi_direct && xdp_return_frame_no_direct()) @@ -458,7 +458,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, break; default: /* Not possible, checked in xdp_rxq_info_reg_mem_model() */ - WARN(1, "Incorrect XDP memory type (%d) usage", mem->type); + WARN(1, "Incorrect XDP memory type (%d) usage", mem_type); break; } } @@ -475,10 +475,10 @@ void xdp_return_frame(struct xdp_frame *xdpf) for (i = 0; i < sinfo->nr_frags; i++) { struct page *page = skb_frag_page(&sinfo->frags[i]); - __xdp_return(page_address(page), &xdpf->mem, false, NULL); + __xdp_return(page_address(page), xdpf->mem_type, false, NULL); } out: - __xdp_return(xdpf->data, &xdpf->mem, false, NULL); + __xdp_return(xdpf->data, xdpf->mem_type, false, NULL); } EXPORT_SYMBOL_GPL(xdp_return_frame); @@ -494,10 +494,10 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf) for (i = 0; i < sinfo->nr_frags; i++) { struct page *page = skb_frag_page(&sinfo->frags[i]); - __xdp_return(page_address(page), &xdpf->mem, true, NULL); + __xdp_return(page_address(page), xdpf->mem_type, true, NULL); } out: - __xdp_return(xdpf->data, &xdpf->mem, true, NULL); + __xdp_return(xdpf->data, xdpf->mem_type, true, NULL); } EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); @@ -516,7 +516,7 @@ EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq) { - if (xdpf->mem.type != MEM_TYPE_PAGE_POOL) { + if (xdpf->mem_type != MEM_TYPE_PAGE_POOL) { xdp_return_frame(xdpf); return; } @@ -553,10 +553,11 @@ void xdp_return_buff(struct xdp_buff *xdp) for (i = 0; i < sinfo->nr_frags; i++) { struct page *page = skb_frag_page(&sinfo->frags[i]); - 
__xdp_return(page_address(page), &xdp->rxq->mem, true, xdp); + __xdp_return(page_address(page), xdp->rxq->mem.type, true, + xdp); } out: - __xdp_return(xdp->data, &xdp->rxq->mem, true, xdp); + __xdp_return(xdp->data, xdp->rxq->mem.type, true, xdp); } EXPORT_SYMBOL_GPL(xdp_return_buff); @@ -602,7 +603,7 @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp) xdpf->headroom = 0; xdpf->metasize = metasize; xdpf->frame_sz = PAGE_SIZE; - xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; + xdpf->mem_type = MEM_TYPE_PAGE_ORDER0; xsk_buff_free(xdp); return xdpf; @@ -672,7 +673,7 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, * - RX ring dev queue index (skb_record_rx_queue) */ - if (xdpf->mem.type == MEM_TYPE_PAGE_POOL) + if (xdpf->mem_type == MEM_TYPE_PAGE_POOL) skb_mark_for_recycle(skb); /* Allow SKB to reuse area used by xdp_frame */ @@ -719,8 +720,7 @@ struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf) nxdpf = addr; nxdpf->data = addr + headroom; nxdpf->frame_sz = PAGE_SIZE; - nxdpf->mem.type = MEM_TYPE_PAGE_ORDER0; - nxdpf->mem.id = 0; + nxdpf->mem_type = MEM_TYPE_PAGE_ORDER0; return nxdpf; } From patchwork Wed Dec 11 17:26:40 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13903997 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DFEAA200BB7; Wed, 11 Dec 2024 17:28:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938132; cv=none; b=NhCVvj9gHVOBRXIPfyathY2u6uy1AOjpoC73jv7Acm1TwWoitZ83D11zAHpZ620JMw6VhosXpnh+UH4C485SrbKIPrP7WNOLuJYACXbP+ZmcgQ9VXF3QJhWVaYTutWVa1lWbKa5z3hceo9jcyH6LEY5nCJOQgcYnuNXKBrEFpow= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938132; c=relaxed/simple; bh=hTiIfVzdQAzN245gpoI2ewyExpDiWN8oZxf82j9mqk0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=f0R/aUkNpA0zZqB6/XILvKKB85UtHvqtWpTL4JUk+GWFRhrzKGbJYKVz/L94ZRBJG59GdJBGTMdWEr1DmkyXxM8K4f/zWJ1gKBDL5qGq7q1tXQDxgzMKviWOqX67b1P9C3xqJDP17T08IkyAEwBuR3yFl0YfVIb396h38BplIkk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=KK7lEU8N; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="KK7lEU8N" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938131; x=1765474131; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=hTiIfVzdQAzN245gpoI2ewyExpDiWN8oZxf82j9mqk0=; b=KK7lEU8N8Gn1gGLhqhwGEb/Mgk1webETEtz9xBd6nQYsTFtLS7pwQMl1 2cVN63Y1WN2tB8WjdoIfU2HhSMptwJhOdEUu5P1ZSrAf4QQYe+tJhpn3t uTY2UG7giBGGMVGaoBlu2j1lZwg4ZnMFm4DXL7iR2WTtajEddBsjZ/nAE wZiH8Wv9jZat6Xv5Z3Ru1ahwFytlTxdoS/+I9uALtYdJEwjlZ3EFqX1/p 
From: Alexander Lobakin
To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , Toke Høiland-Jørgensen , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 03/12] xdp: make __xdp_return() MP-agnostic
Date: Wed, 11 Dec 2024 18:26:40 +0100
Message-ID: <20241211172649.761483-4-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com>
References: <20241211172649.761483-1-aleksander.lobakin@intel.com>
Precedence: bulk
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

Currently, __xdp_return() takes a pointer to the virtual memory to free a buffer. Besides sometimes provoking redundant data <--> page conversions, taking a data pointer effectively prevents lots of XDP code from supporting non-page-backed buffers, as there is no host mapping for such memory (data is always NULL). Convert it to always take a netmem reference instead.

For xdp_return_{buff,frame*}(), this chops off one page_address() per frag and adds one virt_to_netmem() (same as virt_to_page()) per header buffer. For __xdp_return() itself, it removes one virt_to_page() for MEM_TYPE_PAGE_POOL and another one for MEM_TYPE_PAGE_ORDER0, adding one page_address() for the [not really common nowadays] MEM_TYPE_PAGE_SHARED, but the main effect is that these functions will no longer crash or leak memory if the frame has non-host memory attached, and will correctly free it.
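As a standalone illustration of the shape of that dispatch (userspace C, compilable on its own; model_mem_type, model_netmem, model_return() and the release_* stubs are invented names, while the real release paths in the kernel are page_pool_put_full_netmem(), page_frag_free() and put_page()), the key point is that the page_pool branch is keyed purely on the memory type plus the handle, so a buffer with no host-virtual mapping can still be freed there:

#include <stdio.h>

enum model_mem_type {			/* stand-in for enum xdp_mem_type */
	MODEL_PAGE_POOL,
	MODEL_PAGE_SHARED,
	MODEL_PAGE_ORDER0,
};

struct model_netmem {			/* stand-in for a netmem_ref */
	int id;
	void *virt;			/* NULL for non-host memory */
};

/* Stub release paths, standing in for page_pool_put_full_netmem(),
 * page_frag_free() and put_page() respectively.
 */
static void release_pp(const struct model_netmem *nm)     { printf("pp put    #%d\n", nm->id); }
static void release_frag(const struct model_netmem *nm)   { printf("frag free #%d\n", nm->id); }
static void release_order0(const struct model_netmem *nm) { printf("put_page  #%d\n", nm->id); }

static void model_return(const struct model_netmem *nm, enum model_mem_type type)
{
	switch (type) {
	case MODEL_PAGE_POOL:
		release_pp(nm);		/* never looks at nm->virt */
		break;
	case MODEL_PAGE_SHARED:
		release_frag(nm);	/* the only case needing the address */
		break;
	case MODEL_PAGE_ORDER0:
		release_order0(nm);
		break;
	}
}

int main(void)
{
	struct model_netmem unmapped = { .id = 1, .virt = NULL };

	/* freeing non-host memory works because no mapping is required */
	model_return(&unmapped, MODEL_PAGE_POOL);
	return 0;
}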
Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 4 ++-- net/core/filter.c | 9 +++------ net/core/xdp.c | 47 +++++++++++++++++++---------------------------- 3 files changed, 24 insertions(+), 36 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 1c260869a353..d2089cfecefd 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -312,8 +312,8 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp) return xdp_frame; } -void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, - struct xdp_buff *xdp); +void __xdp_return(netmem_ref netmem, enum xdp_mem_type mem_type, + bool napi_direct, struct xdp_buff *xdp); void xdp_return_frame(struct xdp_frame *xdpf); void xdp_return_frame_rx_napi(struct xdp_frame *xdpf); void xdp_return_buff(struct xdp_buff *xdp); diff --git a/net/core/filter.c b/net/core/filter.c index 6c036708634b..5fea874025d3 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -4125,7 +4125,7 @@ static void bpf_xdp_shrink_data_zc(struct xdp_buff *xdp, int shrink, if (release) { xsk_buff_del_tail(zc_frag); - __xdp_return(NULL, mem_type, false, zc_frag); + __xdp_return(0, mem_type, false, zc_frag); } else { zc_frag->data_end -= shrink; } @@ -4142,11 +4142,8 @@ static bool bpf_xdp_shrink_data(struct xdp_buff *xdp, skb_frag_t *frag, goto out; } - if (release) { - struct page *page = skb_frag_page(frag); - - __xdp_return(page_address(page), mem_type, false, NULL); - } + if (release) + __xdp_return(skb_frag_netmem(frag), mem_type, false, NULL); out: return release; diff --git a/net/core/xdp.c b/net/core/xdp.c index d367571c5838..f1165a35411b 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -430,27 +430,25 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_attach_page_pool); * is used for those calls sites. Thus, allowing for faster recycling * of xdp_frames/pages in those cases. */ -void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, - struct xdp_buff *xdp) +void __xdp_return(netmem_ref netmem, enum xdp_mem_type mem_type, + bool napi_direct, struct xdp_buff *xdp) { - struct page *page; - switch (mem_type) { case MEM_TYPE_PAGE_POOL: - page = virt_to_head_page(data); + netmem = netmem_compound_head(netmem); if (napi_direct && xdp_return_frame_no_direct()) napi_direct = false; /* No need to check ((page->pp_magic & ~0x3UL) == PP_SIGNATURE) * as mem->type knows this a page_pool page */ - page_pool_put_full_page(page->pp, page, napi_direct); + page_pool_put_full_netmem(netmem_get_pp(netmem), netmem, + napi_direct); break; case MEM_TYPE_PAGE_SHARED: - page_frag_free(data); + page_frag_free(__netmem_address(netmem)); break; case MEM_TYPE_PAGE_ORDER0: - page = virt_to_page(data); /* Assumes order0 page*/ - put_page(page); + put_page(__netmem_to_page(netmem)); break; case MEM_TYPE_XSK_BUFF_POOL: /* NB! Only valid from an xdp_buff! 
*/ @@ -466,38 +464,34 @@ void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, void xdp_return_frame(struct xdp_frame *xdpf) { struct skb_shared_info *sinfo; - int i; if (likely(!xdp_frame_has_frags(xdpf))) goto out; sinfo = xdp_get_shared_info_from_frame(xdpf); - for (i = 0; i < sinfo->nr_frags; i++) { - struct page *page = skb_frag_page(&sinfo->frags[i]); + for (u32 i = 0; i < sinfo->nr_frags; i++) + __xdp_return(skb_frag_netmem(&sinfo->frags[i]), xdpf->mem_type, + false, NULL); - __xdp_return(page_address(page), xdpf->mem_type, false, NULL); - } out: - __xdp_return(xdpf->data, xdpf->mem_type, false, NULL); + __xdp_return(virt_to_netmem(xdpf->data), xdpf->mem_type, false, NULL); } EXPORT_SYMBOL_GPL(xdp_return_frame); void xdp_return_frame_rx_napi(struct xdp_frame *xdpf) { struct skb_shared_info *sinfo; - int i; if (likely(!xdp_frame_has_frags(xdpf))) goto out; sinfo = xdp_get_shared_info_from_frame(xdpf); - for (i = 0; i < sinfo->nr_frags; i++) { - struct page *page = skb_frag_page(&sinfo->frags[i]); + for (u32 i = 0; i < sinfo->nr_frags; i++) + __xdp_return(skb_frag_netmem(&sinfo->frags[i]), xdpf->mem_type, + true, NULL); - __xdp_return(page_address(page), xdpf->mem_type, true, NULL); - } out: - __xdp_return(xdpf->data, xdpf->mem_type, true, NULL); + __xdp_return(virt_to_netmem(xdpf->data), xdpf->mem_type, true, NULL); } EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); @@ -544,20 +538,17 @@ EXPORT_SYMBOL_GPL(xdp_return_frame_bulk); void xdp_return_buff(struct xdp_buff *xdp) { struct skb_shared_info *sinfo; - int i; if (likely(!xdp_buff_has_frags(xdp))) goto out; sinfo = xdp_get_shared_info_from_buff(xdp); - for (i = 0; i < sinfo->nr_frags; i++) { - struct page *page = skb_frag_page(&sinfo->frags[i]); + for (u32 i = 0; i < sinfo->nr_frags; i++) + __xdp_return(skb_frag_netmem(&sinfo->frags[i]), + xdp->rxq->mem.type, true, xdp); - __xdp_return(page_address(page), xdp->rxq->mem.type, true, - xdp); - } out: - __xdp_return(xdp->data, xdp->rxq->mem.type, true, xdp); + __xdp_return(virt_to_netmem(xdp->data), xdp->rxq->mem.type, true, xdp); } EXPORT_SYMBOL_GPL(xdp_return_buff); From patchwork Wed Dec 11 17:26:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13903998 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 39F1E202F96; Wed, 11 Dec 2024 17:28:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938138; cv=none; b=HBYs5iTsA0rgYRVQbVhxgQYIHcQTPnXbBWzI2ujR0N9sWPqfrtlr+y5zsYsFiUKDFgiyb7xEYVsg56TQcGYtVNRg4uZd8sX5FSiLo6jX0qcckotqGbXmAlK21pHuTRpKRt/1fAzAC9aRtTibf5bjIYbM225FzjtUouzAgQNDvW4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938138; c=relaxed/simple; bh=M9Ciy3tkJShPPbC2RKdcmdtYSeXJngDCkPUGImKkRFw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Ze+mzc6N3PmoVZaLIx8SlAezPsAnFKLbSFImD3pUb6VkrfV7vI7iJ6OtpNUORrCxvYV5GJHeayBk4mC1RX7icZlr70P77FGQ8tt/SKSuPaSs1SBdU30fL+Hu/xRzdhIGQ8hDjY/w5EDV4NOfVHVI/hxwCbUGYf6n5ItyWV7Kmmc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; 
dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=D7KdBl1W; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="D7KdBl1W" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938136; x=1765474136; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=M9Ciy3tkJShPPbC2RKdcmdtYSeXJngDCkPUGImKkRFw=; b=D7KdBl1Wn2VeWW0vCMmKsDpcX8Bu6d1zsldisnUXkvsh7M2zuRMwQOgR gwmMM12KmfN2uoXlb7nq5Cp3vx4A3UGzi/F+Yi74q/y9Z87clm0hCTmMs vim//z3BPnLf2510yezDKouaomi+gBEFgb8n3CKDEhmbn9tJKzvyxXzwS IPxSj4EX8mkUvqdQS7Ja9sJ9WBHTQrm2nUQkaaGC2cqs8Ze+8me1uBLIZ 7sGIpfp1lbDtAp9tfujix1pR2E105DkQ4Dedewz5r0dv5Dy9Bjc/cOQBJ PmrilSwZ0Z+LeEIKA3iGt2pCR8bPLnvrMsPWlzCPjJK1wjbMvRj15Tdo8 g==; X-CSE-ConnectionGUID: 4UK6RYS4RbK+MPutg13NXA== X-CSE-MsgGUID: 6JbKwcWvSFmCd6dWeBLx1Q== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859521" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859521" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:28:56 -0800 X-CSE-ConnectionGUID: rnJl5QmMRiqZAwxtTn8rNw== X-CSE-MsgGUID: VB1Tt8HkSF6OHDZHjvTjyQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122164" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:28:50 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 04/12] xdp: add generic xdp_buff_add_frag() Date: Wed, 11 Dec 2024 18:26:41 +0100 Message-ID: <20241211172649.761483-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org The code piece which would attach a frag to &xdp_buff is almost identical across the drivers supporting XDP multi-buffer on Rx. Make it a generic elegant "oneliner". Also, I see lots of drivers calculating frags_truesize as `xdp->frame_sz * nr_frags`. I can't say this is fully correct, since frags might be backed by chunks of different sizes, especially with stuff like the header split. Even page_pool_alloc() can give you two different truesizes on two subsequent requests to allocate the same buffer size. Add a field to &skb_shared_info (unionized as there's no free slot currently on x86_64) to track the "true" truesize. It can be used later when updating an skb. 
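As a rough standalone sketch of the accounting the helper centralizes (plain userspace C; struct model_shinfo, model_add_frag() and MODEL_MAX_FRAGS are made-up stand-ins, not kernel API): a new chunk either coalesces with the previous fragment when it directly continues it or takes a new slot, and the running size plus the real per-chunk truesize are accumulated instead of assuming nr_frags * frame_sz.

#include <stdbool.h>
#include <stdio.h>

#define MODEL_MAX_FRAGS	4	/* plays the role of MAX_SKB_FRAGS */

struct model_frag {
	unsigned int off;
	unsigned int len;
};

struct model_shinfo {
	unsigned int nr_frags;
	unsigned int frags_size;	/* ~ xdp_frags_size */
	unsigned int frags_truesize;	/* ~ xdp_frags_truesize */
	struct model_frag frags[MODEL_MAX_FRAGS];
};

/* Returns false when the frag array is full, mirroring __xdp_buff_add_frag(). */
static bool model_add_frag(struct model_shinfo *si, unsigned int off,
			   unsigned int len, unsigned int truesize)
{
	if (si->nr_frags) {
		struct model_frag *prev = &si->frags[si->nr_frags - 1];

		/* Coalesce when the new chunk directly continues the previous
		 * one (the kernel helper additionally checks it lives in the
		 * same netmem and drops the extra reference).
		 */
		if (off == prev->off + prev->len) {
			prev->len += len;
			goto account;
		}
	}

	if (si->nr_frags == MODEL_MAX_FRAGS)
		return false;

	si->frags[si->nr_frags].off = off;
	si->frags[si->nr_frags].len = len;
	si->nr_frags++;

account:
	si->frags_size += len;
	si->frags_truesize += truesize;	/* real per-chunk truesize */
	return true;
}

int main(void)
{
	struct model_shinfo si = { 0 };

	model_add_frag(&si, 0, 1024, 2048);
	model_add_frag(&si, 1024, 512, 1024);	/* coalesced with the first */

	printf("frags=%u size=%u truesize=%u\n",
	       si.nr_frags, si.frags_size, si.frags_truesize);
	return 0;
}

Running it prints "frags=1 size=1536 truesize=3072": the second chunk is merged into the first fragment, yet both its length and its own truesize are still accounted, which is the information later consumed when updating the skb.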
Reviewed-by: Maciej Fijalkowski Signed-off-by: Alexander Lobakin --- include/linux/skbuff.h | 16 +++++-- include/net/xdp.h | 96 +++++++++++++++++++++++++++++++++++++++++- net/core/xdp.c | 11 +++++ 3 files changed, 118 insertions(+), 5 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 69624b394cd9..8bcf14ae6789 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -608,11 +608,19 @@ struct skb_shared_info { * Warning : all fields before dataref are cleared in __alloc_skb() */ atomic_t dataref; - unsigned int xdp_frags_size; - /* Intermediate layers must ensure that destructor_arg - * remains valid until skb destructor */ - void * destructor_arg; + union { + struct { + u32 xdp_frags_size; + u32 xdp_frags_truesize; + }; + + /* + * Intermediate layers must ensure that destructor_arg + * remains valid until skb destructor. + */ + void *destructor_arg; + }; /* must be last field, see pskb_expand_head() */ skb_frag_t frags[MAX_SKB_FRAGS]; diff --git a/include/net/xdp.h b/include/net/xdp.h index d2089cfecefd..11139c210b49 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -167,6 +167,93 @@ xdp_get_buff_len(const struct xdp_buff *xdp) return len; } +void xdp_return_frag(netmem_ref netmem, const struct xdp_buff *xdp); + +/** + * __xdp_buff_add_frag - attach frag to &xdp_buff + * @xdp: XDP buffer to attach the frag to + * @netmem: network memory containing the frag + * @offset: offset at which the frag starts + * @size: size of the frag + * @truesize: total memory size occupied by the frag + * @try_coalesce: whether to try coalescing the frags (not valid for XSk) + * + * Attach frag to the XDP buffer. If it currently has no frags attached, + * initialize the related fields, otherwise check that the frag number + * didn't reach the limit of ``MAX_SKB_FRAGS``. If possible, try coalescing + * the frag with the previous one. + * The function doesn't check/update the pfmemalloc bit. Please use the + * non-underscored wrapper in drivers. + * + * Return: true on success, false if there's no space for the frag in + * the shared info struct. + */ +static inline bool __xdp_buff_add_frag(struct xdp_buff *xdp, netmem_ref netmem, + u32 offset, u32 size, u32 truesize, + bool try_coalesce) +{ + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + skb_frag_t *prev; + u32 nr_frags; + + if (!xdp_buff_has_frags(xdp)) { + xdp_buff_set_frags_flag(xdp); + + nr_frags = 0; + sinfo->xdp_frags_size = 0; + sinfo->xdp_frags_truesize = 0; + + goto fill; + } + + nr_frags = sinfo->nr_frags; + prev = &sinfo->frags[nr_frags - 1]; + + if (try_coalesce && netmem == skb_frag_netmem(prev) && + offset == skb_frag_off(prev) + skb_frag_size(prev)) { + skb_frag_size_add(prev, size); + /* Guaranteed to only decrement the refcount */ + xdp_return_frag(netmem, xdp); + } else if (unlikely(nr_frags == MAX_SKB_FRAGS)) { + return false; + } else { +fill: + __skb_fill_netmem_desc_noacc(sinfo, nr_frags++, netmem, + offset, size); + } + + sinfo->nr_frags = nr_frags; + sinfo->xdp_frags_size += size; + sinfo->xdp_frags_truesize += truesize; + + return true; +} + +/** + * xdp_buff_add_frag - attach frag to &xdp_buff + * @xdp: XDP buffer to attach the frag to + * @netmem: network memory containing the frag + * @offset: offset at which the frag starts + * @size: size of the frag + * @truesize: total memory size occupied by the frag + * + * Version of __xdp_buff_add_frag() which takes care of the pfmemalloc bit. 
+ * + * Return: true on success, false if there's no space for the frag in + * the shared info struct. + */ +static inline bool xdp_buff_add_frag(struct xdp_buff *xdp, netmem_ref netmem, + u32 offset, u32 size, u32 truesize) +{ + if (!__xdp_buff_add_frag(xdp, netmem, offset, size, truesize, true)) + return false; + + if (unlikely(netmem_is_pfmemalloc(netmem))) + xdp_buff_set_frag_pfmemalloc(xdp); + + return true; +} + struct xdp_frame { void *data; u32 len; @@ -230,7 +317,14 @@ xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags, unsigned int size, unsigned int truesize, bool pfmemalloc) { - skb_shinfo(skb)->nr_frags = nr_frags; + struct skb_shared_info *sinfo = skb_shinfo(skb); + + sinfo->nr_frags = nr_frags; + /* + * ``destructor_arg`` is unionized with ``xdp_frags_{,true}size``, + * reset it after that these fields aren't used anymore. + */ + sinfo->destructor_arg = NULL; skb->len += size; skb->data_len += size; diff --git a/net/core/xdp.c b/net/core/xdp.c index f1165a35411b..a66a4e036f53 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -535,6 +535,17 @@ void xdp_return_frame_bulk(struct xdp_frame *xdpf, } EXPORT_SYMBOL_GPL(xdp_return_frame_bulk); +/** + * xdp_return_frag -- free one XDP frag or decrement its refcount + * @netmem: network memory reference to release + * @xdp: &xdp_buff to release the frag for + */ +void xdp_return_frag(netmem_ref netmem, const struct xdp_buff *xdp) +{ + __xdp_return(netmem, xdp->rxq->mem.type, true, NULL); +} +EXPORT_SYMBOL_GPL(xdp_return_frag); + void xdp_return_buff(struct xdp_buff *xdp) { struct skb_shared_info *sinfo; From patchwork Wed Dec 11 17:26:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13903999 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F215F1FECC2; Wed, 11 Dec 2024 17:29:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938142; cv=none; b=UfDVt4le421WE5NbzgtfB26Xr0MF/INJTKMLUIUqkGSPYbBg4m+JicGcLbk13lfhzVN/8k+3aIcqwLssWp33q4e1Cl7lk/05nRm0lwnbcaW2GxzQVGWQztSmyTJGT+vXgLEiIvN+hXAK0mKJypWg1KJ8MPf/Pwx9zmdHmvf/EYo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938142; c=relaxed/simple; bh=/uKo+WrFDGA87vIgprblCzqKpeygMRpaco2uSgh/wIc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=X8fvXue/qRaWSfsFjtIPbv6X3Gnjwab0Y2Fii6RxX/H51RljTV5DoXILM1neuzMew2cxUjjo0oXLTZ8wfvkCyLzH2Jt/anMVil2EYBUdBZAl27bNXJZBcBXuGGnJirnZK0Lh32yDLHRStpe2eCXvvWRl4mgBXKuZPRT5FhDQtdY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=OcNIKRIv; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="OcNIKRIv" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; 
q=dns/txt; s=Intel; t=1733938141; x=1765474141; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=/uKo+WrFDGA87vIgprblCzqKpeygMRpaco2uSgh/wIc=; b=OcNIKRIvX4HB/WAbkTsK/YvPUUty/f+PiGOVfuTdFZKWwRgVyeUAWRaH oP+TFJYW2dDDzWshqKMZQPXuzDWlG+XrqRsV4bk33li3w2hWxcEPDebpu U7me+pHas/6m1ogwxEgeTTezrhXXVUhOem5mQNFQVF2jyE8ehBxuSyhTv bXPWZ0Cua1CQ4lzUASq7M8bbkZiO+EgQ/by632PaQcm2HohYVSliGf4U5 SemFYYT/hPNUWolZrfpa23bdoBOXXWSzLs0U3PH2aEQEQEu2tKqdM+qnq 5yzTUc5GETwsO7tjg2pQptC7RY6Le426xhnlp5ItfHapT5jbk36OClEfV w==; X-CSE-ConnectionGUID: 14yzc4WmRGCTDhJKsMyC6w== X-CSE-MsgGUID: tPsDryEFQxm15+Xabw432g== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859539" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859539" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:01 -0800 X-CSE-ConnectionGUID: t0juJqVGT6uwaHMaehvdjA== X-CSE-MsgGUID: +n/kNm+bR/ee1+5d9HQ11g== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122192" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:28:56 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 05/12] xdp: add generic xdp_build_skb_from_buff() Date: Wed, 11 Dec 2024 18:26:42 +0100 Message-ID: <20241211172649.761483-6-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org The code which builds an skb from an &xdp_buff keeps multiplying itself around the drivers with almost no changes. Let's try to stop that by adding a generic function. Unlike __xdp_build_skb_from_frame(), always allocate an skbuff head using napi_build_skb() and make use of the available xdp_rxq pointer to assign the Rx queue index. In case of PP-backed buffer, mark the skb to be recycled, as every PP user's been switched to recycle skbs. 
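For a concrete feel of the pointer math the helper performs when initializing the skb, here is a tiny standalone C snippet; struct model_buff and the offsets in it are invented for the example, and only the three subtractions mirror what gets fed into skb_reserve(), __skb_put() and skb_metadata_set().

#include <stdio.h>

struct model_buff {			/* the few xdp_buff fields the helper reads */
	unsigned char *data_hard_start;
	unsigned char *data_meta;
	unsigned char *data;
	unsigned char *data_end;
};

int main(void)
{
	unsigned char frame[2048];
	struct model_buff xdp = {
		.data_hard_start = frame,
		.data_meta	 = frame + 248,		/* 8 bytes of metadata */
		.data		 = frame + 256,		/* headroom before the packet */
		.data_end	 = frame + 256 + 1500,	/* 1500-byte linear part */
	};

	long headroom = xdp.data - xdp.data_hard_start;	/* -> skb_reserve() */
	long linear   = xdp.data_end - xdp.data;	/* -> __skb_put() */
	long metalen  = xdp.data - xdp.data_meta;	/* -> skb_metadata_set() */

	printf("headroom=%ld linear=%ld metalen=%ld\n", headroom, linear, metalen);
	return 0;
}

This prints "headroom=256 linear=1500 metalen=8"; beyond that, the helper only has to mark the skb for recycling, record the Rx queue index and, for multi-buffer packets, copy the frag accounting over via xdp_update_skb_shared_info().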
Reviewed-by: Toke Høiland-Jørgensen Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 1 + net/core/xdp.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 56 insertions(+) diff --git a/include/net/xdp.h b/include/net/xdp.h index 11139c210b49..aa24fa78cbe6 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -336,6 +336,7 @@ xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags, void xdp_warn(const char *msg, const char *func, const int line); #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__) +struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp); struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, struct sk_buff *skb, diff --git a/net/core/xdp.c b/net/core/xdp.c index a66a4e036f53..c4d824cf27da 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -629,6 +629,61 @@ int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp) } EXPORT_SYMBOL_GPL(xdp_alloc_skb_bulk); +/** + * xdp_build_skb_from_buff - create an skb from &xdp_buff + * @xdp: &xdp_buff to convert to an skb + * + * Perform common operations to create a new skb to pass up the stack from + * &xdp_buff: allocate an skb head from the NAPI percpu cache, initialize + * skb data pointers and offsets, set the recycle bit if the buff is + * PP-backed, Rx queue index, protocol and update frags info. + * + * Return: new &sk_buff on success, %NULL on error. + */ +struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp) +{ + const struct xdp_rxq_info *rxq = xdp->rxq; + const struct skb_shared_info *sinfo; + struct sk_buff *skb; + u32 nr_frags = 0; + int metalen; + + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + } + + skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz); + if (unlikely(!skb)) + return NULL; + + skb_reserve(skb, xdp->data - xdp->data_hard_start); + __skb_put(skb, xdp->data_end - xdp->data); + + metalen = xdp->data - xdp->data_meta; + if (metalen > 0) + skb_metadata_set(skb, metalen); + + if (rxq->mem.type == MEM_TYPE_PAGE_POOL && is_page_pool_compiled_in()) + skb_mark_for_recycle(skb); + + skb_record_rx_queue(skb, rxq->queue_index); + + if (unlikely(nr_frags)) { + u32 tsize; + + tsize = sinfo->xdp_frags_truesize ? 
: nr_frags * xdp->frame_sz; + xdp_update_skb_shared_info(skb, nr_frags, + sinfo->xdp_frags_size, tsize, + xdp_buff_is_frag_pfmemalloc(xdp)); + } + + skb->protocol = eth_type_trans(skb, rxq->dev); + + return skb; +} +EXPORT_SYMBOL_GPL(xdp_build_skb_from_buff); + struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, struct sk_buff *skb, struct net_device *dev) From patchwork Wed Dec 11 17:26:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904000 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F0C8520408B; Wed, 11 Dec 2024 17:29:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938147; cv=none; b=eA9ikS+/OgYErmRBFVHB3teOpOfUZJbIMPMZt4ysKjfOp6Gp17eum9Pasrd4F7dDfzr2SYZNnooRjXCmDnltva3PBHX/Gz18Xv03hsQJ6P5i6Zcsaeg14g1CxlkU9uJNYT9M35EtlFXiJM4Qij6yWvar+mci9WI2412t+Cuwpos= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938147; c=relaxed/simple; bh=2w0d0BMhD3ttKFPpGz1VqdFuInI1L++k3zgnlxidPrA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=WVB3hVASVMNrOwsIhLNHg6gShg7u9Wvrp2AvIeYb730BjJO3ysqIqWTXoxVHXe+NF288AFwmea7bX4lYmcBy4/K+xwrnD4Bk8zyKhqLe3jd5kI9tU18cpb7dG+LSEgAZ4l869eQxWoOUOXe9jPZscTqIm/8qB8ANQaCY6MNIXNA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=aVfEztJM; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="aVfEztJM" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938146; x=1765474146; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=2w0d0BMhD3ttKFPpGz1VqdFuInI1L++k3zgnlxidPrA=; b=aVfEztJM5qzteeWGv9Rz+0V/llJ5QBnGEM8OEBAEfKmVpxw549/CPUke YTouTs0A0gJ8Xw/59vh18kRBOF9e1Ofw7xJRvaUWCGrzpr6zCgJwYRYTp Nm3Xyl7SUrG1ysuu5zaGIUeEE4z1hohMIKPWbiAj9SWYDgvsd8P7Ibz+Y eALP3ukD3hNiO7XgUNaRu6zoFG+E/2EE+z1jhf6TaPaJM37jKCu9Lz+2t dr3v52HViB3k4uYS5KeGc0mbYpxL3hDPii1F4A/zPNTGMgPYJwz+q2q5I caeD/khLuZGMUBkVucDuydfXndwRSF+4/2Wxxq6KX4KQpEhGVDPjW7Lb6 A==; X-CSE-ConnectionGUID: QQA9fLeYQJCbeq/7QXPKxg== X-CSE-MsgGUID: rfnF+ZdBQ1qXwe7zldQPHg== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859561" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859561" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:06 -0800 X-CSE-ConnectionGUID: TU8MU8leTRupATZEqQJOag== X-CSE-MsgGUID: pJH7OKTLTiWIYJfjdi/viA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122267" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:01 -0800 From: 
Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 06/12] xsk: make xsk_buff_add_frag really add the frag via __xdp_buff_add_frag() Date: Wed, 11 Dec 2024 18:26:43 +0100 Message-ID: <20241211172649.761483-7-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Currently, xsk_buff_add_frag() only adds the frag to pool's linked list, not doing anything with the &xdp_buff. The drivers do that manually and the logic is the same. Make it really add an skb frag, just like xdp_buff_add_frag() does that, and freeing frags on error if needed. This allows to remove repeating code from i40e and ice and not add the same code again and again. Acked-by: Maciej Fijalkowski Signed-off-by: Alexander Lobakin --- include/net/xdp_sock_drv.h | 18 ++++++++++-- drivers/net/ethernet/intel/i40e/i40e_xsk.c | 30 ++------------------ drivers/net/ethernet/intel/ice/ice_xsk.c | 32 ++-------------------- 3 files changed, 20 insertions(+), 60 deletions(-) diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h index f3175a5d28f7..86620c818965 100644 --- a/include/net/xdp_sock_drv.h +++ b/include/net/xdp_sock_drv.h @@ -136,11 +136,21 @@ static inline void xsk_buff_free(struct xdp_buff *xdp) xp_free(xskb); } -static inline void xsk_buff_add_frag(struct xdp_buff *xdp) +static inline bool xsk_buff_add_frag(struct xdp_buff *head, + struct xdp_buff *xdp) { - struct xdp_buff_xsk *frag = container_of(xdp, struct xdp_buff_xsk, xdp); + const void *data = xdp->data; + struct xdp_buff_xsk *frag; + + if (!__xdp_buff_add_frag(head, virt_to_netmem(data), + offset_in_page(data), xdp->data_end - data, + xdp->frame_sz, false)) + return false; + frag = container_of(xdp, struct xdp_buff_xsk, xdp); list_add_tail(&frag->list_node, &frag->pool->xskb_list); + + return true; } static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *first) @@ -357,8 +367,10 @@ static inline void xsk_buff_free(struct xdp_buff *xdp) { } -static inline void xsk_buff_add_frag(struct xdp_buff *xdp) +static inline bool xsk_buff_add_frag(struct xdp_buff *head, + struct xdp_buff *xdp) { + return false; } static inline struct xdp_buff *xsk_buff_get_frag(const struct xdp_buff *first) diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c index 4e885df789ef..e28f1905a4a0 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c +++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c @@ -395,32 +395,6 @@ static void i40e_handle_xdp_result_zc(struct i40e_ring *rx_ring, WARN_ON_ONCE(1); } -static int -i40e_add_xsk_frag(struct i40e_ring *rx_ring, struct xdp_buff *first, - struct xdp_buff *xdp, const unsigned int size) -{ - struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(first); - - if 
(!xdp_buff_has_frags(first)) { - sinfo->nr_frags = 0; - sinfo->xdp_frags_size = 0; - xdp_buff_set_frags_flag(first); - } - - if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) { - xsk_buff_free(first); - return -ENOMEM; - } - - __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, - virt_to_page(xdp->data_hard_start), - XDP_PACKET_HEADROOM, size); - sinfo->xdp_frags_size += size; - xsk_buff_add_frag(xdp); - - return 0; -} - /** * i40e_clean_rx_irq_zc - Consumes Rx packets from the hardware ring * @rx_ring: Rx ring @@ -486,8 +460,10 @@ int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget) if (!first) first = bi; - else if (i40e_add_xsk_frag(rx_ring, first, bi, size)) + else if (!xsk_buff_add_frag(first, bi)) { + xsk_buff_free(first); break; + } if (++next_to_process == count) next_to_process = 0; diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index 334ae945d640..8975d2971bc3 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -801,35 +801,6 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, return result; } -static int -ice_add_xsk_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *first, - struct xdp_buff *xdp, const unsigned int size) -{ - struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(first); - - if (!size) - return 0; - - if (!xdp_buff_has_frags(first)) { - sinfo->nr_frags = 0; - sinfo->xdp_frags_size = 0; - xdp_buff_set_frags_flag(first); - } - - if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) { - xsk_buff_free(first); - return -ENOMEM; - } - - __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, - virt_to_page(xdp->data_hard_start), - XDP_PACKET_HEADROOM, size); - sinfo->xdp_frags_size += size; - xsk_buff_add_frag(xdp); - - return 0; -} - /** * ice_clean_rx_irq_zc - consumes packets from the hardware ring * @rx_ring: AF_XDP Rx ring @@ -895,7 +866,8 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, if (!first) { first = xdp; - } else if (ice_add_xsk_frag(rx_ring, first, xdp, size)) { + } else if (likely(size) && !xsk_buff_add_frag(first, xdp)) { + xsk_buff_free(first); break; } From patchwork Wed Dec 11 17:26:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904002 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8BFCD1FECDB; Wed, 11 Dec 2024 17:29:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938163; cv=none; b=nNjx71mrsIEXWygccqav6cRhrEd5gNL4gi8aE31Y0qz5zGD7yWxOlmCybpsrUhYimcGAD4B8JaKqMX2phTwxlRrnWz7euw3EyU5b2H9+SbJDviPDc/A1XinIHqvAurYg3rRBkk0Uxhr5/PgpmFbUBwNWyIlkneo0WgT4+dIlDks= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938163; c=relaxed/simple; bh=U7GC0Mobi6oqTnzo3KZq64LBwVtU2r6C3y5WZwOf2Us=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cBr7XmX6gBzBJGQn4qrrrYzxqWa+3gqW7kN35EbsA4oE+g1F2DUi5OQkwXhhHPvZtJFKzipe0tKV2RVnevdbcvbos9hH8VTMPQuVojLxsWb3wX53JQuZ31+NdExQmtJYqXbXVZHCEWo78HdBZlM8tRR9qZQxmiOOS8SGEj+eESk= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; 
spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=LXKxvFjR; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="LXKxvFjR" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938162; x=1765474162; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=U7GC0Mobi6oqTnzo3KZq64LBwVtU2r6C3y5WZwOf2Us=; b=LXKxvFjRZL3hGn0X+s9WBoOuDfJ4229WNObDgDSQ+abTVILv/pbC9r2c /ST2Cd5SO7vxgjemYr85gGxoUyPR9CKYJjXXB9eY7zRvTsmMSFxUCvjCH dcRx4dGEVXW+EWPLRnUMeQr+xewoQgYYAlwTDDPznO3jfw5Zelv071Tsv mO1zBgqROs1rljEr4wf1fdlYwglgsgvAGyaCsEFhgrkc0N0GXRGhNhYME 9/SyRbsXSOrpcja3siGac6zvkMgd25EfklIeN4rebJTe6PaaqwCpaNU2N 7nKD1VTH6S4gR/XAdk6nMTHt+Byg0BlsNoolpiIzcxyQw/B3kcKstBHf5 A==; X-CSE-ConnectionGUID: eYS0mebsTlWaB+SkQCbAaA== X-CSE-MsgGUID: d4onestlQ6SO1D6Z7RSEeQ== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859597" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859597" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:11 -0800 X-CSE-ConnectionGUID: XPc9JSjHScSBioVy/Ke5mA== X-CSE-MsgGUID: GYEeKn08TFeEi3hEgS5ckQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122304" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:06 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 07/12] xsk: add generic XSk &xdp_buff -> skb conversion Date: Wed, 11 Dec 2024 18:26:44 +0100 Message-ID: <20241211172649.761483-8-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Same as with converting &xdp_buff to skb on Rx, the code which allocates a new skb and copies the XSk frame there is identical across the drivers, so make it generic. This includes copying all the frags if they are present in the original buff. System percpu Page Pools help here a lot: when available, allocate pages from there instead of the MM layer. This greatly improves XDP_PASS performance on XSk: instead of page_alloc() + page_free(), the net core recycles the same pages, so the only overhead left is memcpy()s. Note that the passed buff gets freed if the conversion is done w/o any error, assuming you don't need this buffer after you convert it to an skb. 
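
For illustration, a minimal driver-side sketch of the intended XDP_PASS flow with the new helper. The example_* name is hypothetical and not part of this series; only xdp_build_skb_from_zc() (added below), xsk_buff_free() and napi_gro_receive() are real kernel APIs, and the free-on-success / untouched-on-error semantics follow the kdoc in the patch:

    #include <linux/netdevice.h>
    #include <net/gro.h>
    #include <net/xdp.h>
    #include <net/xdp_sock_drv.h>

    /* Hypothetical ZC Rx completion path: build an skb from the XSk buff
     * and pass it up the stack.
     */
    static void example_xsk_pass_up(struct napi_struct *napi,
                                    struct xdp_buff *xdp)
    {
            struct sk_buff *skb;

            skb = xdp_build_skb_from_zc(xdp);
            if (unlikely(!skb)) {
                    /* on error, @xdp is left untouched -- drop it ourselves */
                    xsk_buff_free(xdp);
                    return;
            }

            /* on success, @xdp has already been returned to the XSk pool */
            napi_gro_receive(napi, skb);
    }
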
Reviewed-by: Maciej Fijalkowski Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 1 + net/core/xdp.c | 138 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 139 insertions(+) diff --git a/include/net/xdp.h b/include/net/xdp.h index aa24fa78cbe6..6da0e746cf75 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -337,6 +337,7 @@ void xdp_warn(const char *msg, const char *func, const int line); #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__) struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp); +struct sk_buff *xdp_build_skb_from_zc(struct xdp_buff *xdp); struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, struct sk_buff *skb, diff --git a/net/core/xdp.c b/net/core/xdp.c index c4d824cf27da..6e319e00ae78 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -22,6 +22,8 @@ #include #include +#include "dev.h" + #define REG_STATE_NEW 0x0 #define REG_STATE_REGISTERED 0x1 #define REG_STATE_UNREGISTERED 0x2 @@ -684,6 +686,142 @@ struct sk_buff *xdp_build_skb_from_buff(const struct xdp_buff *xdp) } EXPORT_SYMBOL_GPL(xdp_build_skb_from_buff); +/** + * xdp_copy_frags_from_zc - copy frags from XSk buff to skb + * @skb: skb to copy frags to + * @xdp: XSk &xdp_buff from which the frags will be copied + * @pp: &page_pool backing page allocation, if available + * + * Copy all frags from XSk &xdp_buff to the skb to pass it up the stack. + * Allocate a new page / page frag for each frag, copy it and attach to + * the skb. + * + * Return: true on success, false on page allocation fail. + */ +static noinline bool xdp_copy_frags_from_zc(struct sk_buff *skb, + const struct xdp_buff *xdp, + struct page_pool *pp) +{ + const struct skb_shared_info *xinfo; + struct skb_shared_info *sinfo; + u32 nr_frags, ts; + + xinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = xinfo->nr_frags; + sinfo = skb_shinfo(skb); + +#if IS_ENABLED(CONFIG_PAGE_POOL) + ts = 0; +#else + ts = xinfo->xdp_frags_truesize ? : nr_frags * xdp->frame_sz; +#endif + + for (u32 i = 0; i < nr_frags; i++) { + u32 len = skb_frag_size(&xinfo->frags[i]); + void *data; +#if IS_ENABLED(CONFIG_PAGE_POOL) + u32 truesize = len; + + data = page_pool_dev_alloc_va(pp, &truesize); + ts += truesize; +#else + data = napi_alloc_frag(len); +#endif + if (unlikely(!data)) + return false; + + memcpy(data, skb_frag_address(&xinfo->frags[i]), + LARGEST_ALIGN(len)); + __skb_fill_netmem_desc(skb, sinfo->nr_frags++, + virt_to_netmem(data), + offset_in_page(data), len); + } + + xdp_update_skb_shared_info(skb, nr_frags, xinfo->xdp_frags_size, + ts, false); + + return true; +} + +/** + * xdp_build_skb_from_zc - create an skb from XSk &xdp_buff + * @xdp: source XSk buff + * + * Similar to xdp_build_skb_from_buff(), but for XSk frames. Allocate an skb + * head, new page for the head, copy the data and initialize the skb fields. + * If there are frags, allocate new pages for them and copy. + * If Page Pool is available, the function allocates memory from the system + * percpu pools to try recycling the pages, otherwise it uses the NAPI page + * frag caches. + * If new skb was built successfully, @xdp is returned to XSk pool's freelist. + * On error, it remains untouched and the caller must take care of this. + * + * Return: new &sk_buff on success, %NULL on error. 
+ */ +struct sk_buff *xdp_build_skb_from_zc(struct xdp_buff *xdp) +{ + const struct xdp_rxq_info *rxq = xdp->rxq; + u32 len = xdp->data_end - xdp->data_meta; + struct page_pool *pp; + struct sk_buff *skb; + int metalen; +#if IS_ENABLED(CONFIG_PAGE_POOL) + u32 truesize; + void *data; + + pp = this_cpu_read(system_page_pool); + truesize = xdp->frame_sz; + + data = page_pool_dev_alloc_va(pp, &truesize); + if (unlikely(!data)) + return NULL; + + skb = napi_build_skb(data, truesize); + if (unlikely(!skb)) { + page_pool_free_va(pp, data, true); + return NULL; + } + + skb_mark_for_recycle(skb); + skb_reserve(skb, xdp->data_meta - xdp->data_hard_start); +#else /* !CONFIG_PAGE_POOL */ + struct napi_struct *napi; + + pp = NULL; + napi = napi_by_id(rxq->napi_id); + if (likely(napi)) + skb = napi_alloc_skb(napi, len); + else + skb = __netdev_alloc_skb_ip_align(rxq->dev, len, + GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!skb)) + return NULL; +#endif /* !CONFIG_PAGE_POOL */ + + memcpy(__skb_put(skb, len), xdp->data_meta, LARGEST_ALIGN(len)); + + metalen = xdp->data - xdp->data_meta; + if (metalen > 0) { + skb_metadata_set(skb, metalen); + __skb_pull(skb, metalen); + } + + skb_record_rx_queue(skb, rxq->queue_index); + + if (unlikely(xdp_buff_has_frags(xdp)) && + unlikely(!xdp_copy_frags_from_zc(skb, xdp, pp))) { + napi_consume_skb(skb, true); + return NULL; + } + + xsk_buff_free(xdp); + + skb->protocol = eth_type_trans(skb, rxq->dev); + + return skb; +} +EXPORT_SYMBOL_GPL(xdp_build_skb_from_zc); + struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, struct sk_buff *skb, struct net_device *dev) From patchwork Wed Dec 11 17:26:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904003 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23D621FECDD; Wed, 11 Dec 2024 17:29:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938164; cv=none; b=nl+iYgFFVeVmOaczXboDNYpLfB5CocZi5KwZLQ8DW/EiidtLcAIAjWY1g2bJ8ITRlmYd8CgCWTyanB0jwrhgWvo4WYzGbnnX10JxDjxQWxGBlsf8ATzXz9MAfKbnk70mWGTU+vd9avBjdSeQurwn3GX9WM5ecMXn15glfbBlTUM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938164; c=relaxed/simple; bh=QHpXlZOzq22J4C0EVmTBgvs+wZhVamSl5nD1C7T9BCc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=pRBeOBCzPAznzBsRPGjW61r0d9V75w3JTayKEyQRTQ3pp3umBWzMEq88MaC2oV1BXGbJ40sFeRrxga3ZP0VWwpJaz4MmOg//sOS7aBIkQgph+YOzMcukambS7oHPn5dbFIuk8KMa4Un+Sq1o1xErl42Wk737bahP8JTqvgY5uJo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=To2/f7l5; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="To2/f7l5" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; 
i=@intel.com; q=dns/txt; s=Intel; t=1733938162; x=1765474162; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=QHpXlZOzq22J4C0EVmTBgvs+wZhVamSl5nD1C7T9BCc=; b=To2/f7l5uYGK1xNCplwg9UUIfs1O2qoM9QtFfORL8UsfJTeQ2pz1v+DQ XA205o1UZ24mqa3j6Y3n17lyyqkPjAVzl8MOmAf7KOdkLi0/w8ijE67n+ UDlMmEWjzs4+NXFIYIAl52Lu8FbG3VJkgiXBDLdTSyFrbRyJ20wfJJu1R lo97t97ijWzyPbusSPUHBxbNQQlpY0ZazUVhcsa1lBjxtr80Z5MevGuCj BCeX9sHknAog+fqhoZEDc5pAtErXB1fHZCU/RfEUJDNROOnaSK6BAJ2Br b7Twjd9pQIQhdQKppEqIMig6mR2KpOsRCPBOxrsAu9RWR2XkG0zoGZPWv w==; X-CSE-ConnectionGUID: UwmILZfDQ8mXa+KLjXMLEg== X-CSE-MsgGUID: sZe0YnYVSNqE/PUQE0t3nw== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859613" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859613" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:16 -0800 X-CSE-ConnectionGUID: 9ZoNiuZnSbqkobESVkuoow== X-CSE-MsgGUID: 8pGUOR1YTSSNDq8o0Jp5Ag== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122318" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:11 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 08/12] xsk: add helper to get &xdp_desc's DMA and meta pointer in one go Date: Wed, 11 Dec 2024 18:26:45 +0100 Message-ID: <20241211172649.761483-9-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Currently, when you send an XSk frame with metadata, you need to do the following: * call external xsk_buff_raw_get_dma(); * call inline xsk_buff_get_metadata(), which calls external xsk_buff_raw_get_data() and then do some inline checks. This effectively means that the following piece: addr = pool->unaligned ? xp_unaligned_add_offset_to_addr(addr) : addr; is done twice per frame, plus you have 2 external calls per frame, plus this: meta = pool->addrs + addr - pool->tx_metadata_len; if (unlikely(!xsk_buff_valid_tx_metadata(meta))) is always inlined, even if there's no meta or it's invalid. Add xsk_buff_raw_get_ctx() (xp_raw_get_ctx() to be precise) to do that in one go. It returns a small structure with 2 fields: DMA address, filled unconditionally, and metadata pointer, valid only if it's present. The address correction is performed only once and you also have only 1 external call per XSk frame, which does all the calculations and checks outside of your hotpath. You only need to check `if (ctx.meta)` for the metadata presence. 
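
To make the intended flow concrete, a hedged sketch of a Tx descriptor fill routine using the new helper. The example_* ring type and helpers are placeholders invented for this illustration; xsk_buff_raw_get_ctx() and struct xdp_desc_ctx come from the patch below, struct xdp_desc is the existing uapi descriptor:

    #include <linux/if_xdp.h>
    #include <net/xdp_sock_drv.h>
    #include <net/xsk_buff_pool.h>

    /* Hypothetical driver bits, for illustration only */
    struct example_tx_ring;
    void example_fill_hw_desc(struct example_tx_ring *ring, dma_addr_t dma,
                              u32 len);
    void example_request_tx_timestamp(struct example_tx_ring *ring,
                                      struct xsk_tx_metadata *meta);

    /* Hypothetical XSk Tx path: one call returns both the DMA address and
     * the Tx metadata pointer (the latter only if present and valid).
     */
    static void example_xmit_xsk_desc(struct example_tx_ring *ring,
                                      struct xsk_buff_pool *pool,
                                      const struct xdp_desc *desc)
    {
            struct xdp_desc_ctx ctx = xsk_buff_raw_get_ctx(pool, desc->addr);

            example_fill_hw_desc(ring, ctx.dma, desc->len);
            if (ctx.meta)
                    example_request_tx_timestamp(ring, ctx.meta);
    }
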
Signed-off-by: Alexander Lobakin --- include/net/xdp_sock_drv.h | 23 +++++++++++++++++++++ include/net/xsk_buff_pool.h | 8 ++++++++ net/xdp/xsk_buff_pool.c | 40 +++++++++++++++++++++++++++++++++++++ 3 files changed, 71 insertions(+) diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h index 86620c818965..7fd1709deef5 100644 --- a/include/net/xdp_sock_drv.h +++ b/include/net/xdp_sock_drv.h @@ -205,6 +205,23 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr) return xp_raw_get_data(pool, addr); } +/** + * xsk_buff_raw_get_ctx - get &xdp_desc context + * @pool: XSk buff pool desc address belongs to + * @addr: desc address (from userspace) + * + * Wrapper for xp_raw_get_ctx() to be used in drivers, see its kdoc for + * details. + * + * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata + * pointer, if it is present and valid (initialized to %NULL otherwise). + */ +static inline struct xdp_desc_ctx +xsk_buff_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr) +{ + return xp_raw_get_ctx(pool, addr); +} + #define XDP_TXMD_FLAGS_VALID ( \ XDP_TXMD_FLAGS_TIMESTAMP | \ XDP_TXMD_FLAGS_CHECKSUM | \ @@ -402,6 +419,12 @@ static inline void *xsk_buff_raw_get_data(struct xsk_buff_pool *pool, u64 addr) return NULL; } +static inline struct xdp_desc_ctx +xsk_buff_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr) +{ + return (struct xdp_desc_ctx){ }; +} + static inline bool xsk_buff_valid_tx_metadata(struct xsk_tx_metadata *meta) { return false; diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h index 50779406bc2d..1dcd4d71468a 100644 --- a/include/net/xsk_buff_pool.h +++ b/include/net/xsk_buff_pool.h @@ -141,6 +141,14 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max); bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count); void *xp_raw_get_data(struct xsk_buff_pool *pool, u64 addr); dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr); + +struct xdp_desc_ctx { + dma_addr_t dma; + struct xsk_tx_metadata *meta; +}; + +struct xdp_desc_ctx xp_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr); + static inline dma_addr_t xp_get_dma(struct xdp_buff_xsk *xskb) { return xskb->dma; diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c index ae71da7d2cd6..02c42caec9f4 100644 --- a/net/xdp/xsk_buff_pool.c +++ b/net/xdp/xsk_buff_pool.c @@ -715,3 +715,43 @@ dma_addr_t xp_raw_get_dma(struct xsk_buff_pool *pool, u64 addr) (addr & ~PAGE_MASK); } EXPORT_SYMBOL(xp_raw_get_dma); + +/** + * xp_raw_get_ctx - get &xdp_desc context + * @pool: XSk buff pool desc address belongs to + * @addr: desc address (from userspace) + * + * Helper for getting desc's DMA address and metadata pointer, if present. + * Saves one call on hotpath, double calculation of the actual address, + * and inline checks for metadata presence and sanity. + * Please use xsk_buff_raw_get_ctx() in drivers instead. + * + * Return: new &xdp_desc_ctx struct containing desc's DMA address and metadata + * pointer, if it is present and valid (initialized to %NULL otherwise). + */ +struct xdp_desc_ctx xp_raw_get_ctx(const struct xsk_buff_pool *pool, u64 addr) +{ + struct xsk_tx_metadata *meta; + struct xdp_desc_ctx ret; + + addr = pool->unaligned ? 
xp_unaligned_add_offset_to_addr(addr) : addr; + ret = (typeof(ret)){ + /* Same logic as in xp_raw_get_dma() */ + .dma = (pool->dma_pages[addr >> PAGE_SHIFT] & + ~XSK_NEXT_PG_CONTIG_MASK) + (addr & ~PAGE_MASK), + }; + + if (!pool->tx_metadata_len) + goto out; + + /* Same logic as in xp_raw_get_data() + xsk_buff_get_metadata() */ + meta = pool->addrs + addr - pool->tx_metadata_len; + if (unlikely(!xsk_buff_valid_tx_metadata(meta))) + goto out; + + ret.meta = meta; + +out: + return ret; +} +EXPORT_SYMBOL(xp_raw_get_ctx); From patchwork Wed Dec 11 17:26:46 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904004 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BCC0A1FF611; Wed, 11 Dec 2024 17:29:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938165; cv=none; b=AzBS7uLT5Jdf/dpHZYenU4z1KoUjsFENejnheC+DdG1pqHJBB3d/QJvd7WdhXQxx3ApJgsel1yMuR26uma+p3pvyQyWCJrlR/HacB0UjigQ5l3fm84XQe6lBOVHjDvlXyDdlh4vnkNMo6vDf2PXXm5Sg0a7n+AA3hx/OGeA/auU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938165; c=relaxed/simple; bh=Mzh5PvWdAWrhD20G+Z/0QHFjHx3XB3ES4Wu24RLMMF8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=d31yM+MNm3/RKjIUj4mp4Vk8J9eA+tuXM+dCMlsHhmscc1pYMy2jGNmIOeCVdAfi8S90MHHJwc9zKS/5p9FzyAU1Oma7EnpksXFs9Kv+aIt0AAY+gnW34LgUq1s0OI3O3uUbToSuOXF/nEt7ol2aIIfFJcw1YF+ydJAj/bLP3y4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=W36KrReO; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="W36KrReO" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938164; x=1765474164; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Mzh5PvWdAWrhD20G+Z/0QHFjHx3XB3ES4Wu24RLMMF8=; b=W36KrReOsfqcDgHkYLjVyTyXEcnUBBNvdeVSE+Cns9RXLiGV1zkEWcBG h9r5cBx/KASHlgPRXsdXVCVCbE3U/AQPJqqAHbwgdPEbELqaiqTnFeAu+ HOuLjm4Zt2/yGukIvo0wSjL4lSC1zYucDb5m5jwb0bVqxm6YQKCR30HVT /er3vafmm/4ncTQVgVog9/aEuGe3zRUoOVdNbG84V31nkjeiuvl9R2Fs0 JAg8Y9Cc0z83jpFEl0vQf+O1aUAJNLkoB7VLTDUoiKyIWUfthA+IsPXMu e//dz5fENPjMNDTl9DP6+/saTJbYzbETSlYiRtB9xNyzxwSGkOMDtBtrq g==; X-CSE-ConnectionGUID: VpDTzVl1SV+ESU0/loOpkA== X-CSE-MsgGUID: BmC8msgNRcaTW0XFhi6fQw== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859655" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859655" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:21 -0800 X-CSE-ConnectionGUID: myHfyw3uSAOPluvrbhRfIg== X-CSE-MsgGUID: TDOizA4FTKqhh9ve6cEVdA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; 
d="scan'208";a="119122363" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:15 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 09/12] page_pool: add a couple of netmem counterparts Date: Wed, 11 Dec 2024 18:26:46 +0100 Message-ID: <20241211172649.761483-10-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Add the following Page Pool netmem wrappers to be able to implement an MP-agnostic driver: * page_pool{,_dev}_alloc_best_fit_netmem() Same as page_pool{,_dev}_alloc(). Make the latter a wrapper around the new helper (as a page is always a netmem, but not vice versa). 'page_pool_alloc_netmem' is already busy, hence '_best_fit' (which also says what the helper tries to do). * page_pool_dma_sync_for_cpu_netmem() Same as page_pool_dma_sync_for_cpu(). Performs DMA sync only if the netmem comes from the host. Signed-off-by: Alexander Lobakin --- include/net/page_pool/helpers.h | 46 ++++++++++++++++++++++++++------- 1 file changed, 37 insertions(+), 9 deletions(-) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 26caa2c20912..d75d10678958 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -115,22 +115,22 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, return page_pool_alloc_frag(pool, offset, size, gfp); } -static inline struct page *page_pool_alloc(struct page_pool *pool, - unsigned int *offset, - unsigned int *size, gfp_t gfp) +static inline netmem_ref +page_pool_alloc_best_fit_netmem(struct page_pool *pool, unsigned int *offset, + unsigned int *size, gfp_t gfp) { unsigned int max_size = PAGE_SIZE << pool->p.order; - struct page *page; + netmem_ref netmem; if ((*size << 1) > max_size) { *size = max_size; *offset = 0; - return page_pool_alloc_pages(pool, gfp); + return page_pool_alloc_netmem(pool, gfp); } - page = page_pool_alloc_frag(pool, offset, *size, gfp); - if (unlikely(!page)) - return NULL; + netmem = page_pool_alloc_frag_netmem(pool, offset, *size, gfp); + if (unlikely(!netmem)) + return 0; /* There is very likely not enough space for another fragment, so append * the remaining size to the current fragment to avoid truesize @@ -141,7 +141,25 @@ static inline struct page *page_pool_alloc(struct page_pool *pool, pool->frag_offset = max_size; } - return page; + return netmem; +} + +static inline netmem_ref +page_pool_dev_alloc_best_fit_netmem(struct page_pool *pool, + unsigned int *offset, + unsigned int *size) +{ + gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN; + + return page_pool_alloc_best_fit_netmem(pool, offset, size, gfp); +} + +static inline struct page *page_pool_alloc(struct page_pool *pool, + unsigned int 
*offset, + unsigned int *size, gfp_t gfp) +{ + return netmem_to_page(page_pool_alloc_best_fit_netmem(pool, offset, + size, gfp)); } /** @@ -440,6 +458,16 @@ static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool, page_pool_get_dma_dir(pool)); } +static inline void +page_pool_dma_sync_for_cpu_netmem(const struct page_pool *pool, + netmem_ref netmem, u32 offset, + u32 dma_sync_size) +{ + if (!netmem_is_net_iov(netmem)) + page_pool_dma_sync_for_cpu(pool, netmem_to_page(netmem), + offset, dma_sync_size); +} + static inline bool page_pool_put(struct page_pool *pool) { return refcount_dec_and_test(&pool->user_cnt); From patchwork Wed Dec 11 17:26:47 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904005 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E9861207A39; Wed, 11 Dec 2024 17:29:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938167; cv=none; b=g8ccGwgXaTPhQnsWQWQZKIZPdETQj8kMh4mDSSaRjZ0LUtkxfQUSX0FxZ7s/5sbRn1jMfs5CfEdSxJ7aVSK8CAMPeFNz+owsoDiqbftcKbTVIuCTqXRuzDe1VhgXp3r5q84nGRQJr2KsHfB/aoUsk9bDWFw1+4wgtdNkC+C7H4A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938167; c=relaxed/simple; bh=WYUMYW1SB8Xt0LN/LVRR3OBaayXs3+auaJsIGlgXPRk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NmYMe0nKISE1qD2AkLW//eVvsKe7QZ0UHIozfGSTcI6+SgGquz1Wi8aBQrDkiu+MOz+nxYAiIGWzLzwJNoNUoVXLmXTzSPjzJ2R6KlQsoPMAm0xxfkM5D8XLss3ehmo2LSwnpKTjXFicbM9RLDyk/+KoRSNqY3Op7Uw0697+zfw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Z4ccC4Nz; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Z4ccC4Nz" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938166; x=1765474166; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=WYUMYW1SB8Xt0LN/LVRR3OBaayXs3+auaJsIGlgXPRk=; b=Z4ccC4Nzuc+icGIlYJNnsQD/aSXvxO4dQYSqYJMQObLTsByA5I3qcHQt +5dwk9h7ru60aUn5bggeLm1yqcOCjYjS0zjIBhoRw2WOb2yspdViI/QQb mqTMcB2IEikXd6isu4jQgkswlGH6PlzGnjIOYR5ycII1PKZZbHrMIGytM 87QdR13Mc2Iz1rOqdUMnWRqoVVyt8HwG7cby7QHq6ifP0VDh5QhuilD5e WQkFdqarrWXwLhj+dtMyvbfiVqFRQB4z+lozSkzyM7tniC96tNEdU38ll pOC8RX4kV5w9bB3MD57trujg+QlN+2BgBQx+iQbMbjCxLNfz7zaf0dKmM Q==; X-CSE-ConnectionGUID: ZsOi6dfLRE2GkcUDSd8uuQ== X-CSE-MsgGUID: cQlKKoS9RZqGxdEh6jiWNQ== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859681" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859681" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:26 -0800 X-CSE-ConnectionGUID: 
XX/qovBlTdGDKj6EhB1Seg== X-CSE-MsgGUID: ve/RU88CQDuYSF1lRAXDRA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122386" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:20 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 10/12] skbuff: allow 2-4-argument skb_frag_dma_map() Date: Wed, 11 Dec 2024 18:26:47 +0100 Message-ID: <20241211172649.761483-11-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag), DMA_TO_DEVICE) is repeated across dozens of drivers and really wants a shorthand. Add a macro which will count args and handle all possible number from 2 to 5. Semantics: skb_frag_dma_map(dev, frag) -> __skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag), DMA_TO_DEVICE) skb_frag_dma_map(dev, frag, offset) -> __skb_frag_dma_map(dev, frag, offset, skb_frag_size(frag) - offset, DMA_TO_DEVICE) skb_frag_dma_map(dev, frag, offset, size) -> __skb_frag_dma_map(dev, frag, offset, size, DMA_TO_DEVICE) skb_frag_dma_map(dev, frag, offset, size, dir) -> __skb_frag_dma_map(dev, frag, offset, size, dir) No object code size changes for the existing callers. Users passing less arguments also won't have bigger size comparing to the full equivalent call. Signed-off-by: Alexander Lobakin --- include/linux/skbuff.h | 31 ++++++++++++++++++++++++++----- 1 file changed, 26 insertions(+), 5 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index 8bcf14ae6789..bb2b751d274a 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3682,7 +3682,7 @@ static inline void skb_frag_page_copy(skb_frag_t *fragto, bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio); /** - * skb_frag_dma_map - maps a paged fragment via the DMA API + * __skb_frag_dma_map - maps a paged fragment via the DMA API * @dev: the device to map the fragment to * @frag: the paged fragment to map * @offset: the offset within the fragment (starting at the @@ -3692,15 +3692,36 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio); * * Maps the page associated with @frag to @device. */ -static inline dma_addr_t skb_frag_dma_map(struct device *dev, - const skb_frag_t *frag, - size_t offset, size_t size, - enum dma_data_direction dir) +static inline dma_addr_t __skb_frag_dma_map(struct device *dev, + const skb_frag_t *frag, + size_t offset, size_t size, + enum dma_data_direction dir) { return dma_map_page(dev, skb_frag_page(frag), skb_frag_off(frag) + offset, size, dir); } +#define skb_frag_dma_map(dev, frag, ...) 
\ + CONCATENATE(_skb_frag_dma_map, \ + COUNT_ARGS(__VA_ARGS__))(dev, frag, ##__VA_ARGS__) + +#define __skb_frag_dma_map1(dev, frag, offset, uf, uo) ({ \ + const skb_frag_t *uf = (frag); \ + size_t uo = (offset); \ + \ + __skb_frag_dma_map(dev, uf, uo, skb_frag_size(uf) - uo, \ + DMA_TO_DEVICE); \ +}) +#define _skb_frag_dma_map1(dev, frag, offset) \ + __skb_frag_dma_map1(dev, frag, offset, __UNIQUE_ID(frag_), \ + __UNIQUE_ID(offset_)) +#define _skb_frag_dma_map0(dev, frag) \ + _skb_frag_dma_map1(dev, frag, 0) +#define _skb_frag_dma_map2(dev, frag, offset, size) \ + __skb_frag_dma_map(dev, frag, offset, size, DMA_TO_DEVICE) +#define _skb_frag_dma_map3(dev, frag, offset, size, dir) \ + __skb_frag_dma_map(dev, frag, offset, size, dir) + static inline struct sk_buff *pskb_copy(struct sk_buff *skb, gfp_t gfp_mask) { From patchwork Wed Dec 11 17:26:48 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904006 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C0EAE225A5E; Wed, 11 Dec 2024 17:29:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938172; cv=none; b=etW3v+sXKbHYIqgSeYefPhWOpoV1lHg3p+HcU+bo96ue4JmPMjt/8TgdOVxk8SRBWu+fPrh//KWVq5Kc6CyfAPtEjPenHckc5pujeSKatNnVfsWvSMzpepDfS1+AxmybycfWgHOD4KMQ6Z1ThoWUTm60qu55wD4E5Pl9PfvnfG4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938172; c=relaxed/simple; bh=Hj3Ed674/goNpKUME1dgCcKm75c46TNcR4upTjpWwA0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rQ3O8hb/HURoCXnM1ryCaOkNzMNJgVHGMaKLrrYdo/CqVo0mba4TLrkNlLil13mBlbj8qkO3MVGdxgu0YpeWK1nAQHphjGrF8dSRy0jz6Ktx36ITnToOqDS/oeZARdjVENx+NIM46SXuMDKsKrl2Kb/+ynQGAIt5xhSfGfV86V4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=Th4KcVtw; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Th4KcVtw" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938171; x=1765474171; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Hj3Ed674/goNpKUME1dgCcKm75c46TNcR4upTjpWwA0=; b=Th4KcVtwo/8e9IMWcuQKNp2Tdl3xx13lXbDZ1WlMDqoxSJGSTM1IBJno aE6pU+C5y1Q491Qh1tyGszTwUXm2xyiqmRj2M5amXEEXuns5MlKAwEbmu x/tobPWDu1zzLh5knhZF7hNrqctJCI198D84fYd06DuvoVbCqqzCR3rMV uGOoBMGTXPUHhbU5Od+gVkFIgy/g+3DpmdjY7oxUjtY9J1VOBnI3+ASo0 KeZ/AeW9zKr8PDWpaSwS+RTFyzw4YbKessih6VhLLAO9TFisdiPRdSSS5 x8bkQ8dxjq9RSXECUuG7SZ5yhz5goPgAT0ZLM9rgeCLSoXQ5ObydugQZv A==; X-CSE-ConnectionGUID: ga23n61zRfWxkmqBEgdt7w== X-CSE-MsgGUID: 81gyF54vRUir+zO2oLLDrA== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859702" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859702" Received: from 
fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:31 -0800 X-CSE-ConnectionGUID: sUUIA7wTTtWK1/4hzW7Z1Q== X-CSE-MsgGUID: IKjM3BetSHOby7BRvbWF3Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122405" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:26 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 11/12] jump_label: export static_key_slow_{inc,dec}_cpuslocked() Date: Wed, 11 Dec 2024 18:26:48 +0100 Message-ID: <20241211172649.761483-12-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org Sometimes, there's a need to modify a lot of static keys or modify the same key multiple times in a loop. In that case, it seems more optimal to lock cpu_read_lock once and then call _cpuslocked() variants. The enable/disable functions are already exported, the refcounted counterparts however are not. Fix that to allow modules to save some cycles. 
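
A short, hedged sketch of the kind of module code this enables (the function and the keys array are made up for illustration; the point is taking the CPU hotplug read lock once around the whole loop rather than once per increment):

    #include <linux/cpu.h>
    #include <linux/jump_label.h>

    /* Hypothetical example: bump one static key per queue under a single
     * cpus_read_lock() section.
     */
    static void example_enable_queue_keys(struct static_key *keys,
                                          unsigned int nr)
    {
            cpus_read_lock();
            for (unsigned int i = 0; i < nr; i++)
                    /* return value / error handling elided for brevity */
                    static_key_slow_inc_cpuslocked(&keys[i]);
            cpus_read_unlock();
    }

Teardown would mirror this with static_key_slow_dec_cpuslocked() under the same lock.
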
Signed-off-by: Alexander Lobakin --- kernel/jump_label.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/kernel/jump_label.c b/kernel/jump_label.c index 93a822d3c468..1034c0348995 100644 --- a/kernel/jump_label.c +++ b/kernel/jump_label.c @@ -182,6 +182,7 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key) } return true; } +EXPORT_SYMBOL_GPL(static_key_slow_inc_cpuslocked); bool static_key_slow_inc(struct static_key *key) { @@ -342,6 +343,7 @@ void static_key_slow_dec_cpuslocked(struct static_key *key) STATIC_KEY_CHECK_USE(key); __static_key_slow_dec_cpuslocked(key); } +EXPORT_SYMBOL_GPL(static_key_slow_dec_cpuslocked); void __static_key_slow_dec_deferred(struct static_key *key, struct delayed_work *work, From patchwork Wed Dec 11 17:26:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13904007 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B68E4225A5E; Wed, 11 Dec 2024 17:29:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=192.198.163.8 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938177; cv=none; b=f81sewMlP75qWBPYCnG/9Bg/Ue8FeGf7SKYWBSG0qkivwETeEXD3ov8PoagsL/JpvOMlckw+PjN0nHPML8FY8P/SEiBxDOtQZd/PUPjeeCVHTKnndoxLTzztNEq6nhhStro2gm2kFUnabDe8Sz+KAWqYL2oUzE/9UHbsvQBTLe0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1733938177; c=relaxed/simple; bh=KHzL8QEqHZUQrHH+gxbt09/SaI0VfgXWHYbRV+8/Ixg=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=IVN5CKo7M2xPztWU9S62pr6cjAUGUY86dfHJX4MVn9wt/YP/Yx2jtLT7UJX5hbLpE9dzcbpA3zX4tA8ZSWw9pFCWWZUoYFEmiMvCT2vfWIzikvAlK7cQptFeYDjyRufrBGAFtTXr3NRDTMyf6unlMYOgXCbyjSeMXuDrPAxcs0w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=HMbpwB7a; arc=none smtp.client-ip=192.198.163.8 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="HMbpwB7a" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1733938176; x=1765474176; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KHzL8QEqHZUQrHH+gxbt09/SaI0VfgXWHYbRV+8/Ixg=; b=HMbpwB7aBw3LZ+PRKhaaJ0+qGR3IccXJPC5HHc1wqHPGy6GCVA/p5qO+ Pwn8szHGVW17QhM4FS8Q/MXvMSFS3YLzqMmrA9kGDvtpR0EeEv1IfA8Zq queMdkIvmIUuklpAfBeHTVrWNDSTQ+SMTRIlS4F0gvMbQOhEj+S/09SnQ FJ9Ifm1kfd4OXSg5kS2stgy+CnNCMCWA6/W6UfIw78/p7Mx/nrOcaLY4y viMhDp1R0nCbau+TllnTtZQ+ZFUtJWIxyS+RTsNt+Jx26RA2UgxBTVWvk W4bATmxJgKgiVB7XsnJ3fThwI7PrJ1gy2nMTX+18zJvBGYuBA5+aQYZDv A==; X-CSE-ConnectionGUID: GoCxghfLRTuxiuEut/f1eQ== X-CSE-MsgGUID: Te8BwVVyQB24o+ivG7F7JA== X-IronPort-AV: E=McAfee;i="6700,10204,11283"; a="51859718" X-IronPort-AV: E=Sophos;i="6.12,226,1728975600"; d="scan'208";a="51859718" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by fmvoesa102.fm.intel.com with 
ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 11 Dec 2024 09:29:35 -0800 X-CSE-ConnectionGUID: zG45zWfkTGWCrfVGa+iRjw== X-CSE-MsgGUID: gsX/4hw6SW2Y9O8yYzyNnQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,224,1728975600"; d="scan'208";a="119122423" Received: from newjersey.igk.intel.com ([10.102.20.203]) by fmviesa002.fm.intel.com with ESMTP; 11 Dec 2024 09:29:30 -0800 From: Alexander Lobakin To: Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Alexei Starovoitov , Daniel Borkmann , John Fastabend , Andrii Nakryiko , Peter Zijlstra , Josh Poimboeuf , "Jose E. Marchesi" , =?utf-8?q?Toke_H=C3=B8iland-?= =?utf-8?q?J=C3=B8rgensen?= , Magnus Karlsson , Maciej Fijalkowski , Przemek Kitszel , Jason Baron , Casey Schaufler , Nathan Chancellor , nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH net-next 12/12] unroll: add generic loop unroll helpers Date: Wed, 11 Dec 2024 18:26:49 +0100 Message-ID: <20241211172649.761483-13-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.47.1 In-Reply-To: <20241211172649.761483-1-aleksander.lobakin@intel.com> References: <20241211172649.761483-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org There are cases when we need to explicitly unroll loops. For example, cache operations, filling DMA descriptors on very high speeds etc. Add compiler-specific attribute macros to give the compiler a hint that we'd like to unroll a loop. Example usage: #define UNROLL_BATCH 8 unrolled_count(UNROLL_BATCH) for (u32 i = 0; i < UNROLL_BATCH; i++) op(priv, i); Note that sometimes the compilers won't unroll loops if they think this would have worse optimization and perf than without unrolling, and that unroll attributes are available only starting GCC 8. For older compiler versions, no hints/attributes will be applied. For better unrolling/parallelization, don't have any variables that interfere between iterations except for the iterator itself. Co-developed-by: Jose E. Marchesi # pragmas Signed-off-by: Jose E. Marchesi Reviewed-by: Przemek Kitszel Signed-off-by: Alexander Lobakin --- include/linux/unroll.h | 44 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 44 insertions(+) diff --git a/include/linux/unroll.h b/include/linux/unroll.h index d42fd6366373..69b6ea74d7c1 100644 --- a/include/linux/unroll.h +++ b/include/linux/unroll.h @@ -9,6 +9,50 @@ #include +#ifdef CONFIG_CC_IS_CLANG +#define __pick_unrolled(x, y) _Pragma(#x) +#elif CONFIG_GCC_VERSION >= 80000 +#define __pick_unrolled(x, y) _Pragma(#y) +#else +#define __pick_unrolled(x, y) /* not supported */ +#endif + +/** + * unrolled - loop attributes to ask the compiler to unroll it + * + * Usage: + * + * #define BATCH 8 + * + * unrolled_count(BATCH) + * for (u32 i = 0; i < BATCH; i++) + * // loop body without cross-iteration dependencies + * + * This is only a hint and the compiler is free to disable unrolling if it + * thinks the count is suboptimal and may hurt performance and/or hugely + * increase object code size. + * Not having any cross-iteration dependencies (i.e. when iter x + 1 depends + * on what iter x will do with variables) is not a strict requirement, but + * provides best performance and object code size. + * Available only on Clang and GCC 8.x onwards. 
+ */ + +/* Ask the compiler to pick an optimal unroll count, Clang only */ +#define unrolled \ + __pick_unrolled(clang loop unroll(enable), /* nothing */) + +/* Unroll each @n iterations of a loop */ +#define unrolled_count(n) \ + __pick_unrolled(clang loop unroll_count(n), GCC unroll n) + +/* Unroll the whole loop */ +#define unrolled_full \ + __pick_unrolled(clang loop unroll(full), GCC unroll 65534) + +/* Never unroll a loop */ +#define unrolled_none \ + __pick_unrolled(clang loop unroll(disable), GCC unroll 1) + #define UNROLL(N, MACRO, args...) CONCATENATE(__UNROLL_, N)(MACRO, args) #define __UNROLL_0(MACRO, args...)