From patchwork Sat Feb 22 18:30:54 2025
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 13986806
X-Patchwork-Delegate: bpf@iogearbox.net
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, john.fastabend@gmail.com, jakub@cloudflare.com,
    zhoufeng.zf@bytedance.com, zijianzhang@bytedance.com, Cong Wang
Subject: [Patch bpf-next 1/4] skmsg: rename sk_msg_alloc() to sk_msg_expand()
Date: Sat, 22 Feb 2025 10:30:54 -0800
Message-Id: <20250222183057.800800-2-xiyou.wangcong@gmail.com>
In-Reply-To: <20250222183057.800800-1-xiyou.wangcong@gmail.com>
References: <20250222183057.800800-1-xiyou.wangcong@gmail.com>

From: Cong Wang

The name sk_msg_alloc() is misleading: the function does not allocate a
struct sk_msg at all, it simply refills the socket page frags. Rename it
to sk_msg_expand() to better reflect what it actually does.

Signed-off-by: Cong Wang
---
 include/linux/skmsg.h | 4 ++--
 net/core/skmsg.c      | 6 +++---
 net/ipv4/tcp_bpf.c    | 2 +-
 net/tls/tls_sw.c      | 6 +++---
 net/xfrm/espintcp.c   | 2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 0b9095a281b8..d6f0a8cd73c4 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -121,8 +121,8 @@ struct sk_psock {
 	struct rcu_work			rwork;
 };
 
-int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
-		 int elem_first_coalesce);
+int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
+		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
 		 u32 off, u32 len);
 void sk_msg_trim(struct sock *sk, struct sk_msg *msg, int len);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 0ddc4c718833..4695cbd9c16f 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -24,8 +24,8 @@ static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 	return false;
 }
 
-int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
-		 int elem_first_coalesce)
+int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
+		  int elem_first_coalesce)
 {
 	struct page_frag *pfrag = sk_page_frag(sk);
 	u32 osize = msg->sg.size;
@@ -82,7 +82,7 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
 	sk_msg_trim(sk, msg, osize);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(sk_msg_alloc);
+EXPORT_SYMBOL_GPL(sk_msg_expand);
 
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
 		 u32 off, u32 len)
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index ba581785adb4..85b64ffc20c6 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -530,7 +530,7 @@ static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 		}
 
 		osize = msg_tx->sg.size;
-		err = sk_msg_alloc(sk, msg_tx, msg_tx->sg.size + copy, msg_tx->sg.end - 1);
+		err = sk_msg_expand(sk, msg_tx, msg_tx->sg.size + copy, msg_tx->sg.end - 1);
 		if (err) {
 			if (err != -ENOSPC)
 				goto wait_for_memory;
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 914d4e1516a3..338b373c8fc5 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -324,7 +324,7 @@ static int tls_alloc_encrypted_msg(struct sock *sk, int len)
 	struct tls_rec *rec = ctx->open_rec;
 	struct sk_msg *msg_en = &rec->msg_encrypted;
 
-	return sk_msg_alloc(sk, msg_en, len, 0);
+	return sk_msg_expand(sk, msg_en, len, 0);
 }
 
 static int tls_clone_plaintext_msg(struct sock *sk, int required)
@@ -619,8 +619,8 @@ static int tls_split_open_record(struct sock *sk, struct tls_rec *from,
 	new = tls_get_rec(sk);
 	if (!new)
 		return -ENOMEM;
-	ret = sk_msg_alloc(sk, &new->msg_encrypted, msg_opl->sg.size +
-			   tx_overhead_size, 0);
+	ret = sk_msg_expand(sk, &new->msg_encrypted, msg_opl->sg.size +
+			    tx_overhead_size, 0);
 	if (ret < 0) {
 		tls_free_rec(sk, new);
 		return ret;
diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
index fe82e2d07300..4fd03edb4497 100644
--- a/net/xfrm/espintcp.c
+++ b/net/xfrm/espintcp.c
@@ -351,7 +351,7 @@ static int espintcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	sk_msg_init(&emsg->skmsg);
 	while (1) {
 		/* only -ENOMEM is possible since we don't coalesce */
-		err = sk_msg_alloc(sk, &emsg->skmsg, msglen, 0);
+		err = sk_msg_expand(sk, &emsg->skmsg, msglen, 0);
 		if (!err)
 			break;
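For orientation, the tcp_bpf_sendmsg() hunk above shows the typical
grow-then-copy pattern the renamed helper serves. A minimal sketch of such a
caller follows (example_append() is a hypothetical function with simplified
error handling; sk_msg_expand() and sk_msg_memcopy_from_iter() are the real
kernel APIs):

/* Grow msg_tx's scatterlist by "copy" bytes, then fill it from the
 * sendmsg iterator. sk_msg_expand() only refills page frags; it never
 * allocates the struct sk_msg itself -- hence the rename.
 */
static int example_append(struct sock *sk, struct sk_msg *msg_tx,
			  struct msghdr *msg, u32 copy)
{
	int err;

	err = sk_msg_expand(sk, msg_tx, msg_tx->sg.size + copy,
			    msg_tx->sg.end - 1);
	if (err)
		return err;	/* e.g. -ENOSPC, caller may wait for memory */

	return sk_msg_memcopy_from_iter(sk, &msg->msg_iter, msg_tx, copy);
}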
From patchwork Sat Feb 22 18:30:55 2025
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 13986807
X-Patchwork-Delegate: bpf@iogearbox.net
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, john.fastabend@gmail.com, jakub@cloudflare.com,
    zhoufeng.zf@bytedance.com, zijianzhang@bytedance.com, Cong Wang
Subject: [Patch bpf-next 2/4] skmsg: implement slab allocator cache for sk_msg
Date: Sat, 22 Feb 2025 10:30:55 -0800
Message-Id: <20250222183057.800800-3-xiyou.wangcong@gmail.com>
In-Reply-To: <20250222183057.800800-1-xiyou.wangcong@gmail.com>
References: <20250222183057.800800-1-xiyou.wangcong@gmail.com>

From: Zijian Zhang

Optimizing redirect ingress performance requires frequent allocation and
deallocation of sk_msg structures. Introduce a dedicated kmem_cache for
sk_msg to reduce memory allocation overhead and improve performance.
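The kmem_cache pattern adopted here is the standard slab idiom: create one
cache for a fixed-size object and serve all hot-path allocations from it,
which is cheaper than kzalloc()/kfree(). A minimal sketch of that idiom as
this patch applies it (the example_* names are hypothetical; the patch wires
the same calls up for struct sk_msg in the diff below):

/* Sketch of a dedicated object cache: per-CPU slabs of one fixed size. */
static struct kmem_cache *example_cachep;

static int __init example_init(void)
{
	example_cachep = kmem_cache_create("example_cachep",
					   sizeof(struct sk_msg), 0,
					   SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT,
					   NULL);
	return example_cachep ? 0 : -ENOMEM;
}

static struct sk_msg *example_alloc(gfp_t gfp)
{
	/* zeroed allocation, mirroring kmem_cache_zalloc() in the patch */
	return kmem_cache_zalloc(example_cachep, gfp | __GFP_NOWARN);
}

static void example_free(struct sk_msg *msg)
{
	kmem_cache_free(example_cachep, msg);
}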
Reviewed-by: Cong Wang
Signed-off-by: Zijian Zhang
---
 include/linux/skmsg.h | 21 ++++++++++++---------
 net/core/skmsg.c      | 28 +++++++++++++++++++++-------
 net/ipv4/tcp_bpf.c    |  5 ++---
 3 files changed, 35 insertions(+), 19 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index d6f0a8cd73c4..bf28ce9b5fdb 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -121,6 +121,7 @@ struct sk_psock {
 	struct rcu_work			rwork;
 };
 
+struct sk_msg *sk_msg_alloc(gfp_t gfp);
 int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
 		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
@@ -143,6 +144,8 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		   int len, int flags);
 bool sk_msg_is_readable(struct sock *sk);
 
+extern struct kmem_cache *sk_msg_cachep;
+
 static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
 {
 	WARN_ON(i == msg->sg.end && bytes);
@@ -319,6 +322,13 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
 	kfree_skb(skb);
 }
 
+static inline void kfree_sk_msg(struct sk_msg *msg)
+{
+	if (msg->skb)
+		consume_skb(msg->skb);
+	kmem_cache_free(sk_msg_cachep, msg);
+}
+
 static inline bool sk_psock_queue_msg(struct sk_psock *psock,
 				      struct sk_msg *msg)
 {
@@ -330,7 +340,7 @@ static inline bool sk_psock_queue_msg(struct sk_psock *psock,
 		ret = true;
 	} else {
 		sk_msg_free(psock->sk, msg);
-		kfree(msg);
+		kfree_sk_msg(msg);
 		ret = false;
 	}
 	spin_unlock_bh(&psock->ingress_lock);
@@ -378,13 +388,6 @@ static inline bool sk_psock_queue_empty(const struct sk_psock *psock)
 	return psock ? list_empty(&psock->ingress_msg) : true;
 }
 
-static inline void kfree_sk_msg(struct sk_msg *msg)
-{
-	if (msg->skb)
-		consume_skb(msg->skb);
-	kfree(msg);
-}
-
 static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 {
 	struct sock *sk = psock->sk;
@@ -441,7 +444,7 @@ static inline void sk_psock_cork_free(struct sk_psock *psock)
 {
 	if (psock->cork) {
 		sk_msg_free(psock->sk, psock->cork);
-		kfree(psock->cork);
+		kfree_sk_msg(psock->cork);
 		psock->cork = NULL;
 	}
 }
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 4695cbd9c16f..25c53c8c9857 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -10,6 +10,8 @@
 #include
 #include
 
+struct kmem_cache *sk_msg_cachep;
+
 static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 {
 	if (msg->sg.end > msg->sg.start &&
@@ -503,16 +505,17 @@ bool sk_msg_is_readable(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_msg_is_readable);
 
-static struct sk_msg *alloc_sk_msg(gfp_t gfp)
+struct sk_msg *sk_msg_alloc(gfp_t gfp)
 {
 	struct sk_msg *msg;
 
-	msg = kzalloc(sizeof(*msg), gfp | __GFP_NOWARN);
+	msg = kmem_cache_zalloc(sk_msg_cachep, gfp | __GFP_NOWARN);
 	if (unlikely(!msg))
 		return NULL;
 	sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
 	return msg;
 }
+EXPORT_SYMBOL_GPL(sk_msg_alloc);
 
 static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
 						  struct sk_buff *skb)
@@ -523,7 +526,7 @@ static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
 	if (!sk_rmem_schedule(sk, skb, skb->truesize))
 		return NULL;
 
-	return alloc_sk_msg(GFP_KERNEL);
+	return sk_msg_alloc(GFP_KERNEL);
 }
 
 static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
@@ -592,7 +595,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
 	skb_set_owner_r(skb, sk);
 	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
 	if (err < 0)
-		kfree(msg);
+		kfree_sk_msg(msg);
 	return err;
 }
 
@@ -603,7 +606,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
 static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
 				     u32 off, u32 len)
 {
-	struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
+	struct sk_msg *msg = sk_msg_alloc(GFP_ATOMIC);
 	struct sock *sk = psock->sk;
 	int err;
 
@@ -612,7 +615,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
 	skb_set_owner_r(skb, sk);
 	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
 	if (err < 0)
-		kfree(msg);
+		kfree_sk_msg(msg);
 	return err;
 }
 
@@ -781,7 +784,7 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
 		if (!msg->skb)
 			atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc);
 		sk_msg_free(psock->sk, msg);
-		kfree(msg);
+		kfree_sk_msg(msg);
 	}
 }
 
@@ -1266,3 +1269,14 @@ void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock)
 	sk->sk_data_ready = psock->saved_data_ready;
 	psock->saved_data_ready = NULL;
 }
+
+static int __init sk_msg_cachep_init(void)
+{
+	sk_msg_cachep = kmem_cache_create("sk_msg_cachep",
+					  sizeof(struct sk_msg),
+					  0,
+					  SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT,
+					  NULL);
+	return 0;
+}
+late_initcall(sk_msg_cachep_init);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 85b64ffc20c6..f0ef41c951e2 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -38,7 +38,7 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 	struct sk_msg *tmp;
 	int i, ret = 0;
 
-	tmp = kzalloc(sizeof(*tmp), __GFP_NOWARN | GFP_KERNEL);
+	tmp = sk_msg_alloc(GFP_KERNEL);
 	if (unlikely(!tmp))
 		return -ENOMEM;
 
@@ -406,8 +406,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 	    msg->cork_bytes > msg->sg.size && !enospc) {
 		psock->cork_bytes = msg->cork_bytes - msg->sg.size;
 		if (!psock->cork) {
-			psock->cork = kzalloc(sizeof(*psock->cork),
-					      GFP_ATOMIC | __GFP_NOWARN);
+			psock->cork = sk_msg_alloc(GFP_ATOMIC);
 			if (!psock->cork)
 				return -ENOMEM;
 		}
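Note the invariant the patch establishes: every sk_msg now comes from
sk_msg_cachep via sk_msg_alloc(), so every free site must go through
kfree_sk_msg() rather than a bare kfree(). A minimal sketch of the paired
lifecycle (example_roundtrip() is hypothetical; sk_msg_alloc(), sk_msg_free()
and kfree_sk_msg() are the real APIs from this patch):

/* Allocate from the cache, then release: sk_msg_free() drops the pages
 * and memory accounting, kfree_sk_msg() consumes any attached skb and
 * returns the object to sk_msg_cachep instead of the general heap.
 */
static int example_roundtrip(struct sock *sk)
{
	struct sk_msg *msg = sk_msg_alloc(GFP_KERNEL);

	if (unlikely(!msg))
		return -ENOMEM;

	sk_msg_free(sk, msg);
	kfree_sk_msg(msg);
	return 0;
}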
From patchwork Sat Feb 22 18:30:56 2025
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 13986808
X-Patchwork-Delegate: bpf@iogearbox.net
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, john.fastabend@gmail.com, jakub@cloudflare.com,
    zhoufeng.zf@bytedance.com, zijianzhang@bytedance.com, Cong Wang
Subject: [Patch bpf-next 3/4] skmsg: use bitfields for struct sk_psock
Date: Sat, 22 Feb 2025 10:30:56 -0800
Message-Id: <20250222183057.800800-4-xiyou.wangcong@gmail.com>
In-Reply-To: <20250222183057.800800-1-xiyou.wangcong@gmail.com>
References: <20250222183057.800800-1-xiyou.wangcong@gmail.com>

From: Cong Wang

psock->eval can only take 4 possible values, so an 8-bit field is
sufficient. psock->redir_ingress is just a boolean, so a single bit is
enough.
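To see the packing effect, here is a small self-contained illustration
(userspace C with the field types reduced to the essentials; exact layout is
compiler- and ABI-dependent, so the printed sizes are typical, not
guaranteed):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct before {
	uint32_t eval;		/* was u32 */
	bool     redir_ingress;	/* was bool */
};

struct after {
	unsigned int eval : 8;
	unsigned int redir_ingress : 1;
};

int main(void)
{
	/* Typically prints 8 then 4: both fields now share one word. */
	printf("%zu\n", sizeof(struct before));
	printf("%zu\n", sizeof(struct after));
	return 0;
}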
Signed-off-by: Cong Wang
---
 include/linux/skmsg.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index bf28ce9b5fdb..beaf79b2b68b 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -85,8 +85,8 @@ struct sk_psock {
 	struct sock			*sk_redir;
 	u32				apply_bytes;
 	u32				cork_bytes;
-	u32				eval;
-	bool				redir_ingress; /* undefined if sk_redir is null */
+	unsigned int			eval : 8;
+	unsigned int			redir_ingress : 1; /* undefined if sk_redir is null */
 	struct sk_msg			*cork;
 	struct sk_psock_progs		progs;
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)

From patchwork Sat Feb 22 18:30:57 2025
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 13986809
X-Patchwork-Delegate: bpf@iogearbox.net
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, john.fastabend@gmail.com, jakub@cloudflare.com,
    zhoufeng.zf@bytedance.com, zijianzhang@bytedance.com, Amery Hung,
    Cong Wang
Subject: [Patch bpf-next 4/4] tcp_bpf: improve ingress redirection performance with message corking
Date: Sat, 22 Feb 2025 10:30:57 -0800
Message-Id: <20250222183057.800800-5-xiyou.wangcong@gmail.com>
In-Reply-To: <20250222183057.800800-1-xiyou.wangcong@gmail.com>
References: <20250222183057.800800-1-xiyou.wangcong@gmail.com>

From: Zijian Zhang

The TCP_BPF ingress redirection path currently lacks the message corking
mechanism found in standard TCP. This causes the sender to wake up the
receiver for every message, even when messages are small, resulting in
reduced throughput compared to regular TCP in certain scenarios.

This change introduces a kernel worker-based intermediate layer to provide
automatic message corking for TCP_BPF. While this adds a slight latency
overhead, it significantly improves overall throughput by reducing
unnecessary wake-ups and reducing sock lock contention.
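The corking policy described above boils down to one Nagle-style decision:
notify the peer now, or let the backlog worker batch a little longer. A
condensed sketch of that decision, paraphrased from tcp_bpf_ingress_backlog()
in the diff below (flush_now() is a hypothetical name; the real code also
handles locking and reference counting):

/* Notify the peer immediately only when waiting cannot help: the write
 * buffer is exhausted, a GSO-sized batch (TCP_BPF_GSO_SIZE = 64 KB) has
 * accumulated, the peer's ingress queue was empty, or Nagle is off.
 * Otherwise defer to the backlog worker so small messages coalesce.
 */
static bool flush_now(bool mem_exhausted, u32 tot_size,
		      bool ingress_empty, bool nodelay)
{
	return mem_exhausted ||
	       tot_size >= 65536 /* TCP_BPF_GSO_SIZE */ ||
	       ingress_empty ||
	       nodelay;
}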
Reviewed-by: Amery Hung
Co-developed-by: Cong Wang
Signed-off-by: Cong Wang
Signed-off-by: Zijian Zhang
---
 include/linux/skmsg.h |  19 ++++
 net/core/skmsg.c      | 139 ++++++++++++++++++++++++++++-
 net/ipv4/tcp_bpf.c    | 197 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 347 insertions(+), 8 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index beaf79b2b68b..c6e0da4044db 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -15,6 +15,8 @@
 #define MAX_MSG_FRAGS			MAX_SKB_FRAGS
 #define NR_MSG_FRAG_IDS			(MAX_MSG_FRAGS + 1)
 
+/* GSO size for TCP BPF backlog processing */
+#define TCP_BPF_GSO_SIZE		65536
 
 enum __sk_action {
 	__SK_DROP = 0,
@@ -85,8 +87,10 @@ struct sk_psock {
 	struct sock			*sk_redir;
 	u32				apply_bytes;
 	u32				cork_bytes;
+	u32				backlog_since_notify;
 	unsigned int			eval : 8;
 	unsigned int			redir_ingress : 1; /* undefined if sk_redir is null */
+	unsigned int			backlog_work_delayed : 1;
 	struct sk_msg			*cork;
 	struct sk_psock_progs		progs;
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
@@ -97,6 +101,9 @@ struct sk_psock {
 	struct sk_buff_head		ingress_skb;
 	struct list_head		ingress_msg;
 	spinlock_t			ingress_lock;
+	struct list_head		backlog_msg;
+	/* spin_lock for backlog_msg and backlog_since_notify */
+	spinlock_t			backlog_msg_lock;
 	unsigned long			state;
 	struct list_head		link;
 	spinlock_t			link_lock;
@@ -117,11 +124,13 @@ struct sk_psock {
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
 	struct delayed_work		work;
+	struct delayed_work		backlog_work;
 	struct sock			*sk_pair;
 	struct rcu_work			rwork;
 };
 
 struct sk_msg *sk_msg_alloc(gfp_t gfp);
+bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce);
 int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
 		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
@@ -396,9 +405,19 @@ static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 	sk_error_report(sk);
 }
 
+void sk_psock_backlog_msg(struct sk_psock *psock);
 struct sk_psock *sk_psock_init(struct sock *sk, int node);
 void sk_psock_stop(struct sk_psock *psock);
 
+static inline void sk_psock_run_backlog_work(struct sk_psock *psock,
+					     bool delayed)
+{
+	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+		return;
+	psock->backlog_work_delayed = delayed;
+	schedule_delayed_work(&psock->backlog_work, delayed ? 1 : 0);
+}
+
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
 int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock);
 void sk_psock_start_strp(struct sock *sk, struct sk_psock *psock);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 25c53c8c9857..32507163fd2d 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -12,7 +12,7 @@
 
 struct kmem_cache *sk_msg_cachep;
 
-static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
+bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 {
 	if (msg->sg.end > msg->sg.start &&
 	    elem_first_coalesce < msg->sg.end)
@@ -707,6 +707,118 @@ static void sk_psock_backlog(struct work_struct *work)
 	mutex_unlock(&psock->work_mutex);
 }
 
+static bool backlog_notify(struct sk_psock *psock, bool m_sched_failed,
+			   bool ingress_empty)
+{
+	/* Notify if:
+	 * 1. We have corked enough bytes
+	 * 2. We have already delayed notification
+	 * 3. Memory allocation failed
+	 * 4. Ingress queue was empty and we're about to add data
+	 */
+	return psock->backlog_since_notify >= TCP_BPF_GSO_SIZE ||
+	       psock->backlog_work_delayed ||
+	       m_sched_failed ||
+	       ingress_empty;
+}
+
+static bool backlog_xfer_to_local(struct sk_psock *psock, struct sock *sk_from,
+				  struct list_head *local_head, u32 *tot_size)
+{
+	struct sock *sk = psock->sk;
+	struct sk_msg *msg, *tmp;
+	u32 size = 0;
+
+	list_for_each_entry_safe(msg, tmp, &psock->backlog_msg, list) {
+		if (msg->sk != sk_from)
+			break;
+
+		if (!__sk_rmem_schedule(sk, msg->sg.size, false))
+			return true;
+
+		list_move_tail(&msg->list, local_head);
+		sk_wmem_queued_add(msg->sk, -msg->sg.size);
+		sock_put(msg->sk);
+		msg->sk = NULL;
+		psock->backlog_since_notify += msg->sg.size;
+		size += msg->sg.size;
+	}
+
+	*tot_size = size;
+	return false;
+}
+
+/* This function handles the transfer of backlogged messages from the sender
+ * backlog queue to the ingress queue of the peer socket. Notification of data
+ * availability will be sent under some conditions.
+ */
+void sk_psock_backlog_msg(struct sk_psock *psock)
+{
+	bool rmem_schedule_failed = false;
+	struct sock *sk_from = NULL;
+	struct sock *sk = psock->sk;
+	LIST_HEAD(local_head);
+	struct sk_msg *msg;
+	bool should_notify;
+	u32 tot_size = 0;
+
+	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+		return;
+
+	lock_sock(sk);
+	spin_lock(&psock->backlog_msg_lock);
+
+	msg = list_first_entry_or_null(&psock->backlog_msg,
+				       struct sk_msg, list);
+	if (!msg) {
+		should_notify = !list_empty(&psock->ingress_msg);
+		spin_unlock(&psock->backlog_msg_lock);
+		goto notify;
+	}
+
+	sk_from = msg->sk;
+	sock_hold(sk_from);
+
+	rmem_schedule_failed = backlog_xfer_to_local(psock, sk_from,
+						     &local_head, &tot_size);
+	should_notify = backlog_notify(psock, rmem_schedule_failed,
+				       list_empty(&psock->ingress_msg));
+	spin_unlock(&psock->backlog_msg_lock);
+
+	spin_lock_bh(&psock->ingress_lock);
+	list_splice_tail_init(&local_head, &psock->ingress_msg);
+	spin_unlock_bh(&psock->ingress_lock);
+
+	atomic_add(tot_size, &sk->sk_rmem_alloc);
+	sk_mem_charge(sk, tot_size);
+
+notify:
+	if (should_notify) {
+		psock->backlog_since_notify = 0;
+		sk_psock_data_ready(sk, psock);
+		if (!list_empty(&psock->backlog_msg))
+			sk_psock_run_backlog_work(psock, rmem_schedule_failed);
+	} else {
+		sk_psock_run_backlog_work(psock, true);
+	}
+	release_sock(sk);
+
+	if (sk_from) {
+		bool slow = lock_sock_fast(sk_from);
+
+		sk_mem_uncharge(sk_from, tot_size);
+		unlock_sock_fast(sk_from, slow);
+		sock_put(sk_from);
+	}
+}
+
+static void sk_psock_backlog_msg_work(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+
+	sk_psock_backlog_msg(container_of(dwork, struct sk_psock, backlog_work));
+}
+
 struct sk_psock *sk_psock_init(struct sock *sk, int node)
 {
 	struct sk_psock *psock;
@@ -744,8 +856,11 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 
 	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
+	INIT_DELAYED_WORK(&psock->backlog_work, sk_psock_backlog_msg_work);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
+	INIT_LIST_HEAD(&psock->backlog_msg);
+	spin_lock_init(&psock->backlog_msg_lock);
 	skb_queue_head_init(&psock->ingress_skb);
 
 	sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED);
@@ -799,6 +914,26 @@ static void __sk_psock_zap_ingress(struct sk_psock *psock)
 	__sk_psock_purge_ingress_msg(psock);
 }
 
+static void __sk_psock_purge_backlog_msg(struct sk_psock *psock)
+{
+	struct sk_msg *msg, *tmp;
+
+	spin_lock(&psock->backlog_msg_lock);
+	list_for_each_entry_safe(msg, tmp, &psock->backlog_msg, list) {
+		struct sock *sk_from = msg->sk;
+		bool slow;
+
+		list_del(&msg->list);
+		slow = lock_sock_fast(sk_from);
+		sk_wmem_queued_add(sk_from, -msg->sg.size);
+		sock_put(sk_from);
+		sk_msg_free(sk_from, msg);
+		unlock_sock_fast(sk_from, slow);
+		kfree_sk_msg(msg);
+	}
+	spin_unlock(&psock->backlog_msg_lock);
+}
+
 static void sk_psock_link_destroy(struct sk_psock *psock)
 {
 	struct sk_psock_link *link, *tmp;
@@ -828,7 +963,9 @@ static void sk_psock_destroy(struct work_struct *work)
 	sk_psock_done_strp(psock);
 
 	cancel_delayed_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->backlog_work);
 	__sk_psock_zap_ingress(psock);
+	__sk_psock_purge_backlog_msg(psock);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index f0ef41c951e2..82d437210f6f 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -381,6 +381,183 @@ static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	return ret;
 }
 
+static int tcp_bpf_coalesce_msg(struct sk_msg *last, struct sk_msg *msg,
+				u32 *apply_bytes_ptr, u32 *tot_size)
+{
+	struct scatterlist *sge_from, *sge_to;
+	u32 apply_bytes = *apply_bytes_ptr;
+	bool apply = apply_bytes;
+	int i = msg->sg.start;
+	u32 size;
+
+	while (i != msg->sg.end) {
+		int last_sge_idx = last->sg.end;
+
+		if (sk_msg_full(last))
+			break;
+
+		sge_from = sk_msg_elem(msg, i);
+		sk_msg_iter_var_prev(last_sge_idx);
+		sge_to = &last->sg.data[last_sge_idx];
+
+		size = (apply && apply_bytes < sge_from->length) ?
+			apply_bytes : sge_from->length;
+		if (sk_msg_try_coalesce_ok(last, last_sge_idx) &&
+		    sg_page(sge_to) == sg_page(sge_from) &&
+		    sge_to->offset + sge_to->length == sge_from->offset) {
+			sge_to->length += size;
+		} else {
+			sge_to = &last->sg.data[last->sg.end];
+			sg_unmark_end(sge_to);
+			sg_set_page(sge_to, sg_page(sge_from), size,
+				    sge_from->offset);
+			get_page(sg_page(sge_to));
+			sk_msg_iter_next(last, end);
+		}
+
+		sge_from->length -= size;
+		sge_from->offset += size;
+
+		if (sge_from->length == 0) {
+			put_page(sg_page(sge_to));
+			sk_msg_iter_var_next(i);
+		}
+
+		msg->sg.size -= size;
+		last->sg.size += size;
+		*tot_size += size;
+
+		if (apply) {
+			apply_bytes -= size;
+			if (!apply_bytes)
+				break;
+		}
+	}
+
+	if (apply)
+		*apply_bytes_ptr = apply_bytes;
+
+	msg->sg.start = i;
+	return i;
+}
+
+static void tcp_bpf_xfer_msg(struct sk_msg *dst, struct sk_msg *msg,
+			     u32 *apply_bytes_ptr, u32 *tot_size)
+{
+	u32 apply_bytes = *apply_bytes_ptr;
+	bool apply = apply_bytes;
+	struct scatterlist *sge;
+	int i = msg->sg.start;
+	u32 size;
+
+	do {
+		sge = sk_msg_elem(msg, i);
+		size = (apply && apply_bytes < sge->length) ?
+			apply_bytes : sge->length;
+
+		sk_msg_xfer(dst, msg, i, size);
+		*tot_size += size;
+		if (sge->length)
+			get_page(sk_msg_page(dst, i));
+		sk_msg_iter_var_next(i);
+		dst->sg.end = i;
+		if (apply) {
+			apply_bytes -= size;
+			if (!apply_bytes) {
+				if (sge->length)
+					sk_msg_iter_var_prev(i);
+				break;
+			}
+		}
+	} while (i != msg->sg.end);
+
+	if (apply)
+		*apply_bytes_ptr = apply_bytes;
+	msg->sg.start = i;
+}
+
+static int tcp_bpf_ingress_backlog(struct sock *sk, struct sock *sk_redir,
+				   struct sk_msg *msg, u32 apply_bytes)
+{
+	bool ingress_msg_empty = false;
+	bool apply = apply_bytes;
+	struct sk_psock *psock;
+	struct sk_msg *tmp;
+	u32 tot_size = 0;
+	int ret = 0;
+	u8 nonagle;
+
+	psock = sk_psock_get(sk_redir);
+	if (unlikely(!psock))
+		return -EPIPE;
+
+	spin_lock(&psock->backlog_msg_lock);
+	/* If possible, coalesce the curr sk_msg to the last sk_msg from the
+	 * psock->backlog_msg.
+	 */
+	if (!list_empty(&psock->backlog_msg)) {
+		struct sk_msg *last;
+
+		last = list_last_entry(&psock->backlog_msg, struct sk_msg, list);
+		if (last->sk == sk) {
+			int i = tcp_bpf_coalesce_msg(last, msg, &apply_bytes,
+						     &tot_size);
+
+			if (i == msg->sg.end || (apply && !apply_bytes))
+				goto out_unlock;
+		}
+	}
+
+	/* Otherwise, allocate a new sk_msg and transfer the data from the
+	 * passed in msg to it.
+	 */
+	tmp = sk_msg_alloc(GFP_ATOMIC);
+	if (!tmp) {
+		ret = -ENOMEM;
+		spin_unlock(&psock->backlog_msg_lock);
+		goto error;
+	}
+
+	tmp->sk = sk;
+	sock_hold(tmp->sk);
+	tmp->sg.start = msg->sg.start;
+	tcp_bpf_xfer_msg(tmp, msg, &apply_bytes, &tot_size);
+
+	ingress_msg_empty = list_empty(&psock->ingress_msg);
+	list_add_tail(&tmp->list, &psock->backlog_msg);
+
+out_unlock:
+	spin_unlock(&psock->backlog_msg_lock);
+	sk_wmem_queued_add(sk, tot_size);
+
+	/* At this point, the data has been handled well. If one of the
+	 * following conditions is met, we can notify the peer socket in
+	 * the context of this system call immediately.
+	 * 1. If the write buffer has been used up;
+	 * 2. Or, the message size is larger than TCP_BPF_GSO_SIZE;
+	 * 3. Or, the ingress queue was empty;
+	 * 4. Or, the tcp socket is set to no_delay.
+	 * Otherwise, kick off the backlog work so that we can have some
+	 * time to wait for any incoming messages before sending a
+	 * notification to the peer socket.
+	 */
+	nonagle = tcp_sk(sk)->nonagle;
+	if (!sk_stream_memory_free(sk) ||
+	    tot_size >= TCP_BPF_GSO_SIZE || ingress_msg_empty ||
+	    (!(nonagle & TCP_NAGLE_CORK) && (nonagle & TCP_NAGLE_OFF))) {
+		release_sock(sk);
+		psock->backlog_work_delayed = false;
+		sk_psock_backlog_msg(psock);
+		lock_sock(sk);
+	} else {
+		sk_psock_run_backlog_work(psock, false);
+	}
+
+error:
+	sk_psock_put(sk_redir, psock);
+	return ret;
+}
+
 static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 				struct sk_msg *msg, int *copied, int flags)
 {
@@ -442,18 +619,24 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 			cork = true;
 			psock->cork = NULL;
 		}
-		release_sock(sk);
 
-		origsize = msg->sg.size;
-		ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
-					    msg, tosend, flags);
-		sent = origsize - msg->sg.size;
+		if (redir_ingress) {
+			ret = tcp_bpf_ingress_backlog(sk, sk_redir, msg, tosend);
+		} else {
+			release_sock(sk);
+
+			origsize = msg->sg.size;
+			ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
+						    msg, tosend, flags);
+			sent = origsize - msg->sg.size;
+
+			lock_sock(sk);
+			sk_mem_uncharge(sk, sent);
+		}
 
 		if (eval == __SK_REDIRECT)
 			sock_put(sk_redir);
 
-		lock_sock(sk);
-		sk_mem_uncharge(sk, sent);
 		if (unlikely(ret < 0)) {
 			int free = sk_msg_free(sk, msg);