From patchwork Thu Mar 6 22:02:02 2025
X-Patchwork-Id: 14005386
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, jakub@cloudflare.com, john.fastabend@gmail.com,
    zhoufeng.zf@bytedance.com, Cong Wang
Subject: [Patch bpf-next v2 1/4] skmsg: rename sk_msg_alloc() to sk_msg_expand()
Date: Thu, 6 Mar 2025 14:02:02 -0800
Message-Id: <20250306220205.53753-2-xiyou.wangcong@gmail.com>
In-Reply-To: <20250306220205.53753-1-xiyou.wangcong@gmail.com>
References: <20250306220205.53753-1-xiyou.wangcong@gmail.com>

From: Cong Wang

The name sk_msg_alloc() is misleading: the function does not allocate an
sk_msg at all, it merely refills the socket's page frags. Rename it to
sk_msg_expand() to better reflect what it actually does.

Signed-off-by: Cong Wang
---
 include/linux/skmsg.h | 4 ++--
 net/core/skmsg.c      | 6 +++---
 net/ipv4/tcp_bpf.c    | 2 +-
 net/tls/tls_sw.c      | 6 +++---
 net/xfrm/espintcp.c   | 2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 0b9095a281b8..d6f0a8cd73c4 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -121,8 +121,8 @@ struct sk_psock {
 	struct rcu_work			rwork;
 };
 
-int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
-		 int elem_first_coalesce);
+int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
+		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
 		 u32 off, u32 len);
 void sk_msg_trim(struct sock *sk, struct sk_msg *msg, int len);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 0ddc4c718833..4695cbd9c16f 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -24,8 +24,8 @@ static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 	return false;
 }
 
-int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
-		 int elem_first_coalesce)
+int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
+		  int elem_first_coalesce)
 {
 	struct page_frag *pfrag = sk_page_frag(sk);
 	u32 osize = msg->sg.size;
@@ -82,7 +82,7 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
 	sk_msg_trim(sk, msg, osize);
 	return ret;
 }
-EXPORT_SYMBOL_GPL(sk_msg_alloc);
+EXPORT_SYMBOL_GPL(sk_msg_expand);
 
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
 		 u32 off, u32 len)
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index ba581785adb4..85b64ffc20c6 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -530,7 +530,7 @@ static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 		}
 
 		osize = msg_tx->sg.size;
-		err = sk_msg_alloc(sk, msg_tx, msg_tx->sg.size + copy, msg_tx->sg.end - 1);
+		err = sk_msg_expand(sk, msg_tx, msg_tx->sg.size + copy, msg_tx->sg.end - 1);
 		if (err) {
 			if (err != -ENOSPC)
 				goto wait_for_memory;
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 914d4e1516a3..338b373c8fc5 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -324,7 +324,7 @@ static int tls_alloc_encrypted_msg(struct sock *sk, int len)
 	struct tls_rec *rec = ctx->open_rec;
 	struct sk_msg *msg_en = &rec->msg_encrypted;
 
-	return sk_msg_alloc(sk, msg_en, len, 0);
+	return sk_msg_expand(sk, msg_en, len, 0);
 }
 
 static int tls_clone_plaintext_msg(struct sock *sk, int required)
@@ -619,8 +619,8 @@ static int tls_split_open_record(struct sock *sk, struct tls_rec *from,
 	new = tls_get_rec(sk);
 	if (!new)
 		return -ENOMEM;
-	ret = sk_msg_alloc(sk, &new->msg_encrypted, msg_opl->sg.size +
-			   tx_overhead_size, 0);
+	ret = sk_msg_expand(sk, &new->msg_encrypted, msg_opl->sg.size +
+			    tx_overhead_size, 0);
 	if (ret < 0) {
 		tls_free_rec(sk, new);
 		return ret;
diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
index fe82e2d07300..4fd03edb4497 100644
--- a/net/xfrm/espintcp.c
+++ b/net/xfrm/espintcp.c
@@ -351,7 +351,7 @@ static int espintcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	sk_msg_init(&emsg->skmsg);
 	while (1) {
 		/* only -ENOMEM is possible since we don't coalesce */
-		err = sk_msg_alloc(sk, &emsg->skmsg, msglen, 0);
+		err = sk_msg_expand(sk, &emsg->skmsg, msglen, 0);
 		if (!err)
 			break;
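A note on the renamed helper's contract, since the diff context above is
terse: sk_msg_expand() grows an existing sk_msg until it describes at
least len bytes, drawing memory from the socket's page frags; it never
allocates the sk_msg itself, which is what made the old name misleading.
The following is a minimal userspace sketch of that contract only — the
names, the 17-slot array, and the fixed 4 KiB frag size are illustrative
assumptions, not the kernel implementation:

#include <stdio.h>
#include <stdlib.h>

struct frag { void *buf; size_t off, len; };

struct msg {
	struct frag frags[17];	/* mirrors NR_MSG_FRAG_IDS */
	int end;		/* fragments in use */
	size_t size;		/* total bytes described */
};

/* Expand an existing msg so it covers at least len bytes. */
static int msg_expand(struct msg *m, size_t len)
{
	while (m->size < len) {
		size_t want = len - m->size;
		size_t chunk = want < 4096 ? want : 4096;
		void *p;

		if (m->end == 17)
			return -1;	/* the kernel returns -ENOSPC here */
		p = malloc(chunk);
		if (!p)
			return -1;	/* the kernel returns -ENOMEM here */
		m->frags[m->end].buf = p;
		m->frags[m->end].off = 0;
		m->frags[m->end].len = chunk;
		m->end++;
		m->size += chunk;
	}
	return 0;
}

int main(void)
{
	struct msg m = { 0 };

	if (msg_expand(&m, 10000) == 0)
		printf("frags=%d size=%zu\n", m.end, m.size);
	return 0;
}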
From patchwork Thu Mar 6 22:02:03 2025
X-Patchwork-Id: 14005387
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, jakub@cloudflare.com, john.fastabend@gmail.com,
    zhoufeng.zf@bytedance.com, Zijian Zhang, Cong Wang
Subject: [Patch bpf-next v2 2/4] skmsg: implement slab allocator cache for sk_msg
Date: Thu, 6 Mar 2025 14:02:03 -0800
Message-Id: <20250306220205.53753-3-xiyou.wangcong@gmail.com>
In-Reply-To: <20250306220205.53753-1-xiyou.wangcong@gmail.com>
References: <20250306220205.53753-1-xiyou.wangcong@gmail.com>

From: Zijian Zhang

Optimizing redirect ingress performance requires frequent allocation and
deallocation of sk_msg structures. Introduce a dedicated kmem_cache for
sk_msg to reduce memory allocation overhead and improve performance.
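For context, this is the standard dedicated-cache idiom. Below is a
hedged, minimal sketch of that idiom with placeholder names ("foo" is
illustrative, not from the patch); the patch itself creates
sk_msg_cachep in a late_initcall and allocates with kmem_cache_zalloc(),
as the diff that follows shows:

#include <linux/slab.h>
#include <linux/init.h>
#include <linux/errno.h>

/* "foo" stands in for any small, hot object such as sk_msg. */
struct foo { int a; };

static struct kmem_cache *foo_cachep;

static int __init foo_cache_init(void)
{
	/* A dedicated fixed-size cache amortizes allocator overhead for
	 * objects allocated and freed at high frequency; SLAB_ACCOUNT
	 * charges them to the memcg of the allocating task.
	 */
	foo_cachep = kmem_cache_create("foo_cachep", sizeof(struct foo), 0,
				       SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT, NULL);
	return foo_cachep ? 0 : -ENOMEM;
}
late_initcall(foo_cache_init);

static struct foo *foo_alloc(gfp_t gfp)
{
	/* zalloc mirrors the patch's kmem_cache_zalloc() use */
	return kmem_cache_zalloc(foo_cachep, gfp | __GFP_NOWARN);
}

static void foo_free(struct foo *f)
{
	kmem_cache_free(foo_cachep, f);
}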
Reviewed-by: Cong Wang
Signed-off-by: Zijian Zhang
---
 include/linux/skmsg.h | 21 ++++++++++++---------
 net/core/skmsg.c      | 28 +++++++++++++++++++++-------
 net/ipv4/tcp_bpf.c    |  5 ++---
 3 files changed, 35 insertions(+), 19 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index d6f0a8cd73c4..bf28ce9b5fdb 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -121,6 +121,7 @@ struct sk_psock {
 	struct rcu_work			rwork;
 };
 
+struct sk_msg *sk_msg_alloc(gfp_t gfp);
 int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
 		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
@@ -143,6 +144,8 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		   int len, int flags);
 bool sk_msg_is_readable(struct sock *sk);
 
+extern struct kmem_cache *sk_msg_cachep;
+
 static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
 {
 	WARN_ON(i == msg->sg.end && bytes);
@@ -319,6 +322,13 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
 	kfree_skb(skb);
 }
 
+static inline void kfree_sk_msg(struct sk_msg *msg)
+{
+	if (msg->skb)
+		consume_skb(msg->skb);
+	kmem_cache_free(sk_msg_cachep, msg);
+}
+
 static inline bool sk_psock_queue_msg(struct sk_psock *psock,
 				      struct sk_msg *msg)
 {
@@ -330,7 +340,7 @@ static inline bool sk_psock_queue_msg(struct sk_psock *psock,
 		ret = true;
 	} else {
 		sk_msg_free(psock->sk, msg);
-		kfree(msg);
+		kfree_sk_msg(msg);
 		ret = false;
 	}
 	spin_unlock_bh(&psock->ingress_lock);
@@ -378,13 +388,6 @@ static inline bool sk_psock_queue_empty(const struct sk_psock *psock)
 	return psock ? list_empty(&psock->ingress_msg) : true;
 }
 
-static inline void kfree_sk_msg(struct sk_msg *msg)
-{
-	if (msg->skb)
-		consume_skb(msg->skb);
-	kfree(msg);
-}
-
 static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 {
 	struct sock *sk = psock->sk;
@@ -441,7 +444,7 @@ static inline void sk_psock_cork_free(struct sk_psock *psock)
 {
 	if (psock->cork) {
 		sk_msg_free(psock->sk, psock->cork);
-		kfree(psock->cork);
+		kfree_sk_msg(psock->cork);
 		psock->cork = NULL;
 	}
 }
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 4695cbd9c16f..25c53c8c9857 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -10,6 +10,8 @@
 #include
 #include
 
+struct kmem_cache *sk_msg_cachep;
+
 static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 {
 	if (msg->sg.end > msg->sg.start &&
@@ -503,16 +505,17 @@ bool sk_msg_is_readable(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_msg_is_readable);
 
-static struct sk_msg *alloc_sk_msg(gfp_t gfp)
+struct sk_msg *sk_msg_alloc(gfp_t gfp)
 {
 	struct sk_msg *msg;
 
-	msg = kzalloc(sizeof(*msg), gfp | __GFP_NOWARN);
+	msg = kmem_cache_zalloc(sk_msg_cachep, gfp | __GFP_NOWARN);
 	if (unlikely(!msg))
 		return NULL;
 	sg_init_marker(msg->sg.data, NR_MSG_FRAG_IDS);
 	return msg;
 }
+EXPORT_SYMBOL_GPL(sk_msg_alloc);
 
 static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
 						  struct sk_buff *skb)
@@ -523,7 +526,7 @@ static struct sk_msg *sk_psock_create_ingress_msg(struct sock *sk,
 	if (!sk_rmem_schedule(sk, skb, skb->truesize))
 		return NULL;
 
-	return alloc_sk_msg(GFP_KERNEL);
+	return sk_msg_alloc(GFP_KERNEL);
 }
 
 static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
@@ -592,7 +595,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
 	skb_set_owner_r(skb, sk);
 	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
 	if (err < 0)
-		kfree(msg);
+		kfree_sk_msg(msg);
 	return err;
 }
 
@@ -603,7 +606,7 @@ static int sk_psock_skb_ingress(struct sk_psock *psock, struct sk_buff *skb,
 static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb,
 				     u32 off, u32 len)
 {
-	struct sk_msg *msg = alloc_sk_msg(GFP_ATOMIC);
+	struct sk_msg *msg = sk_msg_alloc(GFP_ATOMIC);
 	struct sock *sk = psock->sk;
 	int err;
 
@@ -612,7 +615,7 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
 	skb_set_owner_r(skb, sk);
 	err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg);
 	if (err < 0)
-		kfree(msg);
+		kfree_sk_msg(msg);
 	return err;
 }
 
@@ -781,7 +784,7 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
 		if (!msg->skb)
 			atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc);
 		sk_msg_free(psock->sk, msg);
-		kfree(msg);
+		kfree_sk_msg(msg);
 	}
 }
 
@@ -1266,3 +1269,14 @@ void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock)
 	sk->sk_data_ready = psock->saved_data_ready;
 	psock->saved_data_ready = NULL;
 }
+
+static int __init sk_msg_cachep_init(void)
+{
+	sk_msg_cachep = kmem_cache_create("sk_msg_cachep",
+					  sizeof(struct sk_msg),
+					  0,
+					  SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT,
+					  NULL);
+	return 0;
+}
+late_initcall(sk_msg_cachep_init);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 85b64ffc20c6..f0ef41c951e2 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -38,7 +38,7 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 	struct sk_msg *tmp;
 	int i, ret = 0;
 
-	tmp = kzalloc(sizeof(*tmp), __GFP_NOWARN | GFP_KERNEL);
+	tmp = sk_msg_alloc(GFP_KERNEL);
 	if (unlikely(!tmp))
 		return -ENOMEM;
 
@@ -406,8 +406,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 		    msg->cork_bytes > msg->sg.size && !enospc) {
 		psock->cork_bytes = msg->cork_bytes - msg->sg.size;
 		if (!psock->cork) {
-			psock->cork = kzalloc(sizeof(*psock->cork),
-					      GFP_ATOMIC | __GFP_NOWARN);
+			psock->cork = sk_msg_alloc(GFP_ATOMIC);
 			if (!psock->cork)
 				return -ENOMEM;
 		}
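After this patch, allocation and free must always go through the
cache-backed pair. The fragment below is a hypothetical call site, not
code from the patch (example_queue_one() is an illustrative name), showing
the expected pairing of sk_msg_alloc() with kfree_sk_msg():

/* Hypothetical call site: objects now come from sk_msg_cachep via
 * sk_msg_alloc() and must be released with kfree_sk_msg(), which also
 * consumes any skb still attached; a plain kfree() would corrupt the
 * cache accounting.
 */
static int example_queue_one(struct sk_psock *psock)
{
	struct sk_msg *msg = sk_msg_alloc(GFP_ATOMIC);

	if (unlikely(!msg))
		return -ENOMEM;

	/* ... fill msg->sg before queueing ... */

	if (!sk_psock_queue_msg(psock, msg))
		return -EPIPE;	/* sk_psock_queue_msg() already freed msg */
	return 0;
}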
From patchwork Thu Mar 6 22:02:04 2025
X-Patchwork-Id: 14005388
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, jakub@cloudflare.com, john.fastabend@gmail.com,
    zhoufeng.zf@bytedance.com, Cong Wang
Subject: [Patch bpf-next v2 3/4] skmsg: save some space in struct sk_psock
Date: Thu, 6 Mar 2025 14:02:04 -0800
Message-Id: <20250306220205.53753-4-xiyou.wangcong@gmail.com>
In-Reply-To: <20250306220205.53753-1-xiyou.wangcong@gmail.com>
References: <20250306220205.53753-1-xiyou.wangcong@gmail.com>

From: Cong Wang

This patch saves some space in struct sk_psock and prepares for the next
patch, which will add more fields.
psock->eval can only take 4 possible values, so an 8-bit field is
sufficient. psock->redir_ingress is just a boolean, so a single bit is
enough.

Signed-off-by: Cong Wang
---
 include/linux/skmsg.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index bf28ce9b5fdb..7620f170c4b1 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -85,8 +85,8 @@ struct sk_psock {
 	struct sock			*sk_redir;
 	u32				apply_bytes;
 	u32				cork_bytes;
-	u32				eval;
-	bool				redir_ingress; /* undefined if sk_redir is null */
+	u8				eval;
+	u8				redir_ingress : 1; /* undefined if sk_redir is null */
 	struct sk_msg			*cork;
 	struct sk_psock_progs		progs;
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
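The saving here is ordinary struct packing. A small userspace
demonstration of the same narrowing (the field subset is illustrative,
and exact sizes depend on architecture and on the members that follow):

#include <stdint.h>
#include <stdio.h>

/* Before: a 4-byte eval plus a 1-byte bool, padded out. */
struct before {
	uint32_t apply_bytes;
	uint32_t cork_bytes;
	uint32_t eval;
	_Bool    redir_ingress;
};

/* After: eval needs only 4 distinct values, so u8 suffices, and
 * redir_ingress collapses into a single bit of the next byte.
 */
struct after {
	uint32_t apply_bytes;
	uint32_t cork_bytes;
	uint8_t  eval;
	uint8_t  redir_ingress : 1;
};

int main(void)
{
	/* Typically 16 vs 12 bytes on x86-64. */
	printf("before=%zu after=%zu\n",
	       sizeof(struct before), sizeof(struct after));
	return 0;
}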
From patchwork Thu Mar 6 22:02:05 2025
X-Patchwork-Id: 14005389
From: Cong Wang
To: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org, jakub@cloudflare.com, john.fastabend@gmail.com,
    zhoufeng.zf@bytedance.com, Zijian Zhang, Amery Hung, Cong Wang
Subject: [Patch bpf-next v2 4/4] tcp_bpf: improve ingress redirection performance with message corking
Date: Thu, 6 Mar 2025 14:02:05 -0800
Message-Id: <20250306220205.53753-5-xiyou.wangcong@gmail.com>
In-Reply-To: <20250306220205.53753-1-xiyou.wangcong@gmail.com>
References: <20250306220205.53753-1-xiyou.wangcong@gmail.com>

From: Zijian Zhang

The TCP_BPF ingress redirection path currently lacks the message corking
mechanism found in standard TCP. This causes the sender to wake up the
receiver for every message, even when messages are small, resulting in
reduced throughput compared to regular TCP in certain scenarios.

This change introduces a kernel worker-based intermediate layer to
provide automatic message corking for TCP_BPF. While this adds a slight
latency overhead, it significantly improves overall throughput by
reducing unnecessary wake-ups and reducing sock lock contention.
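The corking policy is Nagle-style batching: data is parked on a backlog
queue and the receiver is woken only when a threshold or an urgency
condition is hit. A standalone sketch of that decision, mirroring the
backlog_notify() helper added below (the surrounding queueing is elided;
this is an illustration, not the kernel code itself):

#include <stdbool.h>
#include <stdint.h>

#define TCP_BPF_GSO_SIZE 65536	/* notify once ~64 KiB has been corked */

/* Decide whether to wake the receiver now or keep corking. */
static bool should_notify(uint32_t bytes_since_notify, bool already_delayed,
			  bool mem_sched_failed, bool ingress_was_empty)
{
	return bytes_since_notify >= TCP_BPF_GSO_SIZE ||  /* enough corked   */
	       already_delayed ||	/* don't delay twice in a row        */
	       mem_sched_failed ||	/* under memory pressure, flush now  */
	       ingress_was_empty;	/* receiver may be blocked waiting   */
}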
Reviewed-by: Amery Hung
Co-developed-by: Cong Wang
Signed-off-by: Cong Wang
Signed-off-by: Zijian Zhang
---
 include/linux/skmsg.h |  19 ++++
 net/core/skmsg.c      | 139 ++++++++++++++++++++++++++++-
 net/ipv4/tcp_bpf.c    | 197 ++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 347 insertions(+), 8 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 7620f170c4b1..2531428168ad 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -15,6 +15,8 @@
 
 #define MAX_MSG_FRAGS			MAX_SKB_FRAGS
 #define NR_MSG_FRAG_IDS			(MAX_MSG_FRAGS + 1)
+/* GSO size for TCP BPF backlog processing */
+#define TCP_BPF_GSO_SIZE		65536
 
 enum __sk_action {
 	__SK_DROP = 0,
@@ -85,8 +87,10 @@ struct sk_psock {
 	struct sock			*sk_redir;
 	u32				apply_bytes;
 	u32				cork_bytes;
+	u32				backlog_since_notify;
 	u8				eval;
 	u8				redir_ingress : 1; /* undefined if sk_redir is null */
+	u8				backlog_work_delayed : 1;
 	struct sk_msg			*cork;
 	struct sk_psock_progs		progs;
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
@@ -97,6 +101,9 @@ struct sk_psock {
 	struct sk_buff_head		ingress_skb;
 	struct list_head		ingress_msg;
 	spinlock_t			ingress_lock;
+	struct list_head		backlog_msg;
+	/* spin_lock for backlog_msg and backlog_since_notify */
+	spinlock_t			backlog_msg_lock;
 	unsigned long			state;
 	struct list_head		link;
 	spinlock_t			link_lock;
@@ -117,11 +124,13 @@ struct sk_psock {
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
 	struct delayed_work		work;
+	struct delayed_work		backlog_work;
 	struct sock			*sk_pair;
 	struct rcu_work			rwork;
 };
 
 struct sk_msg *sk_msg_alloc(gfp_t gfp);
+bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce);
 int sk_msg_expand(struct sock *sk, struct sk_msg *msg, int len,
 		  int elem_first_coalesce);
 int sk_msg_clone(struct sock *sk, struct sk_msg *dst, struct sk_msg *src,
@@ -396,9 +405,19 @@ static inline void sk_psock_report_error(struct sk_psock *psock, int err)
 	sk_error_report(sk);
 }
 
+void sk_psock_backlog_msg(struct sk_psock *psock);
 struct sk_psock *sk_psock_init(struct sock *sk, int node);
 void sk_psock_stop(struct sk_psock *psock);
 
+static inline void sk_psock_run_backlog_work(struct sk_psock *psock,
+					     bool delayed)
+{
+	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+		return;
+	psock->backlog_work_delayed = delayed;
+	schedule_delayed_work(&psock->backlog_work, delayed ? 1 : 0);
+}
+
 #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
 int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock);
 void sk_psock_start_strp(struct sock *sk, struct sk_psock *psock);
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 25c53c8c9857..32507163fd2d 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -12,7 +12,7 @@
 
 struct kmem_cache *sk_msg_cachep;
 
-static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
+bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
 {
 	if (msg->sg.end > msg->sg.start &&
 	    elem_first_coalesce < msg->sg.end)
@@ -707,6 +707,118 @@ static void sk_psock_backlog(struct work_struct *work)
 	mutex_unlock(&psock->work_mutex);
 }
 
+static bool backlog_notify(struct sk_psock *psock, bool m_sched_failed,
+			   bool ingress_empty)
+{
+	/* Notify if:
+	 * 1. We have corked enough bytes
+	 * 2. We have already delayed notification
+	 * 3. Memory allocation failed
+	 * 4. Ingress queue was empty and we're about to add data
+	 */
+	return psock->backlog_since_notify >= TCP_BPF_GSO_SIZE ||
+	       psock->backlog_work_delayed ||
+	       m_sched_failed ||
+	       ingress_empty;
+}
+
+static bool backlog_xfer_to_local(struct sk_psock *psock, struct sock *sk_from,
+				  struct list_head *local_head, u32 *tot_size)
+{
+	struct sock *sk = psock->sk;
+	struct sk_msg *msg, *tmp;
+	u32 size = 0;
+
+	list_for_each_entry_safe(msg, tmp, &psock->backlog_msg, list) {
+		if (msg->sk != sk_from)
+			break;
+
+		if (!__sk_rmem_schedule(sk, msg->sg.size, false))
+			return true;
+
+		list_move_tail(&msg->list, local_head);
+		sk_wmem_queued_add(msg->sk, -msg->sg.size);
+		sock_put(msg->sk);
+		msg->sk = NULL;
+		psock->backlog_since_notify += msg->sg.size;
+		size += msg->sg.size;
+	}
+
+	*tot_size = size;
+	return false;
+}
+
+/* This function handles the transfer of backlogged messages from the sender
+ * backlog queue to the ingress queue of the peer socket. Notification of data
+ * availability will be sent under some conditions.
+ */
+void sk_psock_backlog_msg(struct sk_psock *psock)
+{
+	bool rmem_schedule_failed = false;
+	struct sock *sk_from = NULL;
+	struct sock *sk = psock->sk;
+	LIST_HEAD(local_head);
+	struct sk_msg *msg;
+	bool should_notify;
+	u32 tot_size = 0;
+
+	if (!sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+		return;
+
+	lock_sock(sk);
+	spin_lock(&psock->backlog_msg_lock);
+
+	msg = list_first_entry_or_null(&psock->backlog_msg,
+				       struct sk_msg, list);
+	if (!msg) {
+		should_notify = !list_empty(&psock->ingress_msg);
+		spin_unlock(&psock->backlog_msg_lock);
+		goto notify;
+	}
+
+	sk_from = msg->sk;
+	sock_hold(sk_from);
+
+	rmem_schedule_failed = backlog_xfer_to_local(psock, sk_from,
+						     &local_head, &tot_size);
+	should_notify = backlog_notify(psock, rmem_schedule_failed,
+				       list_empty(&psock->ingress_msg));
+	spin_unlock(&psock->backlog_msg_lock);
+
+	spin_lock_bh(&psock->ingress_lock);
+	list_splice_tail_init(&local_head, &psock->ingress_msg);
+	spin_unlock_bh(&psock->ingress_lock);
+
+	atomic_add(tot_size, &sk->sk_rmem_alloc);
+	sk_mem_charge(sk, tot_size);
+
+notify:
+	if (should_notify) {
+		psock->backlog_since_notify = 0;
+		sk_psock_data_ready(sk, psock);
+		if (!list_empty(&psock->backlog_msg))
+			sk_psock_run_backlog_work(psock, rmem_schedule_failed);
+	} else {
+		sk_psock_run_backlog_work(psock, true);
+	}
+	release_sock(sk);
+
+	if (sk_from) {
+		bool slow = lock_sock_fast(sk_from);
+
+		sk_mem_uncharge(sk_from, tot_size);
+		unlock_sock_fast(sk_from, slow);
+		sock_put(sk_from);
+	}
+}
+
+static void sk_psock_backlog_msg_work(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+
+	sk_psock_backlog_msg(container_of(dwork, struct sk_psock, backlog_work));
+}
+
 struct sk_psock *sk_psock_init(struct sock *sk, int node)
 {
 	struct sk_psock *psock;
@@ -744,8 +856,11 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 
 	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
+	INIT_DELAYED_WORK(&psock->backlog_work, sk_psock_backlog_msg_work);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
+	INIT_LIST_HEAD(&psock->backlog_msg);
+	spin_lock_init(&psock->backlog_msg_lock);
 	skb_queue_head_init(&psock->ingress_skb);
 
 	sk_psock_set_state(psock, SK_PSOCK_TX_ENABLED);
@@ -799,6 +914,26 @@ static void __sk_psock_zap_ingress(struct sk_psock *psock)
 	__sk_psock_purge_ingress_msg(psock);
 }
 
+static void __sk_psock_purge_backlog_msg(struct sk_psock *psock)
+{
+	struct sk_msg *msg, *tmp;
+
+	spin_lock(&psock->backlog_msg_lock);
+	list_for_each_entry_safe(msg, tmp, &psock->backlog_msg, list) {
+		struct sock *sk_from = msg->sk;
+		bool slow;
+
+		list_del(&msg->list);
+		slow = lock_sock_fast(sk_from);
+		sk_wmem_queued_add(sk_from, -msg->sg.size);
+		sock_put(sk_from);
+		sk_msg_free(sk_from, msg);
+		unlock_sock_fast(sk_from, slow);
+		kfree_sk_msg(msg);
+	}
+	spin_unlock(&psock->backlog_msg_lock);
+}
+
 static void sk_psock_link_destroy(struct sk_psock *psock)
 {
 	struct sk_psock_link *link, *tmp;
@@ -828,7 +963,9 @@ static void sk_psock_destroy(struct work_struct *work)
 	sk_psock_done_strp(psock);
 
 	cancel_delayed_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->backlog_work);
 	__sk_psock_zap_ingress(psock);
+	__sk_psock_purge_backlog_msg(psock);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index f0ef41c951e2..82d437210f6f 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -381,6 +381,183 @@ static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 	return ret;
 }
 
+static int tcp_bpf_coalesce_msg(struct sk_msg *last, struct sk_msg *msg,
+				u32 *apply_bytes_ptr, u32 *tot_size)
+{
+	struct scatterlist *sge_from, *sge_to;
+	u32 apply_bytes = *apply_bytes_ptr;
+	bool apply = apply_bytes;
+	int i = msg->sg.start;
+	u32 size;
+
+	while (i != msg->sg.end) {
+		int last_sge_idx = last->sg.end;
+
+		if (sk_msg_full(last))
+			break;
+
+		sge_from = sk_msg_elem(msg, i);
+		sk_msg_iter_var_prev(last_sge_idx);
+		sge_to = &last->sg.data[last_sge_idx];
+
+		size = (apply && apply_bytes < sge_from->length) ?
+			apply_bytes : sge_from->length;
+		if (sk_msg_try_coalesce_ok(last, last_sge_idx) &&
+		    sg_page(sge_to) == sg_page(sge_from) &&
+		    sge_to->offset + sge_to->length == sge_from->offset) {
+			sge_to->length += size;
+		} else {
+			sge_to = &last->sg.data[last->sg.end];
+			sg_unmark_end(sge_to);
+			sg_set_page(sge_to, sg_page(sge_from), size,
+				    sge_from->offset);
+			get_page(sg_page(sge_to));
+			sk_msg_iter_next(last, end);
+		}
+
+		sge_from->length -= size;
+		sge_from->offset += size;
+
+		if (sge_from->length == 0) {
+			put_page(sg_page(sge_to));
+			sk_msg_iter_var_next(i);
+		}
+
+		msg->sg.size -= size;
+		last->sg.size += size;
+		*tot_size += size;
+
+		if (apply) {
+			apply_bytes -= size;
+			if (!apply_bytes)
+				break;
+		}
+	}
+
+	if (apply)
+		*apply_bytes_ptr = apply_bytes;
+
+	msg->sg.start = i;
+	return i;
+}
+
+static void tcp_bpf_xfer_msg(struct sk_msg *dst, struct sk_msg *msg,
+			     u32 *apply_bytes_ptr, u32 *tot_size)
+{
+	u32 apply_bytes = *apply_bytes_ptr;
+	bool apply = apply_bytes;
+	struct scatterlist *sge;
+	int i = msg->sg.start;
+	u32 size;
+
+	do {
+		sge = sk_msg_elem(msg, i);
+		size = (apply && apply_bytes < sge->length) ?
+			apply_bytes : sge->length;
+
+		sk_msg_xfer(dst, msg, i, size);
+		*tot_size += size;
+		if (sge->length)
+			get_page(sk_msg_page(dst, i));
+		sk_msg_iter_var_next(i);
+		dst->sg.end = i;
+		if (apply) {
+			apply_bytes -= size;
+			if (!apply_bytes) {
+				if (sge->length)
+					sk_msg_iter_var_prev(i);
+				break;
+			}
+		}
+	} while (i != msg->sg.end);
+
+	if (apply)
+		*apply_bytes_ptr = apply_bytes;
+	msg->sg.start = i;
+}
+
+static int tcp_bpf_ingress_backlog(struct sock *sk, struct sock *sk_redir,
+				   struct sk_msg *msg, u32 apply_bytes)
+{
+	bool ingress_msg_empty = false;
+	bool apply = apply_bytes;
+	struct sk_psock *psock;
+	struct sk_msg *tmp;
+	u32 tot_size = 0;
+	int ret = 0;
+	u8 nonagle;
+
+	psock = sk_psock_get(sk_redir);
+	if (unlikely(!psock))
+		return -EPIPE;
+
+	spin_lock(&psock->backlog_msg_lock);
+	/* If possible, coalesce the curr sk_msg to the last sk_msg from the
+	 * psock->backlog_msg.
+	 */
+	if (!list_empty(&psock->backlog_msg)) {
+		struct sk_msg *last;
+
+		last = list_last_entry(&psock->backlog_msg, struct sk_msg, list);
+		if (last->sk == sk) {
+			int i = tcp_bpf_coalesce_msg(last, msg, &apply_bytes,
+						     &tot_size);
+
+			if (i == msg->sg.end || (apply && !apply_bytes))
+				goto out_unlock;
+		}
+	}
+
+	/* Otherwise, allocate a new sk_msg and transfer the data from the
+	 * passed in msg to it.
+	 */
+	tmp = sk_msg_alloc(GFP_ATOMIC);
+	if (!tmp) {
+		ret = -ENOMEM;
+		spin_unlock(&psock->backlog_msg_lock);
+		goto error;
+	}
+
+	tmp->sk = sk;
+	sock_hold(tmp->sk);
+	tmp->sg.start = msg->sg.start;
+	tcp_bpf_xfer_msg(tmp, msg, &apply_bytes, &tot_size);
+
+	ingress_msg_empty = list_empty(&psock->ingress_msg);
+	list_add_tail(&tmp->list, &psock->backlog_msg);
+
+out_unlock:
+	spin_unlock(&psock->backlog_msg_lock);
+	sk_wmem_queued_add(sk, tot_size);
+
+	/* At this point, the data has been handled well. If one of the
+	 * following conditions is met, we can notify the peer socket in
+	 * the context of this system call immediately.
+	 * 1. If the write buffer has been used up;
+	 * 2. Or, the message size is larger than TCP_BPF_GSO_SIZE;
+	 * 3. Or, the ingress queue was empty;
+	 * 4. Or, the tcp socket is set to no_delay.
+	 * Otherwise, kick off the backlog work so that we can have some
	 * time to wait for any incoming messages before sending a
	 * notification to the peer socket.
+	 */
+	nonagle = tcp_sk(sk)->nonagle;
+	if (!sk_stream_memory_free(sk) ||
+	    tot_size >= TCP_BPF_GSO_SIZE || ingress_msg_empty ||
+	    (!(nonagle & TCP_NAGLE_CORK) && (nonagle & TCP_NAGLE_OFF))) {
+		release_sock(sk);
+		psock->backlog_work_delayed = false;
+		sk_psock_backlog_msg(psock);
+		lock_sock(sk);
+	} else {
+		sk_psock_run_backlog_work(psock, false);
+	}
+
+error:
+	sk_psock_put(sk_redir, psock);
+	return ret;
+}
+
 static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 				struct sk_msg *msg, int *copied, int flags)
 {
@@ -442,18 +619,24 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
 			cork = true;
 			psock->cork = NULL;
 		}
-		release_sock(sk);
 
-		origsize = msg->sg.size;
-		ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
-					    msg, tosend, flags);
-		sent = origsize - msg->sg.size;
+		if (redir_ingress) {
+			ret = tcp_bpf_ingress_backlog(sk, sk_redir, msg, tosend);
+		} else {
+			release_sock(sk);
+
+			origsize = msg->sg.size;
+			ret = tcp_bpf_sendmsg_redir(sk_redir, redir_ingress,
+						    msg, tosend, flags);
+			sent = origsize - msg->sg.size;
+
+			lock_sock(sk);
+			sk_mem_uncharge(sk, sent);
+		}
 
 		if (eval == __SK_REDIRECT)
 			sock_put(sk_redir);
 
-		lock_sock(sk);
-		sk_mem_uncharge(sk, sent);
 		if (unlikely(ret < 0)) {
 			int free = sk_msg_free(sk, msg);
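A closing note on the coalescing step in tcp_bpf_coalesce_msg() above:
the merge test reduces to a page-adjacency check. A minimal standalone
sketch of that check (simplified types, not the kernel scatterlist API):

#include <stdbool.h>
#include <stddef.h>

struct sg_ent {
	void  *page;	/* backing page */
	size_t offset;	/* byte offset within the page */
	size_t length;	/* bytes described by this entry */
};

/* Two entries can merge when they reference the same page and the new
 * data starts exactly where the previous entry ends; then no new entry
 * is consumed and only the length grows. This mirrors the condition
 * sg_page(sge_to) == sg_page(sge_from) &&
 * sge_to->offset + sge_to->length == sge_from->offset in the patch.
 */
static bool can_coalesce(const struct sg_ent *to, const struct sg_ent *from)
{
	return to->page == from->page &&
	       to->offset + to->length == from->offset;
}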