From patchwork Fri May 19 04:06:46 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247650
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 01/14] bpf: sockmap, pass skb ownership through read_skb
Date: Thu, 18 May 2023 21:06:46 -0700
Message-Id: <20230519040659.670644-2-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

The read_skb hook currently calls consume_skb(), which means that if the
recv_actor program wants to keep using the skb it has to take an extra
reference so that consume_skb() does not kfree the sk_buff. This is
problematic because in some error cases under memory pressure we may need
to linearize the sk_buff from sk_psock_skb_ingress_enqueue(). Then we get
this,

  skb_linearize()
    __pskb_pull_tail()
      pskb_expand_head()
        BUG_ON(skb_shared(skb))

Because we incremented the users refcnt from sk_psock_verdict_recv() we
hit the BUG_ON with refcnt > 1 and trip it.

To fix, simply pass ownership of the sk_buff through the read_skb call.
Then we can drop the consume_skb() from the read_skb handlers and assume
the verdict recv does any required kfree.

Bug found while testing in our CI, which runs in VMs that hit memory
constraints rather regularly. William tested the TCP read_skb handlers.

[ 106.536188] ------------[ cut here ]------------
[ 106.536197] kernel BUG at net/core/skbuff.c:1693!
[ 106.536479] invalid opcode: 0000 [#1] PREEMPT SMP PTI
[ 106.536726] CPU: 3 PID: 1495 Comm: curl Not tainted 5.19.0-rc5 #1
[ 106.537023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ArchLinux 1.16.0-1 04/01/2014
[ 106.537467] RIP: 0010:pskb_expand_head+0x269/0x330
[ 106.538585] RSP: 0018:ffffc90000138b68 EFLAGS: 00010202
[ 106.538839] RAX: 000000000000003f RBX: ffff8881048940e8 RCX: 0000000000000a20
[ 106.539186] RDX: 0000000000000002 RSI: 0000000000000000 RDI: ffff8881048940e8
[ 106.539529] RBP: ffffc90000138be8 R08: 00000000e161fd1a R09: 0000000000000000
[ 106.539877] R10: 0000000000000018 R11: 0000000000000000 R12: ffff8881048940e8
[ 106.540222] R13: 0000000000000003 R14: 0000000000000000 R15: ffff8881048940e8
[ 106.540568] FS:  00007f277dde9f00(0000) GS:ffff88813bd80000(0000) knlGS:0000000000000000
[ 106.540954] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 106.541227] CR2: 00007f277eeede64 CR3: 000000000ad3e000 CR4: 00000000000006e0
[ 106.541569] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 106.541915] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 106.542255] Call Trace:
[ 106.542383]
[ 106.542487]  __pskb_pull_tail+0x4b/0x3e0
[ 106.542681]  skb_ensure_writable+0x85/0xa0
[ 106.542882]  sk_skb_pull_data+0x18/0x20
[ 106.543084]  bpf_prog_b517a65a242018b0_bpf_skskb_http_verdict+0x3a9/0x4aa9
[ 106.543536]  ? migrate_disable+0x66/0x80
[ 106.543871]  sk_psock_verdict_recv+0xe2/0x310
[ 106.544258]  ? sk_psock_write_space+0x1f0/0x1f0
[ 106.544561]  tcp_read_skb+0x7b/0x120
[ 106.544740]  tcp_data_queue+0x904/0xee0
[ 106.544931]  tcp_rcv_established+0x212/0x7c0
[ 106.545142]  tcp_v4_do_rcv+0x174/0x2a0
[ 106.545326]  tcp_v4_rcv+0xe70/0xf60
[ 106.545500]  ip_protocol_deliver_rcu+0x48/0x290
[ 106.545744]  ip_local_deliver_finish+0xa7/0x150
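To make the new contract concrete, a minimal standalone sketch of a
read_skb recv_actor under the rule this patch introduces (illustration
only, not part of the patch; example_can_queue() and example_enqueue()
are made-up helpers): the callback now owns the skb and must either
queue it or free it, and the caller no longer calls consume_skb() on
its behalf.

  /* Hypothetical recv_actor illustrating the post-patch ownership rule. */
  static int example_recv_actor(struct sock *sk, struct sk_buff *skb)
  {
  	int len = skb->len;

  	if (!example_can_queue(sk)) {	/* assumed helper */
  		kfree_skb(skb);		/* we own the skb, so we must drop it */
  		return -ENOMEM;
  	}
  	example_enqueue(sk, skb);	/* assumed helper takes over the skb */
  	return len;			/* bytes consumed, reported to the caller */
  }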

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Reported-by: William Findlay
Tested-by: William Findlay
Reviewed-by: Jakub Sitnicki
Signed-off-by: John Fastabend
---
 net/core/skmsg.c                        | 2 --
 net/ipv4/tcp.c                          | 1 -
 net/ipv4/udp.c                          | 7 ++-----
 net/unix/af_unix.c                      | 7 ++-----
 net/vmw_vsock/virtio_transport_common.c | 5 +----
 5 files changed, 5 insertions(+), 17 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index f81883759d38..4a3dc8d27295 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1183,8 +1183,6 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	int ret = __SK_DROP;
 	int len = skb->len;
 
-	skb_get(skb);
-
 	rcu_read_lock();
 	psock = sk_psock(sk);
 	if (unlikely(!psock)) {
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 4d6392c16b7a..e914e3446377 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1773,7 +1773,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 		WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
 		tcp_flags = TCP_SKB_CB(skb)->tcp_flags;
 		used = recv_actor(sk, skb);
-		consume_skb(skb);
 		if (used < 0) {
 			if (!copied)
 				copied = used;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index aa32afd871ee..9482def1f310 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1818,7 +1818,7 @@ EXPORT_SYMBOL(__skb_recv_udp);
 int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
 	struct sk_buff *skb;
-	int err, copied;
+	int err;
 
 try_again:
 	skb = skb_recv_udp(sk, MSG_DONTWAIT, &err);
@@ -1837,10 +1837,7 @@ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	}
 
 	WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-
-	return copied;
+	return recv_actor(sk, skb);
 }
 EXPORT_SYMBOL(udp_read_skb);
 
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index cc695c9f09ec..e7728b57a8c7 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -2553,7 +2553,7 @@ static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 {
 	struct unix_sock *u = unix_sk(sk);
 	struct sk_buff *skb;
-	int err, copied;
+	int err;
 
 	mutex_lock(&u->iolock);
 	skb = skb_recv_datagram(sk, MSG_DONTWAIT, &err);
@@ -2561,10 +2561,7 @@ static int unix_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
 	if (!skb)
 		return err;
 
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-
-	return copied;
+	return recv_actor(sk, skb);
 }
 
 /*
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index e4878551f140..b769fc258931 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -1441,7 +1441,6 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
 	struct sock *sk = sk_vsock(vsk);
 	struct sk_buff *skb;
 	int off = 0;
-	int copied;
 	int err;
 
 	spin_lock_bh(&vvs->rx_lock);
@@ -1454,9 +1453,7 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
 	if (!skb)
 		return err;
 
-	copied = recv_actor(sk, skb);
-	kfree_skb(skb);
-	return copied;
+	return recv_actor(sk, skb);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_read_skb);

From patchwork Fri May 19 04:06:47 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247651
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 02/14] bpf: sockmap, convert schedule_work into delayed_work
Date: Thu, 18 May 2023 21:06:47 -0700
Message-Id: <20230519040659.670644-3-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

sk_buffs are fed into sockmap verdict programs either from a strparser
(when the user might want to decide how framing of the skb is done by
attaching another parser program) or directly through tcp_read_sock().
tcp_read_sock() is the preferred method for performance when the BPF logic
is a stream parser.

The flow for Cilium's common use case with a stream parser is,

  tcp_read_sock()
    sk_psock_verdict_recv
      ret = bpf_prog_run_pin_on_cpu()
      sk_psock_verdict_apply(sock, skb, ret)
        // if the system is under memory pressure or the app is slow we
        // may need to queue the skb. Do this queuing through ingress_skb
        // and then kick the timer to wake up the handler
        skb_queue_tail(ingress_skb, skb)
        schedule_work(work);

The work queue is wired up to sk_psock_backlog(). This will then walk the
ingress_skb list that holds the sk_buffs that could not be handled but
should be OK to run at some later point. However, it is possible that the
workqueue doing this work still hits an error when sending the skb. When
this happens the skb is requeued on a temporary 'state' struct kept with
the workqueue. This is necessary because it is possible to partially send
an skb before hitting an error, and we need to know how and where to
restart when the workqueue runs next.

Now for the trouble: we don't rekick the workqueue. This can cause a stall
where the skb we just cached on the state variable might never be sent.
This happens when it is the last packet in a flow and no further packets
come along that would kick the workqueue from that side.

To fix this we could do a simple schedule_work(), but while under memory
pressure it makes sense to back off somewhat instead of retrying
repeatedly. So instead convert schedule_work() to schedule_delayed_work()
and add backoff logic to reschedule from the backlog queue on errors. It
is not obvious what a good backoff is, so use '1'.

To test, we observed some flakes while running the NGINX compliance test
with sockmap and attributed those failed tests to this bug and the
subsequent issue.

From on-list discussion: commit bec217197b41 ("skmsg: Schedule psock work
if the cached skb exists on the psock") was intended to address a similar
race, but had a couple of cases it missed. Most obviously, it only
accounted for receiving traffic on the local socket, so if redirecting
into another socket we could still get an sk_buff stuck there. Next, it
missed the case where copied=0 in the recv() handler, and then we wouldn't
kick the scheduler. Also, it is sub-optimal to require userspace to kick
the internal mechanisms of sockmap to wake it up and copy data to the
user: it results in an extra syscall and requires the app to actually
handle the EAGAIN correctly.
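For illustration only (not the sockmap code itself), the
requeue-with-backoff shape the backlog moves to looks roughly like the
following sketch, where example_send() is a made-up stand-in for the
real transmit path.

  /* Minimal sketch of requeue-with-backoff using a delayed work item. */
  #include <linux/skbuff.h>
  #include <linux/workqueue.h>

  struct backlog_ctx {
  	struct delayed_work work;	/* set up with INIT_DELAYED_WORK() */
  	struct sk_buff_head queue;
  };

  static void backlog_fn(struct work_struct *work)
  {
  	struct delayed_work *dwork = to_delayed_work(work);
  	struct backlog_ctx *ctx = container_of(dwork, struct backlog_ctx, work);
  	struct sk_buff *skb;

  	while ((skb = skb_dequeue(&ctx->queue))) {
  		if (example_send(skb) == -EAGAIN) {	/* assumed helper */
  			skb_queue_head(&ctx->queue, skb); /* keep it for retry */
  			/* Back off one jiffy instead of spinning under pressure. */
  			schedule_delayed_work(&ctx->work, 1);
  			return;
  		}
  	}
  }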

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay
Reviewed-by: Jakub Sitnicki
Signed-off-by: John Fastabend
---
 include/linux/skmsg.h |  2 +-
 net/core/skmsg.c      | 21 ++++++++++++++-------
 net/core/sock_map.c   |  3 ++-
 3 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 84f787416a54..904ff9a32ad6 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -105,7 +105,7 @@ struct sk_psock {
 	struct proto			*sk_proto;
 	struct mutex			work_mutex;
 	struct sk_psock_work_state	work_state;
-	struct work_struct		work;
+	struct delayed_work		work;
 	struct rcu_work			rwork;
 };
 
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 4a3dc8d27295..0a9ee2acac0b 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -482,7 +482,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 	}
 out:
 	if (psock->work_state.skb && copied > 0)
-		schedule_work(&psock->work);
+		schedule_delayed_work(&psock->work, 0);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);
@@ -640,7 +640,8 @@ static void sk_psock_skb_state(struct sk_psock *psock,
 
 static void sk_psock_backlog(struct work_struct *work)
 {
-	struct sk_psock *psock = container_of(work, struct sk_psock, work);
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
 	struct sk_psock_work_state *state = &psock->work_state;
 	struct sk_buff *skb = NULL;
 	bool ingress;
@@ -680,6 +681,12 @@ static void sk_psock_backlog(struct work_struct *work)
 				if (ret == -EAGAIN) {
 					sk_psock_skb_state(psock, state, skb,
 							   len, off);
+
+					/* Delay slightly to prioritize any
+					 * other work that might be here.
+					 */
+					if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
+						schedule_delayed_work(&psock->work, 1);
 					goto end;
 				}
 				/* Hard errors break pipe and stop xmit. */
@@ -734,7 +741,7 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	INIT_LIST_HEAD(&psock->link);
 	spin_lock_init(&psock->link_lock);
 
-	INIT_WORK(&psock->work, sk_psock_backlog);
+	INIT_DELAYED_WORK(&psock->work, sk_psock_backlog);
 	mutex_init(&psock->work_mutex);
 	INIT_LIST_HEAD(&psock->ingress_msg);
 	spin_lock_init(&psock->ingress_lock);
@@ -823,7 +830,7 @@ static void sk_psock_destroy(struct work_struct *work)
 
 	sk_psock_done_strp(psock);
 
-	cancel_work_sync(&psock->work);
+	cancel_delayed_work_sync(&psock->work);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);
@@ -938,7 +945,7 @@ static int sk_psock_skb_redirect(struct sk_psock *from, struct sk_buff *skb)
 	}
 
 	skb_queue_tail(&psock_other->ingress_skb, skb);
-	schedule_work(&psock_other->work);
+	schedule_delayed_work(&psock_other->work, 0);
 	spin_unlock_bh(&psock_other->ingress_lock);
 	return 0;
 }
@@ -1018,7 +1025,7 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb,
 			spin_lock_bh(&psock->ingress_lock);
 			if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
 				skb_queue_tail(&psock->ingress_skb, skb);
-				schedule_work(&psock->work);
+				schedule_delayed_work(&psock->work, 0);
 				err = 0;
 			}
 			spin_unlock_bh(&psock->ingress_lock);
@@ -1049,7 +1056,7 @@ static void sk_psock_write_space(struct sock *sk)
 	psock = sk_psock(sk);
 	if (likely(psock)) {
 		if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
-			schedule_work(&psock->work);
+			schedule_delayed_work(&psock->work, 0);
 		write_space = psock->saved_write_space;
 	}
 	rcu_read_unlock();
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 7c189c2e2fbf..00afb66cd095 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1644,9 +1644,10 @@ void sock_map_close(struct sock *sk, long timeout)
 		rcu_read_unlock();
 		sk_psock_stop(psock);
 		release_sock(sk);
-		cancel_work_sync(&psock->work);
+		cancel_delayed_work_sync(&psock->work);
 		sk_psock_put(sk, psock);
 	}
+
 	/* Make sure we do not recurse. This is a bug.
 	 * Leak the socket instead of crashing on a stack overflow.
 	 */

From patchwork Fri May 19 04:06:48 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247652
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 03/14] bpf: sockmap, reschedule is now done through backlog
Date: Thu, 18 May 2023 21:06:48 -0700
Message-Id: <20230519040659.670644-4-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

Now that the backlog manages the reschedule() logic correctly, we can drop
the partial fix that rescheduled from the recvmsg hook. Rescheduling on the
recvmsg hook was added to address a corner case where we still had data in
the backlog state but had nothing to kick it and reschedule the backlog
worker to run and finish copying data out of the state. This had a couple
of limitations: first, it required user space to kick it, introducing an
unnecessary EBUSY and retry. Second, it only handled the ingress case, and
egress redirects would still hang. With the correct fix, pushing the
reschedule logic down to where the ENOMEM error occurs, we can drop this
fix.

Reviewed-by: Jakub Sitnicki
Fixes: bec217197b412 ("skmsg: Schedule psock work if the cached skb exists on the psock")
Signed-off-by: John Fastabend
---
 net/core/skmsg.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 0a9ee2acac0b..76ff15f8bb06 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -481,8 +481,6 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
 		msg_rx = sk_psock_peek_msg(psock);
 	}
 out:
-	if (psock->work_state.skb && copied > 0)
-		schedule_delayed_work(&psock->work, 0);
 	return copied;
 }
 EXPORT_SYMBOL_GPL(sk_msg_recvmsg);

From patchwork Fri May 19 04:06:49 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247653
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 04/14] bpf: sockmap, improved check for empty queue
Date: Thu, 18 May 2023 21:06:49 -0700
Message-Id: <20230519040659.670644-5-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

We noticed some rare sk_buffs were stepping past the queue when the system
was under memory pressure. The general theory is to skip enqueueing
sk_buffs when it is not necessary, which is the normal case with a system
that is properly provisioned for the task: no memory pressure and enough
CPU assigned. But, if we can't allocate memory due to an ENOMEM error when
enqueueing the sk_buff into the sockmap receive queue, we push it onto a
delayed workqueue to retry later. When a new sk_buff is received we then
check if that queue is empty.

However, there is a problem with simply checking the queue length. When an
sk_buff is being processed from the ingress queue but is not yet on the
sockmap msg receive queue, it is possible to also receive an sk_buff
through the normal path. It will check the ingress queue, see that it is
zero, and then skip ahead of the pkt being processed. Previously we used
the sock lock from both contexts, which made the problem harder to hit,
but not impossible.

To fix this, instead of popping the skb from the queue entirely, we peek
the skb from the queue and do the copy there. This ensures the queue
length checks are non-zero while the skb is being processed. Then,
finally, when the entire skb has been copied to the user space queue or
another socket, we pop it off the queue. This way the queue length check
allows bypassing the queue only after the list has been completely
processed.

To reproduce the issue we run the NGINX compliance test with sockmap
running and observe some flakes in our testing that we attributed to this
issue.
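As an aside, a standalone sketch of the peek-then-dequeue idea
(illustration only; example_process() is a made-up helper and locking is
elided, the real code serializes through work_mutex): the skb stays on
the queue while it is processed, so a concurrent emptiness check cannot
race past it.

  /* The queue length only drops to zero once the skb is fully handled. */
  #include <linux/skbuff.h>

  static void example_drain(struct sk_buff_head *queue)
  {
  	struct sk_buff *skb;

  	while ((skb = skb_peek(queue))) {
  		if (example_process(skb) == -EAGAIN)	/* assumed helper */
  			return;		/* leave skb queued for a later retry */

  		/* Done with it: now remove and free it. */
  		skb = skb_dequeue(queue);
  		kfree_skb(skb);
  	}
  }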

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay
Suggested-by: Jakub Sitnicki
Signed-off-by: John Fastabend
---
 include/linux/skmsg.h |  1 -
 net/core/skmsg.c      | 32 ++++++++------------------------
 2 files changed, 8 insertions(+), 25 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 904ff9a32ad6..054d7911bfc9 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -71,7 +71,6 @@ struct sk_psock_link {
 };
 
 struct sk_psock_work_state {
-	struct sk_buff			*skb;
 	u32				len;
 	u32				off;
 };
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 76ff15f8bb06..bcd45a99a3db 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -622,16 +622,12 @@ static int sk_psock_handle_skb(struct sk_psock *psock, struct sk_buff *skb,
 
 static void sk_psock_skb_state(struct sk_psock *psock,
 			       struct sk_psock_work_state *state,
-			       struct sk_buff *skb,
 			       int len, int off)
 {
 	spin_lock_bh(&psock->ingress_lock);
 	if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
-		state->skb = skb;
 		state->len = len;
 		state->off = off;
-	} else {
-		sock_drop(psock->sk, skb);
 	}
 	spin_unlock_bh(&psock->ingress_lock);
 }
@@ -642,23 +638,17 @@ static void sk_psock_backlog(struct work_struct *work)
 	struct sk_psock *psock = container_of(dwork, struct sk_psock, work);
 	struct sk_psock_work_state *state = &psock->work_state;
 	struct sk_buff *skb = NULL;
+	u32 len = 0, off = 0;
 	bool ingress;
-	u32 len, off;
 	int ret;
 
 	mutex_lock(&psock->work_mutex);
-	if (unlikely(state->skb)) {
-		spin_lock_bh(&psock->ingress_lock);
-		skb = state->skb;
+	if (unlikely(state->len)) {
 		len = state->len;
 		off = state->off;
-		state->skb = NULL;
-		spin_unlock_bh(&psock->ingress_lock);
 	}
-	if (skb)
-		goto start;
 
-	while ((skb = skb_dequeue(&psock->ingress_skb))) {
+	while ((skb = skb_peek(&psock->ingress_skb))) {
 		len = skb->len;
 		off = 0;
 		if (skb_bpf_strparser(skb)) {
@@ -667,7 +657,6 @@ static void sk_psock_backlog(struct work_struct *work)
 			off = stm->offset;
 			len = stm->full_len;
 		}
-start:
 		ingress = skb_bpf_ingress(skb);
 		skb_bpf_redirect_clear(skb);
 		do {
@@ -677,8 +666,7 @@ static void sk_psock_backlog(struct work_struct *work)
 							  len, ingress);
 			if (ret <= 0) {
 				if (ret == -EAGAIN) {
-					sk_psock_skb_state(psock, state, skb,
-							   len, off);
+					sk_psock_skb_state(psock, state, len, off);
 
 					/* Delay slightly to prioritize any
 					 * other work that might be here.
@@ -690,15 +678,16 @@ static void sk_psock_backlog(struct work_struct *work)
 				/* Hard errors break pipe and stop xmit. */
 				sk_psock_report_error(psock, ret ? -ret : EPIPE);
 				sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
-				sock_drop(psock->sk, skb);
 				goto end;
 			}
 			off += ret;
 			len -= ret;
 		} while (len);
 
-		if (!ingress)
+		skb = skb_dequeue(&psock->ingress_skb);
+		if (!ingress) {
 			kfree_skb(skb);
+		}
 	}
 end:
 	mutex_unlock(&psock->work_mutex);
@@ -791,11 +780,6 @@ static void __sk_psock_zap_ingress(struct sk_psock *psock)
 		skb_bpf_redirect_clear(skb);
 		sock_drop(psock->sk, skb);
 	}
-	kfree_skb(psock->work_state.skb);
-	/* We null the skb here to ensure that calls to sk_psock_backlog
-	 * do not pick up the free'd skb.
-	 */
-	psock->work_state.skb = NULL;
 	__sk_psock_purge_ingress_msg(psock);
 }
 
@@ -814,7 +798,6 @@ void sk_psock_stop(struct sk_psock *psock)
 	spin_lock_bh(&psock->ingress_lock);
 	sk_psock_clear_state(psock, SK_PSOCK_TX_ENABLED);
 	sk_psock_cork_free(psock);
-	__sk_psock_zap_ingress(psock);
 	spin_unlock_bh(&psock->ingress_lock);
 }
 
@@ -829,6 +812,7 @@ static void sk_psock_destroy(struct work_struct *work)
 	sk_psock_done_strp(psock);
 
 	cancel_delayed_work_sync(&psock->work);
+	__sk_psock_zap_ingress(psock);
 	mutex_destroy(&psock->work_mutex);
 
 	psock_progs_drop(&psock->progs);

From patchwork Fri May 19 04:06:50 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247654
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 05/14] bpf: sockmap, handle fin correctly
Date: Thu, 18 May 2023 21:06:50 -0700
Message-Id: <20230519040659.670644-6-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

The sockmap code is returning EAGAIN after a FIN packet is received and no
more data is on the receive queue. Correct behavior is to return 0 to the
user so the user can then close the socket. The EAGAIN causes many apps to
retry, which masks the problem. Eventually the socket is evicted from the
sockmap because it is released from the sockmap sock free handling. The
issue creates a delay and can cause some errors on the application side.

To fix this, check on the sk_msg_recvmsg side if the length is zero and
the FIN flag is set, and then set the return value to zero. A selftest
will be added to check this condition.
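From the application point of view the expected semantics are the usual
TCP ones (illustration only, not part of the patch): recv() returning 0
means the peer shut down, while -1 with EAGAIN only means no data yet.

  #include <errno.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void drain(int fd)
  {
  	char buf[4096];

  	for (;;) {
  		ssize_t n = recv(fd, buf, sizeof(buf), 0);

  		if (n > 0)
  			continue;	/* got data, keep reading */
  		if (n == 0) {
  			close(fd);	/* FIN: peer is done, clean close */
  			return;
  		}
  		if (errno == EAGAIN || errno == EWOULDBLOCK)
  			return;		/* genuinely no data yet; wait in poll() */
  		perror("recv");
  		return;
  	}
  }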

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Tested-by: William Findlay
Reviewed-by: Jakub Sitnicki
Signed-off-by: John Fastabend
---
 net/ipv4/tcp_bpf.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 2e9547467edb..73c13642d47f 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -174,6 +174,24 @@ static int tcp_msg_wait_data(struct sock *sk, struct sk_psock *psock,
 	return ret;
 }
 
+static bool is_next_msg_fin(struct sk_psock *psock)
+{
+	struct scatterlist *sge;
+	struct sk_msg *msg_rx;
+	int i;
+
+	msg_rx = sk_psock_peek_msg(psock);
+	i = msg_rx->sg.start;
+	sge = sk_msg_elem(msg_rx, i);
+	if (!sge->length) {
+		struct sk_buff *skb = msg_rx->skb;
+
+		if (skb && TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+			return true;
+	}
+	return false;
+}
+
 static int tcp_bpf_recvmsg_parser(struct sock *sk,
 				  struct msghdr *msg,
 				  size_t len,
@@ -196,6 +214,19 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 	lock_sock(sk);
 msg_bytes_ready:
 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+	/* The typical case for EFAULT is the socket was gracefully
+	 * shutdown with a FIN pkt. So check here the other case is
+	 * some error on copy_page_to_iter which would be unexpected.
+	 * On fin return correct return code to zero.
+	 */
+	if (copied == -EFAULT) {
+		bool is_fin = is_next_msg_fin(psock);
+
+		if (is_fin) {
+			copied = 0;
+			goto out;
+		}
+	}
 	if (!copied) {
 		long timeo;
 		int data;

From patchwork Fri May 19 04:06:51 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247655
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 06/14] bpf: sockmap, TCP data stall on recv before accept
Date: Thu, 18 May 2023 21:06:51 -0700
Message-Id: <20230519040659.670644-7-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

A common mechanism to put a TCP socket into the sockmap is to hook the
BPF_SOCK_OPS_{ACTIVE,PASSIVE}_ESTABLISHED_CB event with a BPF program that
can map the socket info to the correct BPF verdict parser. When the user
adds the socket to the map the psock is created and the new ops are
assigned to ensure the verdict program will 'see' the sk_buffs as they
arrive.

Part of this process hooks the sk_data_ready op with a BPF-specific
handler to wake up the BPF verdict program when data is ready to read. The
logic is simple enough (posted here for easy reading),

  static void sk_psock_verdict_data_ready(struct sock *sk)
  {
	struct socket *sock = sk->sk_socket;

	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
		return;
	sock->ops->read_skb(sk, sk_psock_verdict_recv);
  }

The oversight here is that sk->sk_socket is not assigned until the
application accept()s the new socket. However, it is entirely OK for the
peer application to do a connect() followed immediately by sends. The
socket on the receiver sits on the backlog queue of the listening socket
until it is accepted and the data is queued up. If the peer never accepts
the socket or is slow, it will eventually hit data limits and rate-limit
the session. But, importantly for BPF sockmap hooks, when this data is
received the TCP stack does the sk_data_ready() call, but read_skb() for
this data is never called because sk_socket is missing. The data sits on
the sk_receive_queue.

Then, once the socket is accepted, if we never receive more data from the
peer there will be no further sk_data_ready calls and all the data is
still on the sk_receive_queue. Then the user calls recvmsg after accept(),
and for TCP sockets in sockmap we use the tcp_bpf_recvmsg_parser()
handler. The handler checks for data in the sk_msg ingress queue,
expecting that the BPF program has already run from the sk_data_ready hook
and enqueued the data as needed. So we are stuck.

To fix, do an unlikely check in the recvmsg handler for data on the
sk_receive_queue and, if it exists, wake up data_ready. We have the sock
locked in both read_skb and recvmsg, so we should avoid having multiple
runners.
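The ordering that triggers the stall can be reproduced from userspace
along these lines (illustration only, error handling elided): the peer
connects and sends immediately, while the receiver only accept()s, and
therefore only gains an sk_socket, later.

  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void client_side(int fd)		/* fd is already connect()ed */
  {
  	const char msg[] = "hello";

  	send(fd, msg, strlen(msg), 0);	/* data lands on sk_receive_queue */
  }					/* before the server ever accepts */

  static void server_side(int listen_fd)
  {
  	char buf[64];
  	int fd;

  	sleep(1);			/* accept() happens "late" */
  	fd = accept(listen_fd, NULL, NULL);
  	/* Without this fix, recvmsg() here could stall or return EAGAIN
  	 * even though data is already queued, because sk_data_ready fired
  	 * pre-accept and read_skb() never ran.
  	 */
  	recv(fd, buf, sizeof(buf), 0);
  	close(fd);
  }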

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Reviewed-by: Jakub Sitnicki
Signed-off-by: John Fastabend
---
 net/ipv4/tcp_bpf.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 73c13642d47f..01dd76be1a58 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -212,6 +212,26 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
 		return tcp_recvmsg(sk, msg, len, flags, addr_len);
 
 	lock_sock(sk);
+
+	/* We may have received data on the sk_receive_queue pre-accept and
+	 * then we can not use read_skb in this context because we haven't
+	 * assigned a sk_socket yet so have no link to the ops. The work-around
+	 * is to check the sk_receive_queue and in these cases read skbs off
+	 * queue again. The read_skb hook is not running at this point because
+	 * of lock_sock so we avoid having multiple runners in read_skb.
+	 */
+	if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
+		tcp_data_ready(sk);
+		/* This handles the ENOMEM errors if we both receive data
+		 * pre accept and are already under memory pressure. At least
+		 * let user know to retry.
+		 */
+		if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) {
+			copied = -EAGAIN;
+			goto out;
+		}
+	}
+
 msg_bytes_ready:
 	copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
 	/* The typical case for EFAULT is the socket was gracefully

From patchwork Fri May 19 04:06:52 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247656
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 07/14] bpf: sockmap, wake up polling after data copy
Date: Thu, 18 May 2023 21:06:52 -0700
Message-Id: <20230519040659.670644-8-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

When the TCP stack has data ready to read, sk_data_ready() is called.
Sockmap overwrites this with its own handler to call into the BPF verdict
program. But the original TCP socket had sock_def_readable, which would
additionally wake up any user space waiters with sk_wake_async().

Sockmap saved the callback when the socket was created, so call the saved
data ready callback; then we can wake up any epoll() logic waiting on the
read. Note we call it on 'copied >= 0' to account for returning 0 when a
FIN is received, because we need to wake up the user for this as well so
they can do the recvmsg() -> 0 and detect the shutdown.

Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()")
Reviewed-by: Jakub Sitnicki
Signed-off-by: John Fastabend
---
 net/core/skmsg.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index bcd45a99a3db..08be5f409fb8 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1199,12 +1199,21 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 static void sk_psock_verdict_data_ready(struct sock *sk)
 {
 	struct socket *sock = sk->sk_socket;
+	int copied;
 
 	trace_sk_data_ready(sk);
 
 	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
 		return;
-	sock->ops->read_skb(sk, sk_psock_verdict_recv);
+	copied = sock->ops->read_skb(sk, sk_psock_verdict_recv);
+	if (copied >= 0) {
+		struct sk_psock *psock;
+
+		rcu_read_lock();
+		psock = sk_psock(sk);
+		psock->saved_data_ready(sk);
+		rcu_read_unlock();
+	}
 }
 
 void sk_psock_start_verdict(struct sock *sk, struct sk_psock *psock)

From patchwork Fri May 19 04:06:53 2023
X-Patchwork-Submitter: John Fastabend
X-Patchwork-Id: 13247657
X-Patchwork-Delegate: bpf@iogearbox.net
From: John Fastabend
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 08/14] bpf: sockmap, incorrectly handling copied_seq
Date: Thu, 18 May 2023 21:06:53 -0700
Message-Id: <20230519040659.670644-9-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

The read_skb() logic is incrementing tcp->copied_seq, which is used, among
other things, to calculate how many outstanding bytes can be read by the
application. This results in application errors: if the application does
an ioctl(FIONREAD) we return zero because the value is calculated from
copied_seq.

To fix this we move the tcp->copied_seq accounting into the recv handler
so that we update it when the recvmsg() hook is called and data is in fact
copied into user buffers. This gives an accurate FIONREAD value as
expected and improves ACK handling. Before, we were calling
tcp_rcv_space_adjust(), which would update 'number of bytes copied to user
in last RTT'; this is wrong for programs returning SK_PASS, because the
bytes are only copied to the user when recvmsg is handled.
Doing the fix for recvmsg is straightforward, but fixing redirect and SK_DROP pkts is a bit tricker. Build a tcp_psock_eat() helper and then call this from skmsg handlers. This fixes another issue where a broken socket with a BPF program doing a resubmit could hang the receiver. This happened because although read_skb() consumed the skb through sock_drop() it did not update the copied_seq. Now if a single reccv socket is redirecting to many sockets (for example for lb) the receiver sk will be hung even though we might expect it to continue. The hang comes from not updating the copied_seq numbers and memory pressure resulting from that. We have a slight layer problem of calling tcp_eat_skb even if its not a TCP socket. To fix we could refactor and create per type receiver handlers. I decided this is more work than we want in the fix and we already have some small tweaks depending on caller that use the helper skb_bpf_strparser(). So we extend that a bit and always set the strparser bit when it is in use and then we can gate the seq_copied updates on this. Fixes: 04919bed948dc ("tcp: Introduce tcp_read_skb()") Signed-off-by: John Fastabend --- include/net/tcp.h | 10 ++++++++++ net/core/skmsg.c | 15 +++++++-------- net/ipv4/tcp.c | 10 +--------- net/ipv4/tcp_bpf.c | 28 +++++++++++++++++++++++++++- 4 files changed, 45 insertions(+), 18 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index 04a31643cda3..18a038d16434 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1470,6 +1470,8 @@ static inline void tcp_adjust_rcv_ssthresh(struct sock *sk) } void tcp_cleanup_rbuf(struct sock *sk, int copied); +void __tcp_cleanup_rbuf(struct sock *sk, int copied); + /* We provision sk_rcvbuf around 200% of sk_rcvlowat. * If 87.5 % (7/8) of the space has been consumed, we want to override @@ -2326,6 +2328,14 @@ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore); void tcp_bpf_clone(const struct sock *sk, struct sock *newsk); #endif /* CONFIG_BPF_SYSCALL */ +#ifdef CONFIG_INET +void tcp_eat_skb(struct sock *sk, struct sk_buff *skb); +#else +static inline void tcp_eat_skb(struct sock *sk, struct sk_buff *skb) +{ +} +#endif + int tcp_bpf_sendmsg_redir(struct sock *sk, bool ingress, struct sk_msg *msg, u32 bytes, int flags); #endif /* CONFIG_NET_SOCK_MSG */ diff --git a/net/core/skmsg.c b/net/core/skmsg.c index 08be5f409fb8..a9060e1f0e43 100644 --- a/net/core/skmsg.c +++ b/net/core/skmsg.c @@ -979,10 +979,8 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb, err = -EIO; sk_other = psock->sk; if (sock_flag(sk_other, SOCK_DEAD) || - !sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) { - skb_bpf_redirect_clear(skb); + !sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) goto out_free; - } skb_bpf_set_ingress(skb); @@ -1011,18 +1009,19 @@ static int sk_psock_verdict_apply(struct sk_psock *psock, struct sk_buff *skb, err = 0; } spin_unlock_bh(&psock->ingress_lock); - if (err < 0) { - skb_bpf_redirect_clear(skb); + if (err < 0) goto out_free; - } } break; case __SK_REDIRECT: + tcp_eat_skb(psock->sk, skb); err = sk_psock_skb_redirect(psock, skb); break; case __SK_DROP: default: out_free: + skb_bpf_redirect_clear(skb); + tcp_eat_skb(psock->sk, skb); sock_drop(psock->sk, skb); } @@ -1067,8 +1066,7 @@ static void sk_psock_strp_read(struct strparser *strp, struct sk_buff *skb) skb_dst_drop(skb); skb_bpf_redirect_clear(skb); ret = bpf_prog_run_pin_on_cpu(prog, skb); - if (ret == SK_PASS) - skb_bpf_set_strparser(skb); + skb_bpf_set_strparser(skb); 
ret = sk_psock_map_verd(ret, skb_bpf_redirect_fetch(skb)); skb->sk = NULL; } @@ -1176,6 +1174,7 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb) psock = sk_psock(sk); if (unlikely(!psock)) { len = 0; + tcp_eat_skb(sk, skb); sock_drop(sk, skb); goto out; } diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index e914e3446377..a60f6f4e7cd9 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1571,7 +1571,7 @@ static int tcp_peek_sndq(struct sock *sk, struct msghdr *msg, int len) * calculation of whether or not we must ACK for the sake of * a window update. */ -static void __tcp_cleanup_rbuf(struct sock *sk, int copied) +void __tcp_cleanup_rbuf(struct sock *sk, int copied) { struct tcp_sock *tp = tcp_sk(sk); bool time_to_ack = false; @@ -1786,14 +1786,6 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor) break; } } - WRITE_ONCE(tp->copied_seq, seq); - - tcp_rcv_space_adjust(sk); - - /* Clean up data we have read: This will do ACK frames. */ - if (copied > 0) - __tcp_cleanup_rbuf(sk, copied); - return copied; } EXPORT_SYMBOL(tcp_read_skb); diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c index 01dd76be1a58..5f93918c063c 100644 --- a/net/ipv4/tcp_bpf.c +++ b/net/ipv4/tcp_bpf.c @@ -11,6 +11,24 @@ #include #include +void tcp_eat_skb(struct sock *sk, struct sk_buff *skb) +{ + struct tcp_sock *tcp; + int copied; + + if (!skb || !skb->len || !sk_is_tcp(sk)) + return; + + if (skb_bpf_strparser(skb)) + return; + + tcp = tcp_sk(sk); + copied = tcp->copied_seq + skb->len; + WRITE_ONCE(tcp->copied_seq, copied); + tcp_rcv_space_adjust(sk); + __tcp_cleanup_rbuf(sk, skb->len); +} + static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock, struct sk_msg *msg, u32 apply_bytes, int flags) { @@ -198,8 +216,10 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk, int flags, int *addr_len) { + struct tcp_sock *tcp = tcp_sk(sk); + u32 seq = tcp->copied_seq; struct sk_psock *psock; - int copied; + int copied = 0; if (unlikely(flags & MSG_ERRQUEUE)) return inet_recv_error(sk, msg, len, addr_len); @@ -244,9 +264,11 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk, if (is_fin) { copied = 0; + seq++; goto out; } } + seq += copied; if (!copied) { long timeo; int data; @@ -284,6 +306,10 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk, copied = -EAGAIN; } out: + WRITE_ONCE(tcp->copied_seq, seq); + tcp_rcv_space_adjust(sk); + if (copied > 0) + __tcp_cleanup_rbuf(sk, copied); release_sock(sk); sk_psock_put(sk, psock); return copied; From patchwork Fri May 19 04:06:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13247658 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7E42A566C; Fri, 19 May 2023 04:07:19 +0000 (UTC) Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 764BBE72; Thu, 18 May 2023 21:07:17 -0700 (PDT) Received: by mail-pf1-x42b.google.com with SMTP id d2e1a72fcca58-64d18d772bdso1727254b3a.3; Thu, 18 May 2023 21:07:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684469237; x=1687061237; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=UbD3kmlAbOYvz8IVqqw1Zw8S+MqE+0PB6ufwO0HCQN4=; b=NXZgZAneqxQI/PSbKGrGiqdiGDbuDAAlrVPCugkhoHDaREUg+NKGjRLY0duqh+ER4T YhCKany4GKDyttiVeGBR+ly4szPjiNgcKNYxPLHtQ6IRq02MBvLxcg7yJBWwwOx5AnYA 1bB+bCcZIQRoIJ/crYO7IA7HnteSJLXotLl3BxRJGQ1B/e2DhHiOfBgxsOaPIYOxyp3k dRX2rt71Cyeyh8PsA+UcrHVkD0L85IBht4GrvK0iq0YO2p/bIJmO8Mo5jU7fLsyxtNeZ oRZLgohmEjcB6tT9USmEGgSOZKWPwTym9Zn/OHe9EoBe9UKjh35FpiuuDXTLzxvS3M6h OLqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684469237; x=1687061237; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=UbD3kmlAbOYvz8IVqqw1Zw8S+MqE+0PB6ufwO0HCQN4=; b=cpMxibxXgH5Rj92kRjqXGtNC9BfIgUljeOq5KqsOPZdO0wVS2L4tqkFuC9FmSc0SR+ CgcQe6PkYxv/pB710M09gWckACL3pnCnfjQNSvF1tdRyvgAG0C1hSRd/lVLHgR3PkMNl 2LUobBAUkNYREWZQZyJhpkhbt6079t3r0Koh1IsBlzmPnuVN3dOmi5boR/sMsUXw28Z0 YAPnW/psMIPkwpotCGMKx2NqKNmtzC+sjIGbVgg5Bu30QLIRRX0Ig5UfK4M/WFi/6RQv RMchv3Heychzm/voQFxZfb5R2oDRgN+D+ZEUoAROM+jTU4JENeV0O+77thPZTXnHxu69 Ygbw== X-Gm-Message-State: AC+VfDx+N0x1bQigWPt1+V1JG1dAAnbCTsiKq8eAO4tUJrrAU6SyrF1J 8nRYFvrT7N4GjEtUAt/pVvJeag4PhpY= X-Google-Smtp-Source: ACHHUZ6q5Kzgod4qAKJsXsQj/LDzWyVNFYg+eB2GkfvhtQ9oubA3l9k9Y5Zx86DLt6LmXREMW/llFA== X-Received: by 2002:a05:6a00:2d28:b0:64a:f730:154b with SMTP id fa40-20020a056a002d2800b0064af730154bmr1693395pfb.5.1684469236851; Thu, 18 May 2023 21:07:16 -0700 (PDT) Received: from john.lan ([2605:59c8:148:ba10:706:628a:e6ce:c8a9]) by smtp.gmail.com with ESMTPSA id x11-20020aa784cb000000b00625d84a0194sm434833pfn.107.2023.05.18.21.07.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 18 May 2023 21:07:16 -0700 (PDT) From: John Fastabend To: jakub@cloudflare.com, daniel@iogearbox.net Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v9 09/14] bpf: sockmap, pull socket helpers out of listen test for general use Date: Thu, 18 May 2023 21:06:54 -0700 Message-Id: <20230519040659.670644-10-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com> References: <20230519040659.670644-1-john.fastabend@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net No functional change here we merely pull the helpers in sockmap_listen.c into a header file so we can use these in other programs. The tests we are about to add aren't really _listen tests so doesn't make sense to add them here. 
Reviewed-by: Jakub Sitnicki Signed-off-by: John Fastabend --- .../bpf/prog_tests/sockmap_helpers.h | 267 ++++++++++++++++++ .../selftests/bpf/prog_tests/sockmap_listen.c | 258 +---------------- 2 files changed, 268 insertions(+), 257 deletions(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h new file mode 100644 index 000000000000..d516418a0a18 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h @@ -0,0 +1,267 @@ +#ifndef __SOCKMAP_HELPERS__ +#define __SOCKMAP_HELPERS__ + +#include + +#define IO_TIMEOUT_SEC 30 +#define MAX_STRERR_LEN 256 +#define MAX_TEST_NAME 80 + +#define __always_unused __attribute__((__unused__)) + +#define _FAIL(errnum, fmt...) \ + ({ \ + error_at_line(0, (errnum), __func__, __LINE__, fmt); \ + CHECK_FAIL(true); \ + }) +#define FAIL(fmt...) _FAIL(0, fmt) +#define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) +#define FAIL_LIBBPF(err, msg) \ + ({ \ + char __buf[MAX_STRERR_LEN]; \ + libbpf_strerror((err), __buf, sizeof(__buf)); \ + FAIL("%s: %s", (msg), __buf); \ + }) + +/* Wrappers that fail the test on error and report it. */ + +#define xaccept_nonblock(fd, addr, len) \ + ({ \ + int __ret = \ + accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ + if (__ret == -1) \ + FAIL_ERRNO("accept"); \ + __ret; \ + }) + +#define xbind(fd, addr, len) \ + ({ \ + int __ret = bind((fd), (addr), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("bind"); \ + __ret; \ + }) + +#define xclose(fd) \ + ({ \ + int __ret = close((fd)); \ + if (__ret == -1) \ + FAIL_ERRNO("close"); \ + __ret; \ + }) + +#define xconnect(fd, addr, len) \ + ({ \ + int __ret = connect((fd), (addr), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("connect"); \ + __ret; \ + }) + +#define xgetsockname(fd, addr, len) \ + ({ \ + int __ret = getsockname((fd), (addr), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("getsockname"); \ + __ret; \ + }) + +#define xgetsockopt(fd, level, name, val, len) \ + ({ \ + int __ret = getsockopt((fd), (level), (name), (val), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("getsockopt(" #name ")"); \ + __ret; \ + }) + +#define xlisten(fd, backlog) \ + ({ \ + int __ret = listen((fd), (backlog)); \ + if (__ret == -1) \ + FAIL_ERRNO("listen"); \ + __ret; \ + }) + +#define xsetsockopt(fd, level, name, val, len) \ + ({ \ + int __ret = setsockopt((fd), (level), (name), (val), (len)); \ + if (__ret == -1) \ + FAIL_ERRNO("setsockopt(" #name ")"); \ + __ret; \ + }) + +#define xsend(fd, buf, len, flags) \ + ({ \ + ssize_t __ret = send((fd), (buf), (len), (flags)); \ + if (__ret == -1) \ + FAIL_ERRNO("send"); \ + __ret; \ + }) + +#define xrecv_nonblock(fd, buf, len, flags) \ + ({ \ + ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ + IO_TIMEOUT_SEC); \ + if (__ret == -1) \ + FAIL_ERRNO("recv"); \ + __ret; \ + }) + +#define xsocket(family, sotype, flags) \ + ({ \ + int __ret = socket(family, sotype, flags); \ + if (__ret == -1) \ + FAIL_ERRNO("socket"); \ + __ret; \ + }) + +#define xbpf_map_delete_elem(fd, key) \ + ({ \ + int __ret = bpf_map_delete_elem((fd), (key)); \ + if (__ret < 0) \ + FAIL_ERRNO("map_delete"); \ + __ret; \ + }) + +#define xbpf_map_lookup_elem(fd, key, val) \ + ({ \ + int __ret = bpf_map_lookup_elem((fd), (key), (val)); \ + if (__ret < 0) \ + FAIL_ERRNO("map_lookup"); \ + __ret; \ + }) + +#define xbpf_map_update_elem(fd, key, val, flags) \ + ({ \ + int __ret = bpf_map_update_elem((fd), (key), (val), (flags)); 
\ + if (__ret < 0) \ + FAIL_ERRNO("map_update"); \ + __ret; \ + }) + +#define xbpf_prog_attach(prog, target, type, flags) \ + ({ \ + int __ret = \ + bpf_prog_attach((prog), (target), (type), (flags)); \ + if (__ret < 0) \ + FAIL_ERRNO("prog_attach(" #type ")"); \ + __ret; \ + }) + +#define xbpf_prog_detach2(prog, target, type) \ + ({ \ + int __ret = bpf_prog_detach2((prog), (target), (type)); \ + if (__ret < 0) \ + FAIL_ERRNO("prog_detach2(" #type ")"); \ + __ret; \ + }) + +#define xpthread_create(thread, attr, func, arg) \ + ({ \ + int __ret = pthread_create((thread), (attr), (func), (arg)); \ + errno = __ret; \ + if (__ret) \ + FAIL_ERRNO("pthread_create"); \ + __ret; \ + }) + +#define xpthread_join(thread, retval) \ + ({ \ + int __ret = pthread_join((thread), (retval)); \ + errno = __ret; \ + if (__ret) \ + FAIL_ERRNO("pthread_join"); \ + __ret; \ + }) + +static inline int poll_read(int fd, unsigned int timeout_sec) +{ + struct timeval timeout = { .tv_sec = timeout_sec }; + fd_set rfds; + int r; + + FD_ZERO(&rfds); + FD_SET(fd, &rfds); + + r = select(fd + 1, &rfds, NULL, NULL, &timeout); + if (r == 0) + errno = ETIME; + + return r == 1 ? 0 : -1; +} + +static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, + unsigned int timeout_sec) +{ + if (poll_read(fd, timeout_sec)) + return -1; + + return accept(fd, addr, len); +} + +static inline int recv_timeout(int fd, void *buf, size_t len, int flags, + unsigned int timeout_sec) +{ + if (poll_read(fd, timeout_sec)) + return -1; + + return recv(fd, buf, len, flags); +} + +static inline void init_addr_loopback4(struct sockaddr_storage *ss, + socklen_t *len) +{ + struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); + + addr4->sin_family = AF_INET; + addr4->sin_port = 0; + addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); + *len = sizeof(*addr4); +} + +static inline void init_addr_loopback6(struct sockaddr_storage *ss, + socklen_t *len) +{ + struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); + + addr6->sin6_family = AF_INET6; + addr6->sin6_port = 0; + addr6->sin6_addr = in6addr_loopback; + *len = sizeof(*addr6); +} + +static inline void init_addr_loopback_vsock(struct sockaddr_storage *ss, + socklen_t *len) +{ + struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss)); + + addr->svm_family = AF_VSOCK; + addr->svm_port = VMADDR_PORT_ANY; + addr->svm_cid = VMADDR_CID_LOCAL; + *len = sizeof(*addr); +} + +static inline void init_addr_loopback(int family, struct sockaddr_storage *ss, + socklen_t *len) +{ + switch (family) { + case AF_INET: + init_addr_loopback4(ss, len); + return; + case AF_INET6: + init_addr_loopback6(ss, len); + return; + case AF_VSOCK: + init_addr_loopback_vsock(ss, len); + return; + default: + FAIL("unsupported address family %d", family); + } +} + +static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) +{ + return (struct sockaddr *)ss; +} + +#endif // __SOCKMAP_HELPERS__ diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c index 141c1e5944ee..3deffaa04885 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c @@ -32,263 +32,7 @@ #include "test_progs.h" #include "test_sockmap_listen.skel.h" -#define IO_TIMEOUT_SEC 30 -#define MAX_STRERR_LEN 256 -#define MAX_TEST_NAME 80 - -#define __always_unused __attribute__((__unused__)) - -#define _FAIL(errnum, fmt...) 
\ - ({ \ - error_at_line(0, (errnum), __func__, __LINE__, fmt); \ - CHECK_FAIL(true); \ - }) -#define FAIL(fmt...) _FAIL(0, fmt) -#define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) -#define FAIL_LIBBPF(err, msg) \ - ({ \ - char __buf[MAX_STRERR_LEN]; \ - libbpf_strerror((err), __buf, sizeof(__buf)); \ - FAIL("%s: %s", (msg), __buf); \ - }) - -/* Wrappers that fail the test on error and report it. */ - -#define xaccept_nonblock(fd, addr, len) \ - ({ \ - int __ret = \ - accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ - if (__ret == -1) \ - FAIL_ERRNO("accept"); \ - __ret; \ - }) - -#define xbind(fd, addr, len) \ - ({ \ - int __ret = bind((fd), (addr), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("bind"); \ - __ret; \ - }) - -#define xclose(fd) \ - ({ \ - int __ret = close((fd)); \ - if (__ret == -1) \ - FAIL_ERRNO("close"); \ - __ret; \ - }) - -#define xconnect(fd, addr, len) \ - ({ \ - int __ret = connect((fd), (addr), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("connect"); \ - __ret; \ - }) - -#define xgetsockname(fd, addr, len) \ - ({ \ - int __ret = getsockname((fd), (addr), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("getsockname"); \ - __ret; \ - }) - -#define xgetsockopt(fd, level, name, val, len) \ - ({ \ - int __ret = getsockopt((fd), (level), (name), (val), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("getsockopt(" #name ")"); \ - __ret; \ - }) - -#define xlisten(fd, backlog) \ - ({ \ - int __ret = listen((fd), (backlog)); \ - if (__ret == -1) \ - FAIL_ERRNO("listen"); \ - __ret; \ - }) - -#define xsetsockopt(fd, level, name, val, len) \ - ({ \ - int __ret = setsockopt((fd), (level), (name), (val), (len)); \ - if (__ret == -1) \ - FAIL_ERRNO("setsockopt(" #name ")"); \ - __ret; \ - }) - -#define xsend(fd, buf, len, flags) \ - ({ \ - ssize_t __ret = send((fd), (buf), (len), (flags)); \ - if (__ret == -1) \ - FAIL_ERRNO("send"); \ - __ret; \ - }) - -#define xrecv_nonblock(fd, buf, len, flags) \ - ({ \ - ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ - IO_TIMEOUT_SEC); \ - if (__ret == -1) \ - FAIL_ERRNO("recv"); \ - __ret; \ - }) - -#define xsocket(family, sotype, flags) \ - ({ \ - int __ret = socket(family, sotype, flags); \ - if (__ret == -1) \ - FAIL_ERRNO("socket"); \ - __ret; \ - }) - -#define xbpf_map_delete_elem(fd, key) \ - ({ \ - int __ret = bpf_map_delete_elem((fd), (key)); \ - if (__ret < 0) \ - FAIL_ERRNO("map_delete"); \ - __ret; \ - }) - -#define xbpf_map_lookup_elem(fd, key, val) \ - ({ \ - int __ret = bpf_map_lookup_elem((fd), (key), (val)); \ - if (__ret < 0) \ - FAIL_ERRNO("map_lookup"); \ - __ret; \ - }) - -#define xbpf_map_update_elem(fd, key, val, flags) \ - ({ \ - int __ret = bpf_map_update_elem((fd), (key), (val), (flags)); \ - if (__ret < 0) \ - FAIL_ERRNO("map_update"); \ - __ret; \ - }) - -#define xbpf_prog_attach(prog, target, type, flags) \ - ({ \ - int __ret = \ - bpf_prog_attach((prog), (target), (type), (flags)); \ - if (__ret < 0) \ - FAIL_ERRNO("prog_attach(" #type ")"); \ - __ret; \ - }) - -#define xbpf_prog_detach2(prog, target, type) \ - ({ \ - int __ret = bpf_prog_detach2((prog), (target), (type)); \ - if (__ret < 0) \ - FAIL_ERRNO("prog_detach2(" #type ")"); \ - __ret; \ - }) - -#define xpthread_create(thread, attr, func, arg) \ - ({ \ - int __ret = pthread_create((thread), (attr), (func), (arg)); \ - errno = __ret; \ - if (__ret) \ - FAIL_ERRNO("pthread_create"); \ - __ret; \ - }) - -#define xpthread_join(thread, retval) \ - ({ \ - int __ret = pthread_join((thread), (retval)); \ - errno = __ret; \ - if (__ret) \ - 
FAIL_ERRNO("pthread_join"); \ - __ret; \ - }) - -static int poll_read(int fd, unsigned int timeout_sec) -{ - struct timeval timeout = { .tv_sec = timeout_sec }; - fd_set rfds; - int r; - - FD_ZERO(&rfds); - FD_SET(fd, &rfds); - - r = select(fd + 1, &rfds, NULL, NULL, &timeout); - if (r == 0) - errno = ETIME; - - return r == 1 ? 0 : -1; -} - -static int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, - unsigned int timeout_sec) -{ - if (poll_read(fd, timeout_sec)) - return -1; - - return accept(fd, addr, len); -} - -static int recv_timeout(int fd, void *buf, size_t len, int flags, - unsigned int timeout_sec) -{ - if (poll_read(fd, timeout_sec)) - return -1; - - return recv(fd, buf, len, flags); -} - -static void init_addr_loopback4(struct sockaddr_storage *ss, socklen_t *len) -{ - struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); - - addr4->sin_family = AF_INET; - addr4->sin_port = 0; - addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); - *len = sizeof(*addr4); -} - -static void init_addr_loopback6(struct sockaddr_storage *ss, socklen_t *len) -{ - struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); - - addr6->sin6_family = AF_INET6; - addr6->sin6_port = 0; - addr6->sin6_addr = in6addr_loopback; - *len = sizeof(*addr6); -} - -static void init_addr_loopback_vsock(struct sockaddr_storage *ss, socklen_t *len) -{ - struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss)); - - addr->svm_family = AF_VSOCK; - addr->svm_port = VMADDR_PORT_ANY; - addr->svm_cid = VMADDR_CID_LOCAL; - *len = sizeof(*addr); -} - -static void init_addr_loopback(int family, struct sockaddr_storage *ss, - socklen_t *len) -{ - switch (family) { - case AF_INET: - init_addr_loopback4(ss, len); - return; - case AF_INET6: - init_addr_loopback6(ss, len); - return; - case AF_VSOCK: - init_addr_loopback_vsock(ss, len); - return; - default: - FAIL("unsupported address family %d", family); - } -} - -static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) -{ - return (struct sockaddr *)ss; -} +#include "sockmap_helpers.h" static int enable_reuseport(int s, int progfd) { From patchwork Fri May 19 04:06:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13247659 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 837D5566C; Fri, 19 May 2023 04:07:20 +0000 (UTC) Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1F43310DF; Thu, 18 May 2023 21:07:19 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id d2e1a72fcca58-643a1fed360so2031961b3a.3; Thu, 18 May 2023 21:07:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684469238; x=1687061238; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=/61Dd2DU59fhyIeFPysOqmpRUlvbvp/lnU8ooWFLyfU=; b=nWup8tl+OCaSvE7OAegJ/p+J9GdFWD2Qps5HA3kdb/TRCu+02HXMwbHds1V6WBx2Ki P/pIF0HX/bkcfj4fUI2NTLNRrWHFVX0BzR+MepmtpZPewR19qFcvsHeUfGEFy1DCDwPz tHw/QsUTAvNiIeevflTPBXiVwgBry58OMom3d1Qd36zKDB3bjYNHDmNl1tTUSSC/puO+ 2J7Fx/K9T3UFiIaTlKrJLKCxFfW00obdh8Ok326N1qvq8FU7C3w/DtmE5j+zApVOQhAI 
eha16YgJHyDROdPNyX6JOdkrkLcECWpuRWMaL0xzH+E12mNaTw8fiJQtaeKdSixV3E47 fDVA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684469238; x=1687061238; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/61Dd2DU59fhyIeFPysOqmpRUlvbvp/lnU8ooWFLyfU=; b=OAGRY2qvhwDJhp6vBgio7MpwBkrSO0UuIDawXaF/1GJsEaCVFVUK/d2qN+aZp8uj4Y JD/8dnV4dRxZLcVlcMRLk5t3ANtsvvqc9R3n37wC+YFNa1K8IsuVnj8RVi1gjYzvvI5G 4DwPfBHwJyB93We89NPWXMO/CSGyrE9ebl/oaJnwofjYIkipwdOg9E4q3UzJO/eOHl4R aCBJPyGRdhk+M//BSPcLb2/qF6kdCfLv8wUx+e9bexu1AwljM35sgm1BKeLM/DLQSjKu 8w8erQfaM0R0T7rG5B8SDnZx9SRRAi+RDSKoN7gI/bRNzqY3CfmEJvZcAz6/hnZ5UzyA muhQ== X-Gm-Message-State: AC+VfDwpkD7OgQHbxNJBIiBIxH5kwCAXzDhmCJufkHkRjF/xcJ/MdN8V dH4U3fmIDjW73j6lT3bLBEN85OIcdiU= X-Google-Smtp-Source: ACHHUZ5/m11Ma4/qQgzwI75q/jZtXARehbwupCwq8JCVzD45/6d9b0Ymzdz/b2B06MbuhUXE2cGYTg== X-Received: by 2002:a05:6a20:2624:b0:100:a2ca:7ca8 with SMTP id i36-20020a056a20262400b00100a2ca7ca8mr611421pze.54.1684469238436; Thu, 18 May 2023 21:07:18 -0700 (PDT) Received: from john.lan ([2605:59c8:148:ba10:706:628a:e6ce:c8a9]) by smtp.gmail.com with ESMTPSA id x11-20020aa784cb000000b00625d84a0194sm434833pfn.107.2023.05.18.21.07.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 18 May 2023 21:07:17 -0700 (PDT) From: John Fastabend To: jakub@cloudflare.com, daniel@iogearbox.net Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v9 10/14] bpf: sockmap, build helper to create connected socket pair Date: Thu, 18 May 2023 21:06:55 -0700 Message-Id: <20230519040659.670644-11-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com> References: <20230519040659.670644-1-john.fastabend@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net A common operation for testing is to spin up a pair of sockets that are connected. Then we can use these to run specific tests that need to send data, check BPF programs and so on. The sockmap_listen programs already have this logic lets move it into the new sockmap_helpers header file for general use. 
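[Editor's usage sketch, not taken from the series; the function name and test body are placeholders. It shows how a selftest could combine socket_loopback() and the create_pair() helper added here to get one connected TCP pair on loopback, assuming the usual test_progs includes are in scope.]

#include "sockmap_helpers.h"

static void example_connected_pair(void)
{
	int s, c = -1, p = -1;

	s = socket_loopback(AF_INET, SOCK_STREAM);	/* bound, listening server */
	if (s < 0)
		return;

	/* create_pair() returns 0 on success; on failure it cleans up the
	 * client socket itself, so only the server needs closing here.
	 */
	if (create_pair(s, AF_INET, SOCK_STREAM, &c, &p))
		goto close_srv;

	/* c is the connected client, p the accepted peer; run the test here */

	xclose(c);
	xclose(p);
close_srv:
	xclose(s);
}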
Signed-off-by: John Fastabend --- .../bpf/prog_tests/sockmap_helpers.h | 118 ++++++++++++++++++ .../selftests/bpf/prog_tests/sockmap_listen.c | 107 +--------------- 2 files changed, 123 insertions(+), 102 deletions(-) diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h index d516418a0a18..8b4a540cbd65 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h @@ -264,4 +264,122 @@ static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) return (struct sockaddr *)ss; } +static inline int add_to_sockmap(int sock_mapfd, int fd1, int fd2) +{ + u64 value; + u32 key; + int err; + + key = 0; + value = fd1; + err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); + if (err) + return err; + + key = 1; + value = fd2; + return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); +} + +static inline int create_pair(int s, int family, int sotype, int *c, int *p) +{ + struct sockaddr_storage addr; + socklen_t len; + int err = 0; + + len = sizeof(addr); + err = xgetsockname(s, sockaddr(&addr), &len); + if (err) + return err; + + *c = xsocket(family, sotype, 0); + if (*c < 0) + return errno; + err = xconnect(*c, sockaddr(&addr), len); + if (err) { + err = errno; + goto close_cli0; + } + + *p = xaccept_nonblock(s, NULL, NULL); + if (*p < 0) { + err = errno; + goto close_cli0; + } + return err; +close_cli0: + close(*c); + return err; +} + +static inline int create_socket_pairs(int s, int family, int sotype, + int *c0, int *c1, int *p0, int *p1) +{ + int err; + + err = create_pair(s, family, sotype, c0, p0); + if (err) + return err; + + err = create_pair(s, family, sotype, c1, p1); + if (err) { + close(*c0); + close(*p0); + } + return err; +} + +static inline int enable_reuseport(int s, int progfd) +{ + int err, one = 1; + + err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); + if (err) + return -1; + err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, + sizeof(progfd)); + if (err) + return -1; + + return 0; +} + +static inline int socket_loopback_reuseport(int family, int sotype, int progfd) +{ + struct sockaddr_storage addr; + socklen_t len; + int err, s; + + init_addr_loopback(family, &addr, &len); + + s = xsocket(family, sotype, 0); + if (s == -1) + return -1; + + if (progfd >= 0) + enable_reuseport(s, progfd); + + err = xbind(s, sockaddr(&addr), len); + if (err) + goto close; + + if (sotype & SOCK_DGRAM) + return s; + + err = xlisten(s, SOMAXCONN); + if (err) + goto close; + + return s; +close: + xclose(s); + return -1; +} + +static inline int socket_loopback(int family, int sotype) +{ + return socket_loopback_reuseport(family, sotype, -1); +} + + #endif // __SOCKMAP_HELPERS__ diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c index 3deffaa04885..5ba808186186 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c @@ -34,58 +34,6 @@ #include "sockmap_helpers.h" -static int enable_reuseport(int s, int progfd) -{ - int err, one = 1; - - err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); - if (err) - return -1; - err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, - sizeof(progfd)); - if (err) - return -1; - - return 0; -} - -static int socket_loopback_reuseport(int family, int sotype, int progfd) -{ - struct 
sockaddr_storage addr; - socklen_t len; - int err, s; - - init_addr_loopback(family, &addr, &len); - - s = xsocket(family, sotype, 0); - if (s == -1) - return -1; - - if (progfd >= 0) - enable_reuseport(s, progfd); - - err = xbind(s, sockaddr(&addr), len); - if (err) - goto close; - - if (sotype & SOCK_DGRAM) - return s; - - err = xlisten(s, SOMAXCONN); - if (err) - goto close; - - return s; -close: - xclose(s); - return -1; -} - -static int socket_loopback(int family, int sotype) -{ - return socket_loopback_reuseport(family, sotype, -1); -} - static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused, int family, int sotype, int mapfd) { @@ -728,31 +676,12 @@ static const char *redir_mode_str(enum redir_mode mode) } } -static int add_to_sockmap(int sock_mapfd, int fd1, int fd2) -{ - u64 value; - u32 key; - int err; - - key = 0; - value = fd1; - err = xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); - if (err) - return err; - - key = 1; - value = fd2; - return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); -} - static void redir_to_connected(int family, int sotype, int sock_mapfd, int verd_mapfd, enum redir_mode mode) { const char *log_prefix = redir_mode_str(mode); - struct sockaddr_storage addr; int s, c0, c1, p0, p1; unsigned int pass; - socklen_t len; int err, n; u32 key; char b; @@ -763,36 +692,13 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd, if (s < 0) return; - len = sizeof(addr); - err = xgetsockname(s, sockaddr(&addr), &len); + err = create_socket_pairs(s, family, sotype, &c0, &c1, &p0, &p1); if (err) goto close_srv; - c0 = xsocket(family, sotype, 0); - if (c0 < 0) - goto close_srv; - err = xconnect(c0, sockaddr(&addr), len); - if (err) - goto close_cli0; - - p0 = xaccept_nonblock(s, NULL, NULL); - if (p0 < 0) - goto close_cli0; - - c1 = xsocket(family, sotype, 0); - if (c1 < 0) - goto close_peer0; - err = xconnect(c1, sockaddr(&addr), len); - if (err) - goto close_cli1; - - p1 = xaccept_nonblock(s, NULL, NULL); - if (p1 < 0) - goto close_cli1; - err = add_to_sockmap(sock_mapfd, p0, p1); if (err) - goto close_peer1; + goto close; n = write(mode == REDIR_INGRESS ? 
c1 : p1, "a", 1); if (n < 0) @@ -800,12 +706,12 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd, if (n == 0) FAIL("%s: incomplete write", log_prefix); if (n < 1) - goto close_peer1; + goto close; key = SK_PASS; err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass); if (err) - goto close_peer1; + goto close; if (pass != 1) FAIL("%s: want pass count 1, have %d", log_prefix, pass); n = recv_timeout(c0, &b, 1, 0, IO_TIMEOUT_SEC); @@ -814,13 +720,10 @@ static void redir_to_connected(int family, int sotype, int sock_mapfd, if (n == 0) FAIL("%s: incomplete recv", log_prefix); -close_peer1: +close: xclose(p1); -close_cli1: xclose(c1); -close_peer0: xclose(p0); -close_cli0: xclose(c0); close_srv: xclose(s); From patchwork Fri May 19 04:06:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13247660 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 490F55697; Fri, 19 May 2023 04:07:23 +0000 (UTC) Received: from mail-pf1-x42c.google.com (mail-pf1-x42c.google.com [IPv6:2607:f8b0:4864:20::42c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 97DA5E72; Thu, 18 May 2023 21:07:20 -0700 (PDT) Received: by mail-pf1-x42c.google.com with SMTP id d2e1a72fcca58-64d2a87b9daso680747b3a.0; Thu, 18 May 2023 21:07:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684469240; x=1687061240; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=kLga20o0ypa3YZ6F5VBKmdyV1GP+jP4eW2M1n6l95P8=; b=KY7o82VwS7Nq1wvHAWJFen4bamTSynIiV61kY/OeTYwrJjtPNKYCDIgQNAXf5aBJ20 aRaaabW+IckoBXxSzlNKASOTrCYARiSezx+arliNGRudX1MRJu+wTIdF3h3MB6dpQlY0 BmeEc9PUFtdjz+CNdXBpb7KDZCq6T6509lwgI2XvTc3IEGxkDkhILC64YXSQ0G/ZZa/L HxSgIuyjxFyBc3W6azCWk6c5HZytFtqG9IOoXhY/bDo2PURKZF3b+rtqtdWxlR7FJnzx YMIGURmxD1AavXzge5dDI8f5LedAoFte3A5b+JqOEB5bR+A0kIavzvqqQCEdT/HOnWXu 8kew== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684469240; x=1687061240; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=kLga20o0ypa3YZ6F5VBKmdyV1GP+jP4eW2M1n6l95P8=; b=OLg0PELnNu7UFo2CjIsW/vOfkcRghKnYwMKMeFGKrc9OlMedjBHltqfIkXEQoRX8Q+ y56TXjJw+ZNAr0BwNTpaJsBCXF/3Jxpa2JrLQUUHvs5YmJ8RlE1/hSMTji7WyTtHDtkT jfFR1q4qJ7Y9AR+/tIIVONHU2rO6ziz55badivcgVDRHFRDfmt/1I8CWcrTw+lWsbmLa 62K52u0GRXmlK3Jdum/btMNs/1+WcKu21QnYfYkzU+SB9nu45KhU5UpuOtPl1TiY7LCu PX9leBpC6ambjFhUT077ico3GBOae6zKAoS0zRjx/JrKJR2LYXMDu4h96vLU+HPKz4Xp zhwQ== X-Gm-Message-State: AC+VfDzLRYo4dZu5ywCm52s5ZhVzn7m8erhNRfwMSRzkjJUl8+8aIL4h 3CKeSIJzkc14+ytAqHCzYvw= X-Google-Smtp-Source: ACHHUZ6auJgq5/qH8nBA6MfFjxOmwUUBBa9Pyi1GC2erDSrIy1rlI8rha2pkkp+rc9+8dZfUmI6reA== X-Received: by 2002:a05:6a00:1596:b0:63b:8778:99e4 with SMTP id u22-20020a056a00159600b0063b877899e4mr1535981pfk.2.1684469240077; Thu, 18 May 2023 21:07:20 -0700 (PDT) Received: from john.lan ([2605:59c8:148:ba10:706:628a:e6ce:c8a9]) by smtp.gmail.com with ESMTPSA id x11-20020aa784cb000000b00625d84a0194sm434833pfn.107.2023.05.18.21.07.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 18 May 2023 21:07:19 
-0700 (PDT) From: John Fastabend To: jakub@cloudflare.com, daniel@iogearbox.net Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v9 11/14] bpf: sockmap, test shutdown() correctly exits epoll and recv()=0 Date: Thu, 18 May 2023 21:06:56 -0700 Message-Id: <20230519040659.670644-12-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com> References: <20230519040659.670644-1-john.fastabend@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net When session gracefully shutdowns epoll needs to wake up and any recv() readers should return 0 not the -EAGAIN they previously returned. Note we use epoll instead of select to test the epoll wake on shutdown event as well. Signed-off-by: John Fastabend --- .../selftests/bpf/prog_tests/sockmap_basic.c | 62 +++++++++++++++++++ .../bpf/progs/test_sockmap_pass_prog.c | 32 ++++++++++ 2 files changed, 94 insertions(+) create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c index 0ce25a967481..615a8164c8f0 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c @@ -2,6 +2,7 @@ // Copyright (c) 2020 Cloudflare #include #include +#include #include "test_progs.h" #include "test_skmsg_load_helpers.skel.h" @@ -9,8 +10,11 @@ #include "test_sockmap_invalid_update.skel.h" #include "test_sockmap_skb_verdict_attach.skel.h" #include "test_sockmap_progs_query.skel.h" +#include "test_sockmap_pass_prog.skel.h" #include "bpf_iter_sockmap.skel.h" +#include "sockmap_helpers.h" + #define TCP_REPAIR 19 /* TCP sock is under repair right now */ #define TCP_REPAIR_ON 1 @@ -350,6 +354,62 @@ static void test_sockmap_progs_query(enum bpf_attach_type attach_type) test_sockmap_progs_query__destroy(skel); } +#define MAX_EVENTS 10 +static void test_sockmap_skb_verdict_shutdown(void) +{ + struct epoll_event ev, events[MAX_EVENTS]; + int n, err, map, verdict, s, c1, p1; + struct test_sockmap_pass_prog *skel; + int epollfd; + int zero = 0; + char b; + + skel = test_sockmap_pass_prog__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open_and_load")) + return; + + verdict = bpf_program__fd(skel->progs.prog_skb_verdict); + map = bpf_map__fd(skel->maps.sock_map_rx); + + err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); + if (!ASSERT_OK(err, "bpf_prog_attach")) + goto out; + + s = socket_loopback(AF_INET, SOCK_STREAM); + if (s < 0) + goto out; + err = create_pair(s, AF_INET, SOCK_STREAM, &c1, &p1); + if (err < 0) + goto out; + + err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST); + if (err < 0) + goto out_close; + + shutdown(p1, SHUT_WR); + + ev.events = EPOLLIN; + ev.data.fd = c1; + + epollfd = epoll_create1(0); + if (!ASSERT_GT(epollfd, -1, "epoll_create(0)")) + goto out_close; + err = epoll_ctl(epollfd, EPOLL_CTL_ADD, c1, &ev); + if 
(!ASSERT_OK(err, "epoll_ctl(EPOLL_CTL_ADD)")) + goto out_close; + err = epoll_wait(epollfd, events, MAX_EVENTS, -1); + if (!ASSERT_EQ(err, 1, "epoll_wait(fd)")) + goto out_close; + + n = recv(c1, &b, 1, SOCK_NONBLOCK); + ASSERT_EQ(n, 0, "recv_timeout(fin)"); +out_close: + close(c1); + close(p1); +out: + test_sockmap_pass_prog__destroy(skel); +} + void test_sockmap_basic(void) { if (test__start_subtest("sockmap create_update_free")) @@ -384,4 +444,6 @@ void test_sockmap_basic(void) test_sockmap_progs_query(BPF_SK_SKB_STREAM_VERDICT); if (test__start_subtest("sockmap skb_verdict progs query")) test_sockmap_progs_query(BPF_SK_SKB_VERDICT); + if (test__start_subtest("sockmap skb_verdict shutdown")) + test_sockmap_skb_verdict_shutdown(); } diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c b/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c new file mode 100644 index 000000000000..1d86a717a290 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_sockmap_pass_prog.c @@ -0,0 +1,32 @@ +#include +#include +#include + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_rx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_tx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_msg SEC(".maps"); + +SEC("sk_skb") +int prog_skb_verdict(struct __sk_buff *skb) +{ + return SK_PASS; +} + +char _license[] SEC("license") = "GPL"; From patchwork Fri May 19 04:06:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13247661 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 490BF566C; Fri, 19 May 2023 04:07:23 +0000 (UTC) Received: from mail-pf1-x436.google.com (mail-pf1-x436.google.com [IPv6:2607:f8b0:4864:20::436]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2CC6A10CF; Thu, 18 May 2023 21:07:22 -0700 (PDT) Received: by mail-pf1-x436.google.com with SMTP id d2e1a72fcca58-64d341bdedcso104187b3a.3; Thu, 18 May 2023 21:07:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684469241; x=1687061241; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=BJKVFV10Wskzd6UgwZOel3wsQif2pwvMqBZ8BBnsjqM=; b=WsvyN66+TfkgaltRV9Ur7u1FMNeVA3UucwZUt1duuRt7KY5imZpzsD36LP7bWVM8f0 yxTIAnzxy91OPvSMPCUiVnMNWNXVT+ELR9PXeJGZIPnwGQXZPLSrmFGJLPgnFuJ4eI5U X5pow9ZpAIpzwePM47cwMHzMpkGY6ZSyMZz/Q7KMoXwWBMgcOpnxt33kCQwYOzlhYaN9 xozx6aGuh6YLWL1PQk/33ApCe8+nkEd+q1MvTYQqWSej/g7dZUt5SOW04sZ1E2Q4GmF3 n6zLG7sGIEDEoOB4Cl34z84vnMq2KWXCRGvhOBMG+EdhsujG7A2cFZJzRny9GiIBNXWd epGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684469241; x=1687061241; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=BJKVFV10Wskzd6UgwZOel3wsQif2pwvMqBZ8BBnsjqM=; b=cB5LT7i6TEeHNhheYL96GEr5yT0FE5mGBjIAB0U6AZcVOyaqXzgcvFXhzDrE6P5z2t 
rzKHvk9v9tsG4RLbE/ey02vFxKxaGPG8dahyyJHfJCOq2vpPPktKs2UF9LUwz/2SQzRi WdcxfmoQDc+kVNQd7yjpdLZHIn/j1RTJ06mf5o22vd17Mr1rsNkBTPqzRQqHjpv3/A66 e0LNhz8x7Ent5dO4xZ0JJWfGnwlvEuVKWdcD697X4IhuxNCV0feNvttC6fIxg6swCUaN 98f762TE5Yyce+m/uJneqjnJvCUJxYiQntGyoqzA3QJmxIMPYAoC6brq4ZDy8Y2m7PbG 42yA== X-Gm-Message-State: AC+VfDyO4W7m3hX5zJBjhIoJkQrcMhg/zlFyb/kDPb2GW+x5rjvBpJBA tR+/u3oMfpMeCS23HYeDfOw= X-Google-Smtp-Source: ACHHUZ7oB02emgalktNEIyQrUQ3/9Ry0232PUFXb8GKGQ/Q9jd7rl0nEqoF+1/dXvMWVoKYfvGzojQ== X-Received: by 2002:a05:6a00:c96:b0:64a:ff32:7347 with SMTP id a22-20020a056a000c9600b0064aff327347mr1661519pfv.13.1684469241544; Thu, 18 May 2023 21:07:21 -0700 (PDT) Received: from john.lan ([2605:59c8:148:ba10:706:628a:e6ce:c8a9]) by smtp.gmail.com with ESMTPSA id x11-20020aa784cb000000b00625d84a0194sm434833pfn.107.2023.05.18.21.07.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 18 May 2023 21:07:21 -0700 (PDT) From: John Fastabend To: jakub@cloudflare.com, daniel@iogearbox.net Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v9 12/14] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer Date: Thu, 18 May 2023 21:06:57 -0700 Message-Id: <20230519040659.670644-13-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com> References: <20230519040659.670644-1-john.fastabend@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net A bug was reported where ioctl(FIONREAD) returned zero even though the socket with a SK_SKB verdict program attached had bytes in the msg queue. The result is programs may hang or more likely try to recover, but use suboptimal buffer sizes. Add a test to check that ioctl(FIONREAD) returns the correct number of bytes. 
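[Editor's sketch, not from the patch, of the userspace pattern the bug breaks: applications size their read from FIONREAD, so a spurious zero means they either never issue the recv() or fall back to guessing a buffer size. Function name and error handling are illustrative only.]

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

/* Read at most what the kernel reports as pending. With the bug, FIONREAD
 * reports 0 even though the verdict program queued data, so a caller like
 * this returns early without ever draining the socket.
 */
static ssize_t read_available(int fd, char *buf, size_t cap)
{
	int avail = 0;

	if (ioctl(fd, FIONREAD, &avail) < 0)
		return -1;
	if (avail == 0)
		return 0;
	if ((size_t)avail > cap)
		avail = (int)cap;
	return recv(fd, buf, (size_t)avail, 0);
}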
Reviewed-by: Jakub Sitnicki Signed-off-by: John Fastabend --- .../selftests/bpf/prog_tests/sockmap_basic.c | 48 +++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c index 615a8164c8f0..fe56049f6568 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c @@ -410,6 +410,52 @@ static void test_sockmap_skb_verdict_shutdown(void) test_sockmap_pass_prog__destroy(skel); } +static void test_sockmap_skb_verdict_fionread(void) +{ + int err, map, verdict, s, c0, c1, p0, p1; + struct test_sockmap_pass_prog *skel; + int zero = 0, sent, recvd, avail; + char buf[256] = "0123456789"; + + skel = test_sockmap_pass_prog__open_and_load(); + if (!ASSERT_OK_PTR(skel, "open_and_load")) + return; + + verdict = bpf_program__fd(skel->progs.prog_skb_verdict); + map = bpf_map__fd(skel->maps.sock_map_rx); + + err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); + if (!ASSERT_OK(err, "bpf_prog_attach")) + goto out; + + s = socket_loopback(AF_INET, SOCK_STREAM); + if (!ASSERT_GT(s, -1, "socket_loopback(s)")) + goto out; + err = create_socket_pairs(s, AF_INET, SOCK_STREAM, &c0, &c1, &p0, &p1); + if (!ASSERT_OK(err, "create_socket_pairs(s)")) + goto out; + + err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST); + if (!ASSERT_OK(err, "bpf_map_update_elem(c1)")) + goto out_close; + + sent = xsend(p1, &buf, sizeof(buf), 0); + ASSERT_EQ(sent, sizeof(buf), "xsend(p0)"); + err = ioctl(c1, FIONREAD, &avail); + ASSERT_OK(err, "ioctl(FIONREAD) error"); + ASSERT_EQ(avail, sizeof(buf), "ioctl(FIONREAD)"); + recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); + ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + +out_close: + close(c0); + close(p0); + close(c1); + close(p1); +out: + test_sockmap_pass_prog__destroy(skel); +} + void test_sockmap_basic(void) { if (test__start_subtest("sockmap create_update_free")) @@ -446,4 +492,6 @@ void test_sockmap_basic(void) test_sockmap_progs_query(BPF_SK_SKB_VERDICT); if (test__start_subtest("sockmap skb_verdict shutdown")) test_sockmap_skb_verdict_shutdown(); + if (test__start_subtest("sockmap skb_verdict fionread")) + test_sockmap_skb_verdict_fionread(); } From patchwork Fri May 19 04:06:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13247662 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E6C556124; Fri, 19 May 2023 04:07:24 +0000 (UTC) Received: from mail-pf1-x42f.google.com (mail-pf1-x42f.google.com [IPv6:2607:f8b0:4864:20::42f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 855D910CF; Thu, 18 May 2023 21:07:23 -0700 (PDT) Received: by mail-pf1-x42f.google.com with SMTP id d2e1a72fcca58-643bb9cdd6eso3017623b3a.1; Thu, 18 May 2023 21:07:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684469243; x=1687061243; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yWuuJj1UmKb8bhc9koutosmHK7yiCrZ5XUNipDsC3uQ=; b=SBieIJ9PGC62EZ54O86OERm1VhfA4fCRv94nfNf6bqU1H5iD7ZoqYKG5s6N8L/jjib 
u5KodjbiUZHVOPDHPuNZ60hi3cDypb4oSBTjbGxJSnNgkYOJDtRs0UAvDTfqC9ud3Xie agDLci6qw3cHsOXrUwiBq1ELT+JXS1v7Jzc/66XFvI9y3Crbi50yT14k5739MWyFLMaz 48YU5XLRKazh1yZ5et73cdmD8N/BBSmDcN3zQtu+6e5xduZlL52YIhcKs9suax1yubjW bzLhJDEcfqH6pwoWi5HqHpr2tfr+Adtgx3dVyV6rq55pAGZHC/X55CZJQfrOI7KtBze+ Lt5g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684469243; x=1687061243; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yWuuJj1UmKb8bhc9koutosmHK7yiCrZ5XUNipDsC3uQ=; b=YFtNSCWECqvlUosrH5b4QPLYa243aVNS1PXLhsN/ZSq7TCTKSP9X9Kq1J1e5aaaOS1 lR+gRoauq/QloqueizDZp62kzN+uWnQ7lBx4N9rsA58Jr8dqsfux61nnkmZjnolqX9+/ 6f/40T9VuerHC1jrVRWPbr9lnXXwvNGDthdWbYWUefJ700oK8B+aS3jhz3j1MruwC/mX rza6bNi/C4FVE8rkcsD8axmarXTPYvB0cXfylxLNvUCfdMWAFu/vvg+np1Uy/e8BIAna fbqYxm2y2sFMFOOlAs0358Z8j787ioCtsdwjSrVr1RUoYpodEuDxnehjdnk6Rd8vPMOY sy/A== X-Gm-Message-State: AC+VfDwV5RbA0dlV9pilSrv0l4TVFDyovky1dN5kX1UQy9OcWfJVyS/4 8uXrqUXvV7qiAuQoFexT+NQ= X-Google-Smtp-Source: ACHHUZ5xiwgjgAQr2bA20HfVb5cWVxijLztjAfXwEdc3o8qsWE9pF8xPxQcC0oQZKh/TMnKsIWuAaA== X-Received: by 2002:a05:6a00:21cf:b0:644:18ec:4510 with SMTP id t15-20020a056a0021cf00b0064418ec4510mr1497987pfj.9.1684469243046; Thu, 18 May 2023 21:07:23 -0700 (PDT) Received: from john.lan ([2605:59c8:148:ba10:706:628a:e6ce:c8a9]) by smtp.gmail.com with ESMTPSA id x11-20020aa784cb000000b00625d84a0194sm434833pfn.107.2023.05.18.21.07.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 18 May 2023 21:07:22 -0700 (PDT) From: John Fastabend To: jakub@cloudflare.com, daniel@iogearbox.net Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v9 13/14] bpf: sockmap, test FIONREAD returns correct bytes in rx buffer with drops Date: Thu, 18 May 2023 21:06:58 -0700 Message-Id: <20230519040659.670644-14-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com> References: <20230519040659.670644-1-john.fastabend@gmail.com> Precedence: bulk X-Mailing-List: bpf@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: bpf@iogearbox.net When BPF program drops pkts the sockmap logic 'eats' the packet and updates copied_seq. In the PASS case where the sk_buff is accepted we update copied_seq from recvmsg path so we need a new test to handle the drop case. Original patch series broke this resulting in test_sockmap_skb_verdict_fionread:PASS:ioctl(FIONREAD) error 0 nsec test_sockmap_skb_verdict_fionread:FAIL:ioctl(FIONREAD) unexpected ioctl(FIONREAD): actual 1503041772 != expected 256 #176/17 sockmap_basic/sockmap skb_verdict fionread on drop:FAIL After updated patch with fix. 
#176/16 sockmap_basic/sockmap skb_verdict fionread:OK #176/17 sockmap_basic/sockmap skb_verdict fionread on drop:OK Reviewed-by: Jakub Sitnicki Signed-off-by: John Fastabend --- .../selftests/bpf/prog_tests/sockmap_basic.c | 47 ++++++++++++++----- .../bpf/progs/test_sockmap_drop_prog.c | 32 +++++++++++++ 2 files changed, 66 insertions(+), 13 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c index fe56049f6568..064cc5e8d9ad 100644 --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c @@ -11,6 +11,7 @@ #include "test_sockmap_skb_verdict_attach.skel.h" #include "test_sockmap_progs_query.skel.h" #include "test_sockmap_pass_prog.skel.h" +#include "test_sockmap_drop_prog.skel.h" #include "bpf_iter_sockmap.skel.h" #include "sockmap_helpers.h" @@ -410,19 +411,31 @@ static void test_sockmap_skb_verdict_shutdown(void) test_sockmap_pass_prog__destroy(skel); } -static void test_sockmap_skb_verdict_fionread(void) +static void test_sockmap_skb_verdict_fionread(bool pass_prog) { + int expected, zero = 0, sent, recvd, avail; int err, map, verdict, s, c0, c1, p0, p1; - struct test_sockmap_pass_prog *skel; - int zero = 0, sent, recvd, avail; + struct test_sockmap_pass_prog *pass; + struct test_sockmap_drop_prog *drop; char buf[256] = "0123456789"; - skel = test_sockmap_pass_prog__open_and_load(); - if (!ASSERT_OK_PTR(skel, "open_and_load")) - return; + if (pass_prog) { + pass = test_sockmap_pass_prog__open_and_load(); + if (!ASSERT_OK_PTR(pass, "open_and_load")) + return; + verdict = bpf_program__fd(pass->progs.prog_skb_verdict); + map = bpf_map__fd(pass->maps.sock_map_rx); + expected = sizeof(buf); + } else { + drop = test_sockmap_drop_prog__open_and_load(); + if (!ASSERT_OK_PTR(drop, "open_and_load")) + return; + verdict = bpf_program__fd(drop->progs.prog_skb_verdict); + map = bpf_map__fd(drop->maps.sock_map_rx); + /* On drop data is consumed immediately and copied_seq inc'd */ + expected = 0; + } - verdict = bpf_program__fd(skel->progs.prog_skb_verdict); - map = bpf_map__fd(skel->maps.sock_map_rx); err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); if (!ASSERT_OK(err, "bpf_prog_attach")) @@ -443,9 +456,12 @@ static void test_sockmap_skb_verdict_fionread(void) ASSERT_EQ(sent, sizeof(buf), "xsend(p0)"); err = ioctl(c1, FIONREAD, &avail); ASSERT_OK(err, "ioctl(FIONREAD) error"); - ASSERT_EQ(avail, sizeof(buf), "ioctl(FIONREAD)"); - recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); - ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + ASSERT_EQ(avail, expected, "ioctl(FIONREAD)"); + /* On DROP test there will be no data to read */ + if (pass_prog) { + recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC); + ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)"); + } out_close: close(c0); @@ -453,7 +469,10 @@ static void test_sockmap_skb_verdict_fionread(void) close(c1); close(p1); out: - test_sockmap_pass_prog__destroy(skel); + if (pass_prog) + test_sockmap_pass_prog__destroy(pass); + else + test_sockmap_drop_prog__destroy(drop); } void test_sockmap_basic(void) @@ -493,5 +512,7 @@ void test_sockmap_basic(void) if (test__start_subtest("sockmap skb_verdict shutdown")) test_sockmap_skb_verdict_shutdown(); if (test__start_subtest("sockmap skb_verdict fionread")) - test_sockmap_skb_verdict_fionread(); + 
test_sockmap_skb_verdict_fionread(true); + if (test__start_subtest("sockmap skb_verdict fionread on drop")) + test_sockmap_skb_verdict_fionread(false); } diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c b/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c new file mode 100644 index 000000000000..29314805ce42 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_sockmap_drop_prog.c @@ -0,0 +1,32 @@ +#include +#include +#include + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_rx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_tx SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __uint(max_entries, 20); + __type(key, int); + __type(value, int); +} sock_map_msg SEC(".maps"); + +SEC("sk_skb") +int prog_skb_verdict(struct __sk_buff *skb) +{ + return SK_DROP; +} + +char _license[] SEC("license") = "GPL"; From patchwork Fri May 19 04:06:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Fastabend X-Patchwork-Id: 13247663 X-Patchwork-Delegate: bpf@iogearbox.net Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BF9D1651; Fri, 19 May 2023 04:07:26 +0000 (UTC) Received: from mail-pf1-x432.google.com (mail-pf1-x432.google.com [IPv6:2607:f8b0:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1FECB10CF; Thu, 18 May 2023 21:07:25 -0700 (PDT) Received: by mail-pf1-x432.google.com with SMTP id d2e1a72fcca58-64d3578c25bso66908b3a.3; Thu, 18 May 2023 21:07:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1684469244; x=1687061244; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=srUsONv1+BtA+seq8Jh4M9M/ZYn/sI/8M3Ij3rhpTaI=; b=Be7akhCJUfV5fkHB5/UpF9jQD4RsYfU92vDvyttP2p1FX023WcXkBBAIdblYsSQXId vqlCFj+scgUgzf5dWeTrCP2+Xmk1JX7hJPcW2s1I3Ck2i0MhyOe2wBnYfOacyZf3HJVd K4r8ysppjvFu5AJ2MKEc/xSOBEo14MEk28USsNPO43Dc6ZRQ6dSY0Otb7+dzrpGRjhv9 uBMSM8Es1L4gObN2FZr+l4dLNSfaJYcfJUn+GjBx1/WEbl0j/QDPOOIB0xVYerhBywnZ v/ROaKzxmL6j9ToAu0zgq22WOjVhW5/i0GQ/73EMJFjm19Ds+wGph/f4ceQYsb/I/Amk noOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1684469244; x=1687061244; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=srUsONv1+BtA+seq8Jh4M9M/ZYn/sI/8M3Ij3rhpTaI=; b=C+C51LZIk8Hp3zcqpHp90870glxXveiFUbRdHMtOm3/U6Y/N6RjbPJ0NKPoyKzqkpw IejmL0XzrOnRcbCySTSS3GGlSWKGN+jFQBPp3/wlgRfzN4UxvDzKnInYDL08WIU5NDoO Xblc7zyr+Jx4WYM8En03gF36YKQ9P1gSxBKdSX49t6MB0qAlPHRDPQwYqswjkViMEr1c +sJyj/ruxsN4/vUJukaUC2VXG/p9icmG5a2Ye6wQVt2Uwicu8xbUyh7WniYtGAk/5Dnl mcnOAQyoKVd4re1ndyP/kJ5qNe0csM0sr1A6CzhpL1tvzls9ULJIxsQPNufRuhIJu4ik H/7A== X-Gm-Message-State: AC+VfDw0hL5mWegudn7FnGCTzLacOK3NkAoDcX6ZWAhaBCpcXtAUr3BU apL03sIqLPE0VH/zteI8bbuA90UjG8A= X-Google-Smtp-Source: ACHHUZ6kffR844BYAm18m8i+Q90qf4Qu48hndwOGeszXVC6XFhgT1RonZY4xC0JbilVeDJf9tduVkg== X-Received: by 2002:a05:6a00:2e06:b0:643:aa2:4dcd with SMTP id 
From: John Fastabend <john.fastabend@gmail.com> To: jakub@cloudflare.com, daniel@iogearbox.net Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org, edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com Subject: [PATCH bpf v9 14/14] bpf: sockmap, test progs verifier error with latest clang Date: Thu, 18 May 2023 21:06:59 -0700 Message-Id: <20230519040659.670644-15-john.fastabend@gmail.com> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com> References: <20230519040659.670644-1-john.fastabend@gmail.com> X-Mailing-List: bpf@vger.kernel.org X-Patchwork-Delegate: bpf@iogearbox.net
With a relatively recent clang (7090c10273119) and with commit c8ed668593972, which fixes warnings in selftests by using __sink(err) to resolve otherwise-unused variables, we get the following verifier error. root@6e731a24b33a:/host/tools/testing/selftests/bpf# ./test_sockmap libbpf: prog 'bpf_sockmap': BPF program load failed: Permission denied libbpf: prog 'bpf_sockmap': -- BEGIN PROG LOAD LOG -- 0: R1=ctx(off=0,imm=0) R10=fp0 ; op = (int) skops->op; 0: (61) r2 = *(u32 *)(r1 +0) ; R1=ctx(off=0,imm=0) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) ; switch (op) { 1: (16) if w2 == 0x4 goto pc+5 ; R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) 2: (56) if w2 != 0x5 goto pc+15 ; R2_w=5 ; lport = skops->local_port; 3: (61) r2 = *(u32 *)(r1 +68) ; R1=ctx(off=0,imm=0) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) ; if (lport == 10000) { 4: (56) if w2 != 0x2710 goto pc+13 18: R1=ctx(off=0,imm=0) R2=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 ; __sink(err); 18: (bc) w1 = w0 R0 !read_ok processed 18 insns (limit 1000000) max_states_per_insn 0 total_states 2 peak_states 2 mark_read 1 -- END PROG LOAD LOG -- libbpf: prog 'bpf_sockmap': failed to load: -13 libbpf: failed to load object 'test_sockmap_kern.bpf.o' load_bpf_file: (-1) No such file or directory ERROR: (-1) load bpf failed libbpf: prog 'bpf_sockmap': BPF program load failed: Permission denied libbpf: prog 'bpf_sockmap': -- BEGIN PROG LOAD LOG -- 0: R1=ctx(off=0,imm=0) R10=fp0 ; op = (int) skops->op; 0: (61) r2 = *(u32 *)(r1 +0) ; R1=ctx(off=0,imm=0) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) ; switch (op) { 1: (16) if w2 == 0x4 goto pc+5 ; R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) 2: (56) if w2 != 0x5 goto pc+15 ; R2_w=5 ; lport = skops->local_port; 3: (61) r2 = *(u32 *)(r1 +68) ; R1=ctx(off=0,imm=0) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) ; if (lport == 10000) { 4: (56) if w2 != 0x2710 goto pc+13 18: R1=ctx(off=0,imm=0) R2=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 ; __sink(err); 18: (bc) w1 = w0 R0 !read_ok processed 18 insns (limit 1000000) max_states_per_insn 0
total_states 2 peak_states 2 mark_read 1 -- END PROG LOAD LOG -- libbpf: prog 'bpf_sockmap': failed to load: -13 libbpf: failed to load object 'test_sockhash_kern.bpf.o' load_bpf_file: (-1) No such file or directory ERROR: (-1) load bpf failed libbpf: prog 'bpf_sockmap': BPF program load failed: Permission denied libbpf: prog 'bpf_sockmap': -- BEGIN PROG LOAD LOG -- 0: R1=ctx(off=0,imm=0) R10=fp0 ; op = (int) skops->op; 0: (61) r2 = *(u32 *)(r1 +0) ; R1=ctx(off=0,imm=0) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) ; switch (op) { 1: (16) if w2 == 0x4 goto pc+5 ; R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) 2: (56) if w2 != 0x5 goto pc+15 ; R2_w=5 ; lport = skops->local_port; 3: (61) r2 = *(u32 *)(r1 +68) ; R1=ctx(off=0,imm=0) R2_w=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) ; if (lport == 10000) { 4: (56) if w2 != 0x2710 goto pc+13 18: R1=ctx(off=0,imm=0) R2=scalar(umax=4294967295,var_off=(0x0; 0xffffffff)) R10=fp0 ; __sink(err); 18: (bc) w1 = w0 R0 !read_ok processed 18 insns (limit 1000000) max_states_per_insn 0 total_states 2 peak_states 2 mark_read 1 -- END PROG LOAD LOG --
To fix, simply remove the err variable because it is not actually used anywhere in the tests. We can investigate the root cause later. A future patch should probably also check the err value, although if the map updates fail they will eventually be caught by userspace.
Fixes: c8ed668593972 ("selftests/bpf: fix lots of silly mistakes pointed out by compiler") Signed-off-by: John Fastabend <john.fastabend@gmail.com> --- .../testing/selftests/bpf/progs/test_sockmap_kern.h | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h index baf9ebc6d903..99d2ea9fb658 100644 --- a/tools/testing/selftests/bpf/progs/test_sockmap_kern.h +++ b/tools/testing/selftests/bpf/progs/test_sockmap_kern.h @@ -191,7 +191,7 @@ SEC("sockops") int bpf_sockmap(struct bpf_sock_ops *skops) { __u32 lport, rport; - int op, err, ret; + int op, ret; op = (int) skops->op; @@ -203,10 +203,10 @@ int bpf_sockmap(struct bpf_sock_ops *skops) if (lport == 10000) { ret = 1; #ifdef SOCKMAP - err = bpf_sock_map_update(skops, &sock_map, &ret, + bpf_sock_map_update(skops, &sock_map, &ret, BPF_NOEXIST); #else - err = bpf_sock_hash_update(skops, &sock_map, &ret, + bpf_sock_hash_update(skops, &sock_map, &ret, BPF_NOEXIST); #endif } @@ -218,10 +218,10 @@ int bpf_sockmap(struct bpf_sock_ops *skops) if (bpf_ntohl(rport) == 10001) { ret = 10; #ifdef SOCKMAP - err = bpf_sock_map_update(skops, &sock_map, &ret, + bpf_sock_map_update(skops, &sock_map, &ret, BPF_NOEXIST); #else - err = bpf_sock_hash_update(skops, &sock_map, &ret, + bpf_sock_hash_update(skops, &sock_map, &ret, BPF_NOEXIST); #endif } @@ -230,8 +230,6 @@ int bpf_sockmap(struct bpf_sock_ops *skops) break; } - __sink(err); - return 0; }
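The commit message above leaves actually checking the err value to a future patch. A minimal standalone sketch of what that could look like follows; it is purely illustrative and not part of this series. The program name bpf_sockmap_check_err and the bpf_printk reporting are made up, and the map simply mirrors the sock_map layout used by test_sockmap_kern.h. The point is that the helper's return value is consumed on the same path it is written, so no path reaches a read of an uninitialized variable:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Illustrative map, modeled on sock_map in test_sockmap_kern.h. */
struct {
    __uint(type, BPF_MAP_TYPE_SOCKMAP);
    __uint(max_entries, 20);
    __type(key, int);
    __type(value, int);
} sock_map SEC(".maps");

SEC("sockops")
int bpf_sockmap_check_err(struct bpf_sock_ops *skops)
{
    int ret = 1;
    long err;

    /* Only handle the passive established callback, as the selftest does
     * for the local port 10000 case. */
    if (skops->op != BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB)
        return 0;

    if (skops->local_port == 10000) {
        /* Keep the helper's return value and read it immediately. */
        err = bpf_sock_map_update(skops, &sock_map, &ret, BPF_NOEXIST);
        if (err)
            bpf_printk("sock_map_update failed: %ld", err);
    }

    return 0;
}

char _license[] SEC("license") = "GPL";

Either approach — dropping the value as this patch does, or consuming it right away as sketched here — keeps every read of err on a path where it was first written.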