From patchwork Fri May 19 04:06:51 2023
From: John Fastabend <john.fastabend@gmail.com>
To: jakub@cloudflare.com, daniel@iogearbox.net
Cc: john.fastabend@gmail.com, bpf@vger.kernel.org, netdev@vger.kernel.org,
    edumazet@google.com, ast@kernel.org, andrii@kernel.org, will@isovalent.com
Subject: [PATCH bpf v9 06/14] bpf: sockmap, TCP data stall on recv before accept
Date: Thu, 18 May 2023 21:06:51 -0700
Message-Id: <20230519040659.670644-7-john.fastabend@gmail.com>
In-Reply-To: <20230519040659.670644-1-john.fastabend@gmail.com>
References: <20230519040659.670644-1-john.fastabend@gmail.com>

A common mechanism to put a TCP socket into the sockmap is to hook the
BPF_SOCK_OPS_{ACTIVE,PASSIVE}_ESTABLISHED_CB events with a BPF program
that can map the socket info to the correct BPF verdict parser; a
minimal sketch of such a sockops program is shown below.
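[Editor's note: the following is a hypothetical illustration of that
pattern, not code from this patch or series; the map name (sock_map),
key scheme, and entry count are invented for the example.]

/* Illustrative only: a sockops program that inserts established TCP
 * sockets into a sockmap so a verdict program will see their sk_buffs.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKMAP);
	__uint(max_entries, 2);
	__type(key, __u32);
	__type(value, __u64);
} sock_map SEC(".maps");

SEC("sockops")
int add_established(struct bpf_sock_ops *skops)
{
	__u32 key = 0;

	switch (skops->op) {
	case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:
	case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:
		/* Adding the socket to the map creates the psock and
		 * rewires sk_data_ready to the BPF handler.
		 */
		bpf_sock_map_update(skops, &sock_map, &key, BPF_NOEXIST);
		break;
	}
	return 0;
}

char _license[] SEC("license") = "GPL";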
When the user adds the socket to the map, the psock is created and the
new ops are assigned to ensure the verdict program will 'see' the
sk_buffs as they arrive. Part of this process hooks the sk_data_ready
op with a BPF-specific handler to wake up the BPF verdict program when
data is ready to read. The logic is simple enough (posted here for
easy reading):

static void sk_psock_verdict_data_ready(struct sock *sk)
{
	struct socket *sock = sk->sk_socket;

	if (unlikely(!sock || !sock->ops || !sock->ops->read_skb))
		return;
	sock->ops->read_skb(sk, sk_psock_verdict_recv);
}

The oversight here is that sk->sk_socket is not assigned until the
application calls accept() on the new socket. However, it is entirely
OK for the peer application to do a connect() followed immediately by
sends. The socket on the receiver side sits on the backlog queue of
the listening socket until it is accepted, and the data is queued up.
If the peer never accepts the socket, or is slow to do so, it will
eventually hit the data limits and the session will be rate limited.
But, importantly for the BPF sockmap hooks, when this data is received
the TCP stack does make the sk_data_ready() call, yet read_skb() is
never invoked for the data because sk_socket is missing. The data sits
on the sk_receive_queue.

Then, once the socket is accepted, if we never receive more data from
the peer there will be no further sk_data_ready calls and all the data
stays on the sk_receive_queue. The user then calls recvmsg after
accept(), and for TCP sockets in sockmap we use the
tcp_bpf_recvmsg_parser() handler. The handler checks for data in the
sk_msg ingress queue, expecting that the BPF program has already run
from the sk_data_ready hook and enqueued the data as needed. So we are
stuck; a userspace sequence that reproduces the stall is sketched
below.
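[Editor's note: a minimal userspace sketch of the triggering sequence,
assuming the server's sockets are already being added to a sockmap via
the sockops hook above. It is illustrative only; helper names, address
setup, and buffer sizes are invented, and error handling is omitted.]

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Client: connect and send immediately, before the server accepts.
 * The bytes land on the not-yet-accepted socket's sk_receive_queue;
 * sk_data_ready() fires, but read_skb() cannot run because
 * sk->sk_socket is still unassigned.
 */
static void client_send_early(const struct sockaddr_in *srv)
{
	int c = socket(AF_INET, SOCK_STREAM, 0);

	connect(c, (const struct sockaddr *)srv, sizeof(*srv));
	send(c, "hello", 5, 0);
}

/* Server: accept only after the data has arrived. Without this fix,
 * recv() stalls: the data is on sk_receive_queue, the sk_msg ingress
 * queue is empty, and no further sk_data_ready() callback will come.
 */
static void server_recv_late(int listen_fd)
{
	char buf[16];
	int p = accept(listen_fd, NULL, NULL); /* sk_socket assigned here */

	recv(p, buf, sizeof(buf), 0);
	close(p);
}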
At least + * let user know to retry. + */ + if (unlikely(!skb_queue_empty(&sk->sk_receive_queue))) { + copied = -EAGAIN; + goto out; + } + } + msg_bytes_ready: copied = sk_msg_recvmsg(sk, psock, msg, len, flags); /* The typical case for EFAULT is the socket was gracefully