From: Mat Martineau
To: netdev@vger.kernel.org
Cc: Matthieu Baerts, davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
    edumazet@google.com, fw@strlen.de, kishen.maloor@intel.com,
    dcaratti@redhat.com, mptcp@lists.linux.dev, Mat Martineau,
    stable@vger.kernel.org
Subject: [PATCH net 4/4] mptcp: use proper req destructor for IPv6
Date: Fri, 9 Dec 2022 16:28:10 -0800
Message-Id: <20221210002810.289674-5-mathew.j.martineau@linux.intel.com>
In-Reply-To: <20221210002810.289674-1-mathew.j.martineau@linux.intel.com>
References: <20221210002810.289674-1-mathew.j.martineau@linux.intel.com>

From: Matthieu Baerts

Before, only the destructor from the IPv4 TCP request sock was called,
even if the subflow was IPv6. It is important to use the right
destructor to avoid memory leaks with some advanced IPv6 features,
e.g. when the request socks contain specific IPv6 options.
Fixes: 79c0949e9a09 ("mptcp: Add key generation and token tree")
Reviewed-by: Mat Martineau
Cc: stable@vger.kernel.org
Signed-off-by: Matthieu Baerts
Signed-off-by: Mat Martineau
---
Note: One original author of 79c0949e9a09 is not cc'd due to inactive
email address.
---
 net/mptcp/subflow.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 30524dd7d0ec..613f515fedf0 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -45,7 +45,6 @@ static void subflow_req_destructor(struct request_sock *req)
 		sock_put((struct sock *)subflow_req->msk);
 
 	mptcp_token_destroy_request(req);
-	tcp_request_sock_ops.destructor(req);
 }
 
 static void subflow_generate_hmac(u64 key1, u64 key2, u32 nonce1, u32 nonce2,
@@ -550,6 +549,12 @@ static int subflow_v4_conn_request(struct sock *sk, struct sk_buff *skb)
 	return 0;
 }
 
+static void subflow_v4_req_destructor(struct request_sock *req)
+{
+	subflow_req_destructor(req);
+	tcp_request_sock_ops.destructor(req);
+}
+
 #if IS_ENABLED(CONFIG_MPTCP_IPV6)
 static struct request_sock_ops mptcp_subflow_v6_request_sock_ops __ro_after_init;
 static struct tcp_request_sock_ops subflow_request_sock_ipv6_ops __ro_after_init;
@@ -581,6 +586,12 @@ static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
 	tcp_listendrop(sk);
 	return 0; /* don't send reset */
 }
+
+static void subflow_v6_req_destructor(struct request_sock *req)
+{
+	subflow_req_destructor(req);
+	tcp6_request_sock_ops.destructor(req);
+}
 #endif
 
 struct request_sock *mptcp_subflow_reqsk_alloc(const struct request_sock_ops *ops,
@@ -1929,8 +1940,6 @@ static int subflow_ops_init(struct request_sock_ops *subflow_ops)
 	if (!subflow_ops->slab)
 		return -ENOMEM;
 
-	subflow_ops->destructor = subflow_req_destructor;
-
 	return 0;
 }
 
@@ -1938,6 +1947,8 @@ void __init mptcp_subflow_init(void)
 {
 	mptcp_subflow_v4_request_sock_ops = tcp_request_sock_ops;
 	mptcp_subflow_v4_request_sock_ops.slab_name = "request_sock_subflow_v4";
+	mptcp_subflow_v4_request_sock_ops.destructor = subflow_v4_req_destructor;
+
 	if (subflow_ops_init(&mptcp_subflow_v4_request_sock_ops) != 0)
 		panic("MPTCP: failed to init subflow v4 request sock ops\n");
 
@@ -1963,6 +1974,8 @@ void __init mptcp_subflow_init(void)
 	mptcp_subflow_v6_request_sock_ops = tcp6_request_sock_ops;
 	mptcp_subflow_v6_request_sock_ops.slab_name = "request_sock_subflow_v6";
+	mptcp_subflow_v6_request_sock_ops.destructor = subflow_v6_req_destructor;
+
 	if (subflow_ops_init(&mptcp_subflow_v6_request_sock_ops) != 0)
 		panic("MPTCP: failed to init subflow v6 request sock ops\n");
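
The fix boils down to per-family dispatch: the shared MPTCP cleanup no longer
chains unconditionally into the IPv4 TCP destructor; instead each
request_sock_ops table gets a thin wrapper that runs the common cleanup and
then the matching family destructor (tcp_request_sock_ops.destructor for v4,
tcp6_request_sock_ops.destructor for v6, which is the one that releases the
per-request IPv6 option state). The userspace C sketch below only illustrates
that pattern under those assumptions; the struct layouts and helper names in
it are invented stand-ins, not the kernel's own definitions.

#include <stdio.h>
#include <stdlib.h>

/* minimal stand-in for the kernel's request sock; only the field we need */
struct request_sock {
	void *ipv6_opt;			/* family-specific state; NULL for v4 */
};

/* stand-in for struct request_sock_ops: just the destructor hook */
struct request_sock_ops {
	void (*destructor)(struct request_sock *req);
};

/* shared MPTCP-side cleanup (token release, msk refcount, ...) */
static void subflow_common_cleanup(struct request_sock *req)
{
	(void)req;
	printf("common MPTCP cleanup\n");
}

/* stand-in for tcp_request_sock_ops.destructor */
static void tcp_v4_cleanup(struct request_sock *req)
{
	(void)req;
	printf("IPv4 TCP cleanup\n");
}

/* stand-in for tcp6_request_sock_ops.destructor */
static void tcp_v6_cleanup(struct request_sock *req)
{
	free(req->ipv6_opt);		/* the step that was skipped before the fix */
	req->ipv6_opt = NULL;
	printf("IPv6 TCP cleanup (option state freed)\n");
}

/* per-family wrappers, mirroring subflow_v4_req_destructor()/subflow_v6_req_destructor() */
static void v4_req_destructor(struct request_sock *req)
{
	subflow_common_cleanup(req);
	tcp_v4_cleanup(req);
}

static void v6_req_destructor(struct request_sock *req)
{
	subflow_common_cleanup(req);
	tcp_v6_cleanup(req);
}

int main(void)
{
	struct request_sock_ops v4_ops = { .destructor = v4_req_destructor };
	struct request_sock_ops v6_ops = { .destructor = v6_req_destructor };
	struct request_sock v4_req = { .ipv6_opt = NULL };
	struct request_sock v6_req = { .ipv6_opt = malloc(16) };

	/* Before the fix, a v6 request would have gone through the v4 path
	 * and v6_req.ipv6_opt would never have been freed. */
	v4_ops.destructor(&v4_req);
	v6_ops.destructor(&v6_req);
	return 0;
}

Built with a plain C compiler, the v6 ops table now frees the per-request
option buffer that a shared v4-only destructor would have leaked, which is
the behavior the patch wires up in the kernel via the destructor assignments
in mptcp_subflow_init().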