From patchwork Mon Feb 6 10:22:21 2023
From: Herbert Xu
Date: Mon, 06 Feb 2023 18:22:21 +0800
Subject: [PATCH 5/17] net: ipv4: Add scaffolding to change completion function signature
To: Linux Crypto Mailing List, Alasdair Kergon, Mike Snitzer, dm-devel@redhat.com, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev@vger.kernel.org, Tyler Hicks, ecryptfs@vger.kernel.org, Marcel Holtmann, Johan Hedberg, Luiz Augusto von Dentz, linux-bluetooth@vger.kernel.org, Steffen Klassert, Jon Maloy, Ying Xue, Boris Pismenny, John Fastabend, David Howells, Jarkko Sakkinen, keyrings@vger.kernel.org

This patch adds temporary scaffolding so that the Crypto API completion
function can take a void * instead of a struct crypto_async_request *.
Once all affected users have been converted, the scaffolding can be
removed.
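[Editor's note: the scaffolding itself is introduced earlier in this
series, not in this patch. The sketch below is a hedged reconstruction
of what it plausibly looks like, inferred from how the converted
callbacks use it; it is not quoted from the actual kernel headers.]

	/*
	 * Reconstruction (assumption, not the exact kernel code):
	 * crypto_completion_data_t aliases struct crypto_async_request,
	 * and crypto_get_completion_data() extracts the caller-supplied
	 * context pointer from it.  Callbacks converted to this pair
	 * need no further changes when the completion signature later
	 * becomes void (*)(void *data, int err).
	 */
	typedef struct crypto_async_request crypto_completion_data_t;

	static inline void *crypto_get_completion_data(crypto_completion_data_t *req)
	{
		return req->data;
	}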
Signed-off-by: Herbert Xu
---
 net/ipv4/ah4.c  |  8 ++++----
 net/ipv4/esp4.c | 20 ++++++++++----------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index ee4e578c7f20..1fc0231eb1ee 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -117,11 +117,11 @@ static int ip_clear_mutable_options(const struct iphdr *iph, __be32 *daddr)
 	return 0;
 }
 
-static void ah_output_done(struct crypto_async_request *base, int err)
+static void ah_output_done(crypto_completion_data_t *data, int err)
 {
 	u8 *icv;
 	struct iphdr *iph;
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct xfrm_state *x = skb_dst(skb)->xfrm;
 	struct ah_data *ahp = x->data;
 	struct iphdr *top_iph = ip_hdr(skb);
@@ -262,12 +262,12 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
 	return err;
 }
 
-static void ah_input_done(struct crypto_async_request *base, int err)
+static void ah_input_done(crypto_completion_data_t *data, int err)
 {
 	u8 *auth_data;
 	u8 *icv;
 	struct iphdr *work_iph;
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct xfrm_state *x = xfrm_input_state(skb);
 	struct ah_data *ahp = x->data;
 	struct ip_auth_hdr *ah = ip_auth_hdr(skb);
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 52c8047efedb..8abe07c1ff28 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -244,9 +244,9 @@ static int esp_output_tail_tcp(struct xfrm_state *x, struct sk_buff *skb)
 }
 #endif
 
-static void esp_output_done(struct crypto_async_request *base, int err)
+static void esp_output_done(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 	struct xfrm_offload *xo = xfrm_offload(skb);
 	void *tmp;
 	struct xfrm_state *x;
@@ -332,12 +332,12 @@ static struct ip_esp_hdr *esp_output_set_extra(struct sk_buff *skb,
 	return esph;
 }
 
-static void esp_output_done_esn(struct crypto_async_request *base, int err)
+static void esp_output_done_esn(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 
 	esp_output_restore_header(skb);
-	esp_output_done(base, err);
+	esp_output_done(data, err);
 }
 
 static struct ip_esp_hdr *esp_output_udp_encap(struct sk_buff *skb,
@@ -830,9 +830,9 @@ int esp_input_done2(struct sk_buff *skb, int err)
 }
 EXPORT_SYMBOL_GPL(esp_input_done2);
 
-static void esp_input_done(struct crypto_async_request *base, int err)
+static void esp_input_done(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 
 	xfrm_input_resume(skb, esp_input_done2(skb, err));
 }
@@ -860,12 +860,12 @@ static void esp_input_set_header(struct sk_buff *skb, __be32 *seqhi)
 	}
 }
 
-static void esp_input_done_esn(struct crypto_async_request *base, int err)
+static void esp_input_done_esn(crypto_completion_data_t *data, int err)
 {
-	struct sk_buff *skb = base->data;
+	struct sk_buff *skb = crypto_get_completion_data(data);
 
 	esp_input_restore_header(skb);
-	esp_input_done(base, err);
+	esp_input_done(data, err);
 }
 
 /*
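[Editor's note: a hedged sketch of the intended end state once the
tree-wide conversion finishes and the scaffolding is removed, assuming
the final completion signature is void (*)(void *data, int err) as the
commit message implies. This is illustrative, not part of this patch.]

	/* Post-conversion form (assumption): the callback receives the
	 * context pointer directly, with no accessor needed.
	 */
	static void esp_input_done(void *data, int err)
	{
		struct sk_buff *skb = data;

		xfrm_input_resume(skb, esp_input_done2(skb, err));
	}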