From patchwork Tue Oct 25 10:21:57 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019030
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 1/8] xfrm: add new full offload flag
Date: Tue, 25 Oct 2022 13:21:57 +0300
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

In the next patches, the xfrm core code will be extended to support a new
type of offload: full offload. In that mode, both the policy and the state
must be specially configured in order to perform the whole offloaded data
path.

Full offload takes care of encryption, decryption, encapsulation and the
other header operations.

As this mode is new for the XFRM policy flow, we can "start fresh" with
the flag bits and release the first and second bits for future use.

Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
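For context, the new flag reaches the kernel through the existing
XFRMA_OFFLOAD_DEV netlink attribute (struct xfrm_user_offload). A minimal
sketch of how a userspace keying daemon might request full offload when
adding an SA; "eth4" is an example device, and error handling plus the rest
of the XFRM_MSG_NEWSA payload are omitted:

    #include <net/if.h>          /* if_nametoindex() */
    #include <linux/xfrm.h>

    /* Sketch: attached to the netlink request as XFRMA_OFFLOAD_DEV */
    struct xfrm_user_offload xuo = {
            .ifindex = if_nametoindex("eth4"),
            .flags   = XFRM_OFFLOAD_FULL,   /* new flag from this patch */
    };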
 include/net/xfrm.h        | 7 +++++++
 include/uapi/linux/xfrm.h | 6 ++++++
 net/xfrm/xfrm_device.c    | 3 +++
 net/xfrm/xfrm_user.c      | 2 ++
 4 files changed, 18 insertions(+)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index dbc81f5eb553..c82401b706d5 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -131,12 +131,19 @@ enum {
         XFRM_DEV_OFFLOAD_OUT,
 };

+enum {
+        XFRM_DEV_OFFLOAD_UNSPECIFIED,
+        XFRM_DEV_OFFLOAD_CRYPTO,
+        XFRM_DEV_OFFLOAD_FULL,
+};
+
 struct xfrm_dev_offload {
         struct net_device       *dev;
         netdevice_tracker       dev_tracker;
         struct net_device       *real_dev;
         unsigned long           offload_handle;
         u8                      dir : 2;
+        u8                      type : 2;
 };

 struct xfrm_mode {
diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
index 4f84ea7ee14c..463c6c1af23a 100644
--- a/include/uapi/linux/xfrm.h
+++ b/include/uapi/linux/xfrm.h
@@ -519,6 +519,12 @@ struct xfrm_user_offload {
  */
 #define XFRM_OFFLOAD_IPV6       1
 #define XFRM_OFFLOAD_INBOUND    2
+/* Two bits above are relevant for state path only, while
+ * offload is used for both policy and state flows.
+ *
+ * In policy offload mode, they are free and can be safely reused.
+ */
+#define XFRM_OFFLOAD_FULL       4

 struct xfrm_userpolicy_default {
 #define XFRM_USERPOLICY_UNSPEC  0
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 5f5aafd418af..7c4e0f14df27 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -278,12 +278,15 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
         else
                 xso->dir = XFRM_DEV_OFFLOAD_OUT;

+        xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+
         err = dev->xfrmdev_ops->xdo_dev_state_add(x);
         if (err) {
                 xso->dev = NULL;
                 xso->dir = 0;
                 xso->real_dev = NULL;
                 netdev_put(dev, &xso->dev_tracker);
+                xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;

                 if (err != -EOPNOTSUPP) {
                         NL_SET_ERR_MSG(extack, "Device failed to offload this state");
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index e73f9efc54c1..bea2d4647a90 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -943,6 +943,8 @@ static int copy_user_offload(struct xfrm_dev_offload *xso, struct sk_buff *skb)
         xuo->ifindex = xso->dev->ifindex;
         if (xso->dir == XFRM_DEV_OFFLOAD_IN)
                 xuo->flags = XFRM_OFFLOAD_INBOUND;
+        if (xso->type == XFRM_DEV_OFFLOAD_FULL)
+                xuo->flags |= XFRM_OFFLOAD_FULL;

         return 0;
 }

From patchwork Tue Oct 25 10:21:58 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019034
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 2/8] xfrm: allow state full offload mode
Date: Tue, 25 Oct 2022 13:21:58 +0300
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

Allow users to configure xfrm states with full offload mode. The full
mode must be requested for both the policy and the state, and this
requirement means that we do not implement a fallback: we explicitly
return an error if the requested full mode can't be configured.

Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
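The driver-side consequence is visible in the hunks below: every existing
crypto-only driver now rejects any other offload type. A hypothetical new
driver would follow the same pattern; note that it should fail with
-EINVAL rather than -EOPNOTSUPP, since the core deliberately provides no
software fallback for full offload (sketch, foo_* names assumed):

    static int foo_xdo_dev_state_add(struct xfrm_state *x)
    {
            /* Crypto-only hardware must reject full offload requests. */
            if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
                    netdev_err(x->xso.dev, "Unsupported ipsec offload type\n");
                    return -EINVAL; /* not -EOPNOTSUPP: no SW fallback */
            }

            return foo_install_sa(x);   /* hypothetical HW programming helper */
    }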
 .../inline_crypto/ch_ipsec/chcr_ipsec.c       |  4 ++++
 .../net/ethernet/intel/ixgbe/ixgbe_ipsec.c    |  5 ++++
 drivers/net/ethernet/intel/ixgbevf/ipsec.c    |  5 ++++
 .../mellanox/mlx5/core/en_accel/ipsec.c       |  4 ++++
 drivers/net/netdevsim/ipsec.c                 |  5 ++++
 net/xfrm/xfrm_device.c                        | 24 +++++++++++++++----
 6 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
index 585590520076..ca21794281d6 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
@@ -283,6 +283,10 @@ static int ch_ipsec_xfrm_add_state(struct xfrm_state *x)
                 pr_debug("Cannot offload xfrm states with geniv other than seqiv\n");
                 return -EINVAL;
         }
+        if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+                pr_debug("Unsupported xfrm offload\n");
+                return -EINVAL;
+        }

         sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
         if (!sa_entry) {
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
index 774de63dd93a..53a969e34883 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
@@ -585,6 +585,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
                 return -EINVAL;
         }

+        if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+                netdev_err(dev, "Unsupported ipsec offload type\n");
+                return -EINVAL;
+        }
+
         if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
                 struct rx_sa rsa;

diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
index 9984ebc62d78..c1cf540d162a 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
@@ -280,6 +280,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
                 return -EINVAL;
         }

+        if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+                netdev_err(dev, "Unsupported ipsec offload type\n");
+                return -EINVAL;
+        }
+
         if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
                 struct rx_sa rsa;

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 325b56ff3e8c..1d8ce116946d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -256,6 +256,10 @@ static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x)
                 netdev_info(netdev, "Cannot offload xfrm states with geniv other than seqiv\n");
                 return -EINVAL;
         }
+        if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+                netdev_info(netdev, "Unsupported xfrm offload type\n");
+                return -EINVAL;
+        }
         return 0;
 }

diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
index 386336a38f34..b93baf5c8bee 100644
--- a/drivers/net/netdevsim/ipsec.c
+++ b/drivers/net/netdevsim/ipsec.c
@@ -149,6 +149,11 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
                 return -EINVAL;
         }

+        if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+                netdev_err(dev, "Unsupported ipsec offload type\n");
+                return -EINVAL;
+        }
+
         /* find the first unused index */
         ret = nsim_ipsec_find_empty_idx(ipsec);
         if (ret < 0) {
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 7c4e0f14df27..1294e0490270 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -216,6 +216,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
         struct xfrm_dev_offload *xso = &x->xso;
         xfrm_address_t *saddr;
         xfrm_address_t *daddr;
+        bool is_full_offload;

         if (!x->type_offload) {
                 NL_SET_ERR_MSG(extack, "Type doesn't support offload");
@@ -228,11 +229,13 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
                 return -EINVAL;
         }

-        if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND)) {
+        if (xuo->flags &
+            ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND | XFRM_OFFLOAD_FULL)) {
                 NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
                 return -EINVAL;
         }

+        is_full_offload = xuo->flags & XFRM_OFFLOAD_FULL;
         dev = dev_get_by_index(net, xuo->ifindex);
         if (!dev) {
                 if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
@@ -247,7 +250,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
                                         x->props.family,
                                         xfrm_smark_get(0, x));
                 if (IS_ERR(dst))
-                        return 0;
+                        return (is_full_offload) ? -EINVAL : 0;

                 dev = dst->dev;

@@ -258,7 +261,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
         if (!dev->xfrmdev_ops || !dev->xfrmdev_ops->xdo_dev_state_add) {
                 xso->dev = NULL;
                 dev_put(dev);
-                return 0;
+                return (is_full_offload) ? -EINVAL : 0;
         }

         if (x->props.flags & XFRM_STATE_ESN &&
@@ -278,7 +281,10 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
         else
                 xso->dir = XFRM_DEV_OFFLOAD_OUT;

-        xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+        if (is_full_offload)
+                xso->type = XFRM_DEV_OFFLOAD_FULL;
+        else
+                xso->type = XFRM_DEV_OFFLOAD_CRYPTO;

         err = dev->xfrmdev_ops->xdo_dev_state_add(x);
         if (err) {
@@ -288,7 +294,15 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
                 netdev_put(dev, &xso->dev_tracker);
                 xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;

-                if (err != -EOPNOTSUPP) {
+                /* User explicitly requested full offload mode and configured
+                 * policy in addition to the XFRM state. So be civil to users,
+                 * and return an error instead of taking the fallback path.
+                 *
+                 * This WARN_ON() can be seen as documentation for driver
+                 * authors not to return -EOPNOTSUPP in full offload mode.
+                 */
+                WARN_ON(err == -EOPNOTSUPP && is_full_offload);
+                if (err != -EOPNOTSUPP || is_full_offload) {
                         NL_SET_ERR_MSG(extack, "Device failed to offload this state");
                         return err;
                 }

From patchwork Tue Oct 25 10:21:59 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019031
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 3/8] xfrm: add an interface to offload policy
Date: Tue, 25 Oct 2022 13:21:59 +0300
Message-Id: <5d3f6696226b84e57ddac7423cb19924c8734ece.1666692948.git.leonro@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

Extend the netlink interface to add and delete XFRM policies from the
device. This functionality is the first step in implementing the full
IPsec offload solution.

Signed-off-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
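The new callbacks extend struct xfrmdev_ops; a driver that supports policy
offload would wire them up next to the existing state callbacks, roughly
like this (sketch with hypothetical foo_* handlers, following the pattern
already used by crypto-offload drivers when they set netdev->xfrmdev_ops):

    static const struct xfrmdev_ops foo_xfrmdev_ops = {
            /* existing state (SA) callbacks */
            .xdo_dev_state_add     = foo_xdo_dev_state_add,
            .xdo_dev_state_delete  = foo_xdo_dev_state_delete,
            .xdo_dev_state_free    = foo_xdo_dev_state_free,
            .xdo_dev_offload_ok    = foo_xdo_dev_offload_ok,
            /* new policy callbacks introduced by this patch */
            .xdo_dev_policy_add    = foo_xdo_dev_policy_add,
            .xdo_dev_policy_delete = foo_xdo_dev_policy_delete,
            .xdo_dev_policy_free   = foo_xdo_dev_policy_free,
    };

    /* during netdev setup: */
    netdev->xfrmdev_ops = &foo_xfrmdev_ops;
    netdev->features |= NETIF_F_HW_ESP;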
 include/linux/netdevice.h |  3 ++
 include/net/xfrm.h        | 44 +++++++++++++++++++++++++
 net/xfrm/xfrm_device.c    | 67 +++++++++++++++++++++++++++++++++++++-
 net/xfrm/xfrm_policy.c    | 68 +++++++++++++++++++++++++++++++++++++++
 net/xfrm/xfrm_user.c      | 18 +++++++++++
 5 files changed, 199 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index eddf8ee270e7..e3d979a9b69c 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1033,6 +1033,9 @@ struct xfrmdev_ops {
         bool    (*xdo_dev_offload_ok) (struct sk_buff *skb,
                                        struct xfrm_state *x);
         void    (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+        int     (*xdo_dev_policy_add) (struct xfrm_policy *x);
+        void    (*xdo_dev_policy_delete) (struct xfrm_policy *x);
+        void    (*xdo_dev_policy_free) (struct xfrm_policy *x);
 };
 #endif

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index c82401b706d5..faa754d9431a 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -129,6 +129,7 @@ struct xfrm_state_walk {
 enum {
         XFRM_DEV_OFFLOAD_IN = 1,
         XFRM_DEV_OFFLOAD_OUT,
+        XFRM_DEV_OFFLOAD_FWD,
 };

 enum {
@@ -541,6 +542,8 @@ struct xfrm_policy {
         struct xfrm_tmpl        xfrm_vec[XFRM_MAX_DEPTH];
         struct hlist_node       bydst_inexact_list;
         struct rcu_head         rcu;
+
+        struct xfrm_dev_offload xdo;
 };

 static inline struct net *xp_net(const struct xfrm_policy *xp)
@@ -1585,6 +1588,7 @@ struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
 int xfrm_state_delete(struct xfrm_state *x);
 int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
 int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
+int xfrm_dev_policy_flush(struct net *net, struct net_device *dev, bool task_valid);
 void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
 void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
 u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
@@ -1897,6 +1901,9 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t features, bool *again)
 int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
                        struct xfrm_user_offload *xuo,
                        struct netlink_ext_ack *extack);
+int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+                        struct xfrm_user_offload *xuo, u8 dir,
+                        struct netlink_ext_ack *extack);
 bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x);

 static inline void xfrm_dev_state_advance_esn(struct xfrm_state *x)
@@ -1945,6 +1952,28 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
                 netdev_put(dev, &xso->dev_tracker);
         }
 }
+
+static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
+{
+        struct xfrm_dev_offload *xdo = &x->xdo;
+        struct net_device *dev = xdo->dev;
+
+        if (dev && dev->xfrmdev_ops && dev->xfrmdev_ops->xdo_dev_policy_delete)
+                dev->xfrmdev_ops->xdo_dev_policy_delete(x);
+}
+
+static inline void xfrm_dev_policy_free(struct xfrm_policy *x)
+{
+        struct xfrm_dev_offload *xdo = &x->xdo;
+        struct net_device *dev = xdo->dev;
+
+        if (dev && dev->xfrmdev_ops) {
+                if (dev->xfrmdev_ops->xdo_dev_policy_free)
+                        dev->xfrmdev_ops->xdo_dev_policy_free(x);
+                xdo->dev = NULL;
+                netdev_put(dev, &xdo->dev_tracker);
+        }
+}
 #else
 static inline void xfrm_dev_resume(struct sk_buff *skb)
 {
@@ -1972,6 +2001,21 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
 {
 }

+static inline int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+                                      struct xfrm_user_offload *xuo, u8 dir,
+                                      struct netlink_ext_ack *extack)
+{
+        return 0;
+}
+
+static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
+{
+}
+
+static inline void xfrm_dev_policy_free(struct xfrm_policy *x)
+{
+}
+
 static inline bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
 {
         return false;
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 1294e0490270..b5c6a78fdac2 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -312,6 +312,69 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 }
 EXPORT_SYMBOL_GPL(xfrm_dev_state_add);

+int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+                        struct xfrm_user_offload *xuo, u8 dir,
+                        struct netlink_ext_ack *extack)
+{
+        struct xfrm_dev_offload *xdo = &xp->xdo;
+        struct net_device *dev;
+        int err;
+
+        if (!xuo->flags || xuo->flags & ~XFRM_OFFLOAD_FULL) {
+                /* We support only full offload mode and it means
+                 * that user must set XFRM_OFFLOAD_FULL bit.
+                 */
+                NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
+                return -EINVAL;
+        }
+
+        dev = dev_get_by_index(net, xuo->ifindex);
+        if (!dev)
+                return -EINVAL;
+
+        if (!dev->xfrmdev_ops || !dev->xfrmdev_ops->xdo_dev_policy_add) {
+                xdo->dev = NULL;
+                dev_put(dev);
+                NL_SET_ERR_MSG(extack, "Policy offload is not supported");
+                return -EINVAL;
+        }
+
+        xdo->dev = dev;
+        netdev_tracker_alloc(dev, &xdo->dev_tracker, GFP_ATOMIC);
+        xdo->real_dev = dev;
+        xdo->type = XFRM_DEV_OFFLOAD_FULL;
+        switch (dir) {
+        case XFRM_POLICY_IN:
+                xdo->dir = XFRM_DEV_OFFLOAD_IN;
+                break;
+        case XFRM_POLICY_OUT:
+                xdo->dir = XFRM_DEV_OFFLOAD_OUT;
+                break;
+        case XFRM_POLICY_FWD:
+                xdo->dir = XFRM_DEV_OFFLOAD_FWD;
+                break;
+        default:
+                xdo->dev = NULL;
+                dev_put(dev);
+                NL_SET_ERR_MSG(extack, "Unrecognized offload direction");
+                return -EINVAL;
+        }
+
+        err = dev->xfrmdev_ops->xdo_dev_policy_add(xp);
+        if (err) {
+                xdo->dev = NULL;
+                xdo->real_dev = NULL;
+                xdo->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+                xdo->dir = 0;
+                netdev_put(dev, &xdo->dev_tracker);
+                NL_SET_ERR_MSG(extack, "Device failed to offload this policy");
+                return err;
+        }
+
+        return 0;
+}
+EXPORT_SYMBOL_GPL(xfrm_dev_policy_add);
+
 bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
 {
         int mtu;
@@ -414,8 +477,10 @@ static int xfrm_api_check(struct net_device *dev)

 static int xfrm_dev_down(struct net_device *dev)
 {
-        if (dev->features & NETIF_F_HW_ESP)
+        if (dev->features & NETIF_F_HW_ESP) {
                 xfrm_dev_state_flush(dev_net(dev), dev, true);
+                xfrm_dev_policy_flush(dev_net(dev), dev, true);
+        }

         return NOTIFY_DONE;
 }
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index e392d8d05e0c..b07ed169f501 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -425,6 +425,7 @@ void xfrm_policy_destroy(struct xfrm_policy *policy)
         if (del_timer(&policy->timer) || del_timer(&policy->polq.hold_timer))
                 BUG();

+        xfrm_dev_policy_free(policy);
         call_rcu(&policy->rcu, xfrm_policy_destroy_rcu);
 }
 EXPORT_SYMBOL(xfrm_policy_destroy);
@@ -1769,12 +1770,41 @@ xfrm_policy_flush_secctx_check(struct net *net, u8 type, bool task_valid)
         }
         return err;
 }
+
+static inline int xfrm_dev_policy_flush_secctx_check(struct net *net,
+                                                     struct net_device *dev,
+                                                     bool task_valid)
+{
+        struct xfrm_policy *pol;
+        int err = 0;
+
+        list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
+                if (pol->walk.dead ||
+                    xfrm_policy_id2dir(pol->index) >= XFRM_POLICY_MAX ||
+                    pol->xdo.dev != dev)
+                        continue;
+
+                err = security_xfrm_policy_delete(pol->security);
+                if (err) {
+                        xfrm_audit_policy_delete(pol, 0, task_valid);
+                        return err;
+                }
+        }
+        return err;
+}
 #else
 static inline int xfrm_policy_flush_secctx_check(struct net *net, u8 type,
                                                  bool task_valid)
 {
         return 0;
 }
+
+static inline int xfrm_dev_policy_flush_secctx_check(struct net *net,
+                                                     struct net_device *dev,
+                                                     bool task_valid)
+{
+        return 0;
+}
 #endif

 int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
@@ -1814,6 +1844,43 @@ int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
 }
 EXPORT_SYMBOL(xfrm_policy_flush);

+int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
+                          bool task_valid)
+{
+        int dir, err = 0, cnt = 0;
+        struct xfrm_policy *pol;
+
+        spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+
+        err = xfrm_dev_policy_flush_secctx_check(net, dev, task_valid);
+        if (err)
+                goto out;
+
+again:
+        list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
+                dir = xfrm_policy_id2dir(pol->index);
+                if (pol->walk.dead ||
+                    dir >= XFRM_POLICY_MAX ||
+                    pol->xdo.dev != dev)
+                        continue;
+
+                __xfrm_policy_unlink(pol, dir);
+                spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+                cnt++;
+                xfrm_audit_policy_delete(pol, 1, task_valid);
+                xfrm_policy_kill(pol);
+                spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+                goto again;
+        }
+        if (cnt)
+                __xfrm_policy_inexact_flush(net);
+        else
+                err = -ESRCH;
+out:
+        spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+        return err;
+}
+EXPORT_SYMBOL(xfrm_dev_policy_flush);
+
 int xfrm_policy_walk(struct net *net, struct xfrm_policy_walk *walk,
                      int (*func)(struct xfrm_policy *, int, int, void*),
                      void *data)
@@ -2245,6 +2312,7 @@ int xfrm_policy_delete(struct xfrm_policy *pol, int dir)
         pol = __xfrm_policy_unlink(pol, dir);
         spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
         if (pol) {
+                xfrm_dev_policy_delete(pol);
                 xfrm_policy_kill(pol);
                 return 0;
         }
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index bea2d4647a90..8d7adeeaffbf 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -1869,6 +1869,15 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net,
         if (attrs[XFRMA_IF_ID])
                 xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);

+        /* configure the hardware if offload is requested */
+        if (attrs[XFRMA_OFFLOAD_DEV]) {
+                err = xfrm_dev_policy_add(net, xp,
+                                          nla_data(attrs[XFRMA_OFFLOAD_DEV]),
+                                          p->dir, extack);
+                if (err)
+                        goto error;
+        }
+
         return xp;
  error:
         *errp = err;
@@ -1908,6 +1917,7 @@ static int xfrm_add_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
         xfrm_audit_policy_add(xp, err ? 0 : 1, true);

         if (err) {
+                xfrm_dev_policy_delete(xp);
                 security_xfrm_policy_free(xp->security);
                 kfree(xp);
                 return err;
@@ -2020,6 +2030,8 @@ static int dump_one_policy(struct xfrm_policy *xp, int dir, int count, void *ptr)
         err = xfrm_mark_put(skb, &xp->mark);
         if (!err)
                 err = xfrm_if_id_put(skb, xp->if_id);
+        if (!err && xp->xdo.dev)
+                err = copy_user_offload(&xp->xdo, skb);
         if (err) {
                 nlmsg_cancel(skb, nlh);
                 return err;
@@ -3343,6 +3355,8 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x,
         err = xfrm_mark_put(skb, &xp->mark);
         if (!err)
                 err = xfrm_if_id_put(skb, xp->if_id);
+        if (!err && xp->xdo.dev)
+                err = copy_user_offload(&xp->xdo, skb);
         if (err) {
                 nlmsg_cancel(skb, nlh);
                 return err;
@@ -3461,6 +3475,8 @@ static int build_polexpire(struct sk_buff *skb, struct xfrm_policy *xp,
         err = xfrm_mark_put(skb, &xp->mark);
         if (!err)
                 err = xfrm_if_id_put(skb, xp->if_id);
+        if (!err && xp->xdo.dev)
+                err = copy_user_offload(&xp->xdo, skb);
         if (err) {
                 nlmsg_cancel(skb, nlh);
                 return err;
@@ -3544,6 +3560,8 @@ static int xfrm_notify_policy(struct xfrm_policy *xp, int dir, const struct km_event *c)
         err = xfrm_mark_put(skb, &xp->mark);
         if (!err)
                 err = xfrm_if_id_put(skb, xp->if_id);
+        if (!err && xp->xdo.dev)
+                err = copy_user_offload(&xp->xdo, skb);
         if (err)
                 goto out_free_skb;

From patchwork Tue Oct 25 10:22:00 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019032
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 4/8] xfrm: add TX datapath support for IPsec full offload mode
Date: Tue, 25 Oct 2022 13:22:00 +0300
Message-Id: <8330a2b5667408682197934c9458120e4366c6d3.1666692948.git.leonro@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

In IPsec full mode, the device encrypts and encapsulates packets that are
associated with an offloaded policy. After a successful policy lookup that
indicates whether packets should be offloaded, the stack forwards them to
the device, which performs the transformation.

Signed-off-by: Raed Salem
Signed-off-by: Huy Nguyen
Signed-off-by: Leon Romanovsky
---
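In outline, the new TX behaviour short-circuits the software ESP
transformation; this sketch condenses the xfrm_output() hunk below:

    /* sketch: condensed from the xfrm_output() change in this patch */
    if (x->xso.type == XFRM_DEV_OFFLOAD_FULL) {
            if (!xfrm_dev_offload_ok(skb, x)) {
                    XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
                    kfree_skb(skb);
                    return -EHOSTUNREACH;
            }

            /* Skip SW encryption/encapsulation; the HW does both. */
            return xfrm_output_resume(sk, skb, 0);
    }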
Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Paolo Abeni , Raed Salem , Saeed Mahameed , Bharat Bhushan Subject: [PATCH xfrm-next v6 4/8] xfrm: add TX datapath support for IPsec full offload mode Date: Tue, 25 Oct 2022 13:22:00 +0300 Message-Id: <8330a2b5667408682197934c9458120e4366c6d3.1666692948.git.leonro@nvidia.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky In IPsec full mode, the device is going to encrypt and encapsulate packets that are associated with offloaded policy. After successful policy lookup to indicate if packets should be offloaded or not, the stack forwards packets to the device to do the magic. Signed-off-by: Raed Salem Signed-off-by: Huy Nguyen Signed-off-by: Leon Romanovsky --- net/xfrm/xfrm_device.c | 15 +++++++++++++-- net/xfrm/xfrm_output.c | 12 +++++++++++- 2 files changed, 24 insertions(+), 3 deletions(-) diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c index b5c6a78fdac2..783750998e80 100644 --- a/net/xfrm/xfrm_device.c +++ b/net/xfrm/xfrm_device.c @@ -120,6 +120,16 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur if (xo->flags & XFRM_GRO || x->xso.dir == XFRM_DEV_OFFLOAD_IN) return skb; + /* The packet was sent to HW IPsec full offload engine, + * but to wrong device. Drop the packet, so it won't skip + * XFRM stack. + */ + if (x->xso.type == XFRM_DEV_OFFLOAD_FULL && x->xso.dev != dev) { + kfree_skb(skb); + dev_core_stats_tx_dropped_inc(dev); + return NULL; + } + /* This skb was already validated on the upper/virtual dev */ if ((x->xso.dev != dev) && (x->xso.real_dev == dev)) return skb; @@ -385,8 +395,9 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x) if (!x->type_offload || x->encap) return false; - if ((!dev || (dev == xfrm_dst_path(dst)->dev)) && - (!xdst->child->xfrm)) { + if (x->xso.type == XFRM_DEV_OFFLOAD_FULL || + ((!dev || (dev == xfrm_dst_path(dst)->dev)) && + !xdst->child->xfrm)) { mtu = xfrm_state_mtu(x, xdst->child_mtu_cached); if (skb->len <= mtu) goto ok; diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c index 9a5e79a38c67..dde009be8463 100644 --- a/net/xfrm/xfrm_output.c +++ b/net/xfrm/xfrm_output.c @@ -494,7 +494,7 @@ static int xfrm_output_one(struct sk_buff *skb, int err) struct xfrm_state *x = dst->xfrm; struct net *net = xs_net(x); - if (err <= 0) + if (err <= 0 || x->xso.type == XFRM_DEV_OFFLOAD_FULL) goto resume; do { @@ -718,6 +718,16 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb) break; } + if (x->xso.type == XFRM_DEV_OFFLOAD_FULL) { + if (!xfrm_dev_offload_ok(skb, x)) { + XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR); + kfree_skb(skb); + return -EHOSTUNREACH; + } + + return xfrm_output_resume(sk, skb, 0); + } + secpath_reset(skb); if (xfrm_dev_offload_ok(skb, x)) { From patchwork Tue Oct 25 10:22:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13019033 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F3A28C38A2D for ; Tue, 25 Oct 2022 10:25:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id 
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 5/8] xfrm: add RX datapath protection for IPsec full offload mode
Date: Tue, 25 Oct 2022 13:22:01 +0300
Message-Id: <28850de3338a6ce9a7b46855345ee02268ba23a0.1666692948.git.leonro@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

Traffic received by a device with IPsec full offload enabled should be
forwarded to the stack only after decryption, with the packet headers and
trailers removed. Such packets are expected to be seen as normal (non-XFRM)
ones, while unsupported packets should be dropped by the HW.

Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
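The protection itself is a short-circuit in the RX policy check; this
sketch condenses the __xfrm_policy_check2() hunk below:

    /* sketch: condensed from the __xfrm_policy_check2() change below */
    struct xfrm_offload *xo = xfrm_offload(skb);

    if (xo) {
            struct xfrm_state *x = xfrm_input_state(skb);

            if (x->xso.type == XFRM_DEV_OFFLOAD_FULL)
                    /* trust the packet only if HW reported crypto success */
                    return (xo->flags & CRYPTO_DONE) &&
                           (xo->status & CRYPTO_SUCCESS);
    }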
 include/net/xfrm.h | 55 +++++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 23 deletions(-)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index faa754d9431a..976361976ed5 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1102,6 +1102,29 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, unsigned short family)
         return !0;
 }

+#ifdef CONFIG_XFRM
+static inline struct xfrm_state *xfrm_input_state(struct sk_buff *skb)
+{
+        struct sec_path *sp = skb_sec_path(skb);
+
+        return sp->xvec[sp->len - 1];
+}
+#endif
+
+static inline struct xfrm_offload *xfrm_offload(struct sk_buff *skb)
+{
+#ifdef CONFIG_XFRM
+        struct sec_path *sp = skb_sec_path(skb);
+
+        if (!sp || !sp->olen || sp->len != sp->olen)
+                return NULL;
+
+        return &sp->ovec[sp->olen - 1];
+#else
+        return NULL;
+#endif
+}
+
 #ifdef CONFIG_XFRM
 int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb,
                         unsigned short family);
@@ -1133,10 +1156,19 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
 {
         struct net *net = dev_net(skb->dev);
         int ndir = dir | (reverse ? XFRM_POLICY_MASK + 1 : 0);
+        struct xfrm_offload *xo = xfrm_offload(skb);
+        struct xfrm_state *x;

         if (sk && sk->sk_policy[XFRM_POLICY_IN])
                 return __xfrm_policy_check(sk, ndir, skb, family);

+        if (xo) {
+                x = xfrm_input_state(skb);
+                if (x->xso.type == XFRM_DEV_OFFLOAD_FULL)
+                        return (xo->flags & CRYPTO_DONE) &&
+                               (xo->status & CRYPTO_SUCCESS);
+        }
+
         return __xfrm_check_nopolicy(net, skb, dir) ||
                __xfrm_check_dev_nopolicy(skb, dir, family) ||
                __xfrm_policy_check(sk, ndir, skb, family);
@@ -1869,29 +1901,6 @@ static inline void xfrm_states_delete(struct xfrm_state **states, int n)
 }
 #endif

-#ifdef CONFIG_XFRM
-static inline struct xfrm_state *xfrm_input_state(struct sk_buff *skb)
-{
-        struct sec_path *sp = skb_sec_path(skb);
-
-        return sp->xvec[sp->len - 1];
-}
-#endif
-
-static inline struct xfrm_offload *xfrm_offload(struct sk_buff *skb)
-{
-#ifdef CONFIG_XFRM
-        struct sec_path *sp = skb_sec_path(skb);
-
-        if (!sp || !sp->olen || sp->len != sp->olen)
-                return NULL;
-
-        return &sp->ovec[sp->olen - 1];
-#else
-        return NULL;
-#endif
-}
-
 void __init xfrm_dev_init(void);

 #ifdef CONFIG_XFRM_OFFLOAD

From patchwork Tue Oct 25 10:22:02 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019037
X-Patchwork-Delegate: kuba@kernel.org
Miller" , Eric Dumazet , Herbert Xu , Jakub Kicinski , netdev@vger.kernel.org, Paolo Abeni , Raed Salem , Saeed Mahameed , Bharat Bhushan Subject: [PATCH xfrm-next v6 6/8] xfrm: speed-up lookup of HW policies Date: Tue, 25 Oct 2022 13:22:02 +0300 Message-Id: X-Mailer: git-send-email 2.37.3 In-Reply-To: References: MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org From: Leon Romanovsky Devices that implement IPsec full offload mode should offload policies too. In RX path, it causes to the situation that HW will always have higher priority over any SW policies. It means that we don't need to perform any search of inexact policies and/or priority checks if HW policy was discovered. In such situation, the HW will catch the packets anyway and HW can still implement inexact lookups. In case specific policy is not found, we will continue with full lookup and check for existence of HW policies in inexact list. HW policies are added to the head of SPD to ensure fast lookup, as XFRM iterates over all policies in the loop. Signed-off-by: Leon Romanovsky --- net/xfrm/xfrm_policy.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index b07ed169f501..cc10ee3ebafe 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -1562,9 +1562,12 @@ static struct xfrm_policy *xfrm_policy_insert_list(struct hlist_head *chain, break; } - if (newpos) + if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_FULL) hlist_add_behind_rcu(&policy->bydst, &newpos->bydst); else + /* Full offload policies are enteded + * to the head to speed-up lookups. + */ hlist_add_head_rcu(&policy->bydst, chain); return delpol; @@ -2180,6 +2183,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type, break; } } + if (ret && ret->xdo.type == XFRM_DEV_OFFLOAD_FULL) + goto skip_inexact; + bin = xfrm_policy_inexact_lookup_rcu(net, type, family, dir, if_id); if (!bin || !xfrm_policy_find_inexact_candidates(&cand, bin, saddr, daddr)) From patchwork Tue Oct 25 10:22:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Leon Romanovsky X-Patchwork-Id: 13019035 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 660FCC04A95 for ; Tue, 25 Oct 2022 10:25:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232377AbiJYKZT (ORCPT ); Tue, 25 Oct 2022 06:25:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:36792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231224AbiJYKYd (ORCPT ); Tue, 25 Oct 2022 06:24:33 -0400 Received: from ams.source.kernel.org (ams.source.kernel.org [IPv6:2604:1380:4601:e00::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0FB2181C90 for ; Tue, 25 Oct 2022 03:22:34 -0700 (PDT) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ams.source.kernel.org (Postfix) with ESMTPS id 4B6E1B81CE3 for ; Tue, 25 Oct 2022 10:22:33 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 89F5EC433C1; Tue, 25 Oct 2022 10:22:31 +0000 (UTC) DKIM-Signature: 
 net/xfrm/xfrm_policy.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index b07ed169f501..cc10ee3ebafe 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -1562,9 +1562,12 @@ static struct xfrm_policy *xfrm_policy_insert_list(struct hlist_head *chain,
                         break;
         }

-        if (newpos)
+        if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_FULL)
                 hlist_add_behind_rcu(&policy->bydst, &newpos->bydst);
         else
+                /* Full offload policies are entered at the head
+                 * to speed up lookups.
+                 */
                 hlist_add_head_rcu(&policy->bydst, chain);

         return delpol;
@@ -2180,6 +2183,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
                         break;
                 }
         }
+        if (ret && ret->xdo.type == XFRM_DEV_OFFLOAD_FULL)
+                goto skip_inexact;
+
         bin = xfrm_policy_inexact_lookup_rcu(net, type, family, dir, if_id);
         if (!bin || !xfrm_policy_find_inexact_candidates(&cand, bin, saddr,
                                                          daddr))

From patchwork Tue Oct 25 10:22:03 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019035
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 7/8] xfrm: add support to HW update soft and hard limits
Date: Tue, 25 Oct 2022 13:22:03 +0300
Message-Id: <42757f834a96be1976b6fc1841592dee06a605de.1666692948.git.leonro@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

Both in RX and TX, traffic that undergoes the IPsec full offload
transformation is accounted for by the HW. This is needed to properly
handle hard limits, which require dropping the packet.

It means that the XFRM core needs to update its internal counters with
the ones accounted for by the HW, so new callbacks are introduced in this
patch. When a soft or hard limit occurs, the driver should call
xfrm_state_check_expire(), which performs key rekeying exactly as the
XFRM core does.

Signed-off-by: Leon Romanovsky
---
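On the driver side, the new callback is expected to push the HW counters
into x->curlft; a hypothetical implementation might look like this
(sketch; the foo_* helpers and the SA lookup are assumptions, and the
limit event would typically arrive via the device's event queue):

    static void foo_xdo_dev_state_update_curlft(struct xfrm_state *x)
    {
            struct foo_sa *sa = foo_sa_from_state(x); /* assumed driver lookup */
            u64 bytes, packets;

            /* assumed HW counter query */
            foo_hw_query_sa_counters(sa, &bytes, &packets);
            x->curlft.bytes = bytes;
            x->curlft.packets = packets;
    }

    /* e.g. from the driver's limit-event handler (process context): */
    static void foo_handle_hw_limit_event(struct foo_sa *sa)
    {
            /* re-runs soft/hard limit checks and triggers rekeying via km */
            xfrm_state_check_expire(sa->xfrm_state);
    }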
 include/linux/netdevice.h |  1 +
 include/net/xfrm.h        | 17 +++++++++++++++++
 net/xfrm/xfrm_output.c    |  1 -
 net/xfrm/xfrm_state.c     |  4 ++++
 4 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index e3d979a9b69c..8f87fce07525 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1033,6 +1033,7 @@ struct xfrmdev_ops {
         bool    (*xdo_dev_offload_ok) (struct sk_buff *skb,
                                        struct xfrm_state *x);
         void    (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+        void    (*xdo_dev_state_update_curlft) (struct xfrm_state *x);
         int     (*xdo_dev_policy_add) (struct xfrm_policy *x);
         void    (*xdo_dev_policy_delete) (struct xfrm_policy *x);
         void    (*xdo_dev_policy_free) (struct xfrm_policy *x);
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 976361976ed5..41f8aaafe755 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1571,6 +1571,23 @@ struct xfrm_state *xfrm_stateonly_find(struct net *net, u32 mark, u32 if_id,
 struct xfrm_state *xfrm_state_lookup_byspi(struct net *net, __be32 spi,
                                            unsigned short family);
 int xfrm_state_check_expire(struct xfrm_state *x);
+#ifdef CONFIG_XFRM_OFFLOAD
+static inline void xfrm_dev_state_update_curlft(struct xfrm_state *x)
+{
+        struct xfrm_dev_offload *xdo = &x->xso;
+        struct net_device *dev = xdo->dev;
+
+        if (x->xso.type != XFRM_DEV_OFFLOAD_FULL)
+                return;
+
+        if (dev && dev->xfrmdev_ops &&
+            dev->xfrmdev_ops->xdo_dev_state_update_curlft)
+                dev->xfrmdev_ops->xdo_dev_state_update_curlft(x);
+}
+#else
+static inline void xfrm_dev_state_update_curlft(struct xfrm_state *x) {}
+#endif
 void xfrm_state_insert(struct xfrm_state *x);
 int xfrm_state_add(struct xfrm_state *x);
 int xfrm_state_update(struct xfrm_state *x);
diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
index dde009be8463..a22033350ddc 100644
--- a/net/xfrm/xfrm_output.c
+++ b/net/xfrm/xfrm_output.c
@@ -560,7 +560,6 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
                 XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTSTATEPROTOERROR);
                 goto error_nolock;
         }
-
         dst = skb_dst_pop(skb);
         if (!dst) {
                 XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 3d2fe7712ac5..b2c83c0f27f2 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -549,6 +549,8 @@ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me)
         int err = 0;

         spin_lock(&x->lock);
+        xfrm_dev_state_update_curlft(x);
+
         if (x->km.state == XFRM_STATE_DEAD)
                 goto out;
         if (x->km.state == XFRM_STATE_EXPIRED)
@@ -1786,6 +1788,8 @@ EXPORT_SYMBOL(xfrm_state_update);

 int xfrm_state_check_expire(struct xfrm_state *x)
 {
+        xfrm_dev_state_update_curlft(x);
+
         if (!x->curlft.use_time)
                 x->curlft.use_time = ktime_get_real_seconds();

From patchwork Tue Oct 25 10:22:04 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13019036
X-Patchwork-Delegate: kuba@kernel.org
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v6 8/8] xfrm: document IPsec full offload mode
Date: Tue, 25 Oct 2022 13:22:04 +0300
Message-Id: <00044b73d03efd78c75234fe45e2bd5f7982e774.1666692948.git.leonro@nvidia.com>
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

Extend the XFRM device offload API description with the newly added full
offload mode.
Signed-off-by: Leon Romanovsky
---
 Documentation/networking/xfrm_device.rst | 62 ++++++++++++++++++++----
 1 file changed, 53 insertions(+), 9 deletions(-)

diff --git a/Documentation/networking/xfrm_device.rst b/Documentation/networking/xfrm_device.rst
index 01391dfd37d9..499f9d4ca021 100644
--- a/Documentation/networking/xfrm_device.rst
+++ b/Documentation/networking/xfrm_device.rst
@@ -5,6 +5,7 @@ XFRM device - offloading the IPsec computations
 ===============================================

 Shannon Nelson
+Leon Romanovsky


 Overview
@@ -18,10 +19,21 @@ can radically increase throughput and decrease CPU utilization.  The XFRM
 Device interface allows NIC drivers to offer to the stack access to the
 hardware offload.

+Right now, there are two types of hardware offload that the kernel supports.
+ * IPsec crypto offload:
+   * NIC performs encrypt/decrypt
+   * Kernel does everything else
+ * IPsec full offload:
+   * NIC performs encrypt/decrypt
+   * NIC does encapsulation
+   * Kernel and NIC have SA and policy in-sync
+   * NIC handles the SA and policies states
+   * The kernel talks to the keymanager
+
 Userland access to the offload is typically through a system such as
 libreswan or KAME/raccoon, but the iproute2 'ip xfrm' command set can
 be handy when experimenting.  An example command might look something
-like this::
+like this for crypto offload:

   ip x s add proto esp dst 14.0.0.70 src 14.0.0.52 spi 0x07 mode transport \
      reqid 0x07 replay-window 32 \
@@ -29,6 +41,17 @@ like this::
      sel src 14.0.0.52/24 dst 14.0.0.70/24 proto tcp \
      offload dev eth4 dir in

+and for full offload:
+
+  ip x s add proto esp dst 14.0.0.70 src 14.0.0.52 spi 0x07 mode transport \
+     reqid 0x07 replay-window 32 \
+     aead 'rfc4106(gcm(aes))' 0x44434241343332312423222114131211f4f3f2f1 128 \
+     sel src 14.0.0.52/24 dst 14.0.0.70/24 proto tcp \
+     offload full dev eth4 dir in
+
+  ip x p add src 14.0.0.70 dst 14.0.0.52 offload full dev eth4 dir in
+     tmpl src 14.0.0.70 dst 14.0.0.52 proto esp reqid 10000 mode transport
+
 Yes, that's ugly, but that's what shell scripts and/or libreswan
 are for.

@@ -40,17 +63,24 @@ Callbacks to implement

   /* from include/linux/netdevice.h */
   struct xfrmdev_ops {
+        /* Crypto and Full offload callbacks */
         int     (*xdo_dev_state_add) (struct xfrm_state *x);
         void    (*xdo_dev_state_delete) (struct xfrm_state *x);
         void    (*xdo_dev_state_free) (struct xfrm_state *x);
         bool    (*xdo_dev_offload_ok) (struct sk_buff *skb,
                                        struct xfrm_state *x);
         void    (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+
+        /* Solely full offload callbacks */
+        void    (*xdo_dev_state_update_curlft) (struct xfrm_state *x);
+        int     (*xdo_dev_policy_add) (struct xfrm_policy *x);
+        void    (*xdo_dev_policy_delete) (struct xfrm_policy *x);
+        void    (*xdo_dev_policy_free) (struct xfrm_policy *x);
   };

-The NIC driver offering ipsec offload will need to implement these
-callbacks to make the offload available to the network stack's
-XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
+The NIC driver offering ipsec offload will need to implement the callbacks
+relevant to the supported offload to make the offload available to the
+network stack's XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
 NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.

@@ -79,7 +109,8 @@ and an indication of whether it is for Rx or Tx.  The driver should

         ===========   ===================================
         0             success
-        -EOPNETSUPP   offload not supported, try SW IPsec
+        -EOPNETSUPP   offload not supported, try SW IPsec,
+                      not applicable for full offload mode
         other         fail the request
         ===========   ===================================

@@ -96,6 +127,7 @@ will serviceable. This can check the packet information to be sure the
 offload can be supported (e.g. IPv4 or IPv6, no IPv4 options, etc) and
 return true of false to signify its support.

+Crypto offload mode:
 When ready to send, the driver needs to inspect the Tx packet for the
 offload information, including the opaque context, and set up the packet
 send accordingly::

@@ -139,13 +171,25 @@ the stack in xfrm_input().
 In ESN mode, xdo_dev_state_advance_esn() is called from xfrm_replay_advance_esn().
 Driver will check packet seq number and update HW ESN state machine if needed.

+Full offload mode:
+HW adds and deletes the XFRM headers. So in the RX path, the XFRM stack is
+bypassed if the HW reported success. In the TX path, the packet leaves the
+kernel without the extra header and unencrypted; the HW is responsible for
+performing the transformation.
+
 When the SA is removed by the user, the driver's xdo_dev_state_delete()
-is asked to disable the offload.  Later, xdo_dev_state_free() is called
-from a garbage collection routine after all reference counts to the state
+and xdo_dev_policy_delete() are asked to disable the offload.  Later,
+xdo_dev_state_free() and xdo_dev_policy_free() are called from a garbage
+collection routine after all reference counts to the state and policy
 have been removed and any remaining resources can be cleared for the
 offload state.  How these are used by the driver will depend on specific
 hardware needs.

 As a netdev is set to DOWN the XFRM stack's netdev listener will call
-xdo_dev_state_delete() and xdo_dev_state_free() on any remaining offloaded
-states.
+xdo_dev_state_delete(), xdo_dev_policy_delete(), xdo_dev_state_free() and
+xdo_dev_policy_free() on any remaining offloaded states.
+
+Because the HW handles the packets, the XFRM core can't account for the
+soft and hard limits.  The HW/driver are responsible for this accounting
+and for providing accurate data when xdo_dev_state_update_curlft() is
+called.  If one of these limits is reached, the driver needs to call
+xfrm_state_check_expire() to make sure that XFRM performs the rekeying
+sequence.