From patchwork Sun Oct 23 12:05:53 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016180
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 1/8] xfrm: add new full offload flag
Date: Sun, 23 Oct 2022 15:05:53 +0300

From: Leon Romanovsky

In the following patches, the xfrm core code will be extended to support
a new type of offload: full offload. In that mode, both the policy and
the state must be specially configured in order to perform the whole
offloaded data path. Full offload takes care of encryption, decryption,
encapsulation and other operations with headers.

As this mode is new for the XFRM policy flow, we can "start fresh" with
the flag bits and release the first and second bits for future use.
Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
 include/net/xfrm.h        | 7 +++++++
 include/uapi/linux/xfrm.h | 6 ++++++
 net/xfrm/xfrm_device.c    | 3 +++
 net/xfrm/xfrm_user.c      | 2 ++
 4 files changed, 18 insertions(+)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index dbc81f5eb553..c82401b706d5 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -131,12 +131,19 @@ enum {
 	XFRM_DEV_OFFLOAD_OUT,
 };
 
+enum {
+	XFRM_DEV_OFFLOAD_UNSPECIFIED,
+	XFRM_DEV_OFFLOAD_CRYPTO,
+	XFRM_DEV_OFFLOAD_FULL,
+};
+
 struct xfrm_dev_offload {
 	struct net_device	*dev;
 	netdevice_tracker	dev_tracker;
 	struct net_device	*real_dev;
 	unsigned long		offload_handle;
 	u8			dir : 2;
+	u8			type : 2;
 };
 
 struct xfrm_mode {
diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
index 4f84ea7ee14c..463c6c1af23a 100644
--- a/include/uapi/linux/xfrm.h
+++ b/include/uapi/linux/xfrm.h
@@ -519,6 +519,12 @@ struct xfrm_user_offload {
  */
 #define XFRM_OFFLOAD_IPV6	1
 #define XFRM_OFFLOAD_INBOUND	2
+/* The two bits above are relevant for the state path only, while
+ * offload is used for both policy and state flows.
+ *
+ * In policy offload mode, they are free and can be safely reused.
+ */
+#define XFRM_OFFLOAD_FULL	4
 
 struct xfrm_userpolicy_default {
 #define XFRM_USERPOLICY_UNSPEC	0
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 5f5aafd418af..7c4e0f14df27 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -278,12 +278,15 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 	else
 		xso->dir = XFRM_DEV_OFFLOAD_OUT;
 
+	xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+
 	err = dev->xfrmdev_ops->xdo_dev_state_add(x);
 	if (err) {
 		xso->dev = NULL;
 		xso->dir = 0;
 		xso->real_dev = NULL;
 		netdev_put(dev, &xso->dev_tracker);
+		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
 
 		if (err != -EOPNOTSUPP) {
 			NL_SET_ERR_MSG(extack, "Device failed to offload this state");
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index e73f9efc54c1..bea2d4647a90 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -943,6 +943,8 @@ static int copy_user_offload(struct xfrm_dev_offload *xso, struct sk_buff *skb)
 	xuo->ifindex = xso->dev->ifindex;
 	if (xso->dir == XFRM_DEV_OFFLOAD_IN)
 		xuo->flags = XFRM_OFFLOAD_INBOUND;
+	if (xso->type == XFRM_DEV_OFFLOAD_FULL)
+		xuo->flags |= XFRM_OFFLOAD_FULL;
 
 	return 0;
 }
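[For illustration only, not part of this series: a hypothetical userspace
sketch showing how the new XFRM_OFFLOAD_FULL bit would be requested via
struct xfrm_user_offload; building and sending the XFRM_MSG_NEWSA netlink
message with this struct attached as XFRMA_OFFLOAD_DEV is assumed to
happen elsewhere.]

  #include <linux/xfrm.h>
  #include <net/if.h>

  /* Hypothetical sketch: request full offload for a new SA. */
  static struct xfrm_user_offload make_full_offload_req(const char *ifname)
  {
  	struct xfrm_user_offload xuo = {
  		.ifindex = (int)if_nametoindex(ifname),
  		/* bits 1 and 2 (IPV6/INBOUND) are state-path flags;
  		 * full offload mode is requested with the new bit 4.
  		 */
  		.flags = XFRM_OFFLOAD_FULL,
  	};

  	return xuo;
  }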
From patchwork Sun Oct 23 12:05:54 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016177
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan, Ayush Sawal, Vinay Kumar Yadav,
    Rohit Maheshwari, Jesse Brandeburg, Tony Nguyen
Subject: [PATCH xfrm-next v5 2/8] xfrm: allow state full offload mode
Date: Sun, 23 Oct 2022 15:05:54 +0300
Message-Id: <935f1545c513e49e44ec88d78d844652756e712e.1666525321.git.leonro@nvidia.com>

From: Leon Romanovsky

Allow users to configure xfrm states with full offload mode. The full
mode must be requested for both policy and state, and this requirement
is why we do not implement a fallback: we explicitly return an error if
the requested full mode can't be configured.

Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
 .../inline_crypto/ch_ipsec/chcr_ipsec.c        |  4 ++++
 .../net/ethernet/intel/ixgbe/ixgbe_ipsec.c     |  5 +++++
 drivers/net/ethernet/intel/ixgbevf/ipsec.c     |  5 +++++
 .../mellanox/mlx5/core/en_accel/ipsec.c        |  4 ++++
 drivers/net/netdevsim/ipsec.c                  |  5 +++++
 net/xfrm/xfrm_device.c                         | 24 +++++++++++++++----
 6 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
index 585590520076..ca21794281d6 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
@@ -283,6 +283,10 @@ static int ch_ipsec_xfrm_add_state(struct xfrm_state *x)
 		pr_debug("Cannot offload xfrm states with geniv other than seqiv\n");
 		return -EINVAL;
 	}
+	if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+		pr_debug("Unsupported xfrm offload\n");
+		return -EINVAL;
+	}
 
 	sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
 	if (!sa_entry) {
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
index 774de63dd93a..53a969e34883 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
@@ -585,6 +585,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+		netdev_err(dev, "Unsupported ipsec offload type\n");
+		return -EINVAL;
+	}
+
 	if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
 		struct rx_sa rsa;
 
diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
index 9984ebc62d78..c1cf540d162a 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
@@ -280,6 +280,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+		netdev_err(dev, "Unsupported ipsec offload type\n");
+		return -EINVAL;
+	}
+
 	if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
 		struct rx_sa rsa;
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 325b56ff3e8c..1d8ce116946d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -256,6 +256,10 @@ static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x)
 		netdev_info(netdev, "Cannot offload xfrm states with geniv other than seqiv\n");
 		return -EINVAL;
 	}
+	if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+		netdev_info(netdev, "Unsupported xfrm offload type\n");
+		return -EINVAL;
+	}
 	return 0;
 }
 
diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
index 386336a38f34..b93baf5c8bee 100644
--- a/drivers/net/netdevsim/ipsec.c
+++ b/drivers/net/netdevsim/ipsec.c
@@ -149,6 +149,11 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
+		netdev_err(dev, "Unsupported ipsec offload type\n");
+		return -EINVAL;
+	}
+
 	/* find the first unused index */
 	ret = nsim_ipsec_find_empty_idx(ipsec);
 	if (ret < 0) {
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 7c4e0f14df27..1294e0490270 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -216,6 +216,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 	struct xfrm_dev_offload *xso = &x->xso;
 	xfrm_address_t *saddr;
 	xfrm_address_t *daddr;
+	bool is_full_offload;
 
 	if (!x->type_offload) {
 		NL_SET_ERR_MSG(extack, "Type doesn't support offload");
@@ -228,11 +229,13 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 		return -EINVAL;
 	}
 
-	if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND)) {
+	if (xuo->flags &
+	    ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND | XFRM_OFFLOAD_FULL)) {
 		NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
 		return -EINVAL;
 	}
 
+	is_full_offload = xuo->flags & XFRM_OFFLOAD_FULL;
 	dev = dev_get_by_index(net, xuo->ifindex);
 	if (!dev) {
 		if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
@@ -247,7 +250,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 					x->props.family,
 					xfrm_smark_get(0, x));
 		if (IS_ERR(dst))
-			return 0;
+			return (is_full_offload) ? -EINVAL : 0;
 
 		dev = dst->dev;
 
@@ -258,7 +261,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 	if (!dev->xfrmdev_ops || !dev->xfrmdev_ops->xdo_dev_state_add) {
 		xso->dev = NULL;
 		dev_put(dev);
-		return 0;
+		return (is_full_offload) ? -EINVAL : 0;
 	}
 
 	if (x->props.flags & XFRM_STATE_ESN &&
@@ -278,7 +281,10 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 	else
 		xso->dir = XFRM_DEV_OFFLOAD_OUT;
 
-	xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+	if (is_full_offload)
+		xso->type = XFRM_DEV_OFFLOAD_FULL;
+	else
+		xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
 
 	err = dev->xfrmdev_ops->xdo_dev_state_add(x);
 	if (err) {
@@ -288,7 +294,15 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 		netdev_put(dev, &xso->dev_tracker);
 		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
 
-		if (err != -EOPNOTSUPP) {
+		/* User explicitly requested full offload mode and configured
+		 * policy in addition to the XFRM state. So be civil to users,
+		 * and return an error instead of taking the fallback path.
+		 *
+		 * This WARN_ON() can be seen as documentation for driver
+		 * authors not to return -EOPNOTSUPP in full offload mode.
+		 */
+		WARN_ON(err == -EOPNOTSUPP && is_full_offload);
+		if (err != -EOPNOTSUPP || is_full_offload) {
 			NL_SET_ERR_MSG(extack, "Device failed to offload this state");
 			return err;
 		}
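[For illustration only, not from this series: a hypothetical sketch of how
a crypto-only driver's xdo_dev_state_add() can use the type field added in
patch 1. Since the core now treats -EOPNOTSUPP in full offload mode as a
driver bug (see the WARN_ON() above), -EINVAL is the safe rejection.]

  /* Hypothetical crypto-only driver: reject anything but crypto offload. */
  static int example_xdo_dev_state_add(struct xfrm_state *x)
  {
  	if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO)
  		return -EINVAL;

  	/* ... program the SA into the HW crypto engine ... */
  	return 0;
  }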
From patchwork Sun Oct 23 12:05:55 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016183
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 3/8] xfrm: add an interface to offload policy
Date: Sun, 23 Oct 2022 15:05:55 +0300
Message-Id: <2aa2e2430694f69ce2977d4a897117a6a90eca4a.1666525321.git.leonro@nvidia.com>

From: Leon Romanovsky

Extend the netlink interface to add and delete XFRM policies from the
device. This functionality is a first step towards implementing a full
IPsec offload solution.

Signed-off-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
 include/linux/netdevice.h |  3 ++
 include/net/xfrm.h        | 44 +++++++++++++++++++++++++
 net/xfrm/xfrm_device.c    | 67 +++++++++++++++++++++++++++++++++++++-
 net/xfrm/xfrm_policy.c    | 68 +++++++++++++++++++++++++++++++++++++++
 net/xfrm/xfrm_user.c      | 18 +++++++++++
 5 files changed, 199 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index a36edb0ec199..169387b2e104 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1033,6 +1033,9 @@ struct xfrmdev_ops {
 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
 				       struct xfrm_state *x);
 	void	(*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+	int	(*xdo_dev_policy_add) (struct xfrm_policy *x);
+	void	(*xdo_dev_policy_delete) (struct xfrm_policy *x);
+	void	(*xdo_dev_policy_free) (struct xfrm_policy *x);
 };
 #endif
 
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index c82401b706d5..faa754d9431a 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -129,6 +129,7 @@ struct xfrm_state_walk {
 enum {
 	XFRM_DEV_OFFLOAD_IN = 1,
 	XFRM_DEV_OFFLOAD_OUT,
+	XFRM_DEV_OFFLOAD_FWD,
 };
 
 enum {
@@ -541,6 +542,8 @@ struct xfrm_policy {
 	struct xfrm_tmpl	xfrm_vec[XFRM_MAX_DEPTH];
 	struct hlist_node	bydst_inexact_list;
 	struct rcu_head		rcu;
+
+	struct xfrm_dev_offload xdo;
 };
 
 static inline struct net *xp_net(const struct xfrm_policy *xp)
@@ -1585,6 +1588,7 @@ struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
 int xfrm_state_delete(struct xfrm_state *x);
 int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
 int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
+int xfrm_dev_policy_flush(struct net *net, struct net_device *dev, bool task_valid);
 void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
 void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
 u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
@@ -1897,6 +1901,9 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
 int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 		       struct xfrm_user_offload *xuo,
 		       struct netlink_ext_ack *extack);
+int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+			struct xfrm_user_offload *xuo, u8 dir,
+			struct netlink_ext_ack *extack);
 bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x);
 
 static inline void xfrm_dev_state_advance_esn(struct xfrm_state *x)
@@ -1945,6 +1952,28 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
 		netdev_put(dev, &xso->dev_tracker);
 	}
 }
+
+static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
+{
+	struct xfrm_dev_offload *xdo = &x->xdo;
+	struct net_device *dev = xdo->dev;
+
+	if (dev && dev->xfrmdev_ops && dev->xfrmdev_ops->xdo_dev_policy_delete)
+		dev->xfrmdev_ops->xdo_dev_policy_delete(x);
+}
+
+static inline void xfrm_dev_policy_free(struct xfrm_policy *x)
+{
+	struct xfrm_dev_offload *xdo = &x->xdo;
+	struct net_device *dev = xdo->dev;
+
+	if (dev && dev->xfrmdev_ops) {
+		if (dev->xfrmdev_ops->xdo_dev_policy_free)
+			dev->xfrmdev_ops->xdo_dev_policy_free(x);
+		xdo->dev = NULL;
+		netdev_put(dev, &xdo->dev_tracker);
+	}
+}
 #else
 static inline void xfrm_dev_resume(struct sk_buff *skb)
 {
@@ -1972,6 +2001,21 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
 {
 }
 
+static inline int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+				      struct xfrm_user_offload *xuo, u8 dir,
+				      struct netlink_ext_ack *extack)
+{
+	return 0;
+}
+
+static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
+{
+}
+
+static inline void xfrm_dev_policy_free(struct xfrm_policy *x)
+{
+}
+
 static inline bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
 {
 	return false;
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 1294e0490270..b5c6a78fdac2 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -312,6 +312,69 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 }
 EXPORT_SYMBOL_GPL(xfrm_dev_state_add);
 
+int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
+			struct xfrm_user_offload *xuo, u8 dir,
+			struct netlink_ext_ack *extack)
+{
+	struct xfrm_dev_offload *xdo = &xp->xdo;
+	struct net_device *dev;
+	int err;
+
+	if (!xuo->flags || xuo->flags & ~XFRM_OFFLOAD_FULL) {
+		/* We support only full offload mode, which means that
+		 * the user must set the XFRM_OFFLOAD_FULL bit.
+		 */
+		NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
+		return -EINVAL;
+	}
+
+	dev = dev_get_by_index(net, xuo->ifindex);
+	if (!dev)
+		return -EINVAL;
+
+	if (!dev->xfrmdev_ops || !dev->xfrmdev_ops->xdo_dev_policy_add) {
+		xdo->dev = NULL;
+		dev_put(dev);
+		NL_SET_ERR_MSG(extack, "Policy offload is not supported");
+		return -EINVAL;
+	}
+
+	xdo->dev = dev;
+	netdev_tracker_alloc(dev, &xdo->dev_tracker, GFP_ATOMIC);
+	xdo->real_dev = dev;
+	xdo->type = XFRM_DEV_OFFLOAD_FULL;
+	switch (dir) {
+	case XFRM_POLICY_IN:
+		xdo->dir = XFRM_DEV_OFFLOAD_IN;
+		break;
+	case XFRM_POLICY_OUT:
+		xdo->dir = XFRM_DEV_OFFLOAD_OUT;
+		break;
+	case XFRM_POLICY_FWD:
+		xdo->dir = XFRM_DEV_OFFLOAD_FWD;
+		break;
+	default:
+		xdo->dev = NULL;
+		dev_put(dev);
+		NL_SET_ERR_MSG(extack, "Unrecognized offload direction");
+		return -EINVAL;
+	}
+
+	err = dev->xfrmdev_ops->xdo_dev_policy_add(xp);
+	if (err) {
+		xdo->dev = NULL;
+		xdo->real_dev = NULL;
+		xdo->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
+		xdo->dir = 0;
+		netdev_put(dev, &xdo->dev_tracker);
+		NL_SET_ERR_MSG(extack, "Device failed to offload this policy");
+		return err;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xfrm_dev_policy_add);
+
 bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
 {
 	int mtu;
@@ -414,8 +477,10 @@ static int xfrm_api_check(struct net_device *dev)
 
 static int xfrm_dev_down(struct net_device *dev)
 {
-	if (dev->features & NETIF_F_HW_ESP)
+	if (dev->features & NETIF_F_HW_ESP) {
 		xfrm_dev_state_flush(dev_net(dev), dev, true);
+		xfrm_dev_policy_flush(dev_net(dev), dev, true);
+	}
 
 	return NOTIFY_DONE;
 }
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index e392d8d05e0c..b07ed169f501 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -425,6 +425,7 @@ void xfrm_policy_destroy(struct xfrm_policy *policy)
 	if (del_timer(&policy->timer) || del_timer(&policy->polq.hold_timer))
 		BUG();
 
+	xfrm_dev_policy_free(policy);
 	call_rcu(&policy->rcu, xfrm_policy_destroy_rcu);
 }
 EXPORT_SYMBOL(xfrm_policy_destroy);
@@ -1769,12 +1770,41 @@ xfrm_policy_flush_secctx_check(struct net *net, u8 type, bool task_valid)
 	}
 	return err;
 }
+
+static inline int xfrm_dev_policy_flush_secctx_check(struct net *net,
+						     struct net_device *dev,
+						     bool task_valid)
+{
+	struct xfrm_policy *pol;
+	int err = 0;
+
+	list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
+		if (pol->walk.dead ||
+		    xfrm_policy_id2dir(pol->index) >= XFRM_POLICY_MAX ||
+		    pol->xdo.dev != dev)
+			continue;
+
+		err = security_xfrm_policy_delete(pol->security);
+		if (err) {
+			xfrm_audit_policy_delete(pol, 0, task_valid);
+			return err;
+		}
+	}
+	return err;
+}
 #else
 static inline int xfrm_policy_flush_secctx_check(struct net *net, u8 type,
						 bool task_valid)
 {
 	return 0;
 }
+
+static inline int xfrm_dev_policy_flush_secctx_check(struct net *net,
+						     struct net_device *dev,
+						     bool task_valid)
+{
+	return 0;
+}
 #endif
 
 int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
@@ -1814,6 +1844,43 @@ int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
 }
 EXPORT_SYMBOL(xfrm_policy_flush);
 
+int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
+			  bool task_valid)
+{
+	int dir, err = 0, cnt = 0;
+	struct xfrm_policy *pol;
+
+	spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+
+	err = xfrm_dev_policy_flush_secctx_check(net, dev, task_valid);
+	if (err)
+		goto out;
+
+again:
+	list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
+		dir = xfrm_policy_id2dir(pol->index);
+		if (pol->walk.dead ||
+		    dir >= XFRM_POLICY_MAX ||
+		    pol->xdo.dev != dev)
+			continue;
+
+		__xfrm_policy_unlink(pol, dir);
+		spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+		cnt++;
+		xfrm_audit_policy_delete(pol, 1, task_valid);
+		xfrm_policy_kill(pol);
+		spin_lock_bh(&net->xfrm.xfrm_policy_lock);
+		goto again;
+	}
+	if (cnt)
+		__xfrm_policy_inexact_flush(net);
+	else
+		err = -ESRCH;
+out:
+	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+	return err;
+}
+EXPORT_SYMBOL(xfrm_dev_policy_flush);
+
 int xfrm_policy_walk(struct net *net, struct xfrm_policy_walk *walk,
 		     int (*func)(struct xfrm_policy *, int, int, void*),
 		     void *data)
@@ -2245,6 +2312,7 @@ int xfrm_policy_delete(struct xfrm_policy *pol, int dir)
 	pol = __xfrm_policy_unlink(pol, dir);
 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
 	if (pol) {
+		xfrm_dev_policy_delete(pol);
 		xfrm_policy_kill(pol);
 		return 0;
 	}
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index bea2d4647a90..8d7adeeaffbf 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -1869,6 +1869,15 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net,
 	if (attrs[XFRMA_IF_ID])
 		xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
 
+	/* configure the hardware if offload is requested */
+	if (attrs[XFRMA_OFFLOAD_DEV]) {
+		err = xfrm_dev_policy_add(net, xp,
+					  nla_data(attrs[XFRMA_OFFLOAD_DEV]),
+					  p->dir, extack);
+		if (err)
+			goto error;
+	}
+
 	return xp;
  error:
 	*errp = err;
@@ -1908,6 +1917,7 @@ static int xfrm_add_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
 	xfrm_audit_policy_add(xp, err ? 0 : 1, true);
 
 	if (err) {
+		xfrm_dev_policy_delete(xp);
 		security_xfrm_policy_free(xp->security);
 		kfree(xp);
 		return err;
@@ -2020,6 +2030,8 @@ static int dump_one_policy(struct xfrm_policy *xp, int dir, int count, void *ptr
 	err = xfrm_mark_put(skb, &xp->mark);
 	if (!err)
 		err = xfrm_if_id_put(skb, xp->if_id);
+	if (!err && xp->xdo.dev)
+		err = copy_user_offload(&xp->xdo, skb);
 	if (err) {
 		nlmsg_cancel(skb, nlh);
 		return err;
 	}
@@ -3343,6 +3355,8 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x,
 	err = xfrm_mark_put(skb, &xp->mark);
 	if (!err)
 		err = xfrm_if_id_put(skb, xp->if_id);
+	if (!err && xp->xdo.dev)
+		err = copy_user_offload(&xp->xdo, skb);
 	if (err) {
 		nlmsg_cancel(skb, nlh);
 		return err;
 	}
@@ -3461,6 +3475,8 @@ static int build_polexpire(struct sk_buff *skb, struct xfrm_policy *xp,
 	err = xfrm_mark_put(skb, &xp->mark);
 	if (!err)
 		err = xfrm_if_id_put(skb, xp->if_id);
+	if (!err && xp->xdo.dev)
+		err = copy_user_offload(&xp->xdo, skb);
 	if (err) {
 		nlmsg_cancel(skb, nlh);
 		return err;
 	}
@@ -3544,6 +3560,8 @@ static int xfrm_notify_policy(struct xfrm_policy *xp, int dir, const struct km_e
 	err = xfrm_mark_put(skb, &xp->mark);
 	if (!err)
 		err = xfrm_if_id_put(skb, xp->if_id);
+	if (!err && xp->xdo.dev)
+		err = copy_user_offload(&xp->xdo, skb);
 	if (err)
 		goto out_free_skb;
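[For illustration only, not from this series: a hypothetical skeleton of
the three new driver callbacks, showing the expected division of labor;
example_* names are placeholders.]

  static int example_xdo_dev_policy_add(struct xfrm_policy *xp)
  {
  	/* xp->xdo.dir is already XFRM_DEV_OFFLOAD_{IN,OUT,FWD} here;
  	 * validate the selector and program the policy into the HW SPD.
  	 */
  	if (xp->xdo.dir == XFRM_DEV_OFFLOAD_FWD)
  		return -EINVAL;	/* e.g. HW without forwarding support */

  	return 0;
  }

  static void example_xdo_dev_policy_delete(struct xfrm_policy *xp)
  {
  	/* remove the HW entry; driver context may live until _free() */
  }

  static void example_xdo_dev_policy_free(struct xfrm_policy *xp)
  {
  	/* release any driver-private data attached to xp */
  }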
From patchwork Sun Oct 23 12:05:56 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016182
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 4/8] xfrm: add TX datapath support for IPsec full offload mode
Date: Sun, 23 Oct 2022 15:05:56 +0300
Message-Id: <89923ed3ff624c67edf7f3e742a035f70b65dabd.1666525321.git.leonro@nvidia.com>

From: Leon Romanovsky

In IPsec full offload mode, the device encrypts and encapsulates the
packets that are associated with an offloaded policy. After a successful
policy lookup, which indicates whether the packets should be offloaded
or not, the stack forwards the packets to the device to do the actual
transformation.

Signed-off-by: Raed Salem
Signed-off-by: Huy Nguyen
Signed-off-by: Leon Romanovsky
---
 net/xfrm/xfrm_device.c | 15 +++++++++++++--
 net/xfrm/xfrm_output.c | 12 +++++++++++-
 2 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index b5c6a78fdac2..783750998e80 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -120,6 +120,16 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
 	if (xo->flags & XFRM_GRO || x->xso.dir == XFRM_DEV_OFFLOAD_IN)
 		return skb;
 
+	/* The packet was sent to the HW IPsec full offload engine,
+	 * but to the wrong device. Drop the packet, so it won't skip
+	 * the XFRM stack.
+	 */
+	if (x->xso.type == XFRM_DEV_OFFLOAD_FULL && x->xso.dev != dev) {
+		kfree_skb(skb);
+		dev_core_stats_tx_dropped_inc(dev);
+		return NULL;
+	}
+
 	/* This skb was already validated on the upper/virtual dev */
 	if ((x->xso.dev != dev) && (x->xso.real_dev == dev))
 		return skb;
@@ -385,8 +395,9 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
 	if (!x->type_offload || x->encap)
 		return false;
 
-	if ((!dev || (dev == xfrm_dst_path(dst)->dev)) &&
-	    (!xdst->child->xfrm)) {
+	if (x->xso.type == XFRM_DEV_OFFLOAD_FULL ||
+	    ((!dev || (dev == xfrm_dst_path(dst)->dev)) &&
+	     !xdst->child->xfrm)) {
 		mtu = xfrm_state_mtu(x, xdst->child_mtu_cached);
 		if (skb->len <= mtu)
 			goto ok;
diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
index 9a5e79a38c67..dde009be8463 100644
--- a/net/xfrm/xfrm_output.c
+++ b/net/xfrm/xfrm_output.c
@@ -494,7 +494,7 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
 	struct xfrm_state *x = dst->xfrm;
 	struct net *net = xs_net(x);
 
-	if (err <= 0)
+	if (err <= 0 || x->xso.type == XFRM_DEV_OFFLOAD_FULL)
 		goto resume;
 
 	do {
@@ -718,6 +718,16 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
 		break;
 	}
 
+	if (x->xso.type == XFRM_DEV_OFFLOAD_FULL) {
+		if (!xfrm_dev_offload_ok(skb, x)) {
+			XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
+			kfree_skb(skb);
+			return -EHOSTUNREACH;
+		}
+
+		return xfrm_output_resume(sk, skb, 0);
+	}
+
 	secpath_reset(skb);
 
 	if (xfrm_dev_offload_ok(skb, x)) {
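[For illustration only, not from this series: a hypothetical TX-side
helper showing how a driver's xmit path could recognize a full offload
packet. In full offload mode the skb reaches the driver unencrypted and
unencapsulated; the driver only needs to match it to the HW SA context
that the stack attached via the secpath.]

  static bool example_tx_is_full_offload(struct sk_buff *skb)
  {
  	struct sec_path *sp = skb_sec_path(skb);
  	struct xfrm_state *x;

  	if (!sp || !sp->len)
  		return false;

  	x = sp->xvec[sp->len - 1];	/* state chosen by policy lookup */
  	return x->xso.type == XFRM_DEV_OFFLOAD_FULL;
  }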
From patchwork Sun Oct 23 12:05:57 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016181
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 5/8] xfrm: add RX datapath protection for IPsec full offload mode
Date: Sun, 23 Oct 2022 15:05:57 +0300
Message-Id: <9b996bb57ef3ed3ca4e4fde49232dd3d6e11f4f4.1666525321.git.leonro@nvidia.com>

From: Leon Romanovsky

Traffic received by a device with IPsec full offload enabled should be
forwarded to the stack only after decryption, with packet headers and
trailers removed. Such packets are expected to be seen as normal
(non-XFRM) ones, while unsupported packets should be dropped by the HW.

Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
 include/net/xfrm.h | 55 +++++++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 23 deletions(-)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index faa754d9431a..976361976ed5 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1102,6 +1102,29 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un
 	return !0;
 }
 
+#ifdef CONFIG_XFRM
+static inline struct xfrm_state *xfrm_input_state(struct sk_buff *skb)
+{
+	struct sec_path *sp = skb_sec_path(skb);
+
+	return sp->xvec[sp->len - 1];
+}
+#endif
+
+static inline struct xfrm_offload *xfrm_offload(struct sk_buff *skb)
+{
+#ifdef CONFIG_XFRM
+	struct sec_path *sp = skb_sec_path(skb);
+
+	if (!sp || !sp->olen || sp->len != sp->olen)
+		return NULL;
+
+	return &sp->ovec[sp->olen - 1];
+#else
+	return NULL;
+#endif
+}
+
 #ifdef CONFIG_XFRM
 int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb,
 			unsigned short family);
@@ -1133,10 +1156,19 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
 {
 	struct net *net = dev_net(skb->dev);
 	int ndir = dir | (reverse ? XFRM_POLICY_MASK + 1 : 0);
+	struct xfrm_offload *xo = xfrm_offload(skb);
+	struct xfrm_state *x;
 
 	if (sk && sk->sk_policy[XFRM_POLICY_IN])
 		return __xfrm_policy_check(sk, ndir, skb, family);
 
+	if (xo) {
+		x = xfrm_input_state(skb);
+		if (x->xso.type == XFRM_DEV_OFFLOAD_FULL)
+			return (xo->flags & CRYPTO_DONE) &&
+			       (xo->status & CRYPTO_SUCCESS);
+	}
+
 	return __xfrm_check_nopolicy(net, skb, dir) ||
 	       __xfrm_check_dev_nopolicy(skb, dir, family) ||
 	       __xfrm_policy_check(sk, ndir, skb, family);
@@ -1869,29 +1901,6 @@ static inline void xfrm_states_delete(struct xfrm_state **states, int n)
 }
 #endif
 
-#ifdef CONFIG_XFRM
-static inline struct xfrm_state *xfrm_input_state(struct sk_buff *skb)
-{
-	struct sec_path *sp = skb_sec_path(skb);
-
-	return sp->xvec[sp->len - 1];
-}
-#endif
-
-static inline struct xfrm_offload *xfrm_offload(struct sk_buff *skb)
-{
-#ifdef CONFIG_XFRM
-	struct sec_path *sp = skb_sec_path(skb);
-
-	if (!sp || !sp->olen || sp->len != sp->olen)
-		return NULL;
-
-	return &sp->ovec[sp->olen - 1];
-#else
-	return NULL;
-#endif
-}
-
 void __init xfrm_dev_init(void);
 
 #ifdef CONFIG_XFRM_OFFLOAD
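[For illustration only, not from this series: a hypothetical RX-side
sketch of what a driver would do so that the new __xfrm_policy_check2()
path accepts a packet that the HW fully processed. It attaches a secpath
entry and flags the crypto as done and successful, which is exactly what
the check above tests; it assumes the driver holds a reference on x.]

  static void example_rx_mark_offloaded(struct sk_buff *skb,
  					struct xfrm_state *x)
  {
  	struct sec_path *sp = secpath_set(skb);
  	struct xfrm_offload *xo;

  	if (!sp)
  		return;	/* no secpath: the policy check will drop it */

  	sp->xvec[sp->len++] = x;
  	xo = &sp->ovec[sp->olen++];
  	xo->flags = CRYPTO_DONE;
  	xo->status = CRYPTO_SUCCESS;
  }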
From patchwork Sun Oct 23 12:05:58 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016179
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 6/8] xfrm: Speed-up lookup of HW policies
Date: Sun, 23 Oct 2022 15:05:58 +0300
Message-Id: <09577c71179027f7ffb99bace5a4608e7e857c82.1666525321.git.leonro@nvidia.com>

From: Leon Romanovsky

Devices that implement IPsec full offload mode should offload policies
too. In the RX path, this leads to a situation where HW policies always
have priority over any SW policies: we don't need to perform any search
of inexact policies and/or priority checks if an HW policy was
discovered. In such a situation, the HW will catch the packets anyway,
and the HW can still implement inexact lookups.

In case a specific policy is not found, we continue with a full lookup
and check for the existence of HW policies in the inexact list.

HW policies are added to the head of the SPD to ensure a fast lookup,
as XFRM iterates over all policies in a loop.

Signed-off-by: Leon Romanovsky
---
 net/xfrm/xfrm_policy.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index b07ed169f501..aa73e630aef5 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -1562,9 +1562,12 @@ static struct xfrm_policy *xfrm_policy_insert_list(struct hlist_head *chain,
 			break;
 	}
 
-	if (newpos)
+	if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_FULL)
 		hlist_add_behind_rcu(&policy->bydst, &newpos->bydst);
 	else
+		/* Full offload policies are added
+		 * to the head to speed-up lookups.
+		 */
 		hlist_add_head_rcu(&policy->bydst, chain);
 
 	return delpol;
@@ -2180,6 +2183,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
 			break;
 		}
 	}
+	if (ret && ret->xdo.type == XFRM_DEV_OFFLOAD_FULL)
+		goto skip_inexact;
+
 	bin = xfrm_policy_inexact_lookup_rcu(net, type, family, dir, if_id);
 	if (!bin || !xfrm_policy_find_inexact_candidates(&cand, bin, saddr,
 							 daddr))
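[Editorial illustration, a condensed rendering of the two cooperating
hunks above rather than additional kernel code: insertion pins HW
policies to the chain head, so the lookup loop meets them first and can
skip the inexact search entirely.]

  /* Insertion: full offload policies always go to the chain head. */
  if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_FULL)
  	hlist_add_behind_rcu(&policy->bydst, &newpos->bydst);
  else
  	hlist_add_head_rcu(&policy->bydst, chain);

  /* Lookup: a full offload hit short-circuits the inexact search. */
  if (ret && ret->xdo.type == XFRM_DEV_OFFLOAD_FULL)
  	goto skip_inexact;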
From patchwork Sun Oct 23 12:05:59 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016185
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 7/8] xfrm: add support to HW update soft and hard limits
Date: Sun, 23 Oct 2022 15:05:59 +0300

From: Leon Romanovsky

In both RX and TX, traffic that undergoes the IPsec full offload
transformation is accounted by the HW. This is needed to properly handle
hard limits, which require dropping the packet. It means that the XFRM
core needs to update its internal counters with the ones accounted by
the HW, so new callbacks are introduced in this patch.

In case a soft or hard limit occurs, the driver should call
xfrm_state_check_expire(), which will perform key rekeying exactly as
done by the XFRM core.

Signed-off-by: Leon Romanovsky
---
 include/linux/netdevice.h |  1 +
 include/net/xfrm.h        | 17 +++++++++++++++++
 net/xfrm/xfrm_output.c    |  1 -
 net/xfrm/xfrm_state.c     |  4 ++++
 4 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 169387b2e104..49d80295812f 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1033,6 +1033,7 @@ struct xfrmdev_ops {
 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
 				       struct xfrm_state *x);
 	void	(*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+	void	(*xdo_dev_state_update_curlft) (struct xfrm_state *x);
 	int	(*xdo_dev_policy_add) (struct xfrm_policy *x);
 	void	(*xdo_dev_policy_delete) (struct xfrm_policy *x);
 	void	(*xdo_dev_policy_free) (struct xfrm_policy *x);
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 976361976ed5..41f8aaafe755 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -1571,6 +1571,23 @@ struct xfrm_state *xfrm_stateonly_find(struct net *net, u32 mark, u32 if_id,
 struct xfrm_state *xfrm_state_lookup_byspi(struct net *net, __be32 spi,
 					   unsigned short family);
 int xfrm_state_check_expire(struct xfrm_state *x);
+#ifdef CONFIG_XFRM_OFFLOAD
+static inline void xfrm_dev_state_update_curlft(struct xfrm_state *x)
+{
+	struct xfrm_dev_offload *xdo = &x->xso;
+	struct net_device *dev = xdo->dev;
+
+	if (x->xso.type != XFRM_DEV_OFFLOAD_FULL)
+		return;
+
+	if (dev && dev->xfrmdev_ops &&
+	    dev->xfrmdev_ops->xdo_dev_state_update_curlft)
+		dev->xfrmdev_ops->xdo_dev_state_update_curlft(x);
+}
+#else
+static inline void xfrm_dev_state_update_curlft(struct xfrm_state *x) {}
+#endif
 void xfrm_state_insert(struct xfrm_state *x);
 int xfrm_state_add(struct xfrm_state *x);
 int xfrm_state_update(struct xfrm_state *x);
diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
index dde009be8463..a22033350ddc 100644
--- a/net/xfrm/xfrm_output.c
+++ b/net/xfrm/xfrm_output.c
@@ -560,7 +560,6 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
 		XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTSTATEPROTOERROR);
 		goto error_nolock;
 	}
-
 	dst = skb_dst_pop(skb);
 	if (!dst) {
 		XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 81df34b3da6e..f2d31eeef193 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -549,6 +549,8 @@ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me)
 	int err = 0;
 
 	spin_lock(&x->lock);
+	xfrm_dev_state_update_curlft(x);
+
 	if (x->km.state == XFRM_STATE_DEAD)
 		goto out;
 	if (x->km.state == XFRM_STATE_EXPIRED)
@@ -1786,6 +1788,8 @@ EXPORT_SYMBOL(xfrm_state_update);
 
 int xfrm_state_check_expire(struct xfrm_state *x)
 {
+	xfrm_dev_state_update_curlft(x);
+
 	if (!x->curlft.use_time)
 		x->curlft.use_time = ktime_get_real_seconds();
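[For illustration only, not from this series: a hypothetical driver-side
handler for a HW "limit reached" event. It refreshes the SW view of the
counters with the HW-accounted values and then lets the core run its
soft/hard expiry logic; locking follows the timer-handler pattern above.]

  static void example_handle_hw_limit_event(struct xfrm_state *x,
  					    u64 hw_bytes, u64 hw_packets)
  {
  	spin_lock(&x->lock);
  	x->curlft.bytes = hw_bytes;	/* counters as accounted by HW */
  	x->curlft.packets = hw_packets;
  	xfrm_state_check_expire(x);	/* fires soft/hard expire events */
  	spin_unlock(&x->lock);
  }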
From patchwork Sun Oct 23 12:06:00 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 13016184
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
    Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
    Saeed Mahameed, Bharat Bhushan
Subject: [PATCH xfrm-next v5 8/8] xfrm: document IPsec full offload mode
Date: Sun, 23 Oct 2022 15:06:00 +0300
Message-Id: <1f52c2bc048940ab3679d01f621230f1fc49ba16.1666525321.git.leonro@nvidia.com>

From: Leon Romanovsky

Extend the XFRM device offload API description with the newly added
full offload mode.

Signed-off-by: Leon Romanovsky
---
 Documentation/networking/xfrm_device.rst | 62 ++++++++++++++++++++----
 1 file changed, 53 insertions(+), 9 deletions(-)

diff --git a/Documentation/networking/xfrm_device.rst b/Documentation/networking/xfrm_device.rst
index 01391dfd37d9..499f9d4ca021 100644
--- a/Documentation/networking/xfrm_device.rst
+++ b/Documentation/networking/xfrm_device.rst
@@ -5,6 +5,7 @@ XFRM device - offloading the IPsec computations
 ===============================================
 
 Shannon Nelson
+Leon Romanovsky
 
 
 Overview
@@ -18,10 +19,21 @@ can radically increase throughput and decrease CPU utilization.  The XFRM
 Device interface allows NIC drivers to offer to the stack access to the
 hardware offload.
 
+Right now, there are two types of hardware offload that the kernel supports.
+ * IPsec crypto offload:
+   * NIC performs encrypt/decrypt
+   * Kernel does everything else
+ * IPsec full offload:
+   * NIC performs encrypt/decrypt
+   * NIC does encapsulation
+   * Kernel and NIC have SA and policy in-sync
+   * NIC handles the SA and policies states
+   * The Kernel talks to the keymanager
+
 Userland access to the offload is typically through a system such as
 libreswan or KAME/raccoon, but the iproute2 'ip xfrm' command set can
 be handy when experimenting.  An example command might look something
-like this::
+like this for crypto offload:
 
   ip x s add proto esp dst 14.0.0.70 src 14.0.0.52 spi 0x07 mode transport \
      reqid 0x07 replay-window 32 \
@@ -29,6 +41,17 @@ like this::
      sel src 14.0.0.52/24 dst 14.0.0.70/24 proto tcp \
      offload dev eth4 dir in
 
+and for full offload
+
+  ip x s add proto esp dst 14.0.0.70 src 14.0.0.52 spi 0x07 mode transport \
+     reqid 0x07 replay-window 32 \
+     aead 'rfc4106(gcm(aes))' 0x44434241343332312423222114131211f4f3f2f1 128 \
+     sel src 14.0.0.52/24 dst 14.0.0.70/24 proto tcp \
+     offload full dev eth4 dir in
+
+  ip x p add src 14.0.0.70 dst 14.0.0.52 offload full dev eth4 dir in
+   tmpl src 14.0.0.70 dst 14.0.0.52 proto esp reqid 10000 mode transport
+
 Yes, that's ugly, but that's what shell scripts and/or libreswan
 are for.
 
@@ -40,17 +63,24 @@ Callbacks to implement
 
   /* from include/linux/netdevice.h */
   struct xfrmdev_ops {
+        /* Crypto and Full offload callbacks */
 	int	(*xdo_dev_state_add) (struct xfrm_state *x);
 	void	(*xdo_dev_state_delete) (struct xfrm_state *x);
 	void	(*xdo_dev_state_free) (struct xfrm_state *x);
 	bool	(*xdo_dev_offload_ok) (struct sk_buff *skb,
 				       struct xfrm_state *x);
 	void    (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
+
+        /* Solely full offload callbacks */
+        void    (*xdo_dev_state_update_curlft) (struct xfrm_state *x);
+        int     (*xdo_dev_policy_add) (struct xfrm_policy *x);
+        void    (*xdo_dev_policy_delete) (struct xfrm_policy *x);
+        void    (*xdo_dev_policy_free) (struct xfrm_policy *x);
   };
 
-The NIC driver offering ipsec offload will need to implement these
-callbacks to make the offload available to the network stack's
-XFRM subsystem.  Additionally, the feature bits NETIF_F_HW_ESP and
+The NIC driver offering ipsec offload will need to implement the callbacks
+relevant to the supported offload to make the offload available to the network
+stack's XFRM subsystem.  Additionally, the feature bits NETIF_F_HW_ESP and
 NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.
 
@@ -79,7 +109,8 @@ and an indication of whether it is for Rx or Tx.  The driver should
 
 ===========   ===================================
 0             success
--EOPNETSUPP   offload not supported, try SW IPsec
+-EOPNETSUPP   offload not supported, try SW IPsec,
+              not applicable for full offload mode
 other         fail the request
 ===========   ===================================
 
@@ -96,6 +127,7 @@ will serviceable.  This can check the packet information to be sure the
 offload can be supported (e.g. IPv4 or IPv6, no IPv4 options, etc) and
 return true or false to signify its support.
 
+Crypto offload mode:
 When ready to send, the driver needs to inspect the Tx packet for the
 offload information, including the opaque context, and set up the packet
 send accordingly::
@@ -139,13 +171,25 @@ the stack in xfrm_input().
 
 In ESN mode, xdo_dev_state_advance_esn() is called from
 xfrm_replay_advance_esn().  Driver will check packet seq number and update
 HW ESN state machine if needed.
 
+Full offload mode:
+The HW adds and deletes XFRM headers. So in the RX path, the XFRM stack is
+bypassed if the HW reported success. In the TX path, the packet leaves the
+kernel without the extra header and unencrypted; the HW is responsible for
+adding the header and performing the encryption.
+
 When the SA is removed by the user, the driver's xdo_dev_state_delete()
-is asked to disable the offload.  Later, xdo_dev_state_free() is called
-from a garbage collection routine after all reference counts to the state
+and xdo_dev_policy_delete() are asked to disable the offload.  Later,
+xdo_dev_state_free() and xdo_dev_policy_free() are called from a garbage
+collection routine after all reference counts to the state and policy
 have been removed and any remaining resources can be cleared for the
 offload state.  How these are used by the driver will depend on specific
 hardware needs.
 
 As a netdev is set to DOWN the XFRM stack's netdev listener will call
-xdo_dev_state_delete() and xdo_dev_state_free() on any remaining offloaded
-states.
+xdo_dev_state_delete(), xdo_dev_policy_delete(), xdo_dev_state_free() and
+xdo_dev_policy_free() on any remaining offloaded states.
+
+As a consequence of the HW handling packets, the XFRM core can't count
+hard and soft limits. The HW/driver is responsible for doing so and for
+providing accurate data when xdo_dev_state_update_curlft() is called. In
+case one of these limits is reached, the driver needs to call
+xfrm_state_check_expire() to make sure that XFRM performs the rekeying
+sequence.
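[For illustration only, not from this series: a hypothetical driver
registration tying the documentation above together; example_* symbols
are placeholders for the driver's own implementations.]

  static const struct xfrmdev_ops example_xfrmdev_ops = {
  	/* crypto and full offload callbacks */
  	.xdo_dev_state_add		= example_xdo_dev_state_add,
  	.xdo_dev_state_delete		= example_xdo_dev_state_delete,
  	.xdo_dev_state_free		= example_xdo_dev_state_free,
  	.xdo_dev_offload_ok		= example_xdo_dev_offload_ok,
  	.xdo_dev_state_advance_esn	= example_xdo_dev_state_advance_esn,
  	/* full offload additions */
  	.xdo_dev_state_update_curlft	= example_xdo_dev_state_update_curlft,
  	.xdo_dev_policy_add		= example_xdo_dev_policy_add,
  	.xdo_dev_policy_delete		= example_xdo_dev_policy_delete,
  	.xdo_dev_policy_free		= example_xdo_dev_policy_free,
  };

  /* in the driver's probe path:
   *	netdev->xfrmdev_ops = &example_xfrmdev_ops;
   *	netdev->features   |= NETIF_F_HW_ESP;
   */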