
[iwl-next,v9,00/15] Introduce the Parser Library

Message ID 20230904021455.3944605-1-junfeng.guo@intel.com (mailing list archive)

Message

Guo, Junfeng Sept. 4, 2023, 2:14 a.m. UTC
The current software architecture for flow filtering offload limits
the capability of the Intel Ethernet 800 Series Dynamic Device
Personalization (DDP) package. Flow filtering offload in the driver is
enabled through named parsers, where each flow pattern is represented
by a protocol header stack. In addition, multiple layers (e.g.,
virtchnl) maintain their own enums/macros/structures to represent a
protocol header (IP, TCP, UDP ...), which requires extra parsers to
verify whether a pattern is supported by the hardware, as well as
extra converters to translate the representations between layers.
Every time support for a new protocol/field is requested, the
corresponding parser and converter logic has to be modified
accordingly. This results in large and redundant efforts to support
the growing set of flow filtering offload features, especially for
tunneled flow filtering.

This patch set provides a way for applications to send training
packets and masks (in binary) down to the driver. The driver then uses
this binary data to generate the data needed to create a filter rule
in the switch/RSS/FDIR filtering stages.
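
As a rough, purely illustrative sketch of what a "training packet" and
mask look like (the example data below is made up for this cover
letter, not taken from the series): the packet is a binary template
and the mask marks the bytes a rule should actually match on.

#include <linux/types.h>

/* Hypothetical example: Ethernet/IPv4/UDP template packet whose rule
 * should match only on the UDP destination port (4789, i.e. VXLAN).
 */
static const u8 example_pkt[] = {
        /* Ethernet: dst MAC, src MAC, ethertype 0x0800 (IPv4) */
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x08, 0x00,
        /* IPv4 header (20 bytes), protocol 17 (UDP) */
        0x45, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x40, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
        0x00, 0x00, 0x00, 0x00,
        /* UDP: src port 0, dst port 4789 (0x12b5), len, csum */
        0x00, 0x00, 0x12, 0xb5, 0x00, 0x00, 0x00, 0x00,
};

/* Byte mask: 0xff selects the fields to match, everything else is
 * wildcarded. Offsets 36/37 are the UDP destination port.
 */
static const u8 example_msk[sizeof(example_pkt)] = {
        [36] = 0xff, [37] = 0xff,
};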

Note that the impact of a malicious rule in the raw packet filter is
limited to performance rather than functionality. It may affect the
performance of the workload, similar to other limitations in FDIR/RSS
on AVF. For example, there is no resource boundary for VF FDIR/RSS
rules, so one malicious VF could potentially make other VFs
inefficient in offloading.

The parser library is expected to include boundary checks to prevent
critical errors such as infinite loops or segmentation faults.
However, implementing and validating the parser emulator only in a
sandbox environment (such as eBPF) presents a challenge.

The idea is to enable the driver to learn directly from the DDP
package how the hardware parser works (i.e., the Parser Library), so
that it can process the raw training packet (in binary) directly and
create the filter rule accordingly.

Based on this Parser Library, raw flow filtering for switch/RSS/FDIR
can be enabled, allowing new flow filtering offload features to be
supported without any driver changes (only the DDP package needs to be
updated).
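
As a rough sketch of how a caller inside the driver might use the
Parser Library (the function names below come from the patch titles in
this series, but the exact signatures, the ICE_BLK_SW argument and the
profile/result types as used here are assumptions for illustration,
not the definitive API):

/* Sketch only: signatures are assumed and may differ from the series. */
static int ice_example_raw_flow(struct ice_hw *hw, const u8 *pkt,
                                const u8 *msk, int len)
{
        struct ice_parser_profile prof = {};
        struct ice_parser_result rslt = {};
        struct ice_parser *psr;
        int err;

        /* Build the parser instance from the DDP package sections
         * (imem, metainit, parse graph CAM, boost TCAM, XLT key
         * builder, ...).
         */
        err = ice_parser_create(hw, &psr);
        if (err)
                return err;

        /* Run the emulated hardware parser over the raw training packet. */
        err = ice_parser_run(psr, pkt, len, &rslt);
        if (err)
                goto out;

        /* Combine the parser result with the mask to derive the profile
         * (ptype, protocol IDs, field vector offsets) that switch/RSS/FDIR
         * rule programming needs.
         */
        err = ice_parser_profile_init(&rslt, pkt, msk, len, ICE_BLK_SW,
                                      &prof);
out:
        ice_parser_destroy(psr);
        return err;
}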

v9:
- Remove 'inline' from functions in .c files.
- Refactor bitfield processing with FIELD_GET() (see the sketch just below).
- Fix out-of-bounds access of array key[].
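
For reference, FIELD_GET() from <linux/bitfield.h> extracts a bit
field through its mask instead of open-coded shift-and-mask; a generic
illustration (EXAMPLE_NODE_ID_M is a made-up mask, not code from this
series):

#include <linux/bitfield.h>

#define EXAMPLE_NODE_ID_M       GENMASK(10, 0)  /* bits 10:0 of a word */

static u16 example_get_node_id(u32 word)
{
        /* The shift is derived from the mask at compile time. */
        return FIELD_GET(EXAMPLE_NODE_ID_M, word);
}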

v8: https://lore.kernel.org/netdev/20230824075500.1735790-1-junfeng.guo@intel.com/
- Refactor the bit-reversal functions to use the existing
  bitrev8()/bitrev8x4() (see the sketch below).
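
For reference, bitrev8() from <linux/bitrev.h> reverses the bit order
within a byte (bitrev8x4() does the same for each byte of a 32-bit
word); a generic illustration, not code from this series:

#include <linux/bitrev.h>

static u8 example_reverse_key_byte(u8 key)
{
        /* e.g. 0x01 (0b00000001) becomes 0x80 (0b10000000) */
        return bitrev8(key);
}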

v7: https://lore.kernel.org/netdev/20230823093158.782802-1-junfeng.guo@intel.com/
- Move/add the macros below to the commit where they first appear:
  ICE_PARSER_FLG_NUM and ICE_ERR_NOT_IMPL.

v6: https://lore.kernel.org/netdev/20230821081438.2937934-1-junfeng.guo@intel.com/
- Move the `rt` field setting to the commit where it is first introduced.

v5: https://lore.kernel.org/netdev/20230821023833.2700902-1-junfeng.guo@intel.com/
- Update copyrights of new files to be 2023 only.
- Update patch set series prefix.
- Fix typo on patch 2 commit message.

v4: https://lore.kernel.org/intel-wired-lan/20230817094240.2584745-1-junfeng.guo@intel.com/
- Update cover letter series title.

v3: https://lore.kernel.org/intel-wired-lan/20230817093442.2576997-1-junfeng.guo@intel.com/
- Replace magic hardcoded values with macros.
- Use size_t to avoid superfluous type cast to uintptr_t in function
  ice_parser_sect_item_get.
- Add a prefix to static local function names to avoid namespace pollution.
- Use strstarts() function instead of self implementation.

v2: https://lore.kernel.org/intel-wired-lan/20230605054641.2865142-1-junfeng.guo@intel.com/
- Fix build warnings.

Junfeng Guo (15):
  ice: add parser create and destroy skeleton
  ice: init imem table for parser
  ice: init metainit table for parser
  ice: init parse graph cam tables for parser
  ice: init boost tcam and label tables for parser
  ice: init ptype marker tcam table for parser
  ice: init marker and protocol group tables for parser
  ice: init flag redirect table for parser
  ice: init XLT key builder for parser
  ice: add parser runtime skeleton
  ice: add internal help functions
  ice: add parser execution main loop
  ice: support double vlan mode configure for parser
  ice: add tunnel port support for parser
  ice: add API for parser profile initialization

 drivers/net/ethernet/intel/ice/Makefile       |  11 +
 drivers/net/ethernet/intel/ice/ice_bst_tcam.c | 353 ++++++++
 drivers/net/ethernet/intel/ice/ice_bst_tcam.h |  40 +
 drivers/net/ethernet/intel/ice/ice_common.h   |   3 +
 drivers/net/ethernet/intel/ice/ice_ddp.c      |  10 +-
 drivers/net/ethernet/intel/ice/ice_ddp.h      |  14 +
 drivers/net/ethernet/intel/ice/ice_flg_rd.c   |  77 ++
 drivers/net/ethernet/intel/ice/ice_flg_rd.h   |  19 +
 drivers/net/ethernet/intel/ice/ice_imem.c     | 324 +++++++
 drivers/net/ethernet/intel/ice/ice_imem.h     | 132 +++
 drivers/net/ethernet/intel/ice/ice_metainit.c | 193 ++++
 drivers/net/ethernet/intel/ice/ice_metainit.h |  47 +
 drivers/net/ethernet/intel/ice/ice_mk_grp.c   |  51 ++
 drivers/net/ethernet/intel/ice/ice_mk_grp.h   |  17 +
 drivers/net/ethernet/intel/ice/ice_parser.c   | 562 ++++++++++++
 drivers/net/ethernet/intel/ice/ice_parser.h   | 140 +++
 .../net/ethernet/intel/ice/ice_parser_rt.c    | 841 ++++++++++++++++++
 .../net/ethernet/intel/ice/ice_parser_rt.h    |  73 ++
 .../net/ethernet/intel/ice/ice_parser_util.h  |  37 +
 drivers/net/ethernet/intel/ice/ice_pg_cam.c   | 444 +++++++++
 drivers/net/ethernet/intel/ice/ice_pg_cam.h   |  73 ++
 .../net/ethernet/intel/ice/ice_proto_grp.c    |  94 ++
 .../net/ethernet/intel/ice/ice_proto_grp.h    |  24 +
 drivers/net/ethernet/intel/ice/ice_ptype_mk.c |  73 ++
 drivers/net/ethernet/intel/ice/ice_ptype_mk.h |  23 +
 drivers/net/ethernet/intel/ice/ice_tmatch.h   |  40 +
 drivers/net/ethernet/intel/ice/ice_type.h     |   1 +
 drivers/net/ethernet/intel/ice/ice_xlt_kb.c   | 259 ++++++
 drivers/net/ethernet/intel/ice/ice_xlt_kb.h   |  34 +
 29 files changed, 4004 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_bst_tcam.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_bst_tcam.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_flg_rd.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_flg_rd.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_imem.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_imem.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_metainit.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_metainit.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_mk_grp.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_mk_grp.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_parser.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_parser.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_parser_rt.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_parser_rt.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_parser_util.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_pg_cam.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_pg_cam.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_proto_grp.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_proto_grp.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_ptype_mk.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_ptype_mk.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_tmatch.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_xlt_kb.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_xlt_kb.h

Comments

Jakub Kicinski Sept. 5, 2023, 10:37 p.m. UTC | #1
On Mon,  4 Sep 2023 10:14:40 +0800 Junfeng Guo wrote:
> Current software architecture for flow filtering offloading limited
> the capability of Intel Ethernet 800 Series Dynamic Device
> Personalization (DDP) Package. The flow filtering offloading in the
> driver is enabled based on the naming parsers, each flow pattern is
> represented by a protocol header stack. And there are multiple layers
> (e.g., virtchnl) to maintain their own enum/macro/structure
> to represent a protocol header (IP, TCP, UDP ...), thus the extra
> parsers to verify if a pattern is supported by hardware or not as
> well as the extra converters that to translate represents between
> different layers. Every time a new protocol/field is requested to be
> supported, the corresponding logic for the parsers and the converters
> needs to be modified accordingly. Thus, huge & redundant efforts are
> required to support the increasing flow filtering offloading features,
> especially for the tunnel types flow filtering.

Are you talking about problems internal to ICE or the flower interface?

> This patch set provides a way for applications to send down training
> packets & masks (in binary) to the driver. Then these binary data
> would be used by the driver to generate certain data that are needed
> to create a filter rule in the filtering stage of switch/RSS/FDIR.

What's the API for the user? I see a whole bunch of functions added here
which never get called.

> Note that the impact of a malicious rule in the raw packet filter is
> limited to performance rather than functionality. It may affect the
> performance of the workload, similar to other limitations in FDIR/RSS
> on AVF. For example, there is no resource boundary for VF FDIR/RSS
> rules, so one malicious VF could potentially make other VFs
> inefficient in offloading.
> 
> The parser library is expected to include boundary checks to prevent
> critical errors such as infinite loops or segmentation faults.
> However, only implementing and validating the parser emulator in a
> sandbox environment (like ebpf) presents a challenge.
> 
> The idea is to make the driver be able to learn from the DDP package
> directly to understand how the hardware parser works (i.e., the
> Parser Library), so that it can process on the raw training packet
> (in binary) directly and create the filter rule accordingly.

No idea what this means in terms of the larger networking stack.

> Based on this Parser Library, the raw flow filtering of
> switch/RSS/FDIR could be enabled to allow new flow filtering
> offloading features to be supported without any driver changes (only
> need to update the DDP package).

Sounds like you are talking about some vague "vision" rather than 
the code you're actually posting.

Given that you've posted 5 versions of this to netdev and got no
notable comments, please don't CC netdev on the next version
until you get some reviews inside Intel. Stuff like:

+#define ICE_ERR_NOT_IMPL		-1

should get caught by internal review.
Tom Herbert Sept. 5, 2023, 11:05 p.m. UTC | #2
On Tue, Sep 5, 2023 at 3:37 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon,  4 Sep 2023 10:14:40 +0800 Junfeng Guo wrote:
> > Current software architecture for flow filtering offloading limited
> > the capability of Intel Ethernet 800 Series Dynamic Device
> > Personalization (DDP) Package. The flow filtering offloading in the
> > driver is enabled based on the naming parsers, each flow pattern is
> > represented by a protocol header stack. And there are multiple layers
> > (e.g., virtchnl) to maintain their own enum/macro/structure
> > to represent a protocol header (IP, TCP, UDP ...), thus the extra
> > parsers to verify if a pattern is supported by hardware or not as
> > well as the extra converters that to translate represents between
> > different layers. Every time a new protocol/field is requested to be
> > supported, the corresponding logic for the parsers and the converters
> > needs to be modified accordingly. Thus, huge & redundant efforts are
> > required to support the increasing flow filtering offloading features,
> > especially for the tunnel types flow filtering.
>
> Are you talking about problems internal to ICE or the flower interface?
>
> > This patch set provides a way for applications to send down training
> > packets & masks (in binary) to the driver. Then these binary data
> > would be used by the driver to generate certain data that are needed
> > to create a filter rule in the filtering stage of switch/RSS/FDIR.
>
> What's the API for the user? I see a whole bunch of functions added here
> which never get called.
>
> > Note that the impact of a malicious rule in the raw packet filter is
> > limited to performance rather than functionality. It may affect the
> > performance of the workload, similar to other limitations in FDIR/RSS
> > on AVF. For example, there is no resource boundary for VF FDIR/RSS
> > rules, so one malicious VF could potentially make other VFs
> > inefficient in offloading.
> >
> > The parser library is expected to include boundary checks to prevent
> > critical errors such as infinite loops or segmentation faults.
> > However, only implementing and validating the parser emulator in a
> > sandbox environment (like ebpf) presents a challenge.
> >
> > The idea is to make the driver be able to learn from the DDP package
> > directly to understand how the hardware parser works (i.e., the
> > Parser Library), so that it can process on the raw training packet
> > (in binary) directly and create the filter rule accordingly.
>
> No idea what this means in terms of the larger networking stack.
>

Yes, creating an elaborate mechanism that is only usable by one
vendor, e.g. a feature of DDP, really isn't very helpful. Parsing is a
very common operation in the networking stack, and something with the
vainglorious name of "Parser Library" really should start off as a
common, foundational, vendor-agnostic library that solves the larger
problem and provides the most utility. The common components would
define consistent user and kernel interfaces for parser offload, and
interfaces into the NIC drivers would be defined to allow different
vendors to implement parser offload in their devices.

The concepts in the kParser patch "net/kparser: add kParser" were
aligned with what the backend of the Parser Library might be. That
patch introduced iproute commands to program an in-kernel parser
extensible to support arbitrary protocols (including constructs like
TLVs, flag fields, and now even nested TLVs). It is quite conceivable
that these commands could be sent to the device to achieve
programmable parser offload.

Tom


> > Based on this Parser Library, the raw flow filtering of
> > switch/RSS/FDIR could be enabled to allow new flow filtering
> > offloading features to be supported without any driver changes (only
> > need to update the DDP package).
>
> Sounds like you are talking about some vague "vision" rather than
> the code you're actually posting.
>
> Given that you've posted 5 versions of this to netdev and got no
> notable comments, please don't CC netdev on the next version
> until you get some reviews inside Intel. Stuff like:
>
> +#define ICE_ERR_NOT_IMPL               -1
>
> should get caught by internal review.
>
Samudrala, Sridhar Sept. 7, 2023, 7:08 p.m. UTC | #3
On 9/5/2023 5:37 PM, Jakub Kicinski wrote:
> On Mon,  4 Sep 2023 10:14:40 +0800 Junfeng Guo wrote:
>> Current software architecture for flow filtering offloading limited
>> the capability of Intel Ethernet 800 Series Dynamic Device
>> Personalization (DDP) Package. The flow filtering offloading in the
>> driver is enabled based on the naming parsers, each flow pattern is
>> represented by a protocol header stack. And there are multiple layers
>> (e.g., virtchnl) to maintain their own enum/macro/structure
>> to represent a protocol header (IP, TCP, UDP ...), thus the extra
>> parsers to verify if a pattern is supported by hardware or not as
>> well as the extra converters that to translate represents between
>> different layers. Every time a new protocol/field is requested to be
>> supported, the corresponding logic for the parsers and the converters
>> needs to be modified accordingly. Thus, huge & redundant efforts are
>> required to support the increasing flow filtering offloading features,
>> especially for the tunnel types flow filtering.
> 
> Are you talking about problems internal to ICE or the flower interface?

This is specific to the ice driver implementation. A package called
Dynamic Device Personalization (DDP) is loaded by the driver at probe
time to configure the HW pipeline.
Today, the driver has a non-scalable implementation that needs to be 
changed whenever the DDP package extends the capabilities supported by 
our HW pipeline. We will re-word the problem statement in the next revision.

> 
>> This patch set provides a way for applications to send down training
>> packets & masks (in binary) to the driver. Then these binary data
>> would be used by the driver to generate certain data that are needed
>> to create a filter rule in the filtering stage of switch/RSS/FDIR.
> 
> What's the API for the user? I see a whole bunch of functions added here
> which never get called.

This link shows an early version of a user of this patch series
	https://lore.kernel.org/intel-wired-lan/20230818064703.154183-1-junfeng.guo@intel.com/

This API is planned to be exposed to VF drivers via the virtchnl
interface to pass raw training packets and masks. A VF using this API
can only steer RX traffic directed to that VF to its own queues.
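
As a purely hypothetical illustration of the kind of data such a rule
would carry (this is not the virtchnl definition from the series
linked above; the struct name, field names and layout are made up):

struct example_raw_flow_rule {
        u16 pkt_len;    /* length of both buffers below */
        u8 pkt[256];    /* raw training packet, in binary */
        u8 msk[256];    /* byte mask: non-zero bytes select match fields */
        u16 queue_id;   /* RX queue of the requesting VF to steer to */
};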

> 
>> Note that the impact of a malicious rule in the raw packet filter is
>> limited to performance rather than functionality. It may affect the
>> performance of the workload, similar to other limitations in FDIR/RSS
>> on AVF. For example, there is no resource boundary for VF FDIR/RSS
>> rules, so one malicious VF could potentially make other VFs
>> inefficient in offloading.
>>
>> The parser library is expected to include boundary checks to prevent
>> critical errors such as infinite loops or segmentation faults.
>> However, only implementing and validating the parser emulator in a
>> sandbox environment (like ebpf) presents a challenge.
>>
>> The idea is to make the driver be able to learn from the DDP package
>> directly to understand how the hardware parser works (i.e., the
>> Parser Library), so that it can process on the raw training packet
>> (in binary) directly and create the filter rule accordingly.
> 
> No idea what this means in terms of the larger networking stack.

I think the usage of 'parser library' is misleading. This is not a 
generic library, but specific to ice and enables the driver to learn the 
hw parser capabilities from the DDP package that is downloaded to HW. 
This information along with the raw packet/mask is used to figure out 
all the metadata required to add a filter rule.

> 
>> Based on this Parser Library, the raw flow filtering of
>> switch/RSS/FDIR could be enabled to allow new flow filtering
>> offloading features to be supported without any driver changes (only
>> need to update the DDP package).
> 
> Sounds like you are talking about some vague "vision" rather than
> the code you're actually posting.
> 
> Given that you've posted 5 versions of this to netdev and got no
> notable comments, please don't CC netdev on the next version
> until you get some reviews inside Intel. Stuff like:
> 
> +#define ICE_ERR_NOT_IMPL		-1
> 
> should get caught by internal review.
> 
Will do more internal reviews before submitting the next revision.

- Sridhar & Jesse
Samudrala, Sridhar Sept. 7, 2023, 7:10 p.m. UTC | #4
On 9/5/2023 6:05 PM, Tom Herbert wrote:
<snip>

> Yes, creating an elaborate mechanism that is only usable for one
> vendor, e.g. a feature of DDP, really isn't very helpful. Parsing is a
> very common operation in the networking stack, and if there's
> something with the vanglorious name of "Parser Library" really should
> start off as being a common, foundational, vendor agnostic library to
> solve the larger problem and provide the most utility. The common
> components would define consistent user and kernel interfaces for
> parser offload, interfaces into the NIC drivers would be defined to
> allow different vendors to implement parser offload in their devices.

I think naming this framework as 'parser library' may have caused the 
misunderstanding. Will fix in the next revision. This is not a generic 
network packet parser and is not applicable to the kernel flow dissector. It is 
specific to ice and enables the driver to learn the hardware parser 
capabilities from the DDP package that is downloaded to hardware. This 
information along with the raw packet/mask is used to figure out all the 
metadata required to add a filter rule.
Jakub Kicinski Sept. 7, 2023, 7:38 p.m. UTC | #5
On Thu, 7 Sep 2023 14:08:15 -0500 Samudrala, Sridhar wrote:
> >> This patch set provides a way for applications to send down training
> >> packets & masks (in binary) to the driver. Then these binary data
> >> would be used by the driver to generate certain data that are needed
> >> to create a filter rule in the filtering stage of switch/RSS/FDIR.  
> > 
> > What's the API for the user? I see a whole bunch of functions added here
> > which never get called.  
> 
> This link shows an early version of a user of this patch series
> 	https://lore.kernel.org/intel-wired-lan/20230818064703.154183-1-junfeng.guo@intel.com/
> 
> This API is planned to be exposed to VF drivers via virtchnl interface 
> to pass raw training packets and masks. The VF using this API can only 
> steer RX traffic directed that VF to its own queues.

FWIW I have no idea what a "training packet and mask" is either.
Hopefully next version will come with a _much_ clearer high
level explanation.
Tom Herbert Sept. 9, 2023, 5:34 p.m. UTC | #6
On Thu, Sep 7, 2023 at 12:10 PM Samudrala, Sridhar
<sridhar.samudrala@intel.com> wrote:
>
>
>
> On 9/5/2023 6:05 PM, Tom Herbert wrote:
> <snip>
>
> > Yes, creating an elaborate mechanism that is only usable for one
> > vendor, e.g. a feature of DDP, really isn't very helpful. Parsing is a
> > very common operation in the networking stack, and if there's
> > something with the vanglorious name of "Parser Library" really should
> > start off as being a common, foundational, vendor agnostic library to
> > solve the larger problem and provide the most utility. The common
> > components would define consistent user and kernel interfaces for
> > parser offload, interfaces into the NIC drivers would be defined to
> > allow different vendors to implement parser offload in their devices.
>
> I think naming this framework as 'parser library' may have caused the
> misunderstanding. Will fix in the next revision. This is not a generic
> network packet parser and not applicable to kernel flow dissector. It is
> specific to ice and enables the driver to learn the hardware parser
> capabilities from the DDP package that is downloaded to hardware. This
> information along with the raw packet/mask is used to figure out all the
> metadata required to add a filter rule.

Sridhar,

Okay, the DDP includes a programmable parser to some extent, and these
patches add the driver logic to support that programmable hardware
parser in ICE. It's still unclear to me how the rest of the world will
use this. When you say the information "is used to figure out all
the metadata required to add a filter rule", who is adding these
filter rules and what APIs are they using? Considering you mention
it's not applicable to the kernel flow dissector, that leads me to
believe you're viewing hardware parser capabilities as independent of
the kernel and might even be using vendor-proprietary tools to program
the parser. But as I said, hardware parsers are becoming common, and
users benefit if we can provide common and consistent tools to program
and use them.

For instance, the draft mentions the Flow Director use case. How does
the user program the device for a new protocol in Flow Director? Do
you expect this to be done using common APIs, or would you use some
common API like TC Flower offload? And note that while Flow Director
might be Intel-specific and not visible to the kernel, something like
aRFS is visible to the kernel and could benefit from a programmable
hardware parser as well. And really, when you think about it, what we
really want for RSS, Flow Director, and aRFS is *exactly* an offload
of the kernel flow dissector, because those are effectively offloads
of RPS and RFS, which rely on the flow dissector for packet steering
in the host (in fact, the very first flow dissector was created
exactly to produce a packet hash for use with RPS).

Wrt flow dissector, the missing piece is that it's not user
programmable; every time we add a new protocol it's a major pain, and
there's no way for users to add their own custom protocols. Frankly,
it's also spaghetti code that is prone to bugs (I take liberty in
calling it spaghetti code because I am one of the parties responsible
for creating it :-) ). We are working to completely replace the flow
dissector with an eBPF program to solve that. I don't believe we
should force devices to run an eBPF VM, so in order to do parser
offload we can start with a higher layer abstraction of the parser in
a declarative representation (for instance, see the Common Parser
Language I presented at netdev conference). Given the abstracted
program, the idea is that a compiler could produce the instructions to
program the hardware parser in a device with the exact same
functionality that we'd have in a programmable kernel flow dissector.
In this way, we can achieve a proper parser offload.

So I think one of the requirements in hardware parsers is to offload
flow dissector. If that is the requirement, do you think these patches
are aligned with that?

Tom
Samudrala, Sridhar Sept. 12, 2023, 8:37 p.m. UTC | #7
On 9/9/2023 12:34 PM, Tom Herbert wrote:
> On Thu, Sep 7, 2023 at 12:10 PM Samudrala, Sridhar
> <sridhar.samudrala@intel.com> wrote:
>>
>>
>>
>> On 9/5/2023 6:05 PM, Tom Herbert wrote:
>> <snip>
>>
>>> Yes, creating an elaborate mechanism that is only usable for one
>>> vendor, e.g. a feature of DDP, really isn't very helpful. Parsing is a
>>> very common operation in the networking stack, and if there's
>>> something with the vanglorious name of "Parser Library" really should
>>> start off as being a common, foundational, vendor agnostic library to
>>> solve the larger problem and provide the most utility. The common
>>> components would define consistent user and kernel interfaces for
>>> parser offload, interfaces into the NIC drivers would be defined to
>>> allow different vendors to implement parser offload in their devices.
>>
>> I think naming this framework as 'parser library' may have caused the
>> misunderstanding. Will fix in the next revision. This is not a generic
>> network packet parser and not applicable to kernel flow dissector. It is
>> specific to ice and enables the driver to learn the hardware parser
>> capabilities from the DDP package that is downloaded to hardware. This
>> information along with the raw packet/mask is used to figure out all the
>> metadata required to add a filter rule.
> 
> Sridhar,
> 
> Okay, the DDP includes a programmable parser to some extent, and these
> patches support the driver logic to support that programmable hardware
> parser in ICE. It's still unclear to me how the rest of the world will
> use this. When you say you the information "is used to figure out all
> the metadata required to add a filter rule", who is adding these
> filter rules and what APIs are they using? 

The filter rules are added by non-Linux VF drivers that provide a
user API to pass a raw packet along with a mask indicating the packet
header fields to be matched. The VF driver passes this rule to the PF
driver via the VF<->PF mailbox using the virtchnl API.

> Considering you mention
> it's not applicable to kernel flow dissector that leads me to believe
> that you're viewing hardware parser capabilities to be independent of
> the kernel and might even be using vendor proprietary tools to program
> the parser. But as I said, hardware parsers are becoming common, users
> benefit if we can provide common and consistent tools to program and
> use them.

Sure. But at this time this patch series does not enable parser
offload or configuration of the parser. It only makes rule programming
more flexible.