[iwl-next,v4,05/12] ice: Log virtual channel messages in PF

Message ID 20231121025111.257597-6-yahui.cao@intel.com (mailing list archive)
State New, archived
Series Add E800 live migration driver

Commit Message

Cao, Yahui Nov. 21, 2023, 2:51 a.m. UTC
From: Lingyu Liu <lingyu.liu@intel.com>

Save the virtual channel messages sent by VF on the source side during
runtime. The logged virtchnl messages will be transferred and loaded
into the device on the destination side during the device resume stage.

Features which cannot be migrated yet must be disabled or blocked to
prevent them from being abused by the VF. Otherwise, they may introduce
functional and security issues. Mask unsupported VF capability flags in
the VF-PF negotiation stage.

Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
Signed-off-by: Yahui Cao <yahui.cao@intel.com>
---
 .../net/ethernet/intel/ice/ice_migration.c    | 167 ++++++++++++++++++
 .../intel/ice/ice_migration_private.h         |  17 ++
 drivers/net/ethernet/intel/ice/ice_vf_lib.h   |   5 +
 drivers/net/ethernet/intel/ice/ice_virtchnl.c |  31 ++++
 4 files changed, 220 insertions(+)
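The save-and-replay flow described above can be modeled in a few lines of userspace C. This is an illustrative sketch, not driver code: the structure names and the list handling are simplified stand-ins for the `ice_migration_virtchnl_msg_listnode` machinery in the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical userspace model of what the patch does: copy each
 * loggable virtchnl message into a flexible-array node and chain it
 * onto a per-VF list so it can be replayed on the destination side. */
struct msg_node {
	struct msg_node *next;
	uint32_t opcode;
	uint16_t msg_len;
	char msg_buffer[];	/* payload copied from the AQ event */
};

struct vf_log {
	struct msg_node *head, *tail;
	uint64_t msg_num;	/* count of logged messages */
	uint64_t msg_size;	/* total payload bytes logged */
};

/* Append one message; returns 0 on success, -1 on allocation failure
 * (mirroring the -ENOMEM path in ice_migration_log_vf_msg()). */
static inline int log_msg(struct vf_log *log, uint32_t opcode,
			  const void *buf, uint16_t len)
{
	struct msg_node *n = calloc(1, sizeof(*n) + len);

	if (!n)
		return -1;
	n->opcode = opcode;
	n->msg_len = len;
	memcpy(n->msg_buffer, buf, len);
	if (log->tail)
		log->tail->next = n;
	else
		log->head = n;
	log->tail = n;
	log->msg_num++;
	log->msg_size += len;
	return 0;
}

/* Replay: walk the list in arrival order, handing each message to a
 * caller-supplied loader, as the destination would at resume time. */
static inline void replay(const struct vf_log *log,
			  void (*load)(uint32_t, const void *, uint16_t))
{
	const struct msg_node *n;

	for (n = log->head; n; n = n->next)
		load(n->opcode, n->msg_buffer, n->msg_len);
}
```

The key design point, preserved from the patch, is that messages must be replayed in exactly the order the VF sent them, since later virtchnl commands depend on earlier ones (e.g. queues must be configured before they are enabled).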

Comments

Simon Horman Nov. 29, 2023, 5:12 p.m. UTC | #1
On Tue, Nov 21, 2023 at 02:51:04AM +0000, Yahui Cao wrote:
> From: Lingyu Liu <lingyu.liu@intel.com>
> 
> Save the virtual channel messages sent by VF on the source side during
> runtime. The logged virtchnl messages will be transferred and loaded
> into the device on the destination side during the device resume stage.
> 
> For the feature which can not be migrated yet, it must be disabled or
> blocked to prevent from being abused by VF. Otherwise, it may introduce
> functional and security issue. Mask unsupported VF capability flags in
> the VF-PF negotiaion stage.
> 
> Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
> Signed-off-by: Yahui Cao <yahui.cao@intel.com>

Hi Lingyu Liu and Yahui Cao,

some minor feedback from my side.

...

> diff --git a/drivers/net/ethernet/intel/ice/ice_migration.c b/drivers/net/ethernet/intel/ice/ice_migration.c

...

> +/**
> + * ice_migration_log_vf_msg - Log request message from VF
> + * @vf: pointer to the VF structure
> + * @event: pointer to the AQ event
> + *
> + * Log VF message for later device state loading during live migration
> + *
> + * Return 0 for success, negative for error
> + */
> +int ice_migration_log_vf_msg(struct ice_vf *vf,
> +			     struct ice_rq_event_info *event)
> +{
> +	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
> +	u32 v_opcode = le32_to_cpu(event->desc.cookie_high);
> +	struct device *dev = ice_pf_to_dev(vf->pf);
> +	u16 msglen = event->msg_len;
> +	u8 *msg = event->msg_buf;
> +
> +	if (!ice_migration_is_loggable_msg(v_opcode))
> +		return 0;
> +
> +	if (vf->virtchnl_msg_num >= VIRTCHNL_MSG_MAX) {
> +		dev_warn(dev, "VF %d has maximum number virtual channel commands\n",
> +			 vf->vf_id);
> +		return -ENOMEM;
> +	}
> +
> +	msg_listnode = (struct ice_migration_virtchnl_msg_listnode *)
> +			kzalloc(struct_size(msg_listnode,
> +					    msg_slot.msg_buffer,
> +					    msglen),
> +				GFP_KERNEL);

nit: there is no need to cast the void * pointer returned by kzalloc().

Flagged by Coccinelle.

> +	if (!msg_listnode) {
> +		dev_err(dev, "VF %d failed to allocate memory for msg listnode\n",
> +			vf->vf_id);
> +		return -ENOMEM;
> +	}
> +	dev_dbg(dev, "VF %d save virtual channel command, op code: %d, len: %d\n",
> +		vf->vf_id, v_opcode, msglen);
> +	msg_listnode->msg_slot.opcode = v_opcode;
> +	msg_listnode->msg_slot.msg_len = msglen;
> +	memcpy(msg_listnode->msg_slot.msg_buffer, msg, msglen);
> +	list_add_tail(&msg_listnode->node, &vf->virtchnl_msg_list);
> +	vf->virtchnl_msg_num++;
> +	vf->virtchnl_msg_size += struct_size(&msg_listnode->msg_slot,
> +					     msg_buffer,
> +					     msglen);
> +	return 0;
> +}
> +
> +/**
> + * ice_migration_unlog_vf_msg - revert logged message
> + * @vf: pointer to the VF structure
> + * @v_opcode: virtchnl message operation code
> + *
> + * Remove the last virtual channel message logged before.
> + */
> +void ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode)
> +{
> +	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
> +
> +	if (!ice_migration_is_loggable_msg(v_opcode))
> +		return;
> +
> +	if (WARN_ON_ONCE(list_empty(&vf->virtchnl_msg_list)))
> +		return;
> +
> +	msg_listnode =
> +		list_last_entry(&vf->virtchnl_msg_list,
> +				struct ice_migration_virtchnl_msg_listnode,
> +				node);
> +	if (WARN_ON_ONCE(msg_listnode->msg_slot.opcode != v_opcode))
> +		return;
> +
> +	list_del(&msg_listnode->node);
> +	kfree(msg_listnode);

msg_listnode is freed on the line above,
but dereferenced in the usage of struct_size() below.

As flagged by Smatch and Coccinelle.

> +	vf->virtchnl_msg_num--;
> +	vf->virtchnl_msg_size -= struct_size(&msg_listnode->msg_slot,
> +					     msg_buffer,
> +					     msg_listnode->msg_slot.msg_len);
> +}

...
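The ordering fix implied by the use-after-free report above can be shown with a small standalone sketch (illustrative names, not the driver's): read every field of the node before freeing it, and only then release the memory.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the driver's msg_slot/listnode types. */
struct slot { unsigned short msg_len; char buf[]; };
struct node { struct node *next; struct slot slot; };

/* Pop the head node and subtract its size from the running total.
 * The size is computed while the node is still live ... */
static inline size_t pop_and_account(struct node **head, size_t *total)
{
	struct node *n = *head;
	size_t sz;

	if (!n)
		return 0;
	sz = sizeof(n->slot) + n->slot.msg_len;
	*total -= sz;
	*head = n->next;
	/* ... and only then is the memory released */
	free(n);
	return sz;
}
```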
Cao, Yahui Dec. 1, 2023, 8:27 a.m. UTC | #2
On 11/30/2023 1:12 AM, Simon Horman wrote:
> On Tue, Nov 21, 2023 at 02:51:04AM +0000, Yahui Cao wrote:
>> From: Lingyu Liu <lingyu.liu@intel.com>
>>
>> Save the virtual channel messages sent by VF on the source side during
>> runtime. The logged virtchnl messages will be transferred and loaded
>> into the device on the destination side during the device resume stage.
>>
>> For the feature which can not be migrated yet, it must be disabled or
>> blocked to prevent from being abused by VF. Otherwise, it may introduce
>> functional and security issue. Mask unsupported VF capability flags in
>> the VF-PF negotiaion stage.
>>
>> Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
>> Signed-off-by: Yahui Cao <yahui.cao@intel.com>
> 
> Hi Lingyu Liu and Yahui Cao,
> 
> some minor feedback from my side.
> 
> ...
> 
>> diff --git a/drivers/net/ethernet/intel/ice/ice_migration.c b/drivers/net/ethernet/intel/ice/ice_migration.c
> 
> ...
> 
>> +/**
>> + * ice_migration_log_vf_msg - Log request message from VF
>> + * @vf: pointer to the VF structure
>> + * @event: pointer to the AQ event
>> + *
>> + * Log VF message for later device state loading during live migration
>> + *
>> + * Return 0 for success, negative for error
>> + */
>> +int ice_migration_log_vf_msg(struct ice_vf *vf,
>> +			     struct ice_rq_event_info *event)
>> +{
>> +	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
>> +	u32 v_opcode = le32_to_cpu(event->desc.cookie_high);
>> +	struct device *dev = ice_pf_to_dev(vf->pf);
>> +	u16 msglen = event->msg_len;
>> +	u8 *msg = event->msg_buf;
>> +
>> +	if (!ice_migration_is_loggable_msg(v_opcode))
>> +		return 0;
>> +
>> +	if (vf->virtchnl_msg_num >= VIRTCHNL_MSG_MAX) {
>> +		dev_warn(dev, "VF %d has maximum number virtual channel commands\n",
>> +			 vf->vf_id);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	msg_listnode = (struct ice_migration_virtchnl_msg_listnode *)
>> +			kzalloc(struct_size(msg_listnode,
>> +					    msg_slot.msg_buffer,
>> +					    msglen),
>> +				GFP_KERNEL);
> 
> nit: there is no need to cast the void * pointer returned by kzalloc().
> 
> Flagged by Coccinelle.

Sure. Will fix in next version.

> 
>> +	if (!msg_listnode) {
>> +		dev_err(dev, "VF %d failed to allocate memory for msg listnode\n",
>> +			vf->vf_id);
>> +		return -ENOMEM;
>> +	}
>> +	dev_dbg(dev, "VF %d save virtual channel command, op code: %d, len: %d\n",
>> +		vf->vf_id, v_opcode, msglen);
>> +	msg_listnode->msg_slot.opcode = v_opcode;
>> +	msg_listnode->msg_slot.msg_len = msglen;
>> +	memcpy(msg_listnode->msg_slot.msg_buffer, msg, msglen);
>> +	list_add_tail(&msg_listnode->node, &vf->virtchnl_msg_list);
>> +	vf->virtchnl_msg_num++;
>> +	vf->virtchnl_msg_size += struct_size(&msg_listnode->msg_slot,
>> +					     msg_buffer,
>> +					     msglen);
>> +	return 0;
>> +}
>> +
>> +/**
>> + * ice_migration_unlog_vf_msg - revert logged message
>> + * @vf: pointer to the VF structure
>> + * @v_opcode: virtchnl message operation code
>> + *
>> + * Remove the last virtual channel message logged before.
>> + */
>> +void ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode)
>> +{
>> +	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
>> +
>> +	if (!ice_migration_is_loggable_msg(v_opcode))
>> +		return;
>> +
>> +	if (WARN_ON_ONCE(list_empty(&vf->virtchnl_msg_list)))
>> +		return;
>> +
>> +	msg_listnode =
>> +		list_last_entry(&vf->virtchnl_msg_list,
>> +				struct ice_migration_virtchnl_msg_listnode,
>> +				node);
>> +	if (WARN_ON_ONCE(msg_listnode->msg_slot.opcode != v_opcode))
>> +		return;
>> +
>> +	list_del(&msg_listnode->node);
>> +	kfree(msg_listnode);
> 
> msg_listnode is freed on the line above,
> but dereferenced in the usage of struct_size() below.
> 
> As flagged by Smatch and Coccinelle.
> 
>> +	vf->virtchnl_msg_num--;
>> +	vf->virtchnl_msg_size -= struct_size(&msg_listnode->msg_slot,
>> +					     msg_buffer,
>> +					     msg_listnode->msg_slot.msg_len);
>> +}
> 
> ...
> 

Good catch :) Will fix in next version

Thanks.
Yahui.
Tian, Kevin Dec. 7, 2023, 7:33 a.m. UTC | #3
> From: Cao, Yahui <yahui.cao@intel.com>
> Sent: Tuesday, November 21, 2023 10:51 AM
> @@ -4037,6 +4045,17 @@ void ice_vc_process_vf_msg(struct ice_pf *pf,
> struct ice_rq_event_info *event,
>  		goto finish;
>  	}
> 
> +	if (vf->migration_enabled) {
> +		if (ice_migration_log_vf_msg(vf, event)) {
> +			u32 status_code =
> VIRTCHNL_STATUS_ERR_NO_MEMORY;
> +
> +			err = ice_vc_respond_to_vf(vf, v_opcode,
> +						   status_code,
> +						   NULL, 0);
> +			goto finish;
> +		}
> +	}
> +

I'm not sure it's a good thing to fail guest just because the message
cannot be logged for migration purpose.

It's more reasonable to block migration in this case while letting the
guest run as normal...

> 
> +	/* All of the loggable virtual channel messages are logged by
> +	 * ice_migration_unlog_vf_msg() before they are processed.
> +	 *
> +	 * Two kinds of error may happen, virtual channel message's result
> +	 * is failure after processed by PF or message is not sent to VF
> +	 * successfully. If error happened, fallback here by reverting logged
> +	 * messages.
> +	 */
> +	if (vf->migration_enabled &&
> +	    (vf->virtchnl_retval != VIRTCHNL_STATUS_SUCCESS || err))
> +		ice_migration_unlog_vf_msg(vf, v_opcode);
> +

... and here unlog is not required. Just place log at this point.
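Kevin's suggestion could look roughly like the following sketch. The `migratable` flag is hypothetical, not part of the posted patch: on a logging failure the PF would mark the VF as non-migratable and let the guest continue, rather than returning VIRTCHNL_STATUS_ERR_NO_MEMORY.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-VF state for the alternative described above. */
struct vf_state {
	bool migration_enabled;
	bool migratable;	/* assumed flag, not in the posted patch */
};

/* If a virtchnl message cannot be logged, block future migration of
 * this VF instead of failing the guest's request. */
static inline void handle_log_result(struct vf_state *vf, int log_err)
{
	if (!vf->migration_enabled)
		return;
	if (log_err)
		vf->migratable = false;	/* block migration, not the guest */
}
```

The trade-off is that the guest keeps running normally, and the VFIO migration driver would later refuse the STOP_COPY transition for a VF whose state can no longer be fully captured.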
Brett Creeley Dec. 8, 2023, 1:53 a.m. UTC | #4
On 11/20/2023 6:51 PM, Yahui Cao wrote:
> 
> From: Lingyu Liu <lingyu.liu@intel.com>
> 
> Save the virtual channel messages sent by VF on the source side during
> runtime. The logged virtchnl messages will be transferred and loaded
> into the device on the destination side during the device resume stage.
> 
> For the feature which can not be migrated yet, it must be disabled or
> blocked to prevent from being abused by VF. Otherwise, it may introduce
> functional and security issue. Mask unsupported VF capability flags in
> the VF-PF negotiaion stage.

s/negotiaion/negotiation/

> 
> Signed-off-by: Lingyu Liu <lingyu.liu@intel.com>
> Signed-off-by: Yahui Cao <yahui.cao@intel.com>
> ---
>   .../net/ethernet/intel/ice/ice_migration.c    | 167 ++++++++++++++++++
>   .../intel/ice/ice_migration_private.h         |  17 ++
>   drivers/net/ethernet/intel/ice/ice_vf_lib.h   |   5 +
>   drivers/net/ethernet/intel/ice/ice_virtchnl.c |  31 ++++
>   4 files changed, 220 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_migration.c b/drivers/net/ethernet/intel/ice/ice_migration.c
> index 2b9b5a2ce367..18ec4ec7d147 100644
> --- a/drivers/net/ethernet/intel/ice/ice_migration.c
> +++ b/drivers/net/ethernet/intel/ice/ice_migration.c
> @@ -3,6 +3,17 @@
> 
>   #include "ice.h"
> 
> +struct ice_migration_virtchnl_msg_slot {
> +       u32 opcode;
> +       u16 msg_len;
> +       char msg_buffer[];
> +};
> +
> +struct ice_migration_virtchnl_msg_listnode {
> +       struct list_head node;
> +       struct ice_migration_virtchnl_msg_slot msg_slot;
> +};
> +
>   /**
>    * ice_migration_get_pf - Get ice PF structure pointer by pdev
>    * @pdev: pointer to ice vfio pci VF pdev structure
> @@ -22,6 +33,9 @@ EXPORT_SYMBOL(ice_migration_get_pf);
>   void ice_migration_init_vf(struct ice_vf *vf)
>   {
>          vf->migration_enabled = true;
> +       INIT_LIST_HEAD(&vf->virtchnl_msg_list);
> +       vf->virtchnl_msg_num = 0;
> +       vf->virtchnl_msg_size = 0;
>   }
> 
>   /**
> @@ -30,10 +44,24 @@ void ice_migration_init_vf(struct ice_vf *vf)
>    */
>   void ice_migration_uninit_vf(struct ice_vf *vf)
>   {
> +       struct ice_migration_virtchnl_msg_listnode *msg_listnode;
> +       struct ice_migration_virtchnl_msg_listnode *dtmp;
> +
>          if (!vf->migration_enabled)
>                  return;
> 
>          vf->migration_enabled = false;
> +
> +       if (list_empty(&vf->virtchnl_msg_list))
> +               return;
> +       list_for_each_entry_safe(msg_listnode, dtmp,
> +                                &vf->virtchnl_msg_list,
> +                                node) {
> +               list_del(&msg_listnode->node);
> +               kfree(msg_listnode);
> +       }
> +       vf->virtchnl_msg_num = 0;
> +       vf->virtchnl_msg_size = 0;
>   }
> 
>   /**
> @@ -80,3 +108,142 @@ void ice_migration_uninit_dev(struct ice_pf *pf, int vf_id)
>          ice_put_vf(vf);
>   }
>   EXPORT_SYMBOL(ice_migration_uninit_dev);
> +
> +/**
> + * ice_migration_is_loggable_msg - is this message loggable or not
> + * @v_opcode: virtchnl message operation code
> + *
> + * Return true if this message logging is supported, otherwise return false
> + */
> +static inline bool ice_migration_is_loggable_msg(u32 v_opcode)
> +{
> +       switch (v_opcode) {
> +       case VIRTCHNL_OP_VERSION:
> +       case VIRTCHNL_OP_GET_VF_RESOURCES:
> +       case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
> +       case VIRTCHNL_OP_CONFIG_IRQ_MAP:
> +       case VIRTCHNL_OP_ADD_ETH_ADDR:
> +       case VIRTCHNL_OP_DEL_ETH_ADDR:
> +       case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
> +       case VIRTCHNL_OP_ENABLE_QUEUES:
> +       case VIRTCHNL_OP_DISABLE_QUEUES:
> +       case VIRTCHNL_OP_ADD_VLAN:
> +       case VIRTCHNL_OP_DEL_VLAN:
> +       case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
> +       case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
> +       case VIRTCHNL_OP_CONFIG_RSS_KEY:
> +       case VIRTCHNL_OP_CONFIG_RSS_LUT:
> +       case VIRTCHNL_OP_GET_SUPPORTED_RXDIDS:
> +               return true;
> +       default:
> +               return false;
> +       }
> +}
> +
> +/**
> + * ice_migration_log_vf_msg - Log request message from VF
> + * @vf: pointer to the VF structure
> + * @event: pointer to the AQ event
> + *
> + * Log VF message for later device state loading during live migration
> + *
> + * Return 0 for success, negative for error
> + */
> +int ice_migration_log_vf_msg(struct ice_vf *vf,
> +                            struct ice_rq_event_info *event)
> +{
> +       struct ice_migration_virtchnl_msg_listnode *msg_listnode;
> +       u32 v_opcode = le32_to_cpu(event->desc.cookie_high);
> +       struct device *dev = ice_pf_to_dev(vf->pf);
> +       u16 msglen = event->msg_len;
> +       u8 *msg = event->msg_buf;
> +
> +       if (!ice_migration_is_loggable_msg(v_opcode))
> +               return 0;
> +
> +       if (vf->virtchnl_msg_num >= VIRTCHNL_MSG_MAX) {
> +               dev_warn(dev, "VF %d has maximum number virtual channel commands\n",
> +                        vf->vf_id);
> +               return -ENOMEM;
> +       }
> +
> +       msg_listnode = (struct ice_migration_virtchnl_msg_listnode *)
> +                       kzalloc(struct_size(msg_listnode,
> +                                           msg_slot.msg_buffer,
> +                                           msglen),
> +                               GFP_KERNEL);
> +       if (!msg_listnode) {
> +               dev_err(dev, "VF %d failed to allocate memory for msg listnode\n",
> +                       vf->vf_id);
> +               return -ENOMEM;
> +       }
> +       dev_dbg(dev, "VF %d save virtual channel command, op code: %d, len: %d\n",
> +               vf->vf_id, v_opcode, msglen);
> +       msg_listnode->msg_slot.opcode = v_opcode;
> +       msg_listnode->msg_slot.msg_len = msglen;
> +       memcpy(msg_listnode->msg_slot.msg_buffer, msg, msglen);

It seems like this can still be abused. What if the VM/VF user sends 
hundreds of thousands of ADD_ADDR/DEL_ADDR, ADD_VLAN/DEL_VLAN, 
PROMISCUOUS, ENABLE_VLAN_STRIPPING/DISABLE_VLAN_STRIPPING, RSS_LUT, 
RSS_KEY, etc.?

Shouldn't you only maintain one copy for each key/value when it makes 
sense? For example, you don't need multiple RSS_LUT and RSS_KEY messages 
logged as just the most recent one is needed.

What if multiple promiscuous messages are sent? Do you need to save them 
all or just the most recent?

What if you have an ADD_ADDR/DEL_ADDR for the same address? Do you need 
to save both of those messages? Seems like when you get a DEL_ADDR you 
should search for the associated ADD_ADDR and just remove it. Same 
comment applies for ADD_VLAN/DEL_VLAN.
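One possible shape of the deduplication suggested above, as a standalone sketch (the opcode values, the fixed-size table, and all names are made up for illustration): a DEL_ETH_ADDR cancels the matching logged ADD_ETH_ADDR instead of growing the log, so the log is bounded by live state rather than by message count.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_LOGGED 8
#define OP_ADD_ETH_ADDR 10	/* stand-in opcode values for the sketch */
#define OP_DEL_ETH_ADDR 11

struct logged {
	uint32_t opcode;
	uint8_t addr[6];
	int valid;
};

/* Log an ADD, or cancel the matching ADD when a DEL arrives.
 * Returns 0 on success, -1 when the table is full. */
static inline int log_addr_msg(struct logged *tbl, uint32_t opcode,
			       const uint8_t addr[6])
{
	int i, free_slot = -1;

	for (i = 0; i < MAX_LOGGED; i++) {
		if (!tbl[i].valid) {
			if (free_slot < 0)
				free_slot = i;
			continue;
		}
		if (opcode == OP_DEL_ETH_ADDR &&
		    tbl[i].opcode == OP_ADD_ETH_ADDR &&
		    !memcmp(tbl[i].addr, addr, 6)) {
			tbl[i].valid = 0;	/* cancel the matching ADD */
			return 0;
		}
	}
	if (opcode == OP_DEL_ETH_ADDR)
		return 0;	/* nothing to cancel; no state to replay */
	if (free_slot < 0)
		return -1;
	tbl[free_slot].opcode = opcode;
	memcpy(tbl[free_slot].addr, addr, 6);
	tbl[free_slot].valid = 1;
	return 0;
}
```

The same collapse applies to paired ADD_VLAN/DEL_VLAN messages, and to last-writer-wins messages such as RSS_KEY, RSS_LUT, and promiscuous mode, where only the most recent logged copy matters.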

> +       list_add_tail(&msg_listnode->node, &vf->virtchnl_msg_list);
> +       vf->virtchnl_msg_num++;
> +       vf->virtchnl_msg_size += struct_size(&msg_listnode->msg_slot,
> +                                            msg_buffer,
> +                                            msglen);
> +       return 0;
> +}
> +
> +/**
> + * ice_migration_unlog_vf_msg - revert logged message
> + * @vf: pointer to the VF structure
> + * @v_opcode: virtchnl message operation code
> + *
> + * Remove the last virtual channel message logged before.
> + */
> +void ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode)
> +{
> +       struct ice_migration_virtchnl_msg_listnode *msg_listnode;
> +
> +       if (!ice_migration_is_loggable_msg(v_opcode))
> +               return;
> +
> +       if (WARN_ON_ONCE(list_empty(&vf->virtchnl_msg_list)))
> +               return;
> +
> +       msg_listnode =
> +               list_last_entry(&vf->virtchnl_msg_list,
> +                               struct ice_migration_virtchnl_msg_listnode,
> +                               node);
> +       if (WARN_ON_ONCE(msg_listnode->msg_slot.opcode != v_opcode))
> +               return;
> +
> +       list_del(&msg_listnode->node);
> +       kfree(msg_listnode);
> +       vf->virtchnl_msg_num--;
> +       vf->virtchnl_msg_size -= struct_size(&msg_listnode->msg_slot,
> +                                            msg_buffer,
> +                                            msg_listnode->msg_slot.msg_len);
> +}
> +
> +#define VIRTCHNL_VF_MIGRATION_SUPPORT_FEATURE \
> +                               (VIRTCHNL_VF_OFFLOAD_L2 | \
> +                                VIRTCHNL_VF_OFFLOAD_RSS_PF | \
> +                                VIRTCHNL_VF_OFFLOAD_RSS_AQ | \
> +                                VIRTCHNL_VF_OFFLOAD_RSS_REG | \
> +                                VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 | \
> +                                VIRTCHNL_VF_OFFLOAD_ENCAP | \
> +                                VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM | \
> +                                VIRTCHNL_VF_OFFLOAD_RX_POLLING | \
> +                                VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
> +                                VIRTCHNL_VF_CAP_ADV_LINK_SPEED | \
> +                                VIRTCHNL_VF_OFFLOAD_VLAN | \
> +                                VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC | \
> +                                VIRTCHNL_VF_OFFLOAD_USO)
> +
> +/**
> + * ice_migration_supported_caps - get migration supported VF capabilities
> + *
> + * When migration is activated, some VF capabilities are not supported.
> + * Hence unmask those capability flags for VF resources.
> + */
> +u32 ice_migration_supported_caps(void)
> +{
> +       return VIRTCHNL_VF_MIGRATION_SUPPORT_FEATURE;
> +}
> diff --git a/drivers/net/ethernet/intel/ice/ice_migration_private.h b/drivers/net/ethernet/intel/ice/ice_migration_private.h
> index 2cc2f515fc5e..676eb2d6c12e 100644
> --- a/drivers/net/ethernet/intel/ice/ice_migration_private.h
> +++ b/drivers/net/ethernet/intel/ice/ice_migration_private.h
> @@ -13,9 +13,26 @@
>   #if IS_ENABLED(CONFIG_ICE_VFIO_PCI)
>   void ice_migration_init_vf(struct ice_vf *vf);
>   void ice_migration_uninit_vf(struct ice_vf *vf);
> +int ice_migration_log_vf_msg(struct ice_vf *vf,
> +                            struct ice_rq_event_info *event);
> +void ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode);
> +u32 ice_migration_supported_caps(void);
>   #else
>   static inline void ice_migration_init_vf(struct ice_vf *vf) { }
>   static inline void ice_migration_uninit_vf(struct ice_vf *vf) { }
> +static inline int ice_migration_log_vf_msg(struct ice_vf *vf,
> +                                          struct ice_rq_event_info *event)
> +{
> +       return 0;
> +}
> +
> +static inline void
> +ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode) { }
> +static inline u32
> +ice_migration_supported_caps(void)
> +{
> +       return 0xFFFFFFFF;
> +}
>   #endif /* CONFIG_ICE_VFIO_PCI */
> 
>   #endif /* _ICE_MIGRATION_PRIVATE_H_ */
> diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> index 431fd28787e8..318b6dfc016d 100644
> --- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> +++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
> @@ -77,6 +77,7 @@ struct ice_vfs {
>          unsigned long last_printed_mdd_jiffies; /* MDD message rate limit */
>   };
> 
> +#define VIRTCHNL_MSG_MAX 1000

This seems fairly arbitrary. How did you come up with this value? It 
seems like you can figure out the max number of messages needed for a 
single VF and it wouldn't be too unreasonable. What if it's a trusted VF 
that supports 4K VLANs?

Also, should this be named more appropriately since it's specific to the 
ice driver, i.e.:

ICE_VF_VIRTCHNL_LOGGABLE_MSG_MAX


>   /* VF information structure */
>   struct ice_vf {
>          struct hlist_node entry;
> @@ -141,6 +142,10 @@ struct ice_vf {
>          u16 num_msix;                   /* num of MSI-X configured on this VF */
> 
>          u8 migration_enabled:1;
> +       struct list_head virtchnl_msg_list;
> +       u64 virtchnl_msg_num;
> +       u64 virtchnl_msg_size;
> +       u32 virtchnl_retval;
>   };
> 
>   /* Flags for controlling behavior of ice_reset_vf */
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> index 661ca86c3032..730eeaea8c89 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> @@ -348,6 +348,12 @@ ice_vc_respond_to_vf(struct ice_vf *vf, u32 v_opcode,
>                  return -EIO;
>          }
> 
> +       /* v_retval will not be returned in this function, store it in the
> +        * per VF field to be used by migration logging logic later.
> +        */
> +       if (vf->migration_enabled)
> +               vf->virtchnl_retval = v_retval;
> +
>          return ice_vc_send_response_to_vf(vf, v_opcode, v_retval, msg, msglen);
>   }
> 
> @@ -480,6 +486,8 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
>                                    VIRTCHNL_VF_OFFLOAD_RSS_REG |
>                                    VIRTCHNL_VF_OFFLOAD_VLAN;
> 
> +       if (vf->migration_enabled)
> +               vf->driver_caps &= ice_migration_supported_caps();
>          vfres->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2;
>          vsi = ice_get_vf_vsi(vf);
>          if (!vsi) {
> @@ -4037,6 +4045,17 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
>                  goto finish;
>          }
> 
> +       if (vf->migration_enabled) {
> +               if (ice_migration_log_vf_msg(vf, event)) {
> +                       u32 status_code = VIRTCHNL_STATUS_ERR_NO_MEMORY;
> +
> +                       err = ice_vc_respond_to_vf(vf, v_opcode,
> +                                                  status_code,
> +                                                  NULL, 0);
> +                       goto finish;
> +               }
> +       }
> +
>          switch (v_opcode) {
>          case VIRTCHNL_OP_VERSION:
>                  err = ops->get_ver_msg(vf, msg);
> @@ -4156,6 +4175,18 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
>                           vf_id, v_opcode, err);
>          }
> 
> +       /* All of the loggable virtual channel messages are logged by
> +        * ice_migration_unlog_vf_msg() before they are processed.

Should this be ice_migration_log_vf_msg() in the comment instead?

> +        *
> +        * Two kinds of error may happen, virtual channel message's result
> +        * is failure after processed by PF or message is not sent to VF
> +        * successfully. If error happened, fallback here by reverting logged
> +        * messages.
> +        */
> +       if (vf->migration_enabled &&
> +           (vf->virtchnl_retval != VIRTCHNL_STATUS_SUCCESS || err))
> +               ice_migration_unlog_vf_msg(vf, v_opcode);
> +
>   finish:
>          mutex_unlock(&vf->cfg_lock);
>          ice_put_vf(vf);
> --
> 2.34.1
>

Patch

diff --git a/drivers/net/ethernet/intel/ice/ice_migration.c b/drivers/net/ethernet/intel/ice/ice_migration.c
index 2b9b5a2ce367..18ec4ec7d147 100644
--- a/drivers/net/ethernet/intel/ice/ice_migration.c
+++ b/drivers/net/ethernet/intel/ice/ice_migration.c
@@ -3,6 +3,17 @@ 
 
 #include "ice.h"
 
+struct ice_migration_virtchnl_msg_slot {
+	u32 opcode;
+	u16 msg_len;
+	char msg_buffer[];
+};
+
+struct ice_migration_virtchnl_msg_listnode {
+	struct list_head node;
+	struct ice_migration_virtchnl_msg_slot msg_slot;
+};
+
 /**
  * ice_migration_get_pf - Get ice PF structure pointer by pdev
  * @pdev: pointer to ice vfio pci VF pdev structure
@@ -22,6 +33,9 @@  EXPORT_SYMBOL(ice_migration_get_pf);
 void ice_migration_init_vf(struct ice_vf *vf)
 {
 	vf->migration_enabled = true;
+	INIT_LIST_HEAD(&vf->virtchnl_msg_list);
+	vf->virtchnl_msg_num = 0;
+	vf->virtchnl_msg_size = 0;
 }
 
 /**
@@ -30,10 +44,24 @@  void ice_migration_init_vf(struct ice_vf *vf)
  */
 void ice_migration_uninit_vf(struct ice_vf *vf)
 {
+	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
+	struct ice_migration_virtchnl_msg_listnode *dtmp;
+
 	if (!vf->migration_enabled)
 		return;
 
 	vf->migration_enabled = false;
+
+	if (list_empty(&vf->virtchnl_msg_list))
+		return;
+	list_for_each_entry_safe(msg_listnode, dtmp,
+				 &vf->virtchnl_msg_list,
+				 node) {
+		list_del(&msg_listnode->node);
+		kfree(msg_listnode);
+	}
+	vf->virtchnl_msg_num = 0;
+	vf->virtchnl_msg_size = 0;
 }
 
 /**
@@ -80,3 +108,142 @@  void ice_migration_uninit_dev(struct ice_pf *pf, int vf_id)
 	ice_put_vf(vf);
 }
 EXPORT_SYMBOL(ice_migration_uninit_dev);
+
+/**
+ * ice_migration_is_loggable_msg - is this message loggable or not
+ * @v_opcode: virtchnl message operation code
+ *
+ * Return true if this message logging is supported, otherwise return false
+ */
+static inline bool ice_migration_is_loggable_msg(u32 v_opcode)
+{
+	switch (v_opcode) {
+	case VIRTCHNL_OP_VERSION:
+	case VIRTCHNL_OP_GET_VF_RESOURCES:
+	case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+	case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+	case VIRTCHNL_OP_ADD_ETH_ADDR:
+	case VIRTCHNL_OP_DEL_ETH_ADDR:
+	case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+	case VIRTCHNL_OP_ENABLE_QUEUES:
+	case VIRTCHNL_OP_DISABLE_QUEUES:
+	case VIRTCHNL_OP_ADD_VLAN:
+	case VIRTCHNL_OP_DEL_VLAN:
+	case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+	case VIRTCHNL_OP_CONFIG_RSS_KEY:
+	case VIRTCHNL_OP_CONFIG_RSS_LUT:
+	case VIRTCHNL_OP_GET_SUPPORTED_RXDIDS:
+		return true;
+	default:
+		return false;
+	}
+}
+
+/**
+ * ice_migration_log_vf_msg - Log request message from VF
+ * @vf: pointer to the VF structure
+ * @event: pointer to the AQ event
+ *
+ * Log VF message for later device state loading during live migration
+ *
+ * Return 0 for success, negative for error
+ */
+int ice_migration_log_vf_msg(struct ice_vf *vf,
+			     struct ice_rq_event_info *event)
+{
+	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
+	u32 v_opcode = le32_to_cpu(event->desc.cookie_high);
+	struct device *dev = ice_pf_to_dev(vf->pf);
+	u16 msglen = event->msg_len;
+	u8 *msg = event->msg_buf;
+
+	if (!ice_migration_is_loggable_msg(v_opcode))
+		return 0;
+
+	if (vf->virtchnl_msg_num >= VIRTCHNL_MSG_MAX) {
+		dev_warn(dev, "VF %d has reached the maximum number of virtual channel commands\n",
+			 vf->vf_id);
+		return -ENOMEM;
+	}
+
+	msg_listnode = (struct ice_migration_virtchnl_msg_listnode *)
+			kzalloc(struct_size(msg_listnode,
+					    msg_slot.msg_buffer,
+					    msglen),
+				GFP_KERNEL);
+	if (!msg_listnode) {
+		dev_err(dev, "VF %d failed to allocate memory for msg listnode\n",
+			vf->vf_id);
+		return -ENOMEM;
+	}
+	dev_dbg(dev, "VF %d save virtual channel command, op code: %d, len: %d\n",
+		vf->vf_id, v_opcode, msglen);
+	msg_listnode->msg_slot.opcode = v_opcode;
+	msg_listnode->msg_slot.msg_len = msglen;
+	memcpy(msg_listnode->msg_slot.msg_buffer, msg, msglen);
+	list_add_tail(&msg_listnode->node, &vf->virtchnl_msg_list);
+	vf->virtchnl_msg_num++;
+	vf->virtchnl_msg_size += struct_size(&msg_listnode->msg_slot,
+					     msg_buffer,
+					     msglen);
+	return 0;
+}
+
+/**
+ * ice_migration_unlog_vf_msg - revert logged message
+ * @vf: pointer to the VF structure
+ * @v_opcode: virtchnl message operation code
+ *
+ * Remove the last virtual channel message logged before.
+ */
+void ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode)
+{
+	struct ice_migration_virtchnl_msg_listnode *msg_listnode;
+
+	if (!ice_migration_is_loggable_msg(v_opcode))
+		return;
+
+	if (WARN_ON_ONCE(list_empty(&vf->virtchnl_msg_list)))
+		return;
+
+	msg_listnode =
+		list_last_entry(&vf->virtchnl_msg_list,
+				struct ice_migration_virtchnl_msg_listnode,
+				node);
+	if (WARN_ON_ONCE(msg_listnode->msg_slot.opcode != v_opcode))
+		return;
+
+	list_del(&msg_listnode->node);
+	vf->virtchnl_msg_num--;
+	vf->virtchnl_msg_size -= struct_size(&msg_listnode->msg_slot,
+					     msg_buffer,
+					     msg_listnode->msg_slot.msg_len);
+	kfree(msg_listnode);
+}
+
+#define VIRTCHNL_VF_MIGRATION_SUPPORT_FEATURE \
+				(VIRTCHNL_VF_OFFLOAD_L2 | \
+				 VIRTCHNL_VF_OFFLOAD_RSS_PF | \
+				 VIRTCHNL_VF_OFFLOAD_RSS_AQ | \
+				 VIRTCHNL_VF_OFFLOAD_RSS_REG | \
+				 VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 | \
+				 VIRTCHNL_VF_OFFLOAD_ENCAP | \
+				 VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM | \
+				 VIRTCHNL_VF_OFFLOAD_RX_POLLING | \
+				 VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | \
+				 VIRTCHNL_VF_CAP_ADV_LINK_SPEED | \
+				 VIRTCHNL_VF_OFFLOAD_VLAN | \
+				 VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC | \
+				 VIRTCHNL_VF_OFFLOAD_USO)
+
+/**
+ * ice_migration_supported_caps - get migration supported VF capabilities
+ *
+ * When migration is activated, some VF capabilities are not supported.
+ * Hence unmask those capability flags for VF resources.
+ */
+u32 ice_migration_supported_caps(void)
+{
+	return VIRTCHNL_VF_MIGRATION_SUPPORT_FEATURE;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_migration_private.h b/drivers/net/ethernet/intel/ice/ice_migration_private.h
index 2cc2f515fc5e..676eb2d6c12e 100644
--- a/drivers/net/ethernet/intel/ice/ice_migration_private.h
+++ b/drivers/net/ethernet/intel/ice/ice_migration_private.h
@@ -13,9 +13,26 @@ 
 #if IS_ENABLED(CONFIG_ICE_VFIO_PCI)
 void ice_migration_init_vf(struct ice_vf *vf);
 void ice_migration_uninit_vf(struct ice_vf *vf);
+int ice_migration_log_vf_msg(struct ice_vf *vf,
+			     struct ice_rq_event_info *event);
+void ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode);
+u32 ice_migration_supported_caps(void);
 #else
 static inline void ice_migration_init_vf(struct ice_vf *vf) { }
 static inline void ice_migration_uninit_vf(struct ice_vf *vf) { }
+static inline int ice_migration_log_vf_msg(struct ice_vf *vf,
+					   struct ice_rq_event_info *event)
+{
+	return 0;
+}
+
+static inline void
+ice_migration_unlog_vf_msg(struct ice_vf *vf, u32 v_opcode) { }
+static inline u32
+ice_migration_supported_caps(void)
+{
+	return 0xFFFFFFFF;
+}
 #endif /* CONFIG_ICE_VFIO_PCI */
 
 #endif /* _ICE_MIGRATION_PRIVATE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
index 431fd28787e8..318b6dfc016d 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
@@ -77,6 +77,7 @@  struct ice_vfs {
 	unsigned long last_printed_mdd_jiffies;	/* MDD message rate limit */
 };
 
+#define VIRTCHNL_MSG_MAX 1000
 /* VF information structure */
 struct ice_vf {
 	struct hlist_node entry;
@@ -141,6 +142,10 @@  struct ice_vf {
 	u16 num_msix;			/* num of MSI-X configured on this VF */
 
 	u8 migration_enabled:1;
+	struct list_head virtchnl_msg_list;
+	u64 virtchnl_msg_num;
+	u64 virtchnl_msg_size;
+	u32 virtchnl_retval;
 };
 
 /* Flags for controlling behavior of ice_reset_vf */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index 661ca86c3032..730eeaea8c89 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -348,6 +348,12 @@  ice_vc_respond_to_vf(struct ice_vf *vf, u32 v_opcode,
 		return -EIO;
 	}
 
+	/* v_retval will not be returned in this function, store it in the
+	 * per VF field to be used by migration logging logic later.
+	 */
+	if (vf->migration_enabled)
+		vf->virtchnl_retval = v_retval;
+
 	return ice_vc_send_response_to_vf(vf, v_opcode, v_retval, msg, msglen);
 }
 
@@ -480,6 +486,8 @@  static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 				  VIRTCHNL_VF_OFFLOAD_RSS_REG |
 				  VIRTCHNL_VF_OFFLOAD_VLAN;
 
+	if (vf->migration_enabled)
+		vf->driver_caps &= ice_migration_supported_caps();
 	vfres->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2;
 	vsi = ice_get_vf_vsi(vf);
 	if (!vsi) {
@@ -4037,6 +4045,17 @@  void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
 		goto finish;
 	}
 
+	if (vf->migration_enabled) {
+		if (ice_migration_log_vf_msg(vf, event)) {
+			u32 status_code = VIRTCHNL_STATUS_ERR_NO_MEMORY;
+
+			err = ice_vc_respond_to_vf(vf, v_opcode,
+						   status_code,
+						   NULL, 0);
+			goto finish;
+		}
+	}
+
 	switch (v_opcode) {
 	case VIRTCHNL_OP_VERSION:
 		err = ops->get_ver_msg(vf, msg);
@@ -4156,6 +4175,18 @@  void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
 			 vf_id, v_opcode, err);
 	}
 
+	/* All of the loggable virtual channel messages are logged by
+	 * ice_migration_log_vf_msg() before they are processed.
+	 *
+	 * Two kinds of error may happen: the message fails when processed
+	 * by the PF, or the response is not sent to the VF successfully.
+	 * If either error happens, fall back here by reverting the logged
+	 * message.
+	 */
+	if (vf->migration_enabled &&
+	    (vf->virtchnl_retval != VIRTCHNL_STATUS_SUCCESS || err))
+		ice_migration_unlog_vf_msg(vf, v_opcode);
+
 finish:
 	mutex_unlock(&vf->cfg_lock);
 	ice_put_vf(vf);