| Message ID | 20240328044715.266641-5-danielj@nvidia.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to | Netdev Maintainers |
| Series | Remove RTNL lock protection of CVQ |
On 2024/3/28 12:47 PM, Daniel Jurgens wrote:
> Since we no longer have to hold the RTNL lock here just do updates for
> the specified queue.
>
> Signed-off-by: Daniel Jurgens <danielj@nvidia.com>
> ---
>  drivers/net/virtio_net.c | 38 ++++++++++++++------------------------
>  1 file changed, 14 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index b9298544b1b5..9c4bfb1eb15c 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -3596,36 +3596,26 @@ static void virtnet_rx_dim_work(struct work_struct *work)
[...]
> -		if (!rq->dim_enabled)
> -			continue;
> +	if (!rq->dim_enabled)
> +		continue;

?

continue what?

For the lock code, please pass the test. It's important.

Regards,
Heng
> From: Heng Qi <hengqi@linux.alibaba.com>
> Sent: Wednesday, March 27, 2024 11:57 PM
> To: Dan Jurgens <danielj@nvidia.com>; netdev@vger.kernel.org
> Cc: mst@redhat.com; jasowang@redhat.com; xuanzhuo@linux.alibaba.com;
> virtualization@lists.linux.dev; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; Jiri Pirko
> <jiri@nvidia.com>
> Subject: Re: [PATCH net-next v2 4/6] virtio_net: Do DIM update for specified
> queue only
>
> On 2024/3/28 12:47 PM, Daniel Jurgens wrote:
> > Since we no longer have to hold the RTNL lock here just do updates for
> > the specified queue.
[...]
> > -		if (!rq->dim_enabled)
> > -			continue;
> > +	if (!rq->dim_enabled)
> > +		continue;
>
> ?
>
> continue what?

Sorry, messed this up when I was testing the patches and put the fix for
the continue in the lock patch.

> For the lock code, please pass the test. It's important.

I did some bench testing. I'll do more and send a new set early next week.

> Regards,
> Heng
```diff
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b9298544b1b5..9c4bfb1eb15c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3596,36 +3596,26 @@ static void virtnet_rx_dim_work(struct work_struct *work)
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct net_device *dev = vi->dev;
 	struct dim_cq_moder update_moder;
-	int i, qnum, err;
+	int qnum, err;
 
 	if (!rtnl_trylock())
 		return;
 
-	/* Each rxq's work is queued by "net_dim()->schedule_work()"
-	 * in response to NAPI traffic changes. Note that dim->profile_ix
-	 * for each rxq is updated prior to the queuing action.
-	 * So we only need to traverse and update profiles for all rxqs
-	 * in the work which is holding rtnl_lock.
-	 */
-	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		rq = &vi->rq[i];
-		dim = &rq->dim;
-		qnum = rq - vi->rq;
+	qnum = rq - vi->rq;
 
-		if (!rq->dim_enabled)
-			continue;
+	if (!rq->dim_enabled)
+		continue;
 
-		update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
-		if (update_moder.usec != rq->intr_coal.max_usecs ||
-		    update_moder.pkts != rq->intr_coal.max_packets) {
-			err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
-							       update_moder.usec,
-							       update_moder.pkts);
-			if (err)
-				pr_debug("%s: Failed to send dim parameters on rxq%d\n",
-					 dev->name, qnum);
-			dim->state = DIM_START_MEASURE;
-		}
+	update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+	if (update_moder.usec != rq->intr_coal.max_usecs ||
+	    update_moder.pkts != rq->intr_coal.max_packets) {
+		err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
+						       update_moder.usec,
+						       update_moder.pkts);
+		if (err)
+			pr_debug("%s: Failed to send dim parameters on rxq%d\n",
+				 dev->name, qnum);
+		dim->state = DIM_START_MEASURE;
 	}
 
 	rtnl_unlock();
```
Since we no longer have to hold the RTNL lock here just do updates for
the specified queue.

Signed-off-by: Daniel Jurgens <danielj@nvidia.com>
---
 drivers/net/virtio_net.c | 38 ++++++++++++++------------------------
 1 file changed, 14 insertions(+), 24 deletions(-)