Message ID | 3fbfb6e871f625f89eb578c7228e127437b1975a.1727876449.git.mst@redhat.com (mailing list archive)
---|---
State | New, archived
Series | vsock/virtio: use GFP_ATOMIC under RCU read lock
On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
>virtio_transport_send_pkt is now called on transport fast path,
>under RCU read lock. In that case, we have a bug: virtio_add_sgs
>is called with GFP_KERNEL, and might sleep.
>
>Pass the gfp flags as an argument, and use GFP_ATOMIC on
>the fast path.
>
>Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
>Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
>Reported-by: Christian Brauner <brauner@kernel.org>
>Cc: Stefano Garzarella <sgarzare@redhat.com>
>Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
>Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>---
>
>Lightly tested. Christian, could you pls confirm this fixes the problem
>for you? Stefano, it's a holiday here - could you pls help test!

Sure, thanks for the quick fix! I was thinking something similar ;-)

>Thanks!
>
>
> net/vmw_vsock/virtio_transport.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>index f992f9a216f0..0cd965f24609 100644
>--- a/net/vmw_vsock/virtio_transport.c
>+++ b/net/vmw_vsock/virtio_transport.c
>@@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>
> /* Caller need to hold vsock->tx_lock on vq */
> static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>-				     struct virtio_vsock *vsock)
>+				     struct virtio_vsock *vsock, gfp_t gfp)
> {
> 	int ret, in_sg = 0, out_sg = 0;
> 	struct scatterlist **sgs;
>@@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
> 		}
> 	}
>
>-	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
>+	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
> 	/* Usually this means that there is no more space available in
> 	 * the vq
> 	 */
>@@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>
> 		reply = virtio_vsock_skb_reply(skb);
>
>-		ret = virtio_transport_send_skb(skb, vq, vsock);
>+		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
> 		if (ret < 0) {
> 			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
> 			break;
>@@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
> 	if (unlikely(ret == 0))
> 		return -EBUSY;
>
>-	ret = virtio_transport_send_skb(skb, vq, vsock);

nit: maybe we can add a comment here:
	/* GFP_ATOMIC because we are in RCU section, so we can't sleep */

>+	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
> 	if (ret == 0)
> 		virtqueue_kick(vq);
>
>--
>MST
>

I'll run some tests and come back with R-b when it's done.

Thanks,
Stefano
On Wed, Oct 02, 2024 at 04:02:06PM GMT, Stefano Garzarella wrote:
>On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
>>virtio_transport_send_pkt is now called on transport fast path,
>>under RCU read lock. In that case, we have a bug: virtio_add_sgs
>>is called with GFP_KERNEL, and might sleep.
>>
>>Pass the gfp flags as an argument, and use GFP_ATOMIC on
>>the fast path.
>>
>>Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
>>Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
>>Reported-by: Christian Brauner <brauner@kernel.org>
>>Cc: Stefano Garzarella <sgarzare@redhat.com>
>>Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
>>Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>>---
>>
>>Lightly tested. Christian, could you pls confirm this fixes the problem
>>for you? Stefano, it's a holiday here - could you pls help test!
>
>Sure, thanks for the quick fix! I was thinking something similar ;-)
>
>>Thanks!
>>
>>
>>net/vmw_vsock/virtio_transport.c | 8 ++++----
>>1 file changed, 4 insertions(+), 4 deletions(-)
>>
>>diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>index f992f9a216f0..0cd965f24609 100644
>>--- a/net/vmw_vsock/virtio_transport.c
>>+++ b/net/vmw_vsock/virtio_transport.c
>>@@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>>
>>/* Caller need to hold vsock->tx_lock on vq */
>>static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>>-				     struct virtio_vsock *vsock)
>>+				     struct virtio_vsock *vsock, gfp_t gfp)
>>{
>> 	int ret, in_sg = 0, out_sg = 0;
>> 	struct scatterlist **sgs;
>>@@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>> 		}
>> 	}
>>
>>-	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
>>+	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>> 	/* Usually this means that there is no more space available in
>> 	 * the vq
>> 	 */
>>@@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>>
>> 		reply = virtio_vsock_skb_reply(skb);
>>
>>-		ret = virtio_transport_send_skb(skb, vq, vsock);
>>+		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>> 		if (ret < 0) {
>> 			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue,
>> 						    skb);
>> 			break;
>>@@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>> 	if (unlikely(ret == 0))
>> 		return -EBUSY;
>>
>>-	ret = virtio_transport_send_skb(skb, vq, vsock);
>
>nit: maybe we can add a comment here:
>	/* GFP_ATOMIC because we are in RCU section, so we can't sleep */
>
>>+	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>> 	if (ret == 0)
>> 		virtqueue_kick(vq);
>>
>>--
>>MST
>>
>
>I'll run some tests and come back with R-b when it's done.

I replicated the issue enabling CONFIG_DEBUG_ATOMIC_SLEEP.
With that enabled, as soon as I run iperf-vsock, dmesg is flooded with
those messages. With this patch applied instead everything is fine.

I also ran the usual tests with various debugging options enabled and
everything seems okay.

With or without adding the comment I suggested in the previous email:

Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
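The warnings Stefano reproduces with CONFIG_DEBUG_ATOMIC_SLEEP come from the general pattern condensed below. This is only an illustrative sketch with a made-up name (sketch_alloc_buggy), not code from the patch or the driver: a GFP_KERNEL allocation may sleep, and sleeping inside an RCU read-side critical section is exactly what the debug option flags.

#include <linux/gfp.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative only: an allocation that may sleep under rcu_read_lock(). */
static void *sketch_alloc_buggy(size_t len)
{
	void *buf;

	rcu_read_lock();
	/*
	 * GFP_KERNEL allocations may sleep, but sleeping is not allowed
	 * inside an RCU read-side critical section; a debug kernel
	 * reports a sleeping function called from invalid context here.
	 * GFP_ATOMIC would be the non-sleeping choice in this context.
	 */
	buf = kmalloc(len, GFP_KERNEL);
	rcu_read_unlock();

	return buf;
}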
> Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
> Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
> Reported-by: Christian Brauner <brauner@kernel.org>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>
> Lightly tested. Christian, could you pls confirm this fixes the problem
> for you? Stefano, it's a holiday here - could you pls help test!
> Thanks!
>
>
> net/vmw_vsock/virtio_transport.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index f992f9a216f0..0cd965f24609 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>
>  /* Caller need to hold vsock->tx_lock on vq */
>  static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
> -				     struct virtio_vsock *vsock)
> +				     struct virtio_vsock *vsock, gfp_t gfp)
>  {
>  	int ret, in_sg = 0, out_sg = 0;
>  	struct scatterlist **sgs;
> @@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>  		}
>  	}
>
> -	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
> +	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>  	/* Usually this means that there is no more space available in
>  	 * the vq
>  	 */
> @@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>
>  		reply = virtio_vsock_skb_reply(skb);
>
> -		ret = virtio_transport_send_skb(skb, vq, vsock);
> +		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>  		if (ret < 0) {
>  			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
>  			break;
> @@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>  	if (unlikely(ret == 0))
>  		return -EBUSY;
>
> -	ret = virtio_transport_send_skb(skb, vq, vsock);
> +	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>  	if (ret == 0)
>  		virtqueue_kick(vq);
>
> --
> MST
>

Thanks for fixing this!

I enabled CONFIG_DEBUG_ATOMIC_SLEEP as Stefano suggested and tested with
and without the fix, I can confirm that this fixes the problem.

Reviewed-by: Luigi Leonardi <luigi.leonardi@outlook.com>
On Wed, Oct 02, 2024 at 09:41:42AM GMT, Michael S. Tsirkin wrote:
> virtio_transport_send_pkt is now called on transport fast path,
> under RCU read lock. In that case, we have a bug: virtio_add_sgs
> is called with GFP_KERNEL, and might sleep.
>
> Pass the gfp flags as an argument, and use GFP_ATOMIC on
> the fast path.
>
> Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
> Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
> Reported-by: Christian Brauner <brauner@kernel.org>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>
> Lightly tested. Christian, could you pls confirm this fixes the problem
> for you? Stefano, it's a holiday here - could you pls help test!
> Thanks!

Thank you for the quick fix:

Reviewed-by: Christian Brauner <brauner@kernel.org>
> virtio_transport_send_pkt is now called on transport fast path,
> under RCU read lock. In that case, we have a bug: virtio_add_sgs
> is called with GFP_KERNEL, and might sleep.
>
> Pass the gfp flags as an argument, and use GFP_ATOMIC on
> the fast path.
>
> Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
> Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
> Reported-by: Christian Brauner <brauner@kernel.org>
> Cc: Stefano Garzarella <sgarzare@redhat.com>
> Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>

> ---
>
> Lightly tested. Christian, could you pls confirm this fixes the problem
> for you? Stefano, it's a holiday here - could you pls help test!
> Thanks!
>
>
> net/vmw_vsock/virtio_transport.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index f992f9a216f0..0cd965f24609 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
>
>  /* Caller need to hold vsock->tx_lock on vq */
>  static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
> -				     struct virtio_vsock *vsock)
> +				     struct virtio_vsock *vsock, gfp_t gfp)
>  {
>  	int ret, in_sg = 0, out_sg = 0;
>  	struct scatterlist **sgs;
> @@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
>  		}
>  	}
>
> -	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
> +	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
>  	/* Usually this means that there is no more space available in
>  	 * the vq
>  	 */
> @@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
>
>  		reply = virtio_vsock_skb_reply(skb);
>
> -		ret = virtio_transport_send_skb(skb, vq, vsock);
> +		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
>  		if (ret < 0) {
>  			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
>  			break;
> @@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
>  	if (unlikely(ret == 0))
>  		return -EBUSY;
>
> -	ret = virtio_transport_send_skb(skb, vq, vsock);
> +	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
>  	if (ret == 0)
>  		virtqueue_kick(vq);
>
On Wed, 2 Oct 2024 09:41:42 -0400 Michael S. Tsirkin wrote:
> virtio_transport_send_pkt is now called on transport fast path,
> under RCU read lock. In that case, we have a bug: virtio_add_sgs
> is called with GFP_KERNEL, and might sleep.
>
> Pass the gfp flags as an argument, and use GFP_ATOMIC on
> the fast path.

Hi Michael! The To: linux-kernel@vger.kernel.org doesn't give much info
on who you expect to apply this ;) Please let us know if you plan to
take it via your own tree, otherwise we'll ship it to Linus on Thu.
On Mon, Oct 07, 2024 at 08:39:20AM -0700, Jakub Kicinski wrote:
> On Wed, 2 Oct 2024 09:41:42 -0400 Michael S. Tsirkin wrote:
> > virtio_transport_send_pkt is now called on transport fast path,
> > under RCU read lock. In that case, we have a bug: virtio_add_sgs
> > is called with GFP_KERNEL, and might sleep.
> >
> > Pass the gfp flags as an argument, and use GFP_ATOMIC on
> > the fast path.
>
> Hi Michael! The To: linux-kernel@vger.kernel.org doesn't give much info
> on who you expect to apply this ;) Please let us know if you plan to
> take it via your own tree, otherwise we'll ship it to Linus on Thu.

Hi! It's in my tree, was in the process of sending a pull request actually.
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index f992f9a216f0..0cd965f24609 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -96,7 +96,7 @@ static u32 virtio_transport_get_local_cid(void)
 
 /* Caller need to hold vsock->tx_lock on vq */
 static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
-				     struct virtio_vsock *vsock)
+				     struct virtio_vsock *vsock, gfp_t gfp)
 {
 	int ret, in_sg = 0, out_sg = 0;
 	struct scatterlist **sgs;
@@ -140,7 +140,7 @@ static int virtio_transport_send_skb(struct sk_buff *skb, struct virtqueue *vq,
 		}
 	}
 
-	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
+	ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, gfp);
 	/* Usually this means that there is no more space available in
 	 * the vq
 	 */
@@ -178,7 +178,7 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 
 		reply = virtio_vsock_skb_reply(skb);
 
-		ret = virtio_transport_send_skb(skb, vq, vsock);
+		ret = virtio_transport_send_skb(skb, vq, vsock, GFP_KERNEL);
 		if (ret < 0) {
 			virtio_vsock_skb_queue_head(&vsock->send_pkt_queue, skb);
 			break;
@@ -221,7 +221,7 @@ static int virtio_transport_send_skb_fast_path(struct virtio_vsock *vsock, struc
 	if (unlikely(ret == 0))
 		return -EBUSY;
 
-	ret = virtio_transport_send_skb(skb, vq, vsock);
+	ret = virtio_transport_send_skb(skb, vq, vsock, GFP_ATOMIC);
 	if (ret == 0)
 		virtqueue_kick(vq);
virtio_transport_send_pkt is now called on transport fast path,
under RCU read lock. In that case, we have a bug: virtio_add_sgs
is called with GFP_KERNEL, and might sleep.

Pass the gfp flags as an argument, and use GFP_ATOMIC on
the fast path.

Link: https://lore.kernel.org/all/hfcr2aget2zojmqpr4uhlzvnep4vgskblx5b6xf2ddosbsrke7@nt34bxgp7j2x
Fixes: efcd71af38be ("vsock/virtio: avoid queuing packets when intermediate queue is empty")
Reported-by: Christian Brauner <brauner@kernel.org>
Cc: Stefano Garzarella <sgarzare@redhat.com>
Cc: Luigi Leonardi <luigi.leonardi@outlook.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---

Lightly tested. Christian, could you pls confirm this fixes the problem
for you? Stefano, it's a holiday here - could you pls help test!
Thanks!

 net/vmw_vsock/virtio_transport.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
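The shape of the fix can be restated as a compact, hypothetical sketch (example_send and the other names below are illustrative, not the driver's functions): the helper takes the gfp flags from its caller, so the sleepable worker path keeps GFP_KERNEL while the RCU-protected fast path passes GFP_ATOMIC.

#include <linux/gfp.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_pkt {
	char payload[64];
};

/* Allocation helper parameterized by gfp flags, like the patched
 * virtio_transport_send_skb(). */
static struct example_pkt *example_send(gfp_t gfp)
{
	return kzalloc(sizeof(struct example_pkt), gfp);
}

/* Process context (e.g. a workqueue handler): sleeping is fine. */
static struct example_pkt *example_send_worker(void)
{
	return example_send(GFP_KERNEL);
}

/* Fast path under the RCU read lock: must not sleep. */
static struct example_pkt *example_send_fast_path(void)
{
	struct example_pkt *pkt;

	rcu_read_lock();
	/* GFP_ATOMIC because we are in an RCU section, so we can't sleep. */
	pkt = example_send(GFP_ATOMIC);
	rcu_read_unlock();

	return pkt;
}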