
[v2] virtio_pmem: add the missing REQ_OP_WRITE for flush bio

Message ID 20230621134340.878461-1-houtao@huaweicloud.com (mailing list archive)
State Superseded
Series [v2] virtio_pmem: add the missing REQ_OP_WRITE for flush bio

Commit Message

Hou Tao June 21, 2023, 1:43 p.m. UTC
From: Hou Tao <houtao1@huawei.com>

The following warning was reported when doing fsync on a pmem device:

 ------------[ cut here ]------------
 WARNING: CPU: 2 PID: 384 at block/blk-core.c:751 submit_bio_noacct+0x340/0x520
 Modules linked in:
 CPU: 2 PID: 384 Comm: mkfs.xfs Not tainted 6.4.0-rc7+ #154
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
 RIP: 0010:submit_bio_noacct+0x340/0x520
 ......
 Call Trace:
  <TASK>
  ? asm_exc_invalid_op+0x1b/0x20
  ? submit_bio_noacct+0x340/0x520
  ? submit_bio_noacct+0xd5/0x520
  submit_bio+0x37/0x60
  async_pmem_flush+0x79/0xa0
  nvdimm_flush+0x17/0x40
  pmem_submit_bio+0x370/0x390
  __submit_bio+0xbc/0x190
  submit_bio_noacct_nocheck+0x14d/0x370
  submit_bio_noacct+0x1ef/0x520
  submit_bio+0x55/0x60
  submit_bio_wait+0x5a/0xc0
  blkdev_issue_flush+0x44/0x60

The root cause is that submit_bio_noacct() requires bio_op() to be either
REQ_OP_WRITE or REQ_OP_ZONE_APPEND for a flush bio, but async_pmem_flush()
doesn't assign REQ_OP_WRITE when allocating the flush bio, so
submit_bio_noacct() simply fails the flush bio.
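
To make the failure mode concrete, here is a minimal userspace sketch
(not kernel code: the constants are illustrative stand-ins for the
definitions in include/linux/blk_types.h, and flush_bio_ok() only
approximates the sanity check added by commit b4a6bb3a67aa). bio_op()
takes the operation from the low bits of bi_opf, while REQ_PREFLUSH is
a flag bit above them, so a bio allocated with only REQ_PREFLUSH
reports REQ_OP_READ (0) and is rejected:

#include <stdio.h>

/*
 * Illustrative stand-ins for the definitions in include/linux/blk_types.h;
 * the exact values and bit positions in the kernel may differ, but the
 * layout (op in the low bits, flags above them) is what matters here.
 */
#define REQ_OP_BITS		8
#define REQ_OP_MASK		((1u << REQ_OP_BITS) - 1)
#define REQ_OP_READ		0u
#define REQ_OP_WRITE		1u
#define REQ_OP_ZONE_APPEND	13u
#define REQ_PREFLUSH		(1u << 16)	/* some flag bit above the op field */

/* bio_op() simply masks the operation out of bi_opf. */
static unsigned int bio_op(unsigned int bi_opf)
{
	return bi_opf & REQ_OP_MASK;
}

/* Rough approximation of the flush-bio sanity check. */
static int flush_bio_ok(unsigned int bi_opf)
{
	unsigned int op = bio_op(bi_opf);

	return op == REQ_OP_WRITE || op == REQ_OP_ZONE_APPEND;
}

int main(void)
{
	/* Before the fix: the op field is 0 (REQ_OP_READ), so the bio is rejected. */
	printf("REQ_PREFLUSH only:           %s\n",
	       flush_bio_ok(REQ_PREFLUSH) ? "accepted" : "rejected");
	/* After the fix: the op field is REQ_OP_WRITE, so the bio is accepted. */
	printf("REQ_OP_WRITE | REQ_PREFLUSH: %s\n",
	       flush_bio_ok(REQ_OP_WRITE | REQ_PREFLUSH) ? "accepted" : "rejected");
	return 0;
}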

Fix it by simply adding the missing REQ_OP_WRITE to the flush bio. The
flush ordering issue and the flush optimization can be addressed later.

Fixes: b4a6bb3a67aa ("block: add a sanity check for non-write flush/fua bios")
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
v2:
  * do a minimal fix first (Suggested by Christoph)
v1: https://lore.kernel.org/linux-block/ZJLpYMC8FgtZ0k2k@infradead.org/T/#t

Hi Jens & Dan,

I found that Pankaj was working on a fix and optimization for the
virtio-pmem flush bio [0], but the last status update there was on
1/12/2022, so could you please pick up this patch for v6.4? The flush
ordering fix and optimization can be done later.

[0]: https://lore.kernel.org/lkml/20220111161937.56272-1-pankaj.gupta.linux@gmail.com/T/

 drivers/nvdimm/nd_virtio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Christoph Hellwig June 21, 2023, 1:15 p.m. UTC | #1
Please avoid the overly long line.  With that fixed, this looks good
to me.
Pankaj Gupta June 22, 2023, 8:35 a.m. UTC | #2
> The following warning was reported when doing fsync on a pmem device:
>
>  ------------[ cut here ]------------
>  WARNING: CPU: 2 PID: 384 at block/blk-core.c:751 submit_bio_noacct+0x340/0x520
>  Modules linked in:
>  CPU: 2 PID: 384 Comm: mkfs.xfs Not tainted 6.4.0-rc7+ #154
>  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
>  RIP: 0010:submit_bio_noacct+0x340/0x520
>  ......
>  Call Trace:
>   <TASK>
>   ? asm_exc_invalid_op+0x1b/0x20
>   ? submit_bio_noacct+0x340/0x520
>   ? submit_bio_noacct+0xd5/0x520
>   submit_bio+0x37/0x60
>   async_pmem_flush+0x79/0xa0
>   nvdimm_flush+0x17/0x40
>   pmem_submit_bio+0x370/0x390
>   __submit_bio+0xbc/0x190
>   submit_bio_noacct_nocheck+0x14d/0x370
>   submit_bio_noacct+0x1ef/0x520
>   submit_bio+0x55/0x60
>   submit_bio_wait+0x5a/0xc0
>   blkdev_issue_flush+0x44/0x60
>
> The root cause is that submit_bio_noacct() requires bio_op() to be either
> REQ_OP_WRITE or REQ_OP_ZONE_APPEND for a flush bio, but async_pmem_flush()
> doesn't assign REQ_OP_WRITE when allocating the flush bio, so
> submit_bio_noacct() simply fails the flush bio.
>
> Fix it by simply adding the missing REQ_OP_WRITE to the flush bio. The
> flush ordering issue and the flush optimization can be addressed later.
>
> Fixes: b4a6bb3a67aa ("block: add a sanity check for non-write flush/fua bios")
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
> v2:
>   * do a minimal fix first (Suggested by Christoph)
> v1: https://lore.kernel.org/linux-block/ZJLpYMC8FgtZ0k2k@infradead.org/T/#t
>
> Hi Jens & Dan,
>
> I found that Pankaj was working on a fix and optimization for the
> virtio-pmem flush bio [0], but the last status update there was on
> 1/12/2022, so could you please pick up this patch for v6.4? The flush
> ordering fix and optimization can be done later.
>
> [0]: https://lore.kernel.org/lkml/20220111161937.56272-1-pankaj.gupta.linux@gmail.com/T/
>
>  drivers/nvdimm/nd_virtio.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
> index c6a648fd8744..97098099f8a3 100644
> --- a/drivers/nvdimm/nd_virtio.c
> +++ b/drivers/nvdimm/nd_virtio.c
> @@ -105,7 +105,7 @@ int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
>          * parent bio. Otherwise directly call nd_region flush.
>          */
>         if (bio && bio->bi_iter.bi_sector != -1) {
> -               struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_PREFLUSH,
> +               struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_OP_WRITE | REQ_PREFLUSH,
>                                               GFP_ATOMIC);
>
>                 if (!child)

Fix looks good to me. Will give it a run soon.

Yes, [0] needs to be completed. Curious to know if you guys are using
the virtio-pmem device?

Thanks,
Pankaj
Hou Tao June 30, 2023, 2:25 a.m. UTC | #3
Hi Pankaj,

On 6/22/2023 4:35 PM, Pankaj Gupta wrote:
>> The following warning was reported when doing fsync on a pmem device:
>>
>>  ------------[ cut here ]------------
>>  WARNING: CPU: 2 PID: 384 at block/blk-core.c:751 submit_bio_noacct+0x340/0x520
SNIP
>> Hi Jens & Dan,
>>
>> I found that Pankaj was working on a fix and optimization for the
>> virtio-pmem flush bio [0], but the last status update there was on
>> 1/12/2022, so could you please pick up this patch for v6.4? The flush
>> ordering fix and optimization can be done later.
>>
>> [0]: https://lore.kernel.org/lkml/20220111161937.56272-1-pankaj.gupta.linux@gmail.com/T/
>>
>>  drivers/nvdimm/nd_virtio.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
>> index c6a648fd8744..97098099f8a3 100644
>> --- a/drivers/nvdimm/nd_virtio.c
>> +++ b/drivers/nvdimm/nd_virtio.c
>> @@ -105,7 +105,7 @@ int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
>>          * parent bio. Otherwise directly call nd_region flush.
>>          */
>>         if (bio && bio->bi_iter.bi_sector != -1) {
>> -               struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_PREFLUSH,
>> +               struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_OP_WRITE | REQ_PREFLUSH,
>>                                               GFP_ATOMIC);
>>
>>                 if (!child)
> Fix looks good to me. Will give it a run soon.
>
> Yes, [0] needs to be completed. Curious to know if you guys are using
> the virtio-pmem device?
Sorry for missing the question. We plan to use DAX for page cache
offload, and for now we are just experimenting with virtio-pmem and
nd-pmem.

> Thanks,
> Pankaj
Pankaj Gupta June 30, 2023, 4:45 a.m. UTC | #4
> > Yes, [0] needs to be completed. Curious to know if you guys are using
> > the virtio-pmem device?
> Sorry for missing the question. We plan to use DAX for page cache
> offload, and for now we are just experimenting with virtio-pmem and
> nd-pmem.

Sounds good. Thank you for answering!

Best regards,
Pankaj

Patch

diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
index c6a648fd8744..97098099f8a3 100644
--- a/drivers/nvdimm/nd_virtio.c
+++ b/drivers/nvdimm/nd_virtio.c
@@ -105,7 +105,7 @@  int async_pmem_flush(struct nd_region *nd_region, struct bio *bio)
 	 * parent bio. Otherwise directly call nd_region flush.
 	 */
 	if (bio && bio->bi_iter.bi_sector != -1) {
-		struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_PREFLUSH,
+		struct bio *child = bio_alloc(bio->bi_bdev, 0, REQ_OP_WRITE | REQ_PREFLUSH,
 					      GFP_ATOMIC);
 
 		if (!child)