Message ID | 20240219130109.341523-8-john.g.garry@oracle.com
---|---
State | Superseded |
Series | block atomic writes
John Garry <john.g.garry@oracle.com> writes:

> Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.
>
> It must be ensured that the atomic write adheres to its rules, like
> naturally aligned offset, so call blkdev_dio_invalid() ->
> blkdev_atomic_write_valid() [with renaming blkdev_dio_unaligned() to
> blkdev_dio_invalid()] for this purpose.
>
> In blkdev_direct_IO(), if the nr_pages exceeds BIO_MAX_VECS, then we cannot
> produce a single BIO, so error in this case.

BIO_MAX_VECS is 256, so around a 1MB limit with a 4K pagesize.
Is there any mention of why this limit was chosen for now? Is it due to
code complexity that we only support a single bio?

As I see it, you have still enabled request merging in the block layer
for atomic requests, so it can essentially submit bio chains to the
device driver. So why not support this case and let the user submit a
request larger than 1MB?

>
> Finally set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes
> and the associated file flag is for O_DIRECT.
>
> Signed-off-by: John Garry <john.g.garry@oracle.com>
> ---
>  block/fops.c | 31 ++++++++++++++++++++++++++++---
>  1 file changed, 28 insertions(+), 3 deletions(-)
>
> diff --git a/block/fops.c b/block/fops.c
> index 28382b4d097a..563189c2fc5a 100644
> --- a/block/fops.c
> +++ b/block/fops.c
> @@ -34,13 +34,27 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
>  	return opf;
>  }
>  
> -static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
> -				struct iov_iter *iter)
> +static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
> +				      struct iov_iter *iter)
>  {
> +	struct request_queue *q = bdev_get_queue(bdev);
> +	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
> +	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
> +
> +	return atomic_write_valid(pos, iter, min_bytes, max_bytes);

generic_atomic_write_valid() would be a better name for this function.
However, I have anyway commented about this in some previous patch.

> +}
> +
> +static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
> +			       struct iov_iter *iter, bool atomic_write)

bool "is_atomic" or "is_atomic_write", perhaps? We anyway know that we
only support atomic writes, and the RWF_ATOMIC operation is made
-EOPNOTSUPP for reads in kiocb_set_rw_flags(), so we may as well make
it "is_atomic" for bools.

> +{
> +	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
> +		return true;
> +
>  	return pos & (bdev_logical_block_size(bdev) - 1) ||
>  		!bdev_iter_is_aligned(bdev, iter);
>  }
>  
> +
>  #define DIO_INLINE_BIO_VECS 4
>  
>  static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
> @@ -71,6 +85,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>  	}
>  	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
>  	bio.bi_ioprio = iocb->ki_ioprio;
> +	if (iocb->ki_flags & IOCB_ATOMIC)
> +		bio.bi_opf |= REQ_ATOMIC;
>  
>  	ret = bio_iov_iter_get_pages(&bio, iter);
>  	if (unlikely(ret))
> @@ -341,6 +357,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>  		task_io_account_write(bio->bi_iter.bi_size);
>  	}
>  
> +	if (iocb->ki_flags & IOCB_ATOMIC)
> +		bio->bi_opf |= REQ_ATOMIC;
> +
>  	if (iocb->ki_flags & IOCB_NOWAIT)
>  		bio->bi_opf |= REQ_NOWAIT;
>  
> @@ -357,13 +376,14 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>  static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>  {
>  	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
> +	bool atomic_write = iocb->ki_flags & IOCB_ATOMIC;

ditto, bool is_atomic perhaps?

>  	loff_t pos = iocb->ki_pos;
>  	unsigned int nr_pages;
>  
>  	if (!iov_iter_count(iter))
>  		return 0;
>  
> -	if (blkdev_dio_unaligned(bdev, pos, iter))
> +	if (blkdev_dio_invalid(bdev, pos, iter, atomic_write))
>  		return -EINVAL;
>  
>  	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
> @@ -371,6 +391,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>  		if (is_sync_kiocb(iocb))
>  			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
>  		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
> +	} else if (atomic_write) {
> +		return -EINVAL;
>  	}
>  	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
>  }
> @@ -616,6 +638,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
>  	if (bdev_nowait(handle->bdev))
>  		filp->f_mode |= FMODE_NOWAIT;
>  
> +	if (bdev_can_atomic_write(handle->bdev) && filp->f_flags & O_DIRECT)
> +		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
> +
>  	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
>  	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
>  	filp->private_data = handle;
> -- 
> 2.31.1
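[For readers following along: the commit message above says an atomic write must obey "its rules, like naturally aligned offset", which the series checks via the common atomic_write_valid() helper. A minimal standalone C sketch of those rules as described in the series — length a power of two within the device's advertised [min, max] atomic write unit, and the offset naturally aligned to the length — is below. The function names here (atomic_write_valid_sketch, is_power_of_2) are illustrative, not the kernel implementation.]

/* Illustration only: the validity rules an atomic write is described
 * as needing to satisfy, mirrored in plain userspace C. */
#include <stdbool.h>
#include <stdint.h>

bool is_power_of_2(uint64_t n)
{
	return n && !(n & (n - 1));
}

bool atomic_write_valid_sketch(uint64_t pos, uint64_t len,
			       uint64_t unit_min, uint64_t unit_max)
{
	/* Total length must be a power of two ... */
	if (!is_power_of_2(len))
		return false;
	/* ... within the device's advertised atomic write unit range ... */
	if (len < unit_min || len > unit_max)
		return false;
	/* ... and the offset must be naturally aligned to the length. */
	if (pos & (len - 1))
		return false;
	return true;
}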
On 25/02/2024 14:46, Ritesh Harjani (IBM) wrote:
> John Garry <john.g.garry@oracle.com> writes:
>
>> Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.
>>
>> It must be ensured that the atomic write adheres to its rules, like
>> naturally aligned offset, so call blkdev_dio_invalid() ->
>> blkdev_atomic_write_valid() [with renaming blkdev_dio_unaligned() to
>> blkdev_dio_invalid()] for this purpose.
>>
>> In blkdev_direct_IO(), if the nr_pages exceeds BIO_MAX_VECS, then we cannot
>> produce a single BIO, so error in this case.
>
> BIO_MAX_VECS is 256, so around a 1MB limit with a 4K pagesize.
> Is there any mention of why this limit was chosen for now? Is it due to
> code complexity that we only support a single bio?

The reason is that lifting this limit adds extra complexity and I don't
see any HW out there which supports a larger atomic write unit yet. And
even if there were HW which supported this larger size, is there a use
case for a larger atomic write unit?

Nilay reports awupf = 63 for his controller:

# lspci
0040:01:00.0 Non-Volatile memory controller: KIOXIA Corporation Device 0025 (rev 01)

# nvme id-ctrl /dev/nvme0 -H
NVME Identify Controller:
vid       : 0x1e0f
ssvid     : 0x1014
sn        : Z130A00LTGZ8
mn        : 800GB NVMe Gen4 U.2 SSD
fr        : REV.C9S2
[...]
awun      : 65535
awupf     : 63
[...]

And the SCSI devices which I know of that support atomic writes can
only handle 32KB max.

> As I see it, you have still enabled request merging in the block layer
> for atomic requests, so it can essentially submit bio chains to the
> device driver. So why not support this case and let the user submit a
> request larger than 1MB?

Indeed, we could try to lift this limit and submit larger bios or
chains of bios for a single atomic write from userspace, but do we need
it now? Please also remember that we are always limited by the request
queue DMA capabilities.

>
>>
>> Finally set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes
>> and the associated file flag is for O_DIRECT.
>>
>> Signed-off-by: John Garry <john.g.garry@oracle.com>
>> ---
>>  block/fops.c | 31 ++++++++++++++++++++++++++++---
>>  1 file changed, 28 insertions(+), 3 deletions(-)
>>
>> diff --git a/block/fops.c b/block/fops.c
>> index 28382b4d097a..563189c2fc5a 100644
>> --- a/block/fops.c
>> +++ b/block/fops.c
>> @@ -34,13 +34,27 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
>>  	return opf;
>>  }
>>  
>> -static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
>> -				struct iov_iter *iter)
>> +static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
>> +				      struct iov_iter *iter)
>>  {
>> +	struct request_queue *q = bdev_get_queue(bdev);
>> +	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
>> +	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
>> +
>> +	return atomic_write_valid(pos, iter, min_bytes, max_bytes);
>
> generic_atomic_write_valid() would be a better name for this function.
> However, I have anyway commented about this in some previous patch.

ok

>
>> +}
>> +
>> +static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
>> +			       struct iov_iter *iter, bool atomic_write)
>
> bool "is_atomic" or "is_atomic_write", perhaps? We anyway know that we
> only support atomic writes, and the RWF_ATOMIC operation is made
> -EOPNOTSUPP for reads in kiocb_set_rw_flags(), so we may as well make
> it "is_atomic" for bools.

ok

>
>> +{
>> +	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
>> +		return true;
>> +
>>  	return pos & (bdev_logical_block_size(bdev) - 1) ||
>>  		!bdev_iter_is_aligned(bdev, iter);
>>  }
>>  
>> +
>>  #define DIO_INLINE_BIO_VECS 4
>>  
>>  static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>> @@ -71,6 +85,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
>>  	}
>>  	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
>>  	bio.bi_ioprio = iocb->ki_ioprio;
>> +	if (iocb->ki_flags & IOCB_ATOMIC)
>> +		bio.bi_opf |= REQ_ATOMIC;
>>  
>>  	ret = bio_iov_iter_get_pages(&bio, iter);
>>  	if (unlikely(ret))
>> @@ -341,6 +357,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>>  		task_io_account_write(bio->bi_iter.bi_size);
>>  	}
>>  
>> +	if (iocb->ki_flags & IOCB_ATOMIC)
>> +		bio->bi_opf |= REQ_ATOMIC;
>> +
>>  	if (iocb->ki_flags & IOCB_NOWAIT)
>>  		bio->bi_opf |= REQ_NOWAIT;
>>  
>> @@ -357,13 +376,14 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
>>  static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>>  {
>>  	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
>> +	bool atomic_write = iocb->ki_flags & IOCB_ATOMIC;
>
> ditto, bool is_atomic perhaps?

ok

>
>>  	loff_t pos = iocb->ki_pos;
>>  	unsigned int nr_pages;
>>  
>>  	if (!iov_iter_count(iter))
>>  		return 0;
>>  
>> -	if (blkdev_dio_unaligned(bdev, pos, iter))
>> +	if (blkdev_dio_invalid(bdev, pos, iter, atomic_write))
>>  		return -EINVAL;
>>  
>>  	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
>> @@ -371,6 +391,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
>>  		if (is_sync_kiocb(iocb))
>>  			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
>>  		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
>> +	} else if (atomic_write) {
>> +		return -EINVAL;
>>  	}
>>  	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
>>  }
>> @@ -616,6 +638,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
>>  	if (bdev_nowait(handle->bdev))
>>  		filp->f_mode |= FMODE_NOWAIT;
>>  
>> +	if (bdev_can_atomic_write(handle->bdev) && filp->f_flags & O_DIRECT)
>> +		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
>> +
>>  	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
>>  	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
>>  	filp->private_data = handle;
>> -- 
>> 2.31.1

Thanks,
John
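[A note for anyone decoding the awupf figure quoted above: per the NVMe spec, AWUPF is a 0's-based count of logical blocks, so the byte limit depends on the formatted LBA size. A hypothetical helper (not part of the patch) showing the conversion:]

/* NVMe AWUPF is 0's based and counted in logical blocks, so
 * awupf = 63 means 64 blocks: 32KB at a 512B LBA size, 256KB at 4KB. */
#include <stdint.h>

uint64_t awupf_to_bytes(uint16_t awupf, uint32_t lba_size)
{
	return ((uint64_t)awupf + 1) * lba_size;
}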
diff --git a/block/fops.c b/block/fops.c
index 28382b4d097a..563189c2fc5a 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -34,13 +34,27 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
 	return opf;
 }
 
-static bool blkdev_dio_unaligned(struct block_device *bdev, loff_t pos,
-				struct iov_iter *iter)
+static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos,
+				      struct iov_iter *iter)
 {
+	struct request_queue *q = bdev_get_queue(bdev);
+	unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q);
+	unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q);
+
+	return atomic_write_valid(pos, iter, min_bytes, max_bytes);
+}
+
+static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+			       struct iov_iter *iter, bool atomic_write)
+{
+	if (atomic_write && !blkdev_atomic_write_valid(bdev, pos, iter))
+		return true;
+
 	return pos & (bdev_logical_block_size(bdev) - 1) ||
 		!bdev_iter_is_aligned(bdev, iter);
 }
 
+
 #define DIO_INLINE_BIO_VECS 4
 
 static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
@@ -71,6 +85,8 @@ static ssize_t __blkdev_direct_IO_simple(struct kiocb *iocb,
 	}
 	bio.bi_iter.bi_sector = pos >> SECTOR_SHIFT;
 	bio.bi_ioprio = iocb->ki_ioprio;
+	if (iocb->ki_flags & IOCB_ATOMIC)
+		bio.bi_opf |= REQ_ATOMIC;
 
 	ret = bio_iov_iter_get_pages(&bio, iter);
 	if (unlikely(ret))
@@ -341,6 +357,9 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 		task_io_account_write(bio->bi_iter.bi_size);
 	}
 
+	if (iocb->ki_flags & IOCB_ATOMIC)
+		bio->bi_opf |= REQ_ATOMIC;
+
 	if (iocb->ki_flags & IOCB_NOWAIT)
 		bio->bi_opf |= REQ_NOWAIT;
 
@@ -357,13 +376,14 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
 	struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+	bool atomic_write = iocb->ki_flags & IOCB_ATOMIC;
 	loff_t pos = iocb->ki_pos;
 	unsigned int nr_pages;
 
 	if (!iov_iter_count(iter))
 		return 0;
 
-	if (blkdev_dio_unaligned(bdev, pos, iter))
+	if (blkdev_dio_invalid(bdev, pos, iter, atomic_write))
 		return -EINVAL;
 
 	nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
@@ -371,6 +391,8 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 		if (is_sync_kiocb(iocb))
 			return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
 		return __blkdev_direct_IO_async(iocb, iter, nr_pages);
+	} else if (atomic_write) {
+		return -EINVAL;
 	}
 	return __blkdev_direct_IO(iocb, iter, bio_max_segs(nr_pages));
 }
@@ -616,6 +638,9 @@ static int blkdev_open(struct inode *inode, struct file *filp)
 	if (bdev_nowait(handle->bdev))
 		filp->f_mode |= FMODE_NOWAIT;
 
+	if (bdev_can_atomic_write(handle->bdev) && filp->f_flags & O_DIRECT)
+		filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
 	filp->f_mapping = handle->bdev->bd_inode->i_mapping;
 	filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
 	filp->private_data = handle;
Support atomic writes by submitting a single BIO with the REQ_ATOMIC set.

It must be ensured that the atomic write adheres to its rules, like
naturally aligned offset, so call blkdev_dio_invalid() ->
blkdev_atomic_write_valid() [with renaming blkdev_dio_unaligned() to
blkdev_dio_invalid()] for this purpose.

In blkdev_direct_IO(), if the nr_pages exceeds BIO_MAX_VECS, then we cannot
produce a single BIO, so error in this case.

Finally set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes
and the associated file flag is for O_DIRECT.

Signed-off-by: John Garry <john.g.garry@oracle.com>
---
 block/fops.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)
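[To make the end-to-end behaviour concrete, here is a minimal userspace sketch of driving this path once the series is applied: open the block device with O_DIRECT, so blkdev_open() sets FMODE_CAN_ATOMIC_WRITE, then issue the write with the series' RWF_ATOMIC flag via pwritev2(). The RWF_ATOMIC fallback value below is an assumption taken from the series' uapi addition and may differ in your headers; the device path is a placeholder, and the device must advertise an atomic write unit of at least 16KB.]

/* Minimal sketch: a 16KB atomic write to a block device, assuming the
 * RWF_ATOMIC support added by this series. Error handling trimmed. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_ATOMIC
#define RWF_ATOMIC 0x00000040	/* assumed uapi value from the series */
#endif

int main(void)
{
	/* Placeholder device path. */
	int fd = open("/dev/nvme0n1", O_WRONLY | O_DIRECT);
	if (fd < 0)
		return 1;

	size_t len = 16 * 1024;
	void *buf;
	if (posix_memalign(&buf, 4096, len))	/* O_DIRECT alignment */
		return 1;
	memset(buf, 0xab, len);

	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/* Offset must be naturally aligned to the write length. */
	ssize_t ret = pwritev2(fd, &iov, 1, 0, RWF_ATOMIC);

	free(buf);
	close(fd);
	return ret == (ssize_t)len ? 0 : 1;
}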