Message ID | 20230720181310.71589-2-axboe@kernel.dk (mailing list archive)
---|---
State | Superseded, archived
Series | Improve async iomap DIO performance
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Thu, Jul 20, 2023 at 12:13:03PM -0600, Jens Axboe wrote:
> Make the logic a bit easier to follow:
>
> 1) Add a release_bio out path, as everybody needs to touch that, and
>    have our bio ref check jump there if it's non-zero.
> 2) Add a kiocb local variable.
> 3) Add comments for each of the three conditions (sync, inline, or
>    async workqueue punt).
>
> No functional changes in this patch.
>
> Signed-off-by: Jens Axboe <axboe@kernel.dk>

Thanks for deindentifying this,
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/direct-io.c | 46 +++++++++++++++++++++++++++++---------------
>  1 file changed, 31 insertions(+), 15 deletions(-)
>
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index ea3b868c8355..0ce60e80c901 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -152,27 +152,43 @@ void iomap_dio_bio_end_io(struct bio *bio)
>  {
>  	struct iomap_dio *dio = bio->bi_private;
>  	bool should_dirty = (dio->flags & IOMAP_DIO_DIRTY);
> +	struct kiocb *iocb = dio->iocb;
>
>  	if (bio->bi_status)
>  		iomap_dio_set_error(dio, blk_status_to_errno(bio->bi_status));
> +	if (!atomic_dec_and_test(&dio->ref))
> +		goto release_bio;
>
> -	if (atomic_dec_and_test(&dio->ref)) {
> -		if (dio->wait_for_completion) {
> -			struct task_struct *waiter = dio->submit.waiter;
> -			WRITE_ONCE(dio->submit.waiter, NULL);
> -			blk_wake_io_task(waiter);
> -		} else if (dio->flags & IOMAP_DIO_WRITE) {
> -			struct inode *inode = file_inode(dio->iocb->ki_filp);
> -
> -			WRITE_ONCE(dio->iocb->private, NULL);
> -			INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
> -			queue_work(inode->i_sb->s_dio_done_wq, &dio->aio.work);
> -		} else {
> -			WRITE_ONCE(dio->iocb->private, NULL);
> -			iomap_dio_complete_work(&dio->aio.work);
> -		}
> +	/*
> +	 * Synchronous dio, task itself will handle any completion work
> +	 * that needs after IO. All we need to do is wake the task.
> +	 */
> +	if (dio->wait_for_completion) {
> +		struct task_struct *waiter = dio->submit.waiter;
> +
> +		WRITE_ONCE(dio->submit.waiter, NULL);
> +		blk_wake_io_task(waiter);
> +		goto release_bio;
> +	}
> +
> +	/* Read completion can always complete inline. */
> +	if (!(dio->flags & IOMAP_DIO_WRITE)) {
> +		WRITE_ONCE(iocb->private, NULL);
> +		iomap_dio_complete_work(&dio->aio.work);
> +		goto release_bio;
>  	}
>
> +	/*
> +	 * Async DIO completion that requires filesystem level completion work
> +	 * gets punted to a work queue to complete as the operation may require
> +	 * more IO to be issued to finalise filesystem metadata changes or
> +	 * guarantee data integrity.
> +	 */
> +	WRITE_ONCE(iocb->private, NULL);
> +	INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
> +	queue_work(file_inode(iocb->ki_filp)->i_sb->s_dio_done_wq,
> +			&dio->aio.work);
> +release_bio:
>  	if (should_dirty) {
>  		bio_check_pages_dirty(bio);
>  	} else {
> --
> 2.40.1
>
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index ea3b868c8355..0ce60e80c901 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -152,27 +152,43 @@ void iomap_dio_bio_end_io(struct bio *bio)
 {
 	struct iomap_dio *dio = bio->bi_private;
 	bool should_dirty = (dio->flags & IOMAP_DIO_DIRTY);
+	struct kiocb *iocb = dio->iocb;
 
 	if (bio->bi_status)
 		iomap_dio_set_error(dio, blk_status_to_errno(bio->bi_status));
+	if (!atomic_dec_and_test(&dio->ref))
+		goto release_bio;
 
-	if (atomic_dec_and_test(&dio->ref)) {
-		if (dio->wait_for_completion) {
-			struct task_struct *waiter = dio->submit.waiter;
-			WRITE_ONCE(dio->submit.waiter, NULL);
-			blk_wake_io_task(waiter);
-		} else if (dio->flags & IOMAP_DIO_WRITE) {
-			struct inode *inode = file_inode(dio->iocb->ki_filp);
-
-			WRITE_ONCE(dio->iocb->private, NULL);
-			INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
-			queue_work(inode->i_sb->s_dio_done_wq, &dio->aio.work);
-		} else {
-			WRITE_ONCE(dio->iocb->private, NULL);
-			iomap_dio_complete_work(&dio->aio.work);
-		}
+	/*
+	 * Synchronous dio, task itself will handle any completion work
+	 * that needs after IO. All we need to do is wake the task.
+	 */
+	if (dio->wait_for_completion) {
+		struct task_struct *waiter = dio->submit.waiter;
+
+		WRITE_ONCE(dio->submit.waiter, NULL);
+		blk_wake_io_task(waiter);
+		goto release_bio;
+	}
+
+	/* Read completion can always complete inline. */
+	if (!(dio->flags & IOMAP_DIO_WRITE)) {
+		WRITE_ONCE(iocb->private, NULL);
+		iomap_dio_complete_work(&dio->aio.work);
+		goto release_bio;
 	}
 
+	/*
+	 * Async DIO completion that requires filesystem level completion work
+	 * gets punted to a work queue to complete as the operation may require
+	 * more IO to be issued to finalise filesystem metadata changes or
+	 * guarantee data integrity.
+	 */
+	WRITE_ONCE(iocb->private, NULL);
+	INIT_WORK(&dio->aio.work, iomap_dio_complete_work);
+	queue_work(file_inode(iocb->ki_filp)->i_sb->s_dio_done_wq,
+			&dio->aio.work);
+release_bio:
 	if (should_dirty) {
 		bio_check_pages_dirty(bio);
 	} else {
Make the logic a bit easier to follow:

1) Add a release_bio out path, as everybody needs to touch that, and
   have our bio ref check jump there if it's non-zero.
2) Add a kiocb local variable.
3) Add comments for each of the three conditions (sync, inline, or
   async workqueue punt).

No functional changes in this patch.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/iomap/direct-io.c | 46 +++++++++++++++++++++++++++++---------------
 1 file changed, 31 insertions(+), 15 deletions(-)
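For readers who just want the shape of the refactoring rather than the iomap specifics, here is a minimal standalone sketch (plain C, not the kernel code): drop the reference first, jump to a shared release label when this was not the last reference, and then handle the sync, inline-read, and punt-to-workqueue cases at the top indentation level instead of inside nested if/else. All names below (demo_dio, demo_end_io, demo_release_bio) and the printf stand-ins are hypothetical.

```c
/*
 * Illustrative sketch only -- not the iomap code. It mimics the control
 * flow of the refactored end_io handler described in the commit message.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct demo_dio {
	atomic_int ref;			/* submission + in-flight bio references */
	bool wait_for_completion;	/* a synchronous submitter is waiting */
	bool is_write;			/* writes may need fs completion work */
};

static void demo_release_bio(struct demo_dio *dio)
{
	(void)dio;			/* unused in this sketch */
	/* Common tail: every path releases the bio resources. */
	printf("release bio\n");
}

static void demo_end_io(struct demo_dio *dio)
{
	/* Not the last reference: only the shared release path runs. */
	if (atomic_fetch_sub(&dio->ref, 1) != 1)
		goto release_bio;

	/* Synchronous dio: just wake the waiting task. */
	if (dio->wait_for_completion) {
		printf("wake waiter\n");
		goto release_bio;
	}

	/* Reads can always complete inline. */
	if (!dio->is_write) {
		printf("complete inline\n");
		goto release_bio;
	}

	/* Async writes get punted to deferred completion work. */
	printf("queue completion work\n");
release_bio:
	demo_release_bio(dio);
}

int main(void)
{
	struct demo_dio dio = { .wait_for_completion = false, .is_write = true };

	atomic_init(&dio.ref, 1);
	demo_end_io(&dio);	/* last ref: prints "queue completion work", then "release bio" */
	return 0;
}
```

The kernel function does the same thing with atomic_dec_and_test(), blk_wake_io_task(), and queue_work() in place of the printf stand-ins; the point of the pattern is that the shared release work sits behind one label and each completion case reads straight down.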