Message ID | 20220516164718.2419891-5-shr@fb.com (mailing list archive) |
---|---|
State | New |
Series | io-uring/xfs: support async buffered writes |
On Mon 16-05-22 09:47:06, Stefan Roesch wrote:
> This adds async buffered write support to iomap. The support is focused
> on the changes necessary to support XFS with iomap.
>
> Support for other filesystems might require additional changes.
>
> Signed-off-by: Stefan Roesch <shr@fb.com>
> ---
>  fs/iomap/buffered-io.c | 21 ++++++++++++++++++++-
>  1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 1ffdc7078e7d..ceb3091f94c2 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -580,13 +580,20 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>  	size_t from = offset_in_folio(folio, pos), to = from + len;
>  	size_t poff, plen;
>  	gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
> +	bool no_wait = (iter->flags & IOMAP_NOWAIT);
> +
> +	if (no_wait)
> +		gfp = GFP_NOIO;

GFP_NOIO means that direct reclaim is still allowed. Not sure whether you
want to enter direct reclaim from the io_uring fast path because in theory
that can still sleep. GFP_NOWAIT would be a more natural choice...

								Honza
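For reference, the distinction Jan draws here comes straight from the gfp mask definitions. Simplified from include/linux/gfp.h (the exact definitions vary slightly between kernel versions): __GFP_RECLAIM contains __GFP_DIRECT_RECLAIM, so GFP_NOIO can still enter direct reclaim and sleep, while GFP_NOWAIT only wakes kswapd and never blocks.

#define __GFP_RECLAIM	(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

#define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)		/* never sleeps */
#define GFP_NOIO	(__GFP_RECLAIM)			/* direct reclaim allowed, no I/O during reclaim */
#define GFP_NOFS	(__GFP_RECLAIM | __GFP_IO)	/* direct reclaim + I/O, no FS recursion */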
On 5/17/22 4:14 AM, Jan Kara wrote:
> On Mon 16-05-22 09:47:06, Stefan Roesch wrote:
>> This adds async buffered write support to iomap. The support is focused
>> on the changes necessary to support XFS with iomap.
>>
>> Support for other filesystems might require additional changes.
>>
>> Signed-off-by: Stefan Roesch <shr@fb.com>
>> ---
>>  fs/iomap/buffered-io.c | 21 ++++++++++++++++++++-
>>  1 file changed, 20 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
>> index 1ffdc7078e7d..ceb3091f94c2 100644
>> --- a/fs/iomap/buffered-io.c
>> +++ b/fs/iomap/buffered-io.c
>> @@ -580,13 +580,20 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>>  	size_t from = offset_in_folio(folio, pos), to = from + len;
>>  	size_t poff, plen;
>>  	gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
>> +	bool no_wait = (iter->flags & IOMAP_NOWAIT);
>> +
>> +	if (no_wait)
>> +		gfp = GFP_NOIO;
>
> GFP_NOIO means that direct reclaim is still allowed. Not sure whether you
> want to enter direct reclaim from io_uring fast path because in theory that
> can still sleep. GFP_NOWAIT would be a more natural choice...

I'll change it to GFP_NOWAIT in the next version of the patch series.

>
> Honza
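To make the agreed change concrete, here is a minimal sketch of the nowait allocation-mask selection, assuming the GFP_NOWAIT approach discussed above; iomap_write_begin_gfp() is a hypothetical helper used only for illustration and is not part of the posted series.

/*
 * Hypothetical helper (illustration only): choose the allocation mask for
 * __iomap_write_begin(). Under IOMAP_NOWAIT, use GFP_NOWAIT so the
 * allocator never enters direct reclaim and thus cannot sleep on the
 * io_uring fast path; the blocking path keeps GFP_NOFS | __GFP_NOFAIL.
 */
static inline gfp_t iomap_write_begin_gfp(const struct iomap_iter *iter)
{
	if (iter->flags & IOMAP_NOWAIT)
		return GFP_NOWAIT;
	return GFP_NOFS | __GFP_NOFAIL;
}

With GFP_NOWAIT the allocation can fail, so the nowait path still needs the -EAGAIN bail-out after iomap_page_create_gfp() that the patch already adds.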
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 1ffdc7078e7d..ceb3091f94c2 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -580,13 +580,20 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	size_t from = offset_in_folio(folio, pos), to = from + len;
 	size_t poff, plen;
 	gfp_t gfp = GFP_NOFS | __GFP_NOFAIL;
+	bool no_wait = (iter->flags & IOMAP_NOWAIT);
+
+	if (no_wait)
+		gfp = GFP_NOIO;
 
 	if (folio_test_uptodate(folio))
 		return 0;
 	folio_clear_error(folio);
 
-	if (!iop && nr_blocks > 1)
+	if (!iop && nr_blocks > 1) {
 		iop = iomap_page_create_gfp(iter->inode, folio, nr_blocks, gfp);
+		if (no_wait && !iop)
+			return -EAGAIN;
+	}
 
 	do {
 		iomap_adjust_read_range(iter->inode, folio, &block_start,
@@ -603,6 +610,8 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 			if (WARN_ON_ONCE(iter->flags & IOMAP_UNSHARE))
 				return -EIO;
 			folio_zero_segments(folio, poff, from, to, poff + plen);
+		} else if (no_wait) {
+			return -EAGAIN;
 		} else {
 			int status = iomap_read_folio_sync(block_start, folio,
 					poff, plen, srcmap);
@@ -633,6 +642,9 @@ static int iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 	unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
 	int status = 0;
 
+	if (iter->flags & IOMAP_NOWAIT)
+		fgp |= FGP_NOWAIT;
+
 	BUG_ON(pos + len > iter->iomap.offset + iter->iomap.length);
 	if (srcmap != &iter->iomap)
 		BUG_ON(pos + len > srcmap->offset + srcmap->length);
@@ -790,6 +802,10 @@ static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i)
 		 * Otherwise there's a nasty deadlock on copying from the
 		 * same page as we're writing to, without it being marked
 		 * up-to-date.
+		 *
+		 * For async buffered writes the assumption is that the user
+		 * page has already been faulted in. This can be optimized by
+		 * faulting the user page in the prepare phase of io-uring.
 		 */
 		if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
 			status = -EFAULT;
@@ -845,6 +861,9 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *i,
 	};
 	int ret;
 
+	if (iocb->ki_flags & IOCB_NOWAIT)
+		iter.flags |= IOMAP_NOWAIT;
+
 	while ((ret = iomap_iter(&iter, ops)) > 0)
 		iter.processed = iomap_write_iter(&iter, i);
 	if (iter.pos == iocb->ki_pos)
This adds async buffered write support to iomap. The support is focused
on the changes necessary to support XFS with iomap.

Support for other filesystems might require additional changes.

Signed-off-by: Stefan Roesch <shr@fb.com>
---
 fs/iomap/buffered-io.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)
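For readers coming from the io_uring side, the sketch below shows how a filesystem's ->write_iter could build on this patch. It is illustrative only and not XFS code; example_write_iter() and example_iomap_ops are placeholder names. The idea is to avoid blocking on filesystem locks when IOCB_NOWAIT is set and let the -EAGAIN from iomap_file_buffered_write() propagate back to io_uring, which then retries the write from its worker pool.

#include <linux/fs.h>
#include <linux/iomap.h>
#include <linux/uio.h>

/* Placeholder: a real filesystem would pass its own iomap_ops here. */
static const struct iomap_ops example_iomap_ops;

static ssize_t example_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t ret;

	if (iocb->ki_flags & IOCB_NOWAIT) {
		/* Never sleep on the inode lock in the nowait path. */
		if (!inode_trylock(inode))
			return -EAGAIN;
	} else {
		inode_lock(inode);
	}

	/*
	 * iomap_file_buffered_write() now translates IOCB_NOWAIT into
	 * IOMAP_NOWAIT itself, so -EAGAIN from the page cache or the
	 * allocation paths simply bubbles up to the caller.
	 */
	ret = iomap_file_buffered_write(iocb, from, &example_iomap_ops);

	inode_unlock(inode);
	return ret;
}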