[PATCHSET v3 0/16] Uncached buffered IO

Message ID: 20241111234842.2024180-1-axboe@kernel.dk

Jens Axboe Nov. 11, 2024, 11:37 p.m. UTC
Hi,

(A bit of version confusion: the numbering went v4 -> v2 -> v3, as v4
 was a relic of the 5 year old posting. Next will be v5, and the
 numbering will be consistent again)

5 years ago I posted patches adding support for RWF_UNCACHED, as a way
to do buffered IO that isn't page cache persistent. The approach back
then was to have private pages for IO, and then get rid of them once IO
was done. But that then runs into all the issues that O_DIRECT has, in
terms of synchronizing with the page cache.

So here's a new approach to the same concept, but using the page cache
as synchronization. That makes RWF_UNCACHED less special, in that it's
just page cache IO, except it prunes the ranges once IO is completed.

Why do this, you may ask? The tldr is that device speeds are only
getting faster, while reclaim is not. Doing normal buffered IO can be
very unpredictable, and suck up a lot of resources on the reclaim side.
This leads people to use O_DIRECT as a work-around, which brings its
own alignment restrictions on the buffer, offset, and length of the
IO. It's
also inherently synchronous, and now you need async IO as well. While
the latter isn't necessarily a big problem as we have good options
available there, it also should not be a requirement when all you want
to do is read or write some data without caching.

Even on desktop type systems, a normal NVMe device can fill the entire
page cache in seconds. On the big system I used for testing, there's a
lot more RAM, but also a lot more devices. As can be seen in some of the
results in the following patches, you can still fill RAM in seconds even
when there's 1TB of it. Hence this problem isn't solely a "big
hyperscaler system" issue, it's common across the board.

Common for both reads and writes with RWF_UNCACHED is that they use the
page cache for IO. Reads work just like a normal buffered read would,
with the only exception being that the touched ranges will get pruned
after data has been copied. For writes, the ranges will get writeback
kicked off before the syscall returns, and then writeback completion
will prune the range. Hence writes aren't synchronous, and it's easy to
pipeline writes using RWF_UNCACHED. Folios that aren't instantiated by
RWF_UNCACHED IO are left untouched. This means that uncached IO
will take advantage of the page cache for uptodate data, but not leave
anything it instantiated/created in cache.
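
To illustrate the write pipelining, here is a minimal userspace sketch
(the file name, chunk count, and the RWF_UNCACHED fallback value are my
assumptions, not taken from the patches):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080	/* assumed uapi value, verify locally */
#endif

int main(void)
{
	static char buf[1 << 20];	/* 1MB chunk */
	off_t off = 0;
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return 1;
	memset(buf, 0xa5, sizeof(buf));
	for (int i = 0; i < 1024; i++) {
		struct iovec iov = {
			.iov_base = buf,
			.iov_len = sizeof(buf),
		};
		ssize_t ret = pwritev2(fd, &iov, 1, off, RWF_UNCACHED);

		if (ret < 0) {
			/* EOPNOTSUPP: fs doesn't have FOP_UNCACHED set */
			perror("pwritev2");
			return 1;
		}
		off += ret;
	}
	close(fd);
	return 0;
}

Each pwritev2() kicks off writeback for its range and returns, with
writeback completion pruning the folios, so the loop keeps the device
busy without growing the page cache. Durability still requires fsync(),
as with any buffered write.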

File systems need to support this. The patches add support for the
generic filemap helpers, and for iomap. Then ext4 and XFS are marked as
supporting it. The last patch adds support for btrfs as well, lightly
tested. The read side is already done by filemap, only the write side
needs a bit of help. The amount of code here is really trivial, and the
only reason the fs opt-in is necessary is so that RWF_UNCACHED IO
returns -EOPNOTSUPP if the fs uses neither the generic paths nor
iomap. Adding "support" to other file systems should be
trivial, most of the time just a one-liner adding FOP_UNCACHED to the
fop_flags in the file_operations struct.
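
For illustration, a sketch of what that opt-in looks like (the
filesystem and the surrounding fields are placeholders; FOP_UNCACHED is
the flag this series adds):

/* hypothetical filesystem enabling uncached IO via its fop_flags */
static const struct file_operations myfs_file_operations = {
	.llseek		= generic_file_llseek,
	.read_iter	= generic_file_read_iter,
	.write_iter	= generic_file_write_iter,
	.mmap		= generic_file_mmap,
	.fop_flags	= FOP_UNCACHED,
};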

Performance results are in patch 8 for reads and patch 10 for writes,
with the tldr being that I see about a 65% improvement in performance
for both, with fully predictable IO times. CPU reduction is substantial
as well, with no kswapd activity at all for reclaim when using uncached
IO.

Using it from applications is trivial - just set RWF_UNCACHED for the
read or write, using pwritev2(2) or preadv2(2). For io_uring, same
thing, just set RWF_UNCACHED in sqe->rw_flags for a buffered read/write
operation. And that's it.
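
For example, a minimal liburing sketch of one uncached buffered read
(queue depth, file name, and the RWF_UNCACHED fallback value are my
assumptions):

#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#ifndef RWF_UNCACHED
#define RWF_UNCACHED	0x00000080	/* assumed uapi value, verify locally */
#endif

int main(void)
{
	struct io_uring ring;
	struct io_uring_sqe *sqe;
	struct io_uring_cqe *cqe;
	char buf[4096];
	int fd = open("testfile", O_RDONLY);

	if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0)
		return 1;

	sqe = io_uring_get_sqe(&ring);
	if (!sqe)
		return 1;
	io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
	sqe->rw_flags = RWF_UNCACHED;	/* uncached buffered read */

	io_uring_submit(&ring);
	if (io_uring_wait_cqe(&ring, &cqe) == 0) {
		printf("read: %d\n", cqe->res);
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	close(fd);
	return 0;
}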

Patches 1..7 are just prep patches, and should have no functional
changes at all. Patch 8 adds support for the filemap path for
RWF_UNCACHED reads, patch 10 adds support for filemap RWF_UNCACHED
writes, and patches 12..16 add ext4, xfs/iomap, and btrfs support.

I ran this through xfstests, and it found some of the issues listed as
fixed below. This posted version passes the whole generic suite of
xfstests. The xfstests patch is here:

https://lore.kernel.org/linux-mm/3da73668-a954-47b9-b66d-bb2e719f5590@kernel.dk/

And git tree for the patches is here:

https://git.kernel.dk/cgit/linux/log/?h=buffered-uncached.6


 fs/btrfs/bio.c                 |   4 +-
 fs/btrfs/bio.h                 |   2 +
 fs/btrfs/extent_io.c           |   8 ++-
 fs/btrfs/file.c                |  10 +++-
 fs/ext4/ext4.h                 |   1 +
 fs/ext4/file.c                 |   2 +-
 fs/ext4/inline.c               |   7 ++-
 fs/ext4/inode.c                |  18 +++++-
 fs/ext4/page-io.c              |  28 +++++----
 fs/iomap/buffered-io.c         |  15 ++++-
 fs/xfs/xfs_aops.c              |   7 ++-
 fs/xfs/xfs_file.c              |   4 +-
 include/linux/fs.h             |  10 +++-
 include/linux/iomap.h          |   4 +-
 include/linux/page-flags.h     |   5 ++
 include/linux/pagemap.h        |  34 +++++++++++
 include/trace/events/mmflags.h |   3 +-
 include/uapi/linux/fs.h        |   6 +-
 mm/filemap.c                   | 101 ++++++++++++++++++++++++++++-----
 mm/readahead.c                 |  22 +++++--
 mm/swap.c                      |   2 +
 mm/truncate.c                  |  33 ++++++-----
 22 files changed, 262 insertions(+), 64 deletions(-)

Since v2
- Add a patch for btrfs to make the write side work; the read side was
  already covered by the generic filemap changes. Now btrfs is
  FOP_UNCACHED enabled as well.
- Add folio_unmap_invalidate() helper, and use that from both the core
  code and the uncached handling.
- Add filemap_uncached_read() helper to encapsulate the uncached
  handling on the read side.
- Enable handling of invalidation of mapped folios.
- Clear the uncached flag in a looked up folio if FGP_UNCACHED isn't
  set. In that case there are competing non-uncached page cache users,
  and the folio should not get invalidated.
- Various little tweaks and comments.
- Ran fsstress with read/write uncached support, no issues seen.
- Fix up a commit message.
- Rebase on 6.12-rc7.

Comments

Jens Axboe Nov. 12, 2024, 1:31 a.m. UTC
On 11/11/24 4:37 PM, Jens Axboe wrote:
> I ran this through xfstests, and it found some of the issues listed as
> fixed below. This posted version passes the whole generic suite of
> xfstests. The xfstests patch is here:

FWIW, the xfs grouping also ran to completion after that, some hours
later... At least from the "is it semi sane?" perspective, the answer
should be yes.