[0/7] Move prefaulting into write slow paths

Message ID: 20250129181749.C229F6F3@davehans-spike.ostc.intel.com

Message

Dave Hansen Jan. 29, 2025, 6:17 p.m. UTC
tl;dr: The VFS and several filesystems have some suspect prefaulting
code. It is unnecessarily slow for the common case where a write's
source buffer is resident and does not need to be faulted in.

Move these "prefaulting" operations to slow paths where they ensure
forward progress but they do not slow down the fast paths. This
optimizes the fast path to touch userspace once instead of twice.

Also update somewhat dubious comments about the need for prefaulting.

This has been very lightly tested. I have not tested any of the fs/
code explicitly.

I started by just trying to deal with generic_perform_write() and
looked at a few more cases after Dave Chinner mentioned there was
some apparent proliferation of its pattern across the tree.

I think the first patch is probably OK for 6.14. If folks are OK
with the other ones, perhaps maintainers can just pick them up
individually for their trees.

--

More detailed cover letter below.

There are logically two pieces of data involved in a write operation:
a source that is read from and a target which is written to, like:

	sys_write(target_fd, &source, len);

This is implemented in generic VFS code and several filesystems
with loops that look something like this:

	do {
		fault_in_iov_iter_readable(source);
		// lock target folios
		copy_folio_from_iter_atomic();
		// unlock target folios
	} while (iov_iter_count(iter));

They fault in the source first and then proceed to do the write.  This
fault is ostensibly done for a few reasons:

 1. Deadlock avoidance if the source and target are the same
    folios.
 2. To check the user address that copy_folio_from_iter_atomic()
    will touch because atomic user copies do not check the address.
 3. "Optimization"

I'm not sure any of these are actually valid reasons.

The "atomic" user copy functions disable page fault handling because
page faults are not very atomic. This makes them naturally resistant
to deadlocking in page fault handling. They take the page fault
itself but short-circuit any handling.
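
The scenario that reason #1 worries about looks something like this
hypothetical userspace sketch, where the write() source is a
MAP_SHARED mapping of the very folio the write() targets:

	char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	write(fd, buf, 4096);	/* source folio == target folio */

If the kernel handled the fault on 'buf' while holding the target
folio's lock, it could end up trying to take the same lock twice.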

copy_folio_from_iter_atomic() also *does* have user address checking.
I get a little lost in the iov_iter code, but it does know when it's
dealing with userspace versus kernel addresses and does seem to know
when to do things like copy_from_user_iter() (which does access_ok())
versus memcpy_from_iter().[1]
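
For reference, the user-copy step looks roughly like this (paraphrased
and simplified from my reading of lib/iov_iter.c; the exact code varies
by kernel version):

	static size_t copy_from_user_iter(void __user *iter_from, size_t progress,
					  size_t len, void *to, void *priv2)
	{
		size_t res = len;	/* default: nothing copied */

		if (access_ok(iter_from, len)) {
			to += progress;
			res = raw_copy_from_user(to, iter_from, len);
		}
		return res;		/* bytes _not_ copied */
	}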

The "optimization" is for the case where 'source' is not faulted in.
It avoids the cost of taking a "failed" page fault (one that cannot be
handled because of the atomic copy) and of then having to drop locks
and repeat the fault.

But the common case is surely one where 'source' *is* faulted in.
Usually, a program will put some data in a buffer and then write it to
a file in very short order. Think of something as simple as:

	sprintf(buf, "Hello world");
	write(fd, buf, len);

In this common case, the fault_in_iov_iter_readable() incurs the cost
of touching 'buf' in userspace twice.  On x86, that means at least an
extra STAC/CLAC pair.
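
That is, the fast path today does roughly this (annotated sketch):

	fault_in_iov_iter_readable(source);	/* STAC ... touch 'buf' ... CLAC */
	...
	copy_folio_from_iter_atomic(...);	/* STAC ... copy 'buf' ... CLAC */

Both user accesses hit the same resident buffer; the first one is pure
overhead in the common case.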

Optimize for the case where the source buffer has already been faulted
in. Ensure forward progress by doing the fault in slow paths when the
atomic copies are not making progress.

That logically changes the above loop to something more akin to:

	do {
		// lock target folios
		copied = copy_folio_from_iter_atomic();
		// unlock target folios

		if (unlikely(!copied))
			fault_in_iov_iter_readable(source);
	} while (iov_iter_count(iter));
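
Fleshed out a bit (a hand-written sketch of the pattern, not the
actual patch), the forward progress guarantee is:

	do {
		size_t copied;

		// lock target folios
		copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);
		// unlock target folios

		if (unlikely(copied == 0)) {
			/*
			 * No progress, probably because 'source' is not
			 * faulted in. Fault it in with no folio locks
			 * held and retry. If nothing can be faulted in,
			 * the address is truly bad: bail out.
			 */
			if (fault_in_iov_iter_readable(i, bytes) == bytes)
				return -EFAULT;
		}
	} while (iov_iter_count(i));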

1. The comment about atomic user copies not checking addresses seems
   to have originated in 08291429cfa6 ("mm: fix pagecache write
   deadlocks") circa 2007. It was true then, but is no longer true.

 fs/bcachefs/fs-io-buffered.c |   30 ++++++++++--------------------
 fs/btrfs/file.c              |   20 +++++++++++---------
 fs/fuse/file.c               |   14 ++++++++++----
 fs/iomap/buffered-io.c       |   24 +++++++++---------------
 fs/netfs/buffered_write.c    |   13 +++----------
 fs/ntfs3/file.c              |   17 ++++++++++++-----
 mm/filemap.c                 |   26 +++++++++++++++-----------
 7 files changed, 70 insertions(+), 74 deletions(-)

Cc: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
Cc: ntfs3@lists.linux.dev
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: linux-bcachefs@vger.kernel.org
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org
Cc: David Howells <dhowells@redhat.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: netfs@lists.linux.dev

Comments

Kent Overstreet Jan. 30, 2025, 7:44 a.m. UTC | #1
On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> tl;dr: The VFS and several filesystems have some suspect prefaulting
> code. It is unnecessarily slow for the common case where a write's
> source buffer is resident and does not need to be faulted in.
> 
> Move these "prefaulting" operations to slow paths where they ensure
> forward progress but they do not slow down the fast paths. This
> optimizes the fast path to touch userspace once instead of twice.
> 
> Also update somewhat dubious comments about the need for prefaulting.
> 
> This has been very lightly tested. I have not tested any of the fs/
> code explicitly.

Q: what is preventing us from posting code to the list that's been
properly tested?

I just got another bcachefs patch series that blew up immediately when I
threw it at my CI.

This is getting _utterly ridiculous_.

I built multiuser test infrastructure with a nice dashboard that anyone
can use, and the only response I've gotten from the old guard is Ted
jumping in every time I talk about it to say "no, we just don't want to
rewrite our stuff on _your_ stuff!". Real helpful, that.

>  1. Deadlock avoidance if the source and target are the same
>     folios.
>  2. To check the user address that copy_folio_from_iter_atomic()
>     will touch because atomic user copies do not check the address.
>  3. "Optimization"
> 
> I'm not sure any of these are actually valid reasons.
> 
> The "atomic" user copy functions disable page fault handling because
> page faults are not very atomic. This makes them naturally resistant
> to deadlocking in page fault handling. They take the page fault
> itself but short-circuit any handling.

#1 is emphatically valid: the deadlock avoidance is in _both_ using
_atomic when we have locks held, and doing the actual faulting with
locks dropped... either alone would be a buggy incomplete solution.

This needs to be reflected and fully described in the comments, since
it's subtle and a lot of people don't fully grok what's going on.

I'm fairly certain we have ioctl code where this is mishandled and thus
buggy, because it takes some fairly particular testing for lockdep to
spot it.

> copy_folio_from_iter_atomic() also *does* have user address checking.
> I get a little lost in the iov_iter code, but it does know when it's
> dealing with userspace versus kernel addresses and does seem to know
> when to do things like copy_from_user_iter() (which does access_ok())
> versus memcpy_from_iter().[1]
> 
> The "optimization" is for the case where 'source' is not faulted in.
> It can avoid the cost of a "failed" page fault (it will fail to be
> handled because of the atomic copy) and then needing to drop locks and
> repeat the fault.

I do agree on moving it to the slowpath - I think we can expect the case
where the process's immediate workingset is faulted out while it's
running to be vanishingly small.

Dave Hansen Jan. 30, 2025, 4:04 p.m. UTC | #2
On 1/29/25 23:44, Kent Overstreet wrote:
> On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
>> tl;dr: The VFS and several filesystems have some suspect prefaulting
>> code. It is unnecessarily slow for the common case where a write's
>> source buffer is resident and does not need to be faulted in.
>>
>> Move these "prefaulting" operations to slow paths where they ensure
>> forward progress but they do not slow down the fast paths. This
>> optimizes the fast path to touch userspace once instead of twice.
>>
>> Also update somewhat dubious comments about the need for prefaulting.
>>
>> This has been very lightly tested. I have not tested any of the fs/
>> code explicitly.
> 
> Q: what is preventing us from posting code to the list that's been
> properly tested?
> 
> I just got another bcachefs patch series that blew up immediately when I
> threw it at my CI.
> 
> This is getting _utterly ridiculous_.

In this case, I started with a single patch for generic code that I knew
I could test. In fact, I even had the 9-year-old binary sitting on my
test box.

Dave Chinner suggested that I take the generic pattern and go look a
_bit_ more widely in the tree for similar patterns. That search paid off, I
think. But I ended up touching corners of the tree I don't know well and
don't have test cases for.

> I built multiuser test infrastructure with a nice dashboard that anyone
> can use, and the only response I've gotten from the old guard is Ted
> jumping in every time I talk about it to say "no, we just don't want to
> rewrite our stuff on _your_ stuff!". Real helpful, that.

Sounds pretty cool! Is this something that I could have and should have
used to test the bcachefs patch?  I see some trees in here:

	https://evilpiepirate.org/~testdashboard/ci

But I'm not sure how to submit patches to it. Do you need to add users
manually? I wonder, though, how we could make it easier to find. I
didn't see anything in Documentation/filesystems/bcachefs/ about this.

>>  1. Deadlock avoidance if the source and target are the same
>>     folios.
>>  2. To check the user address that copy_folio_from_iter_atomic()
>>     will touch because atomic user copies do not check the address.
>>  3. "Optimization"
>>
>> I'm not sure any of these are actually valid reasons.
>>
>> The "atomic" user copy functions disable page fault handling because
>> page faults are not very atomic. This makes them naturally resistant
>> to deadlocking in page fault handling. They take the page fault
>> itself but short-circuit any handling.
> 
> #1 is emphatically valid: the deadlock avoidance is in _both_ using
> _atomic when we have locks held, and doing the actual faulting with
> locks dropped... either alone would be a buggy incomplete solution.

I was (badly) attempting to separate out the two different problems:

	1. Doing lock_page() twice, which I was mostly calling the
	   "deadlock"
	2. Retrying the copy_folio_from_iter_atomic() forever which I
	   was calling the "livelock"

Disabling page faults fixes #1.
Doing faulting outside the locks somewhere fixes #2.

So when I was talking about "Deadlock avoidance" in the cover letter, I
was trying to focus on the double lock_page() problem.
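
In other words, the livelock flavor (#2) is this shape (sketch):

	do {
		// lock target folios
		copied = copy_folio_from_iter_atomic(...);	/* returns 0 */
		// unlock target folios
		/* no fault-in anywhere: spin here forever */
	} while (iov_iter_count(iter));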

> This needs to be reflected and fully described in the comments, since
> it's subtle and a lot of people don't fully grok what's going on.

Any suggestions for fully describing the situation? I tried to sprinkle
comments liberally but I'm also painfully aware that I'm not doing a
perfect job of talking about the fs code.

> I'm fairly certain we have ioctl code where this is mishandled and thus
> buggy, because it takes some fairly particular testing for lockdep to
> spot it.

Yeah, I wouldn't be surprised. I was having a little chuckle thinking
about how many engineers have discovered and fixed this problem
independently over the years in all the file system code in all the OSes.

>> copy_folio_from_iter_atomic() also *does* have user address checking.
>> I get a little lost in the iov_iter code, but it does know when it's
>> dealing with userspace versus kernel addresses and does seem to know
>> when to do things like copy_from_user_iter() (which does access_ok())
>> versus memcpy_from_iter().[1]
>>
>> The "optimization" is for the case where 'source' is not faulted in.
>> It can avoid the cost of a "failed" page fault (it will fail to be
>> handled because of the atomic copy) and then needing to drop locks and
>> repeat the fault.
> 
> I do agree on moving it to the slowpath - I think we can expect the case
> where the process's immediate workingset is faulted out while it's
> running to be vanishingly small.

Great! I'm glad we're on the same page there.

For bcachefs specifically, how should we move forward? If you're happy
with the concept, would you prefer that I do some manual bcachefs
testing? Or leave a branch sitting there for a week and pray the robots
test it?

Dave Chinner Jan. 30, 2025, 9:36 p.m. UTC | #3
On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> On 1/29/25 23:44, Kent Overstreet wrote:
> > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> >> tl;dr: The VFS and several filesystems have some suspect prefaulting
> >> code. It is unnecessarily slow for the common case where a write's
> >> source buffer is resident and does not need to be faulted in.
> >>
> >> Move these "prefaulting" operations to slow paths where they ensure
> >> forward progress but they do not slow down the fast paths. This
> >> optimizes the fast path to touch userspace once instead of twice.
> >>
> >> Also update somewhat dubious comments about the need for prefaulting.
> >>
> >> This has been very lightly tested. I have not tested any of the fs/
> >> code explicitly.
> > 
> > Q: what is preventing us from posting code to the list that's been
> > properly tested?
> > 
> > I just got another bcachefs patch series that blew up immediately when I
> > threw it at my CI.
> > 
> > This is getting _utterly ridiculous_.

That's a bit of an over-reaction, Kent.

IMO, the developers and/or maintainers of each filesystem have some
responsibility to test changes like this themselves as part of their
review process.

That's what you have just done, Kent. Good work!

However, it is not OK to rant about how the proposed change failed
because it was not exhaustively tested on every filesystem before it
was posted.

I agree with Dave - it is difficult for someone to test widespread
changes in code outside their specific expertise. In many cases, the
test infrastructure just doesn't exist or, if it does, requires
specialised knowledge and tools to run.

In such cases, we have to acknowledge that best effort testing is
about as good as we can do without overly burdening the author of
such a change. In these cases, it is best left to the maintainer of
that subsystem to exhaustively test the change to their
subsystem....

Indeed, this is the whole point of extensive post-merge integration
testing (e.g. the testing that gets run on linux-next -every day-).
It reduces the burden that requiring exhaustive pre-review testing
places on individual authors, by amortising the cost of that testing
over many peer-reviewed changes....

> In this case, I started with a single patch for generic code that I knew
> I could test. In fact, I even had the 9-year-old binary sitting on my
> test box.
> 
> Dave Chinner suggested that I take the generic pattern go look a _bit_
> more widely in the tree for a similar pattern. That search paid off, I
> think. But I ended up touching corners of the tree I don't know well and
> don't have test cases for.

Many thanks for doing the search, identifying all the places
where this pattern existed and trying to address them, Dave. 

> For bcachefs specifically, how should we move forward? If you're happy
> with the concept, would you prefer that I do some manual bcachefs
> testing? Or leave a branch sitting there for a week and pray the robots
> test it?

The public automated test robots are horribly unreliable with their
coverage of proposed changes. Hence my comment above about the
subsystem maintainers bearing some responsibility to test the code
as part of their review process....

-Dave.

Kent Overstreet Jan. 31, 2025, 12:56 a.m. UTC | #4
On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> On 1/29/25 23:44, Kent Overstreet wrote:
> > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> >> tl;dr: The VFS and several filesystems have some suspect prefaulting
> >> code. It is unnecessarily slow for the common case where a write's
> >> source buffer is resident and does not need to be faulted in.
> >>
> >> Move these "prefaulting" operations to slow paths where they ensure
> >> forward progress but they do not slow down the fast paths. This
> >> optimizes the fast path to touch userspace once instead of twice.
> >>
> >> Also update somewhat dubious comments about the need for prefaulting.
> >>
> >> This has been very lightly tested. I have not tested any of the fs/
> >> code explicitly.
> > 
> > Q: what is preventing us from posting code to the list that's been
> > properly tested?
> > 
> > I just got another bcachefs patch series that blew up immediately when I
> > threw it at my CI.
> > 
> > This is getting _utterly ridiculous_.
> 
> In this case, I started with a single patch for generic code that I knew
> I could test. In fact, I even had the 9-year-old binary sitting on my
> test box.
> 
> Dave Chinner suggested that I take the generic pattern and go look a
> _bit_ more widely in the tree for similar patterns. That search paid off, I
> think. But I ended up touching corners of the tree I don't know well and
> don't have test cases for.

That's all well and good, but the testing thing is really coming to a
head. I have enough on my plate without this kind of back-and-forth.

> > I built multiuser test infrastructure with a nice dashboard that anyone
> > can use, and the only response I've gotten from the old guard is Ted
> > jumping in every time I talk about it to say "no, we just don't want to
> > rewrite our stuff on _your_ stuff!". Real helpful, that.
> 
> Sounds pretty cool! Is this something that I could have and should have
> used to test the bcachefs patch?  I see some trees in here:
> 
> 	https://evilpiepirate.org/~testdashboard/ci
> 
> But I'm not sure how to submit patches to it. Do you need to add users
> manually? I wonder, though, how we could make it easier to find. I
> didn't see anything in Documentation/filesystems/bcachefs/ about this.

Yes, I give out user accounts and that gives you a config file you can
edit to specify branches to test and tests to run; it then automatically
watches those branch(es).

Here's the thing though, the servers cost real money. I give out
accounts to community members (and people working on bcachefs are using
it and it's working well), but if you've got a big tech company email
address, you (or your managers) will have to be pitching in. I'm not
subsidizing you guys :)

> 
> >>  1. Deadlock avoidance if the source and target are the same
> >>     folios.
> >>  2. To check the user address that copy_folio_from_iter_atomic()
> >>     will touch because atomic user copies do not check the address.
> >>  3. "Optimization"
> >>
> >> I'm not sure any of these are actually valid reasons.
> >>
> >> The "atomic" user copy functions disable page fault handling because
> >> page faults are not very atomic. This makes them naturally resistant
> >> to deadlocking in page fault handling. They take the page fault
> >> itself but short-circuit any handling.
> > 
> > #1 is emphatically valid: the deadlock avoidance is in _both_ using
> > _atomic when we have locks held, and doing the actual faulting with
> > locks dropped... either alone would be a buggy incomplete solution.
> 
> I was (badly) attempting to separate out the two different problems:
> 
> 	1. Doing lock_page() twice, which I was mostly calling the
> 	   "deadlock"
> 	2. Retrying the copy_folio_from_iter_atomic() forever which I
> 	   was calling the "livelock"
> 
> Disabling page faults fixes #1.
> Doing faulting outside the locks somewhere fixes #2.
> 
> So when I was talking about "Deadlock avoidance" in the cover letter, I
> was trying to focus on the double lock_page() problem.
> 
> > This needs to be reflected and fully described in the comments, since
> > it's subtle and a lot of people don't fully grok what's going on.
> 
> Any suggestions for fully describing the situation? I tried to sprinkle
> comments liberally but I'm also painfully aware that I'm not doing a
> perfect job of talking about the fs code.

The critical thing to cover is the fact that mmap means that page faults
can recurse into arbitrary filesystem code, thus blowing a hole in all
our carefully crafted lock ordering if we allow that while holding
locks - you didn't mention that at all.
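
Roughly, the hazard is this chain:

	/*
	 * write path:  folio_lock(F) -> copy touches an mmap'd user address
	 *              -> page fault
	 * fault path:  mmap_lock -> filesystem ->fault handler
	 *              -> folio_lock(F)   <- same folio: deadlock
	 *
	 * Even when it's not the same folio, handling faults under fs
	 * locks inverts the usual lock ordering.
	 */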

> > I'm fairly certain we have ioctl code where this is mishandled and thus
> > buggy, because it takes some fairly particular testing for lockdep to
> > spot it.
> 
> Yeah, I wouldn't be surprised. I was having a little chuckle thinking
> about how many engineers have discovered and fixed this problem
> independently over the years in all the file system code in all the OSes.

Oh, this is the easy one - mmap and dio is where it really gets into
"stories we tell young engineers to keep them awake at night".

> 
> >> copy_folio_from_iter_atomic() also *does* have user address checking.
> >> I get a little lost in the iov_iter code, but it does know when it's
> >> dealing with userspace versus kernel addresses and does seem to know
> >> when to do things like copy_from_user_iter() (which does access_ok())
> >> versus memcpy_from_iter().[1]
> >>
> >> The "optimization" is for the case where 'source' is not faulted in.
> >> It can avoid the cost of a "failed" page fault (it will fail to be
> >> handled because of the atomic copy) and then needing to drop locks and
> >> repeat the fault.
> > 
> > I do agree on moving it to the slowpath - I think we can expect the case
> > where the process's immediate workingset is faulted out while it's
> > running to be vanishingly small.
> 
> Great! I'm glad we're on the same page there.
> 
> For bcachefs specifically, how should we move forward? If you're happy
> with the concept, would you prefer that I do some manual bcachefs
> testing? Or leave a branch sitting there for a week and pray the robots
> test it?

No to the sit and pray. If I see one more "testing? that's something
other people do" conversation I'll blow another gasket.

xfstests supports bcachefs, and if you need a really easy way to run it
locally on all the various filesystems, I have a solution for that:

https://evilpiepirate.org/git/ktest.git/

If you want access to my CI that runs all that in parallel across 120
VMs with the nice dashboard - shoot me an email and I'll outline server
costs and we can work something out.

Kent Overstreet Jan. 31, 2025, 1:06 a.m. UTC | #5
On Fri, Jan 31, 2025 at 08:36:29AM +1100, Dave Chinner wrote:
> On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> > On 1/29/25 23:44, Kent Overstreet wrote:
> > > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> > >> tl;dr: The VFS and several filesystems have some suspect prefaulting
> > >> code. It is unnecessarily slow for the common case where a write's
> > >> source buffer is resident and does not need to be faulted in.
> > >>
> > >> Move these "prefaulting" operations to slow paths where they ensure
> > >> forward progress but they do not slow down the fast paths. This
> > >> optimizes the fast path to touch userspace once instead of twice.
> > >>
> > >> Also update somewhat dubious comments about the need for prefaulting.
> > >>
> > >> This has been very lightly tested. I have not tested any of the fs/
> > >> code explicitly.
> > > 
> > > Q: what is preventing us from posting code to the list that's been
> > > properly tested?
> > > 
> > > I just got another bcachefs patch series that blew up immediately when I
> > > threw it at my CI.
> > > 
> > > This is getting _utterly ridiculous_.
> 
> That's a bit of an over-reaction, Kent.
> 
> IMO, the developers and/or maintainers of each filesystem have some
> responsibility to test changes like this themselves as part of their
> review process.
> 
> That's what you have just done, Kent. Good work!
> 
> However, it is not OK to rant about how the proposed change failed
> because it was not exhaustively tested on every filesytem before it
> was posted.

This is just tone policing.

> I agree with Dave - it is difficult for someone to test widepsread
> changes in code outside their specific expertise. In many cases, the
> test infrastructure just doesn't exist or, if it does, requires
> specialised knowledge and tools to run.

No, it exists - I built it.

> In such cases, we have to acknowledge that best effort testing is
> about as good as we can do without overly burdening the author of
> such a change. In these cases, it is best left to the maintainer of
> that subsystem to exhaustively test the change to their
> subsystem....

I keep having the same conversations in code review:
"Did you test this?"
"No really, did you test this?"
"See that error path you modified, that our automated tests definitely
don't cover? You need to test that manually".

Developers don't think enough about testing - and if the excuse is that
the tools suck, then we need to address that. I'm not going to babysit
and do a bunch of manual work - I spend quite enough of my day staring
at test dashboards already, thank you very much.

> > For bcachefs specifically, how should we move forward? If you're happy
> > with the concept, would you prefer that I do some manual bcachefs
> > testing? Or leave a branch sitting there for a week and pray the robots
> > test it?
> 
> The public automated test robots are horribly unreliable with their
> coverage of proposed changes. Hence my comment above about the
> subsystem maintainers bearing some responsibility to test the code
> as part of their review process....

AFAIK the public test robots don't run xfstests at all.

Dave Hansen Jan. 31, 2025, 1:34 a.m. UTC | #6
On 1/30/25 16:56, Kent Overstreet wrote:
> On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:...
>> Any suggestions for fully describing the situation? I tried to sprinkle
>> comments liberally but I'm also painfully aware that I'm not doing a
>> perfect job of talking about the fs code.
> 
> The critical thing to cover is the fact that mmap means that page faults
> can recurse into arbitrary filesystem code, thus blowing a hole in all
> our carefully crafted lock ordering if we allow that while holding
> locks - you didn't mention that at all.

What I've got today is this:

              /*
               * This needs to be atomic because actually handling page
               * faults on 'i' can deadlock if the copy targets a
               * userspace mapping of 'folio'.
               */
	      copied = copy_folio_from_iter_atomic(...);

Are you saying you'd prefer that this be something more like:

		/*
		 * Faults here on mmap()s can recurse into arbitrary
		 * filesystem code. Lots of locks are held that can
		 * deadlock. Use an atomic copy to avoid deadlocking
		 * in page fault handling.
		 */

?

>>> I do agree on moving it to the slowpath - I think we can expect the case
>>> where the process's immediate workingset is faulted out while it's
>>> running to be vanishingly small.
>>
>> Great! I'm glad we're on the same page there.
>>
>> For bcachefs specifically, how should we move forward? If you're happy
>> with the concept, would you prefer that I do some manual bcachefs
>> testing? Or leave a branch sitting there for a week and pray the robots
>> test it?
> 
> No to the sit and pray. If I see one more "testing? that's something
> other people do" conversation I'll blow another gasket.
> 
> xfstests supports bcachefs, and if you need a really easy way to run it
> locally on all the various filesystems, I have a solution for that:
> 
> https://evilpiepirate.org/git/ktest.git/
> 
> If you want access to my CI that runs all that in parallel across 120
> VMs with the nice dashboard - shoot me an email and I'll outline server
> costs and we can work something out.

That _sounds_ a bit heavyweight to me for this patch:

 b/fs/bcachefs/fs-io-buffered.c |   30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

Is that the kind of testing (120 VMs) that is needed to get a patch
into bcachefs?

Or are you saying that running xfstests on bcachefs with this patch
applied would be sufficient?

On the x86 side, I'm usually pretty happy to know that someone has
compiled a patch and at least executed the code at runtime a time or
two. So this process is a bit unfamiliar to me.