Message ID: 20250129181749.C229F6F3@davehans-spike.ostc.intel.com (mailing list archive)
Series: Move prefaulting into write slow paths
On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> tl;dr: The VFS and several filesystems have some suspect prefaulting code. It is unnecessarily slow for the common case where a write's source buffer is resident and does not need to be faulted in.
>
> Move these "prefaulting" operations to slow paths where they ensure forward progress but they do not slow down the fast paths. This optimizes the fast path to touch userspace once instead of twice.
>
> Also update somewhat dubious comments about the need for prefaulting.
>
> This has been very lightly tested. I have not tested any of the fs/ code explicitly.

Q: what is preventing us from posting code to the list that's been properly tested?

I just got another bcachefs patch series that blew up immediately when I threw it at my CI.

This is getting _utterly ridiculous_.

I built multiuser test infrastructure with a nice dashboard that anyone can use, and the only response I've gotten from the old guard is Ted jumping in every time I talk about it to say "no, we just don't want to rewrite our stuff on _your_ stuff!". Real helpful, that.

>  1. Deadlock avoidance if the source and target are the same folios.
>  2. To check the user address that copy_folio_from_iter_atomic() will touch because atomic user copies do not check the address.
>  3. "Optimization"
>
> I'm not sure any of these are actually valid reasons.
>
> The "atomic" user copy functions disable page fault handling because page faults are not very atomic. This makes them naturally resistant to deadlocking in page fault handling. They take the page fault itself but short-circuit any handling.

#1 is emphatically valid: the deadlock avoidance is in _both_ using _atomic when we have locks held, and doing the actual faulting with locks dropped... either alone would be a buggy, incomplete solution.

This needs to be reflected and fully described in the comments, since it's subtle and a lot of people don't fully grok what's going on.

I'm fairly certain we have ioctl code where this is mishandled and thus buggy, because it takes some fairly particular testing for lockdep to spot it.

> copy_folio_from_iter_atomic() also *does* have user address checking. I get a little lost in the iov_iter code, but it does know when it's dealing with userspace versus kernel addresses and does seem to know when to do things like copy_from_user_iter() (which does access_ok()) versus memcpy_from_iter().[1]
>
> The "optimization" is for the case where 'source' is not faulted in. It can avoid the cost of a "failed" page fault (it will fail to be handled because of the atomic copy) and then needing to drop locks and repeat the fault.

I do agree on moving it to the slowpath - I think we can expect the case where the process's immediate workingset is faulted out while it's running to be vanishingly small.
On 1/29/25 23:44, Kent Overstreet wrote:
> On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
>> tl;dr: The VFS and several filesystems have some suspect prefaulting code. It is unnecessarily slow for the common case where a write's source buffer is resident and does not need to be faulted in.
>>
>> Move these "prefaulting" operations to slow paths where they ensure forward progress but they do not slow down the fast paths. This optimizes the fast path to touch userspace once instead of twice.
>>
>> Also update somewhat dubious comments about the need for prefaulting.
>>
>> This has been very lightly tested. I have not tested any of the fs/ code explicitly.
>
> Q: what is preventing us from posting code to the list that's been properly tested?
>
> I just got another bcachefs patch series that blew up immediately when I threw it at my CI.
>
> This is getting _utterly ridiculous_.

In this case, I started with a single patch for generic code that I knew I could test. In fact, I even had the 9-year-old binary sitting on my test box.

Dave Chinner suggested that I take the generic pattern and go look a _bit_ more widely in the tree for a similar pattern. That search paid off, I think. But I ended up touching corners of the tree I don't know well and don't have test cases for.

> I built multiuser test infrastructure with a nice dashboard that anyone can use, and the only response I've gotten from the old guard is Ted jumping in every time I talk about it to say "no, we just don't want to rewrite our stuff on _your_ stuff!". Real helpful, that.

Sounds pretty cool! Is this something that I could have and should have used to test the bcachefs patch? I see some trees in here:

	https://evilpiepirate.org/~testdashboard/ci

But I'm not sure how to submit patches to it. Do you need to add users manually? I wonder, though, how we could make it easier to find. I didn't see anything in Documentation/filesystems/bcachefs/ about this.

>>  1. Deadlock avoidance if the source and target are the same folios.
>>  2. To check the user address that copy_folio_from_iter_atomic() will touch because atomic user copies do not check the address.
>>  3. "Optimization"
>>
>> I'm not sure any of these are actually valid reasons.
>>
>> The "atomic" user copy functions disable page fault handling because page faults are not very atomic. This makes them naturally resistant to deadlocking in page fault handling. They take the page fault itself but short-circuit any handling.
>
> #1 is emphatically valid: the deadlock avoidance is in _both_ using _atomic when we have locks held, and doing the actual faulting with locks dropped... either alone would be a buggy, incomplete solution.

I was (badly) attempting to separate out the two different problems:

 1. Doing lock_page() twice, which I was mostly calling the "deadlock"
 2. Retrying the copy_folio_from_iter_atomic() forever, which I was calling the "livelock"

Disabling page faults fixes #1. Doing faulting outside the locks somewhere fixes #2.

So when I was talking about "Deadlock avoidance" in the cover letter, I was trying to focus on the double lock_page() problem.

> This needs to be reflected and fully described in the comments, since it's subtle and a lot of people don't fully grok what's going on.

Any suggestions for fully describing the situation? I tried to sprinkle comments liberally, but I'm also painfully aware that I'm not doing a perfect job of talking about the fs code.
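To make the mechanism the two messages above are describing concrete, the write loop looks roughly like the sketch below. This is a minimal sketch only, not the actual generic_perform_write() or the posted patch; write_begin_locked() and write_end_unlock() are hypothetical stand-ins for the ->write_begin()/->write_end() steps that take and release the folio lock.

	#include <linux/pagemap.h>
	#include <linux/uio.h>

	static ssize_t buffered_write_loop(struct kiocb *iocb, struct iov_iter *i)
	{
		ssize_t written = 0;

		while (iov_iter_count(i)) {
			struct folio *folio = write_begin_locked(iocb);
			size_t bytes = min(iov_iter_count(i), folio_size(folio));
			size_t copied;

			/*
			 * Problem #1 (double lock_page() deadlock): page faults
			 * are disabled for the copy, so a fault on the source
			 * cannot recurse into the filesystem and try to lock
			 * 'folio' a second time.
			 */
			copied = copy_folio_from_iter_atomic(folio, 0, bytes, i);

			write_end_unlock(folio, copied);

			if (unlikely(!copied)) {
				/*
				 * Problem #2 (livelock): the source was not
				 * resident, so fault it in here, with no locks
				 * held, before retrying.  Bail out if even
				 * that makes no progress.
				 */
				if (fault_in_iov_iter_readable(i, bytes) == bytes)
					return written ? written : -EFAULT;
				continue;
			}

			written += copied;
		}

		return written;
	}

The point of the series is that the fault-in call only runs on this retry path, so the common case of a resident source buffer touches userspace once instead of twice.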
> I'm fairly certain we have ioctl code where this is mishandled and thus buggy, because it takes some fairly particular testing for lockdep to spot it.

Yeah, I wouldn't be surprised. I was having a little chuckle thinking about how many engineers have discovered and fixed this problem independently over the years in all the file system code in all the OSes.

>> copy_folio_from_iter_atomic() also *does* have user address checking. I get a little lost in the iov_iter code, but it does know when it's dealing with userspace versus kernel addresses and does seem to know when to do things like copy_from_user_iter() (which does access_ok()) versus memcpy_from_iter().[1]
>>
>> The "optimization" is for the case where 'source' is not faulted in. It can avoid the cost of a "failed" page fault (it will fail to be handled because of the atomic copy) and then needing to drop locks and repeat the fault.
>
> I do agree on moving it to the slowpath - I think we can expect the case where the process's immediate workingset is faulted out while it's running to be vanishingly small.

Great! I'm glad we're on the same page there.

For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?
On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> On 1/29/25 23:44, Kent Overstreet wrote:
> > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> >> tl;dr: The VFS and several filesystems have some suspect prefaulting code. It is unnecessarily slow for the common case where a write's source buffer is resident and does not need to be faulted in.
> >>
> >> Move these "prefaulting" operations to slow paths where they ensure forward progress but they do not slow down the fast paths. This optimizes the fast path to touch userspace once instead of twice.
> >>
> >> Also update somewhat dubious comments about the need for prefaulting.
> >>
> >> This has been very lightly tested. I have not tested any of the fs/ code explicitly.
> >
> > Q: what is preventing us from posting code to the list that's been properly tested?
> >
> > I just got another bcachefs patch series that blew up immediately when I threw it at my CI.
> >
> > This is getting _utterly ridiculous_.

That's a bit of an over-reaction, Kent.

IMO, the developers and/or maintainers of each filesystem have some responsibility to test changes like this themselves as part of their review process.

That's what you have just done, Kent. Good work!

However, it is not OK to rant about how the proposed change failed because it was not exhaustively tested on every filesystem before it was posted.

I agree with Dave - it is difficult for someone to test widespread changes in code outside their specific expertise. In many cases, the test infrastructure just doesn't exist or, if it does, requires specialised knowledge and tools to run.

In such cases, we have to acknowledge that best effort testing is about as good as we can do without overly burdening the author of such a change. In these cases, it is best left to the maintainer of that subsystem to exhaustively test the change to their subsystem....

Indeed, this is the whole point of extensive post-merge integration testing (e.g. the testing that gets run on linux-next -every day-). It reduces the burden that requiring exhaustive testing before review would place on the individuals proposing changes, by amortising the cost of that exhaustive testing over many peer-reviewed changes....

> In this case, I started with a single patch for generic code that I knew I could test. In fact, I even had the 9-year-old binary sitting on my test box.
>
> Dave Chinner suggested that I take the generic pattern and go look a _bit_ more widely in the tree for a similar pattern. That search paid off, I think. But I ended up touching corners of the tree I don't know well and don't have test cases for.

Many thanks for doing the search, identifying all the places where this pattern existed and trying to address them, Dave.

> For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?

The public automated test robots are horribly unreliable with their coverage of proposed changes. Hence my comment above about the subsystem maintainers bearing some responsibility to test the code as part of their review process....

-Dave.
On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> On 1/29/25 23:44, Kent Overstreet wrote:
> > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> >> tl;dr: The VFS and several filesystems have some suspect prefaulting code. It is unnecessarily slow for the common case where a write's source buffer is resident and does not need to be faulted in.
> >>
> >> Move these "prefaulting" operations to slow paths where they ensure forward progress but they do not slow down the fast paths. This optimizes the fast path to touch userspace once instead of twice.
> >>
> >> Also update somewhat dubious comments about the need for prefaulting.
> >>
> >> This has been very lightly tested. I have not tested any of the fs/ code explicitly.
> >
> > Q: what is preventing us from posting code to the list that's been properly tested?
> >
> > I just got another bcachefs patch series that blew up immediately when I threw it at my CI.
> >
> > This is getting _utterly ridiculous_.
>
> In this case, I started with a single patch for generic code that I knew I could test. In fact, I even had the 9-year-old binary sitting on my test box.
>
> Dave Chinner suggested that I take the generic pattern and go look a _bit_ more widely in the tree for a similar pattern. That search paid off, I think. But I ended up touching corners of the tree I don't know well and don't have test cases for.

That's all well and good, but the testing thing is really coming to a head. I have enough on my plate without back-and-forth.

> > I built multiuser test infrastructure with a nice dashboard that anyone can use, and the only response I've gotten from the old guard is Ted jumping in every time I talk about it to say "no, we just don't want to rewrite our stuff on _your_ stuff!". Real helpful, that.
>
> Sounds pretty cool! Is this something that I could have and should have used to test the bcachefs patch? I see some trees in here:
>
>	https://evilpiepirate.org/~testdashboard/ci
>
> But I'm not sure how to submit patches to it. Do you need to add users manually? I wonder, though, how we could make it easier to find. I didn't see anything in Documentation/filesystems/bcachefs/ about this.

Yes, I give out user accounts, and that gives you a config file you can edit to specify branches to test and tests to run; it then automatically watches those branch(es).

Here's the thing, though: the servers cost real money. I give out accounts to community members (and people working on bcachefs are using it and it's working well), but if you've got a big tech company email address you (or your managers) will have to be pitching in. Not subsidizing you guys :)

> >>  1. Deadlock avoidance if the source and target are the same folios.
> >>  2. To check the user address that copy_folio_from_iter_atomic() will touch because atomic user copies do not check the address.
> >>  3. "Optimization"
> >>
> >> I'm not sure any of these are actually valid reasons.
> >>
> >> The "atomic" user copy functions disable page fault handling because page faults are not very atomic. This makes them naturally resistant to deadlocking in page fault handling. They take the page fault itself but short-circuit any handling.
> >
> > #1 is emphatically valid: the deadlock avoidance is in _both_ using _atomic when we have locks held, and doing the actual faulting with locks dropped... either alone would be a buggy, incomplete solution.
> I was (badly) attempting to separate out the two different problems:
>
>  1. Doing lock_page() twice, which I was mostly calling the "deadlock"
>  2. Retrying the copy_folio_from_iter_atomic() forever which I was calling the "livelock"
>
> Disabling page faults fixes #1. Doing faulting outside the locks somewhere fixes #2.
>
> So when I was talking about "Deadlock avoidance" in the cover letter, I was trying to focus on the double lock_page() problem.
>
> > This needs to be reflected and fully described in the comments, since it's subtle and a lot of people don't fully grok what's going on.
>
> Any suggestions for fully describing the situation? I tried to sprinkle comments liberally but I'm also painfully aware that I'm not doing a perfect job of talking about the fs code.

The critical thing to cover is the fact that mmap means that page faults can recurse into arbitrary filesystem code, thus blowing a hole in all our carefully crafted lock ordering if we allow that while holding locks - you didn't mention that at all.

> > I'm fairly certain we have ioctl code where this is mishandled and thus buggy, because it takes some fairly particular testing for lockdep to spot it.
>
> Yeah, I wouldn't be surprised. I was having a little chuckle thinking about how many engineers have discovered and fixed this problem independently over the years in all the file system code in all the OSes.

Oh, this is the easy one - mmap and dio is where it really gets into "stories we tell young engineers to keep them awake at night".

> >> copy_folio_from_iter_atomic() also *does* have user address checking. I get a little lost in the iov_iter code, but it does know when it's dealing with userspace versus kernel addresses and does seem to know when to do things like copy_from_user_iter() (which does access_ok()) versus memcpy_from_iter().[1]
> >>
> >> The "optimization" is for the case where 'source' is not faulted in. It can avoid the cost of a "failed" page fault (it will fail to be handled because of the atomic copy) and then needing to drop locks and repeat the fault.
> >
> > I do agree on moving it to the slowpath - I think we can expect the case where the process's immediate workingset is faulted out while it's running to be vanishingly small.
>
> Great! I'm glad we're on the same page there.
>
> For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?

No to the sit and pray. If I see one more "testing? that's something other people do" conversation I'll blow another gasket.

xfstests supports bcachefs, and if you need a really easy way to run it locally on all the various filesystems, I have a solution for that:

	https://evilpiepirate.org/git/ktest.git/

If you want access to my CI that runs all that in parallel across 120 VMs with the nice dashboard - shoot me an email and I'll outline server costs and we can work something out.
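The recursion Kent describes above can be pictured as a call chain. The block below is an illustrative sketch of the same-folio case only, assuming a hypothetical situation where the write's source buffer is an mmap of the very range being written; it is not a trace from any particular filesystem.

	/*
	 * Assume 'buf' is an mmap() of the same file range being written,
	 * and is not currently resident:
	 *
	 *   write(fd, buf, len)
	 *     ->write_iter()
	 *       folio_lock(folio)            write path takes the folio lock
	 *       copy_from_user(.., buf, ..)  faults because 'buf' is absent
	 *         handle_mm_fault()
	 *           filemap_fault()
	 *             folio_lock(folio)      same folio: self-deadlock
	 *
	 * With copy_folio_from_iter_atomic() the fault is taken but not
	 * handled, the copy comes back short, and the caller can drop its
	 * locks and use fault_in_iov_iter_readable() to make progress.
	 */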
On Fri, Jan 31, 2025 at 08:36:29AM +1100, Dave Chinner wrote:
> On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> > On 1/29/25 23:44, Kent Overstreet wrote:
> > > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> > >> tl;dr: The VFS and several filesystems have some suspect prefaulting code. It is unnecessarily slow for the common case where a write's source buffer is resident and does not need to be faulted in.
> > >>
> > >> Move these "prefaulting" operations to slow paths where they ensure forward progress but they do not slow down the fast paths. This optimizes the fast path to touch userspace once instead of twice.
> > >>
> > >> Also update somewhat dubious comments about the need for prefaulting.
> > >>
> > >> This has been very lightly tested. I have not tested any of the fs/ code explicitly.
> > >
> > > Q: what is preventing us from posting code to the list that's been properly tested?
> > >
> > > I just got another bcachefs patch series that blew up immediately when I threw it at my CI.
> > >
> > > This is getting _utterly ridiculous_.
>
> That's a bit of an over-reaction, Kent.
>
> IMO, the developers and/or maintainers of each filesystem have some responsibility to test changes like this themselves as part of their review process.
>
> That's what you have just done, Kent. Good work!
>
> However, it is not OK to rant about how the proposed change failed because it was not exhaustively tested on every filesystem before it was posted.

This is just tone policing.

> I agree with Dave - it is difficult for someone to test widespread changes in code outside their specific expertise. In many cases, the test infrastructure just doesn't exist or, if it does, requires specialised knowledge and tools to run.

No, it exists - I built it.

> In such cases, we have to acknowledge that best effort testing is about as good as we can do without overly burdening the author of such a change. In these cases, it is best left to the maintainer of that subsystem to exhaustively test the change to their subsystem....

I keep having the same conversations in code review: "Did you test this?" "No really, did you test this?" "See that error path you modified, that our automated tests definitely don't cover? You need to test that manually."

Developers don't think enough about testing - and if the excuse is that the tools suck, then we need to address that.

I'm not going to babysit and do a bunch of manual work - I spend quite enough of my day staring at test dashboards already, thank you very much.

> > For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?
>
> The public automated test robots are horribly unreliable with their coverage of proposed changes. Hence my comment above about the subsystem maintainers bearing some responsibility to test the code as part of their review process....

AFAIK the public test robots don't run xfstests at all.
On 1/30/25 16:56, Kent Overstreet wrote:
> On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:...
>> Any suggestions for fully describing the situation? I tried to sprinkle comments liberally but I'm also painfully aware that I'm not doing a perfect job of talking about the fs code.
>
> The critical thing to cover is the fact that mmap means that page faults can recurse into arbitrary filesystem code, thus blowing a hole in all our carefully crafted lock ordering if we allow that while holding locks - you didn't mention that at all.

What I've got today is this:

	/*
	 * This needs to be atomic because actually handling page
	 * faults on 'i' can deadlock if the copy targets a
	 * userspace mapping of 'folio'.
	 */
	copied = copy_folio_from_iter_atomic(...);

Are you saying you'd prefer that this be something more like:

	/*
	 * Faults here on mmap()s can recurse into arbitrary
	 * filesystem code.  Lots of locks are held that can
	 * deadlock.  Use an atomic copy to avoid deadlocking
	 * in page fault handling.
	 */

?

>>> I do agree on moving it to the slowpath - I think we can expect the case where the process's immediate workingset is faulted out while it's running to be vanishingly small.
>>
>> Great! I'm glad we're on the same page there.
>>
>> For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?
>
> No to the sit and pray. If I see one more "testing? that's something other people do" conversation I'll blow another gasket.
>
> xfstests supports bcachefs, and if you need a really easy way to run it locally on all the various filesystems, I have a solution for that:
>
>	https://evilpiepirate.org/git/ktest.git/
>
> If you want access to my CI that runs all that in parallel across 120 VMs with the nice dashboard - shoot me an email and I'll outline server costs and we can work something out.

That _sounds_ a bit heavyweight to me for this patch:

 b/fs/bcachefs/fs-io-buffered.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

Is that the kind of testing (120 VMs) that is needed to get a patch into bcachefs? Or are you saying that running xfstests on bcachefs with this patch applied would be sufficient?

On the x86 side, I'm usually pretty happy to know that someone has compiled a patch and at least executed the code at runtime a time or two. So this process is a bit unfamiliar to me.
On Thu, Jan 30, 2025 at 05:34:18PM -0800, Dave Hansen wrote:
> On 1/30/25 16:56, Kent Overstreet wrote:
> > On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:...
> >> Any suggestions for fully describing the situation? I tried to sprinkle comments liberally but I'm also painfully aware that I'm not doing a perfect job of talking about the fs code.
> >
> > The critical thing to cover is the fact that mmap means that page faults can recurse into arbitrary filesystem code, thus blowing a hole in all our carefully crafted lock ordering if we allow that while holding locks - you didn't mention that at all.
>
> What I've got today is this:
>
>	/*
>	 * This needs to be atomic because actually handling page
>	 * faults on 'i' can deadlock if the copy targets a
>	 * userspace mapping of 'folio'.
>	 */
>	copied = copy_folio_from_iter_atomic(...);
>
> Are you saying you'd prefer that this be something more like:
>
>	/*
>	 * Faults here on mmap()s can recurse into arbitrary
>	 * filesystem code.  Lots of locks are held that can
>	 * deadlock.  Use an atomic copy to avoid deadlocking
>	 * in page fault handling.
>	 */
>
> ?

Yeah, I like that better, but I would also include the bit about "redoing the fault when locks are dropped" in the same comment; you want the whole idea described in one place.

> >>> I do agree on moving it to the slowpath - I think we can expect the case where the process's immediate workingset is faulted out while it's running to be vanishingly small.
> >>
> >> Great! I'm glad we're on the same page there.
> >>
> >> For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?
> >
> > No to the sit and pray. If I see one more "testing? that's something other people do" conversation I'll blow another gasket.
> >
> > xfstests supports bcachefs, and if you need a really easy way to run it locally on all the various filesystems, I have a solution for that:
> >
> >	https://evilpiepirate.org/git/ktest.git/
> >
> > If you want access to my CI that runs all that in parallel across 120 VMs with the nice dashboard - shoot me an email and I'll outline server costs and we can work something out.
>
> That _sounds_ a bit heavyweight to me for this patch:
>
>  b/fs/bcachefs/fs-io-buffered.c | 30 ++++++++++--------------------
>  1 file changed, 10 insertions(+), 20 deletions(-)
>
> Is that the kind of testing (120 VMs) that is needed to get a patch into bcachefs?

No - that's what CI access gets you, and it's not just for bcachefs testing; it could have tested this series on every filesystem in parallel for you.

> Or are you saying that running xfstests on bcachefs with this patch applied would be sufficient?

Correct.

> On the x86 side, I'm usually pretty happy to know that someone has compiled a patch and at least executed the code at runtime a time or two. So this process is a bit unfamiliar to me.

To me, as a filesystem guy, that sounds terrifying :)

I'm a hardass, even among filesystem people, but this really isn't just about making my life easier; it's about getting us _all_ better tools. If we can consolidate on a single testrunner for VM tests that people at least don't hate, then life would get drastically easier for all sorts of things, including cross-subsystem work.
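A combined wording along the lines Kent asks for might read like the sketch below. This is a suggestion only, not text from the posted patch; 'folio', 'offset', 'bytes' and 'i' are simply the locals of the surrounding write loop.

	/*
	 * Faults here on mmap()ed regions can recurse into arbitrary
	 * filesystem code.  Lots of locks are held that can deadlock --
	 * in particular, a fault that touches a userspace mapping of
	 * 'folio' would need to take the folio lock a second time.  So
	 * copy with page faults disabled.  If the copy makes no
	 * progress, the caller drops all locks, faults the source in
	 * with fault_in_iov_iter_readable(), and retries.
	 */
	copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);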
And it pretty much exists now, but the younger guys (and engineers in China) have been a lot quicker to start using it than people who are used to "the way things have always been"...