Message ID: 20250129181749.C229F6F3@davehans-spike.ostc.intel.com (mailing list archive)
Series: Move prefaulting into write slow paths
On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> tl;dr: The VFS and several filesystems have some suspect prefaulting
> code. It is unnecessarily slow for the common case where a write's
> source buffer is resident and does not need to be faulted in.
>
> Move these "prefaulting" operations to slow paths where they ensure
> forward progress but they do not slow down the fast paths. This
> optimizes the fast path to touch userspace once instead of twice.
>
> Also update somewhat dubious comments about the need for prefaulting.
>
> This has been very lightly tested. I have not tested any of the fs/
> code explicitly.

Q: what is preventing us from posting code to the list that's been properly tested?

I just got another bcachefs patch series that blew up immediately when I threw it at my CI.

This is getting _utterly ridiculous_.

I built multiuser test infrastructure with a nice dashboard that anyone can use, and the only response I've gotten from the old guard is Ted jumping in every time I talk about it to say "no, we just don't want to rewrite our stuff on _your_ stuff!". Real helpful, that.

> 1. Deadlock avoidance if the source and target are the same
>    folios.
> 2. To check the user address that copy_folio_from_iter_atomic()
>    will touch because atomic user copies do not check the address.
> 3. "Optimization"
>
> I'm not sure any of these are actually valid reasons.
>
> The "atomic" user copy functions disable page fault handling because
> page faults are not very atomic. This makes them naturally resistant
> to deadlocking in page fault handling. They take the page fault
> itself but short-circuit any handling.

#1 is emphatically valid: the deadlock avoidance is in _both_ using _atomic when we have locks held, and doing the actual faulting with locks dropped... either alone would be a buggy incomplete solution.

This needs to be reflected and fully described in the comments, since it's subtle and a lot of people don't fully grok what's going on.

I'm fairly certain we have ioctl code where this is mishandled and thus buggy, because it takes some fairly particular testing for lockdep to spot it.

> copy_folio_from_iter_atomic() also *does* have user address checking.
> I get a little lost in the iov_iter code, but it does know when it's
> dealing with userspace versus kernel addresses and does seem to know
> when to do things like copy_from_user_iter() (which does access_ok())
> versus memcpy_from_iter().[1]
>
> The "optimization" is for the case where 'source' is not faulted in.
> It can avoid the cost of a "failed" page fault (it will fail to be
> handled because of the atomic copy) and then needing to drop locks and
> repeat the fault.

I do agree on moving it to the slowpath - I think we can expect the case where the process's immediate workingset is faulted out while it's running to be vanishingly small.
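For readers less familiar with this loop, here is a rough sketch of the pattern both halves of that argument refer to, loosely modeled on the generic buffered-write copy loop (generic_perform_write() in mm/filemap.c). The sketch_write_begin()/sketch_write_end() helpers are hypothetical stand-ins for the ->write_begin()/->write_end() address_space ops, error handling is elided, and the structure is simplified, so treat this as an illustration rather than the actual kernel source:

        /*
         * Sketch of the buffered-write copy loop (simplified; not the literal
         * kernel code).  "i" is the iov_iter describing the userspace source.
         */
        ssize_t write_loop_sketch(struct kiocb *iocb, struct iov_iter *i)
        {
                ssize_t written = 0;

                do {
                        struct folio *folio;
                        size_t offset, bytes, copied;

                        /* hypothetical stand-in for ->write_begin(): returns a locked folio */
                        folio = sketch_write_begin(iocb, &offset, &bytes);

                        /*
                         * Half one of the deadlock avoidance: the _atomic copy runs
                         * with page faults disabled, so if the source buffer is a
                         * mapping of this same (locked) folio we never recurse into
                         * folio_lock(); the copy just comes back short instead.
                         */
                        copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);

                        /* hypothetical stand-in for ->write_end(): unlocks the folio */
                        sketch_write_end(iocb, folio, copied);

                        if (unlikely(copied == 0)) {
                                /*
                                 * Half two: fault the source in only *after* the folio
                                 * lock is dropped, then retry.  Without this the loop
                                 * would spin forever on an unfaulted source buffer.
                                 */
                                if (fault_in_iov_iter_readable(i, bytes) == bytes)
                                        return written ? written : -EFAULT;
                                continue;
                        }

                        written += copied;
                } while (iov_iter_count(i));

                return written;
        }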
On 1/29/25 23:44, Kent Overstreet wrote:
> On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
>> tl;dr: The VFS and several filesystems have some suspect prefaulting
>> code. It is unnecessarily slow for the common case where a write's
>> source buffer is resident and does not need to be faulted in.
>>
>> Move these "prefaulting" operations to slow paths where they ensure
>> forward progress but they do not slow down the fast paths. This
>> optimizes the fast path to touch userspace once instead of twice.
>>
>> Also update somewhat dubious comments about the need for prefaulting.
>>
>> This has been very lightly tested. I have not tested any of the fs/
>> code explicitly.
>
> Q: what is preventing us from posting code to the list that's been
> properly tested?
>
> I just got another bcachefs patch series that blew up immediately when I
> threw it at my CI.
>
> This is getting _utterly ridiculous_.

In this case, I started with a single patch for generic code that I knew I could test. In fact, I even had the 9-year-old binary sitting on my test box.

Dave Chinner suggested that I take the generic pattern and go look a _bit_ more widely in the tree for a similar pattern. That search paid off, I think. But I ended up touching corners of the tree I don't know well and don't have test cases for.

> I built multiuser test infrastructure with a nice dashboard that anyone
> can use, and the only response I've gotten from the old guard is Ted
> jumping in every time I talk about it to say "no, we just don't want to
> rewrite our stuff on _your_ stuff!". Real helpful, that.

Sounds pretty cool! Is this something that I could have and should have used to test the bcachefs patch?

I see some trees in here:

        https://evilpiepirate.org/~testdashboard/ci

But I'm not sure how to submit patches to it. Do you need to add users manually?

I wonder, though, how we could make it easier to find. I didn't see anything in Documentation/filesystems/bcachefs/ about this.

>> 1. Deadlock avoidance if the source and target are the same
>>    folios.
>> 2. To check the user address that copy_folio_from_iter_atomic()
>>    will touch because atomic user copies do not check the address.
>> 3. "Optimization"
>>
>> I'm not sure any of these are actually valid reasons.
>>
>> The "atomic" user copy functions disable page fault handling because
>> page faults are not very atomic. This makes them naturally resistant
>> to deadlocking in page fault handling. They take the page fault
>> itself but short-circuit any handling.
>
> #1 is emphatically valid: the deadlock avoidance is in _both_ using
> _atomic when we have locks held, and doing the actual faulting with
> locks dropped... either alone would be a buggy incomplete solution.

I was (badly) attempting to separate out the two different problems:

 1. Doing lock_page() twice, which I was mostly calling the "deadlock"
 2. Retrying the copy_folio_from_iter_atomic() forever, which I was
    calling the "livelock"

Disabling page faults fixes #1. Doing faulting outside the locks somewhere fixes #2.

So when I was talking about "Deadlock avoidance" in the cover letter, I was trying to focus on the double lock_page() problem.

> This needs to be reflected and fully described in the comments, since
> it's subtle and a lot of people don't fully grok what's going on.

Any suggestions for fully describing the situation? I tried to sprinkle comments liberally, but I'm also painfully aware that I'm not doing a perfect job of talking about the fs code.
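As a concrete illustration of the "source and target are the same folios" case that problem #1 is about: a write() whose source buffer is a shared mapping of the very range being written means that faulting the source in would need the same folio lock the write path is already holding. A minimal userspace snippet of that shape (hypothetical illustration, not from the series; assumes ./testfile already exists and is at least one page long, error checking omitted):

        /* Write a file's own mapping back onto itself: source == target folio. */
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
                int fd = open("testfile", O_RDWR);
                size_t len = 4096;

                /* Map the first page of the file... */
                char *map = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

                /*
                 * ...and use it as the source for a write to the same offset.
                 * If the mapping is not resident, copying the source faults on
                 * the very folio the write path has locked - which is why the
                 * copy must run with page faults disabled and the fault-in
                 * must be retried with the lock dropped.
                 */
                pwrite(fd, map, len, 0);

                munmap(map, len);
                close(fd);
                return 0;
        }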
> I'm fairly certain we have ioctl code where this is mishandled and thus
> buggy, because it takes some fairly particular testing for lockdep to
> spot it.

Yeah, I wouldn't be surprised. I was having a little chuckle thinking about how many engineers have discovered and fixed this problem independently over the years, in all the file system code in all the OSes.

>> copy_folio_from_iter_atomic() also *does* have user address checking.
>> I get a little lost in the iov_iter code, but it does know when it's
>> dealing with userspace versus kernel addresses and does seem to know
>> when to do things like copy_from_user_iter() (which does access_ok())
>> versus memcpy_from_iter().[1]
>>
>> The "optimization" is for the case where 'source' is not faulted in.
>> It can avoid the cost of a "failed" page fault (it will fail to be
>> handled because of the atomic copy) and then needing to drop locks and
>> repeat the fault.
>
> I do agree on moving it to the slowpath - I think we can expect the case
> where the process's immediate workingset is faulted out while it's
> running to be vanishingly small.

Great! I'm glad we're on the same page there.

For bcachefs specifically, how should we move forward? If you're happy with the concept, would you prefer that I do some manual bcachefs testing? Or leave a branch sitting there for a week and pray the robots test it?
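To make the "touch userspace once instead of twice" framing concrete, here is a hedged before/after fragment showing where the fault-in sits, reusing the hypothetical sketch_write_begin()/sketch_write_end() helpers from the earlier sketch. It is a simplification for illustration and does not correspond line-for-line to the posted patches:

        /*
         * Before (roughly): the source is prefaulted at the top of every
         * iteration, before the destination folio is locked, even when it is
         * already resident - so the fast path touches userspace twice.
         */
        if (fault_in_iov_iter_readable(i, bytes) == bytes)
                return -EFAULT;                                 /* userspace touch #1 */
        folio = sketch_write_begin(iocb, &offset, &bytes);
        copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);  /* touch #2 */

        /*
         * After (the shape this series moves toward): the fast path touches
         * userspace once; the fault-in runs only in the copied == 0 slow path,
         * after the folio lock has been dropped, before retrying.
         */
        folio = sketch_write_begin(iocb, &offset, &bytes);
        copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);
        sketch_write_end(iocb, folio, copied);          /* drops the folio lock */
        if (unlikely(copied == 0)) {
                if (fault_in_iov_iter_readable(i, bytes) == bytes)
                        return -EFAULT;
                /* otherwise retry the iteration */
        }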
On Thu, Jan 30, 2025 at 08:04:49AM -0800, Dave Hansen wrote:
> On 1/29/25 23:44, Kent Overstreet wrote:
> > On Wed, Jan 29, 2025 at 10:17:49AM -0800, Dave Hansen wrote:
> >> tl;dr: The VFS and several filesystems have some suspect prefaulting
> >> code. It is unnecessarily slow for the common case where a write's
> >> source buffer is resident and does not need to be faulted in.
> >>
> >> Move these "prefaulting" operations to slow paths where they ensure
> >> forward progress but they do not slow down the fast paths. This
> >> optimizes the fast path to touch userspace once instead of twice.
> >>
> >> Also update somewhat dubious comments about the need for prefaulting.
> >>
> >> This has been very lightly tested. I have not tested any of the fs/
> >> code explicitly.
> >
> > Q: what is preventing us from posting code to the list that's been
> > properly tested?
> >
> > I just got another bcachefs patch series that blew up immediately when I
> > threw it at my CI.
> >
> > This is getting _utterly ridiculous_.

That's a bit of an over-reaction, Kent.

IMO, the developers and/or maintainers of each filesystem have some responsibility to test changes like this themselves as part of their review process. That's what you have just done, Kent. Good work! However, it is not OK to rant about how the proposed change failed because it was not exhaustively tested on every filesystem before it was posted.

I agree with Dave - it is difficult for someone to test widespread changes in code outside their specific expertise. In many cases, the test infrastructure just doesn't exist or, if it does, requires specialised knowledge and tools to run.

In such cases, we have to acknowledge that best-effort testing is about as good as we can do without overly burdening the author of such a change. In these cases, it is best left to the maintainer of that subsystem to exhaustively test the change to their subsystem....

Indeed, this is the whole point of extensive post-merge integration testing (e.g. the testing that gets run on linux-next -every day-). It reduces the burden that requiring exhaustive testing before review would place on individuals proposing changes, by amortising the cost of that exhaustive testing over many peer-reviewed changes....

> In this case, I started with a single patch for generic code that I knew
> I could test. In fact, I even had the 9-year-old binary sitting on my
> test box.
>
> Dave Chinner suggested that I take the generic pattern and go look a _bit_
> more widely in the tree for a similar pattern. That search paid off, I
> think. But I ended up touching corners of the tree I don't know well and
> don't have test cases for.

Many thanks for doing the search, identifying all the places where this pattern existed and trying to address them, Dave.

> For bcachefs specifically, how should we move forward? If you're happy
> with the concept, would you prefer that I do some manual bcachefs
> testing? Or leave a branch sitting there for a week and pray the robots
> test it?

The public automated test robots are horribly unreliable with their coverage of proposed changes. Hence my comment above about the subsystem maintainers bearing some responsibility to test the code as part of their review process....

-Dave.