Message ID | 156763534546.18676.3530557439501101639.stgit@warthog.procyon.org.uk (mailing list archive)
---|---
Series | Keyrings, Block and USB notifications [ver #8]
On Wed, Sep 4, 2019 at 3:15 PM David Howells <dhowells@redhat.com> wrote:
>
> Here's a set of patches to add a general notification queue concept and to add event sources such as:

Why?

I'm just going to be very blunt about this, and say that there is no way I can merge any of this *ever*, unless other people stand up and say that

 (a) they'll use it

and

 (b) they'll actively develop it and participate in testing and coding

Because I'm simply not willing to have the same situation that happened with the keyring ACL stuff this merge window happen with some other random feature some day in the future. That change never had anybody else that showed any interest in it, it was never really clear why it was made, and it broke booting for me.

That had better never happen again, and I'm tired of seeing unexplained random changes to key handling that have one single author and nobody else involved.

And there is this whole long cover letter to explain what the code does, what you can do with it, and what the changes have been in revisions, but AT NO POINT does it explain what the point of the feature is at all.

Why would we want this, and what is the advantage over udev etc that already has event handling for things like block events and USB events? What's the advantage of another random character device, and what's the use? Who is asking for this, and who would use it? Why are keys special, and why should you be able to track events on keys in the first place? Who is co-developing and testing this, and what's the point?

Fundamentally, I'm not even interested in seeing "Reviewed-by". New features need actual users and explanations for what they are, over and beyond the developer itself. IOW, you need to have an outside person step in and say "yes, I need this".

No more of these "David makes random changes without any external input" series.

Linus
Linus Torvalds <torvalds@linux-foundation.org> wrote:

> > Here's a set of patches to add a general notification queue concept and to add event sources such as:
>
> Why?
>
> I'm just going to be very blunt about this, and say that there is no way I can merge any of this *ever*, unless other people stand up and say that
>
> (a) they'll use it
>
> and
>
> (b) they'll actively develop it and participate in testing and coding

Besides the core notification buffer which ties this together, there are a number of sources that I've implemented, not all of which are in this patch series:

 (1) Key/keyring notifications.

     If you have your kerberos tickets in a file/directory, your gnome desktop will monitor that using something like fanotify and tell you if your credentials cache changes. We also have the ability to cache your kerberos tickets in the session, user or persistent keyring so that it isn't left around on disk across a reboot or logout.

     Keyrings, however, cannot currently be monitored asynchronously, so the desktop has to poll for it - not so good on a laptop. This source will allow the desktop to avoid the need to poll.

 (2) USB notifications.

     GregKH was looking for a way to do USB notifications as I was looking to find additional sources to implement. I'm not sure how he wants to use them, but I'll let him speak to that himself.

 (3) Block notifications.

     This one I was thinking that I could make something like ddrescue better by letting it get notifications this way. This was a target of convenience since I had a dodgy disk I was trying to rescue. It could also potentially be used to help systemd, say, detect broken devices and avoid trying to unmount them when trying to reboot the machine. I can drop this for now if you prefer.

 (4) Mount notifications.

     This one is wanted to avoid repeated trawling of /proc/mounts or similar to work out changes to the mount object attributes and mount topology. I'm told that the proc file holding the namespace_sem is a point of contention, especially as the process of generating the text descriptions of the mounts/superblocks can be quite involved.

     The notifications directly indicate the mounts involved in any particular event and what the change was. You can poll /proc/mounts, but all you know is that something changed; you don't know what, you don't know how, and reading that file may race with multiple changes being effected.

     I pair this with a new fsinfo() system call that allows, amongst other things, the ability to retrieve in one go an { id, change counter } tuple from all the children of a specified mount, allowing buffer overruns to be cleaned up quickly.

     It's not just Red Hat that's potentially interested in this:

     https://lore.kernel.org/linux-fsdevel/293c9bd3-f530-d75e-c353-ddeabac27cf6@6wind.com/

 (5) Superblock notifications.

     This one is provided to allow systemd or the desktop to more easily detect events such as I/O errors and EDQUOT/ENOSPC.

I've tried to make the core multipurpose so that the price of the code footprint is mitigated.

David
On Thu, Sep 5, 2019 at 10:01 AM David Howells <dhowells@redhat.com> wrote:
>
> > I'm just going to be very blunt about this, and say that there is no way I can merge any of this *ever*, unless other people stand up and say that
> >
> > (a) they'll use it
> >
> > and
> >
> > (b) they'll actively develop it and participate in testing and coding
>
> Besides the core notification buffer which ties this together, there are a number of sources that I've implemented, not all of which are in this patch series:

You've at least now answered part of the "Why", but you didn't actually answer the whole "another developer" part.

I really don't like how nobody else than you seems to even look at any of the key handling patches. Because nobody else seems to care.

This seems to be another new subsystem / driver that has the same pattern. If it's all just you, I don't want to merge it, because I really want more than just other developers doing "Reviewed-by" after looking at somebody else's code that they don't actually use or really care about.

See what I'm saying?

New features that go into the kernel should have multiple users. Not a single developer who pushes both the kernel feature and the single use of that feature.

This very much comes from me reverting the key ACL pull. Not only did I revert it, ABSOLUTELY NOBODY even reacted to the revert. Nobody stepped up and said that they want that new ACL code, and pushed for a fix. There was some very little murmuring about it when Mimi at least figured out _why_ it broke, but other than that all the noise I saw about the revert was Eric Biggers pointing out it broke other things too, and that it had actually broken some test suites. But since it hadn't even been in linux-next, that too had been noticed much too late.

See what I'm saying? This whole "David Howells does his own features that nobody else uses" needs to stop. You need to have a champion. I just don't feel safe pulling these kinds of changes from you, because I get the feeling that ABSOLUTELY NOBODY ELSE ever really looked at it or really cared.

Most of the patches have nobody else even Cc'd, and even the patches that do have some "Reviewed-by" feel more like somebody else went "ok, the change looks fine to me", without any other real attachment to the code.

New kernel features and interfaces really need to have a higher barrier of entry than one developer working on his or her own thing.

Is that a change from 25 years ago? Why, yes it is. We can point to lots of "single developer did a thing" from years past. But things have changed. And once bitten, twice shy: I really am a _lot_ more nervous about all these key changes now.

Linus
Hi,

On Thu, Sep 5, 2019 at 1:20 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> You've at least now answered part of the "Why", but you didn't actually answer the whole "another developer" part.

It's certainly something we've wanted in the GNOME world for a long time. See for instance

https://bugzilla.redhat.com/show_bug.cgi?id=991110

and

https://bugzilla.gnome.org/show_bug.cgi?id=707402

from all the way back in 2013.

These are the alternatives I can think of:

- poll? status quo, but not great for obvious wakeup reasons

- use a different credential cache collection type that does support change notification? some of the other types do support change notification, but have their own set of problems. But maybe we should just go back to DIR type credential cache collections and try to figure out the life cycle problems they pose, I don't know... or get more man power behind KCM...

- manage change notification entirely from userspace. assume credentials will always be put in place from krb5-libs entry points, and just skip notification if it happens out from under the libraries. maybe upstream kerberos guys would be onboard with this, I don't know. This seems less robust than having the kernel in the loop, though.

> I really don't like how nobody else than you seems to even look at any of the key handling patches. Because nobody else seems to care.

I've got no insight here, so I'll just throw a dart... viro, is this something you have any interest in watching closer?

> See what I'm saying? This whole "David Howells does his own features that nobody else uses" needs to stop. You need to have a champion. I just don't feel safe pulling these kinds of changes from you, because I get the feeling that ABSOLUTELY NOBODY ELSE ever really looked at it or really cared.

I get the "one man is not enough for proper maintenance" argument, and maybe it's true. I don't know.

But I just want to point out, I have been asking dhowells for this change notification API for years, so it's not something he did on his own and for no particularly good reason. It solves a real problem and has a real-world use case. He kindly did it because I (and Robbie Harwood and others) asked him about it, off and on, and he was able to fit it onto his priority list for us.

From this thread, it sounds like he solved a problem for Greg too, killing a couple of birds with one stone?

--Ray
On Thu, Sep 05, 2019 at 06:01:47PM +0100, David Howells wrote:
> (2) USB notifications.
>
>     GregKH was looking for a way to do USB notifications as I was looking to find additional sources to implement. I'm not sure how he wants to use them, but I'll let him speak to that himself.

We are getting people asking for all sorts of "error reporting" events that can happen in the USB subsystem, for which we have started to abuse the KOBJ_CHANGE uevent notification. At the same time your patches were submitted, someone else submitted yet-another-USB-error patchset.

This type of user/kernel interface is much easier to use than abusing uevents for USB errors and general notifications about what happened with USB devices (more than just the add/remove that uevents have).

So yes, I would like this, and I am sure the ChromeOS people would like it too, given that I rejected their patchset with the assumption that this could be done with the notification queue API "soon" :)

thanks,

greg k-h
Hi,

On 05/09/2019 18:19, Linus Torvalds wrote:
> [...]
> See what I'm saying? This whole "David Howells does his own features that nobody else uses" needs to stop. You need to have a champion.
> [...]
> New kernel features and interfaces really need to have a higher barrier of entry than one developer working on his or her own thing.

There are a number of potential users, some waiting just to have a mechanism to avoid the racy alternatives to (for example) parsing /proc/mounts repeatedly, others perhaps a bit further away, but who have nonetheless expressed interest in having an interface which allows notifications for mounts.

The subject of mount notifications has been discussed at LSF/MM in the past too; I proposed it as a topic a little while back:

https://www.spinics.net/lists/linux-block/msg07653.html

and David's patch set is a potential solution to some of the issues that I raised there. The original series for the new mount API came from an idea of Al/Miklos which was also presented at LSF/MM 2017, and this is a follow-on project. So it has not come out of nowhere, but has been something that has been discussed in various forums over a period of time.

Originally, there was a proposal to use netlink for the notifications, however that didn't seem to meet with general approval, even though Ian Kent did some work towards figuring out whether that would be a useful direction to go in. David has since come up with the proposal presented here, which is intended to improve on the original proposal in various ways - mostly making the notifications more efficient (i.e. smaller) and also generic enough that it might have uses beyond the original intent of just being a mount notification mechanism.

The original reason for the mount notification mechanism was so that we are able to provide information to GUIs and similar filesystem and storage management tools, matching the state of the filesystem with the state of the underlying devices. This is part of a larger project entitled "Project Springfield" to try and provide better management tools for storage and filesystems. I've copied David Lehman in, since he can provide a wider view on this topic.

It is something that I do expect will receive wide use, and which will be tested carefully. I know that Ian Kent has started work on some support for libmount, for example, even outside of autofs. We do regularly hear from customers that better storage and filesystem management tools are something that they consider very important, so that is why we are spending so much effort in trying to improve the support in this area.

I'm not sure if that really answers your question, except to say that it is something that is much more than a personal project of David's and that other people do care about it too,

Steve.
Hi,

On Thu, Sep 5, 2019 at 2:37 PM Steven Whitehouse <swhiteho@redhat.com> wrote:
> The original reason for the mount notification mechanism was so that we are able to provide information to GUIs and similar filesystem and storage management tools, matching the state of the filesystem with the state of the underlying devices. This is part of a larger project entitled "Project Springfield" to try and provide better management tools for storage and filesystems. I've copied David Lehman in, since he can provide a wider view on this topic.

So one problem that I've heard discussed before is what happens in a thinp setup when the disk space is overallocated and gets used up. IIRC, the volumes just sort of eat themselves?

Getting proper notification of looming catastrophic failure to the workstation user before it's too late would be useful, indeed.

I don't know if this new mechanism dhowells has developed can help with that, and/or if solving that problem is part of the Project Springfield initiative or not. Do you know off hand?

--Ray
On Thu, 2019-09-05 at 14:51 -0400, Ray Strode wrote:
> So one problem that I've heard discussed before is what happens in a thinp setup when the disk space is overallocated and gets used up. IIRC, the volumes just sort of eat themselves?
>
> Getting proper notification of looming catastrophic failure to the workstation user before it's too late would be useful, indeed.
>
> I don't know if this new mechanism dhowells has developed can help with that,

My understanding is that there is already a dm event that gets sent when the low water mark is crossed for a thin pool, but there is nothing in userspace that knows how to effectively get the user's attention at that time.

> and/or if solving that problem is part of the Project Springfield initiative or not. Do you know off hand?

We have been looking into building a userspace event notification service (for storage, initially) to aggregate and add context to low-level events such as these, providing a single source for all kinds of storage events with an excellent signal:noise ratio. Thin pool exhaustion is high on the list of problems we would want to address.

David
On Thu, Sep 5, 2019 at 11:33 AM Ray Strode <rstrode@redhat.com> wrote:
>
> It's certainly something we've wanted in the GNOME world for a long time:
>
> See for instance
>
> https://bugzilla.redhat.com/show_bug.cgi?id=991110

That is *way* too specific to make for any kind of generic notification mechanism.

Also, what is the security model here? Open a special character device, and you get access to random notifications from random sources?

That makes no sense. Do they have the same security permissions? USB error reporting is one thing - and has completely different security rules than some per-user key thing (or system? or namespace? Or what?)

And why would you do a broken big-key thing in the kernel in the first place? Why don't you just have a kernel key to indirectly encrypt using a key and "additional user space data". The kernel should simply not take care of insane 1MB keys. Big keys just don't make sense for a kernel. Just use the backing store THAT YOU HAVE TO HAVE ANYWAY. Introduce some "indirect key" instead that is used to encrypt and authenticate the backing store.

And mix in /proc/mounts tracking, which has a namespace component and completely different events and security model (likely "none" - since you can always read your own /proc/mounts).

So honestly, this all just makes me go "user interfaces are hard, all the users seem to have *completely* different requirements, and nobody has apparently really tested this in practice".

Maybe a generic notification mechanism is sensible. But I don't see how security issues could *possibly* be unified, and some of the examples given (particularly "track changes to /proc/mounts") seem to have obviously better alternatives (as in "just support poll() on it").
All that this discussion has convinced me of is that this whole thing is half-baked and not ready even on a conceptual level.

So as far as I'm concerned, I think I want things like actual "Tested-by:" lines from actual users, because it's not clear that this makes sense. Gnome certainly should work as a regular user; if you need a system daemon for it with root privileges you might as well just do any notification entirely inside that daemon in user space. Same goes for /proc/mounts - which as mentioned has a much more obvious interface for waiting anyway.

User interfaces need a lot of thought and testing. They shouldn't be ad-hoc "maybe this could work for X, Y and Z" theories.

Linus
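For context on the alternative Linus alludes to: /proc/self/mounts is already pollable on Linux (proc(5) documents that a mount-table change is reported as an exceptional condition on the fd). A minimal watcher sketch, with the function name invented for illustration:

```c
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

/* Wait up to timeout_ms for the mount table to change.
 * Returns 1 on change, 0 on timeout, -1 on error.
 * Hypothetical helper, not part of any existing API. */
int wait_for_mount_change(int timeout_ms)
{
	int fd = open("/proc/self/mounts", O_RDONLY);
	if (fd < 0)
		return -1;

	/* A mount/umount since open() shows up as POLLERR|POLLPRI. */
	struct pollfd pfd = { .fd = fd, .events = POLLPRI };
	int n = poll(&pfd, 1, timeout_ms);

	close(fd);
	return n;
}
```

Note the limitation David raises elsewhere in the thread: this tells you *that* something changed, not *what* changed - you still have to re-read and diff the whole file.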
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> Also, what is the security model here? Open a special character device, and you get access to random notifications from random sources?
>
> That makes no sense. Do they have the same security permissions?

Sigh. It doesn't work like that. I tried to describe this in the manpages I referred to in the cover note. Obviously I didn't do a good enough job. Let me try and explain the general workings and the security model here.

 (1) /dev/watch_queue just implements locked-in-memory buffers. It gets you no events by simply opening it. Each time you open it you get your own private buffer. Buffers are not shared. Creation of buffers is limited by ENFILE, EMFILE and RLIMIT_MEMLOCK.

 (2) A buffer is implemented as a pollable ring buffer, with the head pointer belonging to the kernel and the tail pointer belonging to userspace. Userspace mmaps the buffer. The kernel *only ever* reads the head and tail pointers from a buffer; it never reads anything else.

     When it wants to post a message to a buffer, the kernel reads the pointers and then does one of three things:

     (a) If the pointers were incoherent, it drops the message.

     (b) If the buffer was full, the kernel writes a flag to indicate this and drops the message.

     (c) Otherwise, the kernel writes a message and maybe padding at the place(s) it expects and writes the head pointer.

     If userspace was busy trashing the place, that should not cause a problem for the kernel. The buffer pointers are expected to run to the end and wrap naturally; they're only masked off at the point of actually accessing the buffer.

 (3) You connect event sources to your buffer, e.g.:

	fd = open("/dev/watch_queue", ...);
	keyctl_watch_key(KEY_SPEC_SESSION_KEYRING, fd, ...);

     or:

	watch_mount(AT_FDCWD, "/net", 0, fd, ...);

     Security is checked at the point of connection to make sure you have permission to access that source. You have to have View permission on a key/keyring for key events, for example, and you have to have execute permission on a directory for mount events. The LSM gets a look-in too: Smack checks that you have read permission on a key, for example.

 (4) You can connect multiple sources of different types to your buffer, and a source can be connected to multiple buffers at a time.

 (5) Security is checked when an event is delivered to make sure the triggerer of the event has permission to give you that event. Smack requires that the triggerer has write permission on the opener of the buffer, for example.

 (6) poll() signals POLLIN|POLLRDNORM if there is stuff in the buffer and POLLERR if the pointers are incoherent.

David
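For readers following along, the single-producer ring protocol described in (2) - free-running head and tail counters, masked only at the point of access, with overflow flagged rather than blocking - can be modelled in plain userspace C. This is an illustrative model only; the struct layout, names, and one-word messages are invented for the sketch and are not the real /dev/watch_queue ABI:

```c
#include <stdint.h>

#define SLOTS 8			/* ring capacity; must be a power of two */
#define MASK  (SLOTS - 1)

struct ring {
	uint32_t head;		/* written only by the producer ("kernel") */
	uint32_t tail;		/* written only by the consumer ("userspace") */
	uint32_t lost;		/* set when a message had to be dropped */
	uint32_t slot[SLOTS];	/* one word per message, for simplicity */
};

/* Producer: post a message; drop it if pointers are incoherent or full. */
int ring_post(struct ring *r, uint32_t msg)
{
	uint32_t head = r->head, tail = r->tail;

	if (head - tail > SLOTS)	/* incoherent pointers: just drop */
		return -1;
	if (head - tail == SLOTS) {	/* full: flag the loss and drop */
		r->lost = 1;
		return -1;
	}
	r->slot[head & MASK] = msg;	/* mask only when touching the buffer */
	r->head = head + 1;		/* counters run free and wrap naturally */
	return 0;
}

/* Consumer: pop the oldest message; returns -1 if the ring is empty. */
int ring_pop(struct ring *r, uint32_t *msg)
{
	uint32_t tail = r->tail;

	if (r->head == tail)
		return -1;
	*msg = r->slot[tail & MASK];
	r->tail = tail + 1;
	return 0;
}
```

The point of the design shows up in ring_post(): a hostile or buggy consumer can scribble on the mapped buffer all it likes, but the producer only ever reads two words from it and drops the message if they don't make sense.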
On Thu, Sep 5, 2019 at 2:32 PM David Howells <dhowells@redhat.com> wrote:
>
> (1) /dev/watch_queue just implements locked-in-memory buffers. It gets you no events by simply opening it.

Cool. In-memory buffers.

But I know - we *have* one of those. There's already a system call for it, and has been forever. One that we then extended to allow people to change the buffer size, and do a lot of other things with.

It's called "pipe()". And you can give the writing side to other user space processes too, in case you are running an older kernel that didn't have some "event pipe support". It comes with resource management, because people already use those things.

If you want to make a message protocol on top of it, it has cool atomicity guarantees for any message size less than PIPE_BUF, but to make things simple, maybe just say "fixed record sizes of 64 bytes" or something like that for events. Then you can use them from things like perl scripts, not just magical C programs.

Why do we need a new kind of super-fancy high-speed thing for event reporting? If you have *so* many events that pipe handling is a performance issue, you have something seriously wrong going on.

So no. I'm not interested in a magical memory-mapped pipe that is actually more limited than the real thing.

Linus
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> But I know - we *have* one of those. There's already a system call for it, and has been forever. One that we then extended to allow people to change the buffer size, and do a lot of other things with.
>
> It's called "pipe()". And you can give the writing side to other user space processes too, in case you are running an older kernel that didn't have some "event pipe support". It comes with resource management, because people already use those things.

Can you write into a pipe from softirq context and/or with spinlocks held and/or with the RCU read lock held? That is a requirement. Another is that messages get inserted whole or not at all (or, if they are truncated, the size field gets updated).

Since one end would certainly be attached to an fd, it looks on the face of it that writing into the pipe would require taking pipe->mutex.

David
On Thu, Sep 5, 2019 at 4:18 PM David Howells <dhowells@redhat.com> wrote:
>
> Can you write into a pipe from softirq context and/or with spinlocks held and/or with the RCU read lock held? That is a requirement. Another is that messages get inserted whole or not at all (or, if they are truncated, the size field gets updated).

Right now we use a mutex for the buffer locking, so no, pipe buffers are not irq-safe or atomic. That's due to the whole "we may block on data from user space" when doing a write.

HOWEVER.

Pipes actually have buffers on two different levels: there's the actual data buffers themselves (each described by a "struct pipe_buffer"), and there's the circular queue of them (the "pipe->bufs[]" array, with pipe->curbuf/nrbufs) that points to individual data buffers.

And we could easily separate out that data buffer management. Right now it's not really all that separated: people just do things like

	int newbuf = (pipe->curbuf + bufs) & (pipe->buffers-1);
	struct pipe_buffer *buf = pipe->bufs + newbuf;
	...
	pipe->nrbufs++;

to add a buffer into that circular array of buffers, but _that_ part could be made separate. It's just all protected by the pipe mutex right now, so it has never been an issue.

And yes, atomicity of writes has actually been an integral part of pipes since forever. It's actually the only unambiguous atomicity that POSIX guarantees. It only holds for writes to pipes of less than PIPE_BUF bytes, but that's 4096 on Linux.

> Since one end would certainly be attached to an fd, it looks on the face of it that writing into the pipe would require taking pipe->mutex.

That's how the normal synchronization is done, yes. And changing that in general would be pretty painful. For example, two concurrent user-space writers might take page faults and just generally be painful, and the pipe locking needs to serialize that.

So the mutex couldn't go away from pipes in general - it would remain for read/write/splice mutual exclusion (and it's not just the data it protects, it's the reader/writer logic for EPIPE etc).

But the low-level pipe->bufs[] handling is another issue entirely. Even when a user space writer copies things from user space, it does so into a pre-allocated buffer that is then attached to the list of buffers somewhat separately (there's a magical special case where you can re-use a buffer that is marked as "I can be reused" and append into an already allocated buffer).

And adding new buffers *could* be done with its own separate locking. If you have a blocking writer (ie a user space data source), that would still take the pipe mutex, and it would delay the user space readers (because the readers also need the mutex), but it should not be all that hard to just make the whole "curbuf/nrbufs" handling use its own locking (maybe even some lockless atomics and cmpxchg).

So a kernel writer could "insert" a "struct pipe_buffer" atomically, and wake up the reader atomically. No need for the other complexity that is protected by the mutex.

The biggest problem is perhaps that the number of pipe buffers per pipe is fairly limited by default. PIPE_DEF_BUFFERS is 16, and if we'd insert using the ->bufs[] array, that would be the limit of "number of messages". But each message could be any size (we've historically limited pipe buffers to one page each, but that limit isn't a hard one - you could put more data in there). The number of pipe buffers _is_ dynamic, so the above PIPE_DEF_BUFFERS isn't a hard limit, but it would be the default.

Would it be entirely trivial to do all the above? No. But it's *literally* just finding the places that work with pipe->curbuf/nrbufs and making them use atomic updates. You'd find all the places by just renaming them (and making them atomic or whatever) and the compiler will tell you "this area needs fixing".

We've actually used pipes for messages before: autofs uses a magic packetized pipe buffer thing. It didn't need any extra atomicity, though, so it still all worked with the regular pipe->mutex thing.

And there is a big advantage to using pipes. They really would work with almost anything. You could even mix-and-match "data generated by kernel" and "data done by 'write()' or 'splice()' by a user process".

NOTE! I'm not at all saying that pipes are perfect. You'll find people who swear by sockets instead. They have their own advantages (and disadvantages). Most people who do packet-based stuff tend to prefer sockets, because those have standard packet-based models (Linux pipes have that packet mode too, but it's certainly not standard, and I'm not even sure we ever exposed it to user space - it could be that it's only used by the autofs daemon).

I have a soft spot for pipes, just because I think they are simpler than sockets. But that soft spot might be misplaced.

Linus
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> But it's *literally* just finding the places that work with pipe->curbuf/nrbufs and making them use atomic updates.

No. It really isn't. That's two variables that describe the occupied section of the buffer. Unless you have something like a 68020 with CAS2, or put them next to each other so you can use CMPXCHG8B, you can't do that. They need converting to head/tail pointers first.

> They really would work with almost anything. You could even mix-and-match "data generated by kernel" and "data done by 'write()' or 'splice()' by a user process".

Imagine that userspace writes a large message and takes the mutex. At the same time, something in softirq context decides *it* wants to write a message - it can't take the mutex and it can't wait, so the userspace write would have to cause the kernel message to be dropped.

What I would have to do is make a write to a notification pipe go through post_notification() and limit the size to the maximum for a single message. Much easier to simply suppress writes and splices on pipes that have been set up to be notification queues - at least for now.

David
On Fri, Sep 6, 2019 at 3:09 AM David Howells <dhowells@redhat.com> wrote:
>
> Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> > But it's *literally* just finding the places that work with
> > pipe->curbuf/nrbufs and making them use atomic updates.
>
> No. It really isn't. That's two variables that describe the occupied section
> of the buffer. Unless you have something like a 68020 with CAS2, or put them
> next to each other so you can use CMPXCHG8, you can't do that.
>
> They need converting to head/tail pointers first.

You misunderstand - because I phrased it badly.

I meant "atomic" in the traditional kernel sense, as in "usable in non-sleeping context" (eg GFP_ATOMIC etc). I'd start out just using a spinlock.

I do agree that we could try to be fancy and do it entirely locklessly too, and I mentioned that in another part:

 "[..] it should not be all that hard to just make the whole
  "curbuf/nrbufs" handling use its own locking (maybe even some
  lockless atomics and cmpxchg)"

but I also very much agree that it's much more complex.

The main complexity of a lockless thing is actually almost certainly not in curbuf/nrbufs, because those could easily be packed as two 16-bit values in a 32-bit entity and then regular cmpxchg works fine.

No, the complexity in the lockless model is that then you have to be very careful with the "buf[]" array update too. Maybe that's trivial (just make sure that they are NULL when not used), but it just looks less than wonderfully easy.

So a lockless update I'm sure is _doable_ with some cleverness, but is probably not really worth it. That's particularly true since we already *have* a spinlock that we would take anyway: we could strive to use the waitqueue spinlock in pipe->wait, and not even really add any new locking.

That would require a bit of cleverness too and re-ordering things more, but we do that in other places (eg completions, but the fs_pin code does it too, and a few other cases).
Look for "wake_up_locked()" and friends, which is a sure-fire sign that somebody is playing games and taking the wait-queue lock manually for their own nefarious reasons. > > They really would work with almost anything. You could even mix-and-match > > "data generated by kernel" and "data done by 'write()' or 'splice()' by a > > user process". > > Imagine that userspace writes a large message and takes the mutex. At the > same time something in softirq context decides *it* wants to write a message - > it can't take the mutex and it can't wait, so the userspace write would have > to cause the kernel message to be dropped. No. You're missing the point entirely. The mutex is entirely immaterial for the "insert a message". It is only used for user-space synchronization. The "add message to the pipe buffers" would only do the low-level buffer updates (whether using a new spinlock, re-using the pipe waitqueue lock, or entirely locklessly, ends up being then just an implementation detail). Note that user-space writes are defined to be atomic, but they are (a) not ordered and (b) only atomic up to a single buffer entry (which is that PIPE_BUF limit). So you can always put in a new buffer entry at any time. Obviously if a user space write just fills up the whole queue (or _other_ messages fill up the whole queue) you'd have to drop the notification. But that's always true. That's true even in your thing. The only difference is that we _allow_ other user spaces to write to the notification queue too. But if you don't want to allow that, then don't give out the write side of the pipe to any untrusted user space. But in *general*, allowing user space to write to the pipe is a great feature: it means that your notification source *can* be a user space daemon that you gave the write side of the pipe to (possibly using fd passing, possibly by just forking your own user-space child or cloning a thread). 
So for example, from a consumer standpoint, you can start off doing these things in user space with a helper thread that feeds the pipe (for example, polling /proc/mounts every second), and then when you've prototyped it and are happy with it, you can add the system call (or ioctl or whatever) to make the kernel generate the messages so that you don't have to poll. But now, once you have the kernel patch, you already have a proven user, and you can show numbers ("My user-space thing works, but it uses up 0.1% CPU time and has that nasty up-to-one-second latency because of polling"). Ta-daa! End result: it's backwards compatible, it's prototypable, and it's fairly easily extensible. Want to add a new source of events? Just pass the pipe to any random piece of code you want. It needs kernel support only when you've proven the concept _and_ you can show that "yeah, this user space polling model is a real performance or complexity problem" or whatever. This is why I like pipes. You can use them today. They are simple, and extensible, and you don't need to come up with a new subsystem and some untested ad-hoc thing that nobody has actually used. And they work automatically with all the existing infrastructure. They work with whatever perl or shell scripts, they work with poll/select loops, they work with user-space sources of events, they are just very flexible. Linus
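Linus's prototyping recipe can be sketched concretely. Everything below is hypothetical and invented for illustration: a user-space poller re-reads a file such as /proc/mounts and emits one packet-pipe message per detected change. A real prototype would run poll_watch_step() in a helper thread with a sleep(1) between iterations.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical user-space prototype of a change notifier: poll a file
 * and write a small event packet to a pipe whenever it changes. */
struct poll_watch {
    const char *path;   /* file to poll, e.g. /proc/mounts */
    int pipe_wfd;       /* write side of the notification pipe */
    char last[4096];    /* previous snapshot (sketch: first 4096 bytes) */
    size_t last_len;
};

/* One polling step: returns 1 if a change event was emitted, 0 if the
 * file is unchanged, -1 on error. */
static int poll_watch_step(struct poll_watch *w)
{
    char buf[4096];
    int fd = open(w->path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, sizeof(buf));
    close(fd);
    if (n < 0)
        return -1;
    if ((size_t)n == w->last_len && memcmp(buf, w->last, n) == 0)
        return 0;  /* no change since last poll */
    memcpy(w->last, buf, n);
    w->last_len = n;
    /* On an O_DIRECT pipe, one write == one packet on the read side. */
    return write(w->pipe_wfd, "changed", 7) == 7 ? 1 : -1;
}
```

The point of the exercise is exactly what Linus describes: once this works, the eventual kernel-side event source only has to replace the polling thread, while the pipe-reading consumer stays untouched.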
On Fri, Sep 6, 2019 at 8:35 AM Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> This is why I like pipes. You can use them today. They are simple, and
> extensible, and you don't need to come up with a new subsystem and
> some untested ad-hoc thing that nobody has actually used.

The only _real_ complexity is to make sure that events are reliably parseable.

That's where you really want to use the Linux-only "packet pipe" thing, because otherwise you have to have size markers or other things to delineate events. But if you do that, then it really becomes trivial.

And I checked, we made it available to user space, even if the original reason for that code was kernel-only autofs use: you just need to make the pipe be O_DIRECT.

This overly stupid program shows off the feature:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int fd[2];
        char buf[10];

        pipe2(fd, O_DIRECT | O_NONBLOCK);
        write(fd[1], "hello", 5);
        write(fd[1], "hi", 2);
        read(fd[0], buf, sizeof(buf));
        read(fd[0], buf, sizeof(buf));
        return 0;
}

and if you strace it (because I was too lazy to add error handling or printing of results), you'll see

write(4, "hello", 5) = 5
write(4, "hi", 2) = 2
read(3, "hello", 10) = 5
read(3, "hi", 10) = 2

note how you got packets of data on the reader side, instead of getting the traditional "just buffer it as a stream".

So now you can even have multiple readers of the same event pipe, and packetization is obvious and trivial. Of course, I'm not sure why you'd want to have multiple readers, and you'd lose _ordering_, but if all events are independent, this _might_ be a useful thing in a threaded environment. Maybe.

(Side note: a zero-sized write will not cause a zero-sized packet. It will just be dropped).

Linus
Hi,

On 06/09/2019 16:53, Linus Torvalds wrote:
> On Fri, Sep 6, 2019 at 8:35 AM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
>> This is why I like pipes. You can use them today. They are simple, and
>> extensible, and you don't need to come up with a new subsystem and
>> some untested ad-hoc thing that nobody has actually used.
> The only _real_ complexity is to make sure that events are reliably parseable.
>
> That's where you really want to use the Linux-only "packet pipe"
> thing, becasue otherwise you have to have size markers or other things
> to delineate events. But if you do that, then it really becomes
> trivial.
>
> And I checked, we made it available to user space, even if the
> original reason for that code was kernel-only autofs use: you just
> need to make the pipe be O_DIRECT.
>
> This overly stupid program shows off the feature:
>
> #define _GNU_SOURCE
> #include <fcntl.h>
> #include <unistd.h>
>
> int main(int argc, char **argv)
> {
>         int fd[2];
>         char buf[10];
>
>         pipe2(fd, O_DIRECT | O_NONBLOCK);
>         write(fd[1], "hello", 5);
>         write(fd[1], "hi", 2);
>         read(fd[0], buf, sizeof(buf));
>         read(fd[0], buf, sizeof(buf));
>         return 0;
> }
>
> and it you strace it (because I was too lazy to add error handling or
> printing of results), you'll see
>
> write(4, "hello", 5) = 5
> write(4, "hi", 2) = 2
> read(3, "hello", 10) = 5
> read(3, "hi", 10) = 2
>
> note how you got packets of data on the reader side, instead of
> getting the traditional "just buffer it as a stream".
>
> So now you can even have multiple readers of the same event pipe, and
> packetization is obvious and trivial. Of course, I'm not sure why
> you'd want to have multiple readers, and you'd lose _ordering_, but if
> all events are independent, this _might_ be a useful thing in a
> threaded environment. Maybe.
>
> (Side note: a zero-sized write will not cause a zero-sized packet. It
> will just be dropped).
>
> Linus

The events are generally not independent - we would need ordering either implicit in the protocol or explicit in the messages. We also need to know in case messages are dropped too - doesn't need to be anything fancy, just some idea that since we last did a read, there are messages that got lost, most likely due to buffer overrun.

That is why the initial idea was to use netlink, since it solves a lot of those issues. The downside was that the indirect nature of the netlink sockets made it tricky to know the namespace of the process to which the message was to be delivered (and hence whether it should be delivered at all).

Steve.
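One way to get both the ordering and the drop detection Steve asks for at the protocol level is an explicit sequence number in every message. The header below is purely illustrative, not an existing format:

```c
#include <stdint.h>

/* Hypothetical event header: ordering lives in the message itself,
 * and a gap in the sequence tells the reader that messages were lost. */
struct event_hdr {
    uint32_t seq;   /* monotonically increasing per-queue counter */
    uint16_t type;  /* event type (illustrative) */
    uint16_t len;   /* payload length in bytes */
};

/* Number of messages lost between the previously seen sequence number
 * and this header (0 means nothing was dropped). Unsigned arithmetic
 * makes this safe across counter wrap-around. */
static uint32_t events_lost(uint32_t last_seq, const struct event_hdr *hdr)
{
    return hdr->seq - last_seq - 1;
}
```

This keeps the pipe itself dumb: the reader only needs to remember the last sequence number it saw to notice a buffer overrun after the fact.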
On Fri, Sep 6, 2019 at 9:12 AM Steven Whitehouse <swhiteho@redhat.com> wrote:
>
> The events are generally not independent - we would need ordering either
> implicit in the protocol or explicit in the messages.

Note that pipes certainly would never re-order messages. It's just that _if_ you have two independent and concurrent readers of the same pipe, they could read one message each, and you couldn't tell which was first in user space.

Of course, I would suggest that anything that actually has non-independent messages should always use a sequence number or something like that in the message anyway. But then it would have to be on a protocol level.

And it's not clear that all notifications need it. If it's just a random "things changed" notification, maybe that doesn't need a sequence number or anything else. So being on a protocol/data stream level might be the right thing regardless.

Another possibility is to just say "don't do that then". If you want multiple concurrent readers, open multiple pipes for them and use separate events, and be happy in the knowledge that you don't have any complicated cases.

> We also need to
> know in case messages are dropped too - doesn't need to be anything
> fancy, just some idea that since we last did a read, there are messages
> that got lost, most likely due to buffer overrun.

Pipes don't have that, but another flag certainly wouldn't be _hard_ to add.

But one problem (and this is fundamental) is that while O_DIRECT works today (and works with kernels going back years), any new features like overflow notification would obviously not work with legacy kernels.

On the user write side, with an O_NONBLOCK pipe, you currently just get an -EAGAIN, so you _see_ the drop happening. But (again) there's no sticky flag for it anywhere else, and there's no clean automatic way for the reader to see "ok, the writer overflowed".
That's not a problem for any future extensions - the feature sounds like a new flag and a couple of lines to do it - but it's a problem for the whole "prototype in user space using existing pipe support" that I personally find so nice, and which I think is such a good way to prove the user space _need_ for anything like this. But if people are ok with the pipe model in theory, _that_ kind of small and directed feature I have absolutely no problem with adding. It's just whole new untested character mode drivers with odd semantics that I find troublesome. Hmm. Maybe somebody can come up with a good legacy signaling solution (and "just use another pipe for error notification and OOB data" for the first one may _work_, but that sounds pretty hacky and just not very convenient). Linus
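For the user-space write side, the -EAGAIN behaviour Linus describes can be turned into an application-level overflow signal today: the writer counts its own drops and reports the tally inside the next message that does fit, so the reader learns about the overflow without any new kernel flag. A hedged sketch with an invented message layout:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical fixed-size message for an O_DIRECT packet pipe. */
struct note_msg {
    uint32_t dropped;   /* messages lost since the last one that got through */
    char body[60];
};

static uint32_t pending_drops;  /* accumulated while the pipe was full */

/* Post one message to a non-blocking packet pipe. Returns 0 on success,
 * -1 if the message was dropped (and counted) or on other error. */
static int post_note(int wfd, const char *body)
{
    struct note_msg m = { .dropped = pending_drops };
    strncpy(m.body, body, sizeof(m.body) - 1);
    m.body[sizeof(m.body) - 1] = '\0';
    if (write(wfd, &m, sizeof(m)) == (ssize_t)sizeof(m)) {
        pending_drops = 0;  /* the drop count has been delivered */
        return 0;
    }
    if (errno == EAGAIN)
        pending_drops++;    /* pipe full: this message is dropped */
    return -1;
}
```

This is exactly the "prototype in user space" shape: legacy kernels already return -EAGAIN here, so no new feature is needed to make the overflow visible, only a protocol convention.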
> On Sep 6, 2019, at 9:12 AM, Steven Whitehouse <swhiteho@redhat.com> wrote:
>
> Hi,
>
>> On 06/09/2019 16:53, Linus Torvalds wrote:
>> On Fri, Sep 6, 2019 at 8:35 AM Linus Torvalds
>> <torvalds@linux-foundation.org> wrote:
>>> This is why I like pipes. You can use them today. They are simple, and
>>> extensible, and you don't need to come up with a new subsystem and
>>> some untested ad-hoc thing that nobody has actually used.
>> The only _real_ complexity is to make sure that events are reliably parseable.
>>
>> That's where you really want to use the Linux-only "packet pipe"
>> thing, becasue otherwise you have to have size markers or other things
>> to delineate events. But if you do that, then it really becomes
>> trivial.
>>
>> And I checked, we made it available to user space, even if the
>> original reason for that code was kernel-only autofs use: you just
>> need to make the pipe be O_DIRECT.
>>
>> This overly stupid program shows off the feature:
>>
>> #define _GNU_SOURCE
>> #include <fcntl.h>
>> #include <unistd.h>
>>
>> int main(int argc, char **argv)
>> {
>>         int fd[2];
>>         char buf[10];
>>
>>         pipe2(fd, O_DIRECT | O_NONBLOCK);
>>         write(fd[1], "hello", 5);
>>         write(fd[1], "hi", 2);
>>         read(fd[0], buf, sizeof(buf));
>>         read(fd[0], buf, sizeof(buf));
>>         return 0;
>> }
>>
>> and it you strace it (because I was too lazy to add error handling or
>> printing of results), you'll see
>>
>> write(4, "hello", 5) = 5
>> write(4, "hi", 2) = 2
>> read(3, "hello", 10) = 5
>> read(3, "hi", 10) = 2
>>
>> note how you got packets of data on the reader side, instead of
>> getting the traditional "just buffer it as a stream".
>>
>> So now you can even have multiple readers of the same event pipe, and
>> packetization is obvious and trivial. Of course, I'm not sure why
>> you'd want to have multiple readers, and you'd lose _ordering_, but if
>> all events are independent, this _might_ be a useful thing in a
>> threaded environment. Maybe.
>>
>> (Side note: a zero-sized write will not cause a zero-sized packet. It
>> will just be dropped).
>>
>> Linus
>
> The events are generally not independent - we would need ordering either implicit in the protocol or explicit in the messages. We also need to know in case messages are dropped too - doesn't need to be anything fancy, just some idea that since we last did a read, there are messages that got lost, most likely due to buffer overrun.

This could be a bit fancier: if the pipe recorded the bitwise OR of the first few bytes of each dropped message, then messages could set a bit in the header indicating the type, and readers could then learn which *types* of messages were dropped. Or they could just use multiple pipes.

If this whole mechanism catches on, I wonder if implementing recvmmsg() on pipes would be worthwhile.
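The bitmask idea above can be sketched in a few lines; the type bits and the sticky mask are invented here for illustration:

```c
#include <stdint.h>

/* Hypothetical event-type bits, one per source. A queue that has to
 * drop a message ORs the message's type bit into a sticky mask, so a
 * reader can at least tell which *kinds* of events it missed. */
enum note_type {
    NOTE_KEY   = 1 << 0,
    NOTE_BLOCK = 1 << 1,
    NOTE_USB   = 1 << 2,
};

static uint32_t dropped_types;  /* sticky mask, cleared when consumed */

/* Queue side: remember the type of a message we had to drop. */
static void record_drop(uint32_t type_bit)
{
    dropped_types |= type_bit;
}

/* Reader side: fetch and clear the mask in one step, then re-poll the
 * sources whose bits are set instead of replaying the lost messages. */
static uint32_t consume_dropped_types(void)
{
    uint32_t t = dropped_types;
    dropped_types = 0;
    return t;
}
```

The trade-off is the same as with a single overflow flag: the reader still has to re-query the affected sources, it just has fewer of them to re-query.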
On Fri, Sep 6, 2019 at 10:07 AM Linus Torvalds <torvalds@linux-foundation.org> wrote: > > Hmm. Maybe somebody can come up with a good legacy signaling solution > (and "just use another pipe for error notification and OOB data" for > the first one may _work_, but that sounds pretty hacky and just not > very convenient). ... actually, maybe the trivial solution for at least some prototyping cases is to make any user mode writers never drop messages. Don't use a non-blocking fd for the write direction. That's obviously *not* acceptable for a kernel writer, and it's not acceptable for an actual system daemon writer (that you could block by just not reading the notifications), but it's certainly acceptable for the "let's prototype having kernel support for /proc/mounts notifications using a local thread that just polls for it every few seconds". So at least for _some_ prototypes you can probably just ignore the overflow issue. It won't get you full test coverage, but it will get you a working legacy solution and a "look, if we have kernel support for this, we can do better". Linus
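The no-drop prototype setup is then just a pipe where only the read side is non-blocking; a small sketch assuming Linux's pipe2() and packet-mode O_DIRECT:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Create a packet pipe for prototyping: the writer thread blocks when
 * the pipe is full (so no message is ever lost), while the reader stays
 * non-blocking so it can sit in a poll() loop. Returns 0 or -1. */
static int make_proto_pipe(int fd[2])
{
    if (pipe2(fd, O_DIRECT) < 0)   /* packet mode, both ends blocking */
        return -1;
    int fl = fcntl(fd[0], F_GETFL);
    if (fl < 0)
        return -1;
    /* Add O_NONBLOCK to the existing flags so O_DIRECT is preserved. */
    return fcntl(fd[0], F_SETFL, fl | O_NONBLOCK);
}
```

As Linus notes, this is only honest for a prototype: a kernel-side or daemon-side writer could never be allowed to block on a reader that isn't listening.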
Hi,

On Thu, Sep 5, 2019 at 4:39 PM Linus Torvalds <torvalds@linux-foundation.org> wrote:
> That is *way* too specific to make for any kind of generic
> notification mechanism.

Well from my standpoint, I just don't want to have to poll... I don't have a strong opinion about how it looks architecturally to reach that goal.

Ideally, at a higher level, I want the userspace api that gnome uses to be something like:

err = krb5_cc_watch(ctx, ccache, (krb5_cc_change_fct) on_cc_change, &watch_fd);

then a watch_fd would get handed back and caller could poll on it. if it woke up poll(), caller would do

krb5_cc_watch_update(ctx, ccache, watch_fd)

or so and it would trigger on_cc_change to get called (or something like that).

If under the hood, fd comes from opening /dev/watch_queue, and krb5_cc_watch_update reads from some mmap'ed buffer to decide whether or not to call on_cc_change, that's fine with me.

If under the hood, fd comes from a pipe fd returned from some ioctl or syscall, and krb5_cc_watch_update reads messages directly from that fd to decide whether or not to call on_cc_change, that's fine with me, too.

it could be an eventfd, or whatever, too, just as long as it's something I can add to poll() and don't have to intermittently poll... :-)

> Also, what is the security model here? Open a special character
> device, and you get access to random notifications from random
> sources?

I guess dhowells answered this...

> And why would you do a broken big-key thing in the kernel in the first
> place? Why don't you just have a kernel key to indirectly encrypt
> using a key and "additional user space data". The kernel should simply
> not take care of insane 1MB keys.
Hi,

On Fri, Sep 6, 2019 at 3:32 PM Ray Strode <rstrode@redhat.com> wrote:
> of course, one advantage of having the tickets kernel side is nfs could
> in theory access them directly, rather than up calling back to userspace...

No, that's not true actually, it's still going to need to go to userspace to do hairy context setup, I guess... so
Ray Strode <rstrode@redhat.com> writes: > Linus Torvalds <torvalds@linux-foundation.org> wrote: > >> That is *way* too specific to make for any kind of generic >> notification mechanism. > > Well from my standpoint, I just don't want to have to poll... I don't > have a strong opinion about how it looks architecturally to reach that > goal. > > Ideally, at a higher level, I want the userspace api that gnome uses > to be something like: > > err = krb5_cc_watch (ctx, ccache, (krb5_cc_change_fct) on_cc_change , > &watch_fd); > > then a watch_fd would get handed back and caller could poll on it. if > it woke up poll(), caller would do > > krb5_cc_watch_update (ctx, ccache, watch_fd) > > or so and it would trigger on_cc_change to get called (or something like that). > > If under the hood, fd comes from opening /dev/watch_queue, and > krb5_cc_watch_update reads from some mmap'ed buffer to decide whether > or not to call on_cc_change, that's fine with me. > > If under the hood, fd comes from a pipe fd returned from some ioctl or > syscall, and krb5_cc_watch_update reads messages directly from that fd > to decide whether or not to call on_cc_change, that's fine with > me. too. > > it could be an eventfd too, or whatever, too, just as long as its > something I can add to poll() and don't have to intermittently poll > ... :-) > > >> And why would you do a broken big-key thing in the kernel in the >> first place? Why don't you just have a kernel key to indirectly >> encrypt using a key and "additional user space data". The kernel >> should simply not take care of insane 1MB keys. > >
Theodore Y. Ts'o <tytso@mit.edu> wrote: > Something else which we should consider up front is how to handle the > case where you have multiple userspace processes that want to > subscribe to the same notification. I have that. > This also implies that we'll need to have some kind of standard header > at the beginning to specify the source of a particular notification > message. That too. David
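For reference, the kind of standard header being agreed on here might look something like the sketch below. The layout is a guess for illustration only, not David's actual format:

```c
#include <stdint.h>

/* Hypothetical fixed header at the start of every notification: it
 * identifies the source subsystem and event subtype, and carries the
 * total message length so a reader can skip messages from sources it
 * does not understand. */
struct note_hdr {
    uint16_t source;   /* originating subsystem, e.g. keys, block, usb */
    uint16_t subtype;  /* event code within that source */
    uint32_t info;     /* length in the low bits, flags above */
};

#define NOTE_INFO_LEN 0x000000ffU  /* total message length in bytes */

/* Extract the total message length from the info word. */
static uint32_t note_len(const struct note_hdr *h)
{
    return h->info & NOTE_INFO_LEN;
}
```

Skippability is the property that lets multiple subscribers share one queue: a reader that only cares about key events can step over block or USB messages by their declared length without knowing their contents.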