Message ID: 158454408854.2864823.5910520544515668590.stgit@warthog.procyon.org.uk (mailing list archive)
Series: VFS: Filesystem information [ver #19]
On Wed, Mar 18, 2020 at 4:08 PM David Howells <dhowells@redhat.com> wrote:

> ============================
> WHY NOT USE PROCFS OR SYSFS?
> ============================
>
> Why is it better to go with a new system call rather than adding more magic
> stuff to /proc or /sysfs for each superblock object and each mount object?
>
> (1) It can be targetted.  It makes it easy to query directly by path.
>     procfs and sysfs cannot do this easily.
>
> (2) It's more efficient as we can return specific binary data rather than
>     making huge text dumps.  Granted, sysfs and procfs could present the
>     same data, though as lots of little files which have to be
>     individually opened, read, closed and parsed.

Asked this a number of times, but you haven't answered yet: what
application would require such a high efficiency?

Nobody's suggesting we move stat(2) to proc interfaces, and AFAIK
nobody suggested we move /proc/PID/* to a binary syscall interface.
Each one has its place, and I strongly feel that mount info belongs in
the latter category.  Feel free to prove the opposite.

> (3) We wouldn't have the overhead of open and close (even adding a
>     self-contained readfile() syscall has to do that internally

Busted: add f_op->readfile() and be done with all that.  For example
DEFINE_SHOW_ATTRIBUTE() could be trivially moved to that interface.

We could optimize existing proc, sys, etc. interfaces, but it's not
been an issue, apparently.

> (4) Opening a file in procfs or sysfs has a pathwalk overhead for each
>     file accessed.  We can use an integer attribute ID instead (yes, this
>     is similar to ioctl) - but could also use a string ID if that is
>     preferred.
>
> (5) Can easily query cross-namespace if, say, a container manager process
>     is given an fs_context that hasn't yet been mounted into a namespace -
>     or hasn't even been fully created yet.

Works with my patch.

> (6) Don't have to create/delete a bunch of sysfs/procfs nodes each time a
>     mount happens or is removed - and since systemd makes much use of
>     mount namespaces and mount propagation, this will create a lot of
>     nodes.

Not true.

> The argument for doing this through procfs/sysfs/somemagicfs is that
> someone using a shell can just query the magic files using ordinary text
> tools, such as cat - and that has merit - but it doesn't solve the
> query-by-pathname problem.
>
> The suggested way around the query-by-pathname problem is to open the
> target file O_PATH and then look in a magic directory under procfs
> corresponding to the fd number to see a set of attribute files[*] laid out.
> Bash, however, can't open by O_PATH or O_NOFOLLOW as things stand...

Bash doesn't have fsinfo(2) either, so that's not really a good argument.

Implementing a utility to show mount attribute(s) by path is trivial
for the file based interface, while it would need to be updated for
each extension of fsinfo(2).  Same goes for libc, language bindings,
etc.

Thanks,
Miklos
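[Illustrative sketch] To make the cost both sides are weighing concrete, here is a
minimal example of the file-based pattern, built only on the existing
/proc/self/fdinfo/<fd> attribute (the mnt_id line there is real, current kernel
behaviour; the helper name is invented for illustration).  Each per-attribute
query pays an open, a read, a close and a text parse:

#define _GNU_SOURCE		/* for O_PATH */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Look up the mount ID of whatever 'path' resides on. */
static int mnt_id_of(const char *path)
{
	char info[64], line[256];
	int fd, id = -1;
	FILE *f;

	/* One open per query - the per-attribute cost being debated. */
	fd = open(path, O_PATH | O_CLOEXEC);
	if (fd < 0)
		return -1;

	snprintf(info, sizeof(info), "/proc/self/fdinfo/%d", fd);
	f = fopen(info, "re");
	if (f) {
		/* ...then a read and a text parse of each line... */
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "mnt_id: %d", &id) == 1)
				break;
		fclose(f);
	}
	close(fd);			/* ...and a close. */
	return id;
}

int main(int argc, char **argv)
{
	printf("mnt_id = %d\n", mnt_id_of(argc > 1 ? argv[1] : "/"));
	return 0;
}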
Miklos Szeredi <miklos@szeredi.hu> wrote:

> > (2) It's more efficient as we can return specific binary data rather than
> >     making huge text dumps.  Granted, sysfs and procfs could present the
> >     same data, though as lots of little files which have to be
> >     individually opened, read, closed and parsed.
>
> Asked this a number of times, but you haven't answered yet: what
> application would require such a high efficiency?

Low efficiency means more time doing this when that time could be spent doing
other things - or even putting the CPU in a powersaving state.  Using an
open/read/close render-to-text-and-parse interface *will* be slower and less
efficient as there are more things you have to do to use it.

Then consider doing a walk over all the mounts in the case where there are
10000 of them - we have issues with /proc/mounts for such.  fsinfo() will end
up doing a lot less work.

> I strongly feel that mount info belongs in the latter category

I feel strongly that a lot of stuff done through /proc or /sys shouldn't be.
Yes, it's nice that you can explore it with cat and poke it with echo, but it
has a number of problems: security, atomicity, efficiency and providing a
round-the-back way to pin stuff if not done right.

> > (3) We wouldn't have the overhead of open and close (even adding a
> >     self-contained readfile() syscall has to do that internally
>
> Busted: add f_op->readfile() and be done with all that.  For example
> DEFINE_SHOW_ATTRIBUTE() could be trivially moved to that interface.

Look at your example.  "f_op->".  That's "file->f_op->" I presume.

You would have to make it "i_op->" to avoid the open and the close - and for
things like procfs and sysfs, that's probably entirely reasonable - but bear
in mind that you still have to apply all the LSM file security controls, just
in case the backing filesystem is, say, ext4 rather than procfs.

> We could optimize existing proc, sys, etc. interfaces, but it's not
> been an issue, apparently.

You can't get rid of or change many of the existing interfaces.  A lot of
them are effectively indirect system calls and are, as such, part of the
fixed UAPI.  You'd have to add a parallel optimised set.

> > (6) Don't have to create/delete a bunch of sysfs/procfs nodes each time a
> >     mount happens or is removed - and since systemd makes much use of
> >     mount namespaces and mount propagation, this will create a lot of
> >     nodes.
>
> Not true.

This may not be true if you roll your own special filesystem.  It *is* true
if you do it in procfs or sysfs.  The files don't exist if you don't create
nodes or attribute tables for them.

> > The argument for doing this through procfs/sysfs/somemagicfs is that
> > someone using a shell can just query the magic files using ordinary text
> > tools, such as cat - and that has merit - but it doesn't solve the
> > query-by-pathname problem.
> >
> > The suggested way around the query-by-pathname problem is to open the
> > target file O_PATH and then look in a magic directory under procfs
> > corresponding to the fd number to see a set of attribute files[*] laid out.
> > Bash, however, can't open by O_PATH or O_NOFOLLOW as things stand...
>
> Bash doesn't have fsinfo(2) either, so that's not really a good argument.

I never claimed that fsinfo() could be accessed directly from the shell.  For
your proposal, you claimed "immediately usable from all programming
languages, including scripts".
> Implementing a utility to show mount attribute(s) by path is trivial
> for the file based interface, while it would need to be updated for
> each extension of fsinfo(2).  Same goes for libc, language bindings,
> etc.

That's not precisely true.  If you aren't using an extension to an fsinfo()
attribute, you wouldn't need to change anything[*].  If you want to use an
extension - *even* through a file based interface - you *would* have to
change your code and your parser.

And, no, extending an fsinfo() attribute would not require any changes to
libc unless libc is using that attribute[*] and wants to access the
extension.

[*] I assume that in C/C++ at least, you'd use linux/fsinfo.h rather than
    some libc version.

[*] statfs() could be emulated this way, but I'm not sure what else libc
    specifically is going to look at.  This is more aimed at libmount amongst
    other things.

David
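[Illustrative sketch] For comparison with the file-based example above, this is
roughly what a caller of the proposed fsinfo(2) could look like.  fsinfo(2) from
this series was never merged, so the prototype, the parameter-structure layout,
the syscall number and the attribute constant below are all assumptions made for
illustration, loosely modelled on the cover letter rather than copied from it:

#define _GNU_SOURCE
#include <fcntl.h>		/* AT_FDCWD */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define __NR_fsinfo_assumed		600	/* placeholder, never allocated */
#define FSINFO_ATTR_STATFS_ASSUMED	0	/* placeholder attribute ID */

/* Illustrative parameter block only - the real series defines its own layout. */
struct fsinfo_params_assumed {
	unsigned int	flags;
	unsigned int	request;	/* integer attribute ID, cf. point (4) */
	unsigned int	Nth, Mth;	/* value / sub-value indices */
};

static long fsinfo_assumed(int dfd, const char *path,
			   const struct fsinfo_params_assumed *params,
			   void *buf, size_t buf_size)
{
	/* One call: path walk plus attribute selection, no per-attribute file. */
	return syscall(__NR_fsinfo_assumed, dfd, path,
		       params, sizeof(*params), buf, buf_size);
}

int main(void)
{
	struct fsinfo_params_assumed params = {
		.request = FSINFO_ATTR_STATFS_ASSUMED,
	};
	char buf[256];
	long n = fsinfo_assumed(AT_FDCWD, "/mnt", &params, buf, sizeof(buf));

	printf("fsinfo returned %ld\n", n);
	return 0;
}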
On Thu, Mar 19, 2020 at 11:37 AM David Howells <dhowells@redhat.com> wrote:
>
> Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> > > (2) It's more efficient as we can return specific binary data rather than
> > >     making huge text dumps.  Granted, sysfs and procfs could present the
> > >     same data, though as lots of little files which have to be
> > >     individually opened, read, closed and parsed.
> >
> > Asked this a number of times, but you haven't answered yet: what
> > application would require such a high efficiency?
>
> Low efficiency means more time doing this when that time could be spent doing
> other things - or even putting the CPU in a powersaving state.  Using an
> open/read/close render-to-text-and-parse interface *will* be slower and less
> efficient as there are more things you have to do to use it.
>
> Then consider doing a walk over all the mounts in the case where there are
> 10000 of them - we have issues with /proc/mounts for such.  fsinfo() will end
> up doing a lot less work.

Current /proc/mounts problems arise from the fact that mount info can
only be queried for the whole namespace, and hence changes related to
a single mount will require rescanning the complete mount list.  If
mount info can be queried for individual mounts, then the need to scan
the complete list will be rare.  That's *the* point of this change.

> > > (3) We wouldn't have the overhead of open and close (even adding a
> > >     self-contained readfile() syscall has to do that internally
> >
> > Busted: add f_op->readfile() and be done with all that.  For example
> > DEFINE_SHOW_ATTRIBUTE() could be trivially moved to that interface.
>
> Look at your example.  "f_op->".  That's "file->f_op->" I presume.
>
> You would have to make it "i_op->" to avoid the open and the close - and for
> things like procfs and sysfs, that's probably entirely reasonable - but bear
> in mind that you still have to apply all the LSM file security controls, just
> in case the backing filesystem is, say, ext4 rather than procfs.
>
> > We could optimize existing proc, sys, etc. interfaces, but it's not
> > been an issue, apparently.
>
> You can't get rid of or change many of the existing interfaces.  A lot of
> them are effectively indirect system calls and are, as such, part of the
> fixed UAPI.  You'd have to add a parallel optimised set.

Sure.  We already have the single_open() internal API that is
basically a ->readfile() wrapper.  Moving this up to the f_op level
(no, it's not an i_op, and yes, we do need struct file, but it can be
simply allocated on the stack) is a trivial optimization that would
let a readfile(2) syscall access that level.  No new complexity in
that case.

Same generally goes for seq_file: seq_readfile() is trivial to
implement without messing with the current implementation or any
existing APIs.

> > > (6) Don't have to create/delete a bunch of sysfs/procfs nodes each time a
> > >     mount happens or is removed - and since systemd makes much use of
> > >     mount namespaces and mount propagation, this will create a lot of
> > >     nodes.
> >
> > Not true.
>
> This may not be true if you roll your own special filesystem.  It *is* true
> if you do it in procfs or sysfs.  The files don't exist if you don't create
> nodes or attribute tables for them.

That's one of the reasons why I opted to roll my own.  But the ideas
therein could be applied to kernfs, if found to be generally useful.
Nothing magic about that.
> > > The argument for doing this through procfs/sysfs/somemagicfs is that
> > > someone using a shell can just query the magic files using ordinary text
> > > tools, such as cat - and that has merit - but it doesn't solve the
> > > query-by-pathname problem.
> > >
> > > The suggested way around the query-by-pathname problem is to open the
> > > target file O_PATH and then look in a magic directory under procfs
> > > corresponding to the fd number to see a set of attribute files[*] laid out.
> > > Bash, however, can't open by O_PATH or O_NOFOLLOW as things stand...
> >
> > Bash doesn't have fsinfo(2) either, so that's not really a good argument.
>
> I never claimed that fsinfo() could be accessed directly from the shell.  For
> your proposal, you claimed "immediately usable from all programming
> languages, including scripts".

You are right.  Note however: only special files need the O_PATH
handling; regular files and directories can be opened by the shell
without side effects.

In any case, I don't think either of us is going to convince the
other, so I guess it's up to Al and Linus to make a decision.

Thanks,
Miklos
On Wed, 2020-03-18 at 17:05 +0100, Miklos Szeredi wrote:
> On Wed, Mar 18, 2020 at 4:08 PM David Howells <dhowells@redhat.com> wrote:
>
> > ============================
> > WHY NOT USE PROCFS OR SYSFS?
> > ============================
> >
> > Why is it better to go with a new system call rather than adding more magic
> > stuff to /proc or /sysfs for each superblock object and each mount object?
> >
> > (1) It can be targetted.  It makes it easy to query directly by path.
> >     procfs and sysfs cannot do this easily.
> >
> > (2) It's more efficient as we can return specific binary data rather than
> >     making huge text dumps.  Granted, sysfs and procfs could present the
> >     same data, though as lots of little files which have to be
> >     individually opened, read, closed and parsed.
>
> Asked this a number of times, but you haven't answered yet: what
> application would require such a high efficiency?

Umm ... systemd and udisks2 and about 4 others.

A problem I've had with autofs for years is that using autofs direct
mount maps of any appreciable size causes several key user space
applications to consume all available CPU while autofs is starting or
stopping, which takes a fair while with a very large mount table.  I
saw a couple of applications affected purely because of the large
mount table but not as badly as starting or stopping autofs.

Maps of 5,000 to 10,000 map entries can almost be handled, not
uncommon for heavy autofs users in spite of the problem, but much
larger than that and you've got a serious problem.

There are problems with expiration as well but that's more an autofs
problem that I need to fix.

To be clear, it's not autofs that needs the improvement (I need to
deal with this in autofs itself), it's the effect that these large
mount tables have on the rest of user space, and that's quite
significant.

I can't even think about resolving my autofs problem until this
problem is resolved, and handling very large numbers of mounts as
efficiently as possible must be part of that solution for me, and I
think for the OS overall too.

Ian

> Nobody's suggesting we move stat(2) to proc interfaces, and AFAIK
> nobody suggested we move /proc/PID/* to a binary syscall interface.
> Each one has its place, and I strongly feel that mount info belongs in
> the latter category.  Feel free to prove the opposite.
>
> > (3) We wouldn't have the overhead of open and close (even adding a
> >     self-contained readfile() syscall has to do that internally
>
> Busted: add f_op->readfile() and be done with all that.  For example
> DEFINE_SHOW_ATTRIBUTE() could be trivially moved to that interface.
>
> We could optimize existing proc, sys, etc. interfaces, but it's not
> been an issue, apparently.
>
> > (4) Opening a file in procfs or sysfs has a pathwalk overhead for each
> >     file accessed.  We can use an integer attribute ID instead (yes, this
> >     is similar to ioctl) - but could also use a string ID if that is
> >     preferred.
> >
> > (5) Can easily query cross-namespace if, say, a container manager process
> >     is given an fs_context that hasn't yet been mounted into a namespace -
> >     or hasn't even been fully created yet.
>
> Works with my patch.
>
> > (6) Don't have to create/delete a bunch of sysfs/procfs nodes each time a
> >     mount happens or is removed - and since systemd makes much use of
> >     mount namespaces and mount propagation, this will create a lot of
> >     nodes.
>
> Not true.
> > The argument for doing this through procfs/sysfs/somemagicfs is that
> > someone using a shell can just query the magic files using ordinary text
> > tools, such as cat - and that has merit - but it doesn't solve the
> > query-by-pathname problem.
> >
> > The suggested way around the query-by-pathname problem is to open the
> > target file O_PATH and then look in a magic directory under procfs
> > corresponding to the fd number to see a set of attribute files[*] laid out.
> > Bash, however, can't open by O_PATH or O_NOFOLLOW as things stand...
>
> Bash doesn't have fsinfo(2) either, so that's not really a good argument.
>
> Implementing a utility to show mount attribute(s) by path is trivial
> for the file based interface, while it would need to be updated for
> each extension of fsinfo(2).  Same goes for libc, language bindings,
> etc.
>
> Thanks,
> Miklos
On Wed, Apr 1, 2020 at 7:22 AM Ian Kent <raven@themaw.net> wrote:
>
> On Wed, 2020-03-18 at 17:05 +0100, Miklos Szeredi wrote:
> > On Wed, Mar 18, 2020 at 4:08 PM David Howells <dhowells@redhat.com> wrote:
> >
> > > ============================
> > > WHY NOT USE PROCFS OR SYSFS?
> > > ============================
> > >
> > > Why is it better to go with a new system call rather than adding more magic
> > > stuff to /proc or /sysfs for each superblock object and each mount object?
> > >
> > > (1) It can be targetted.  It makes it easy to query directly by path.
> > >     procfs and sysfs cannot do this easily.
> > >
> > > (2) It's more efficient as we can return specific binary data rather than
> > >     making huge text dumps.  Granted, sysfs and procfs could present the
> > >     same data, though as lots of little files which have to be
> > >     individually opened, read, closed and parsed.
> >
> > Asked this a number of times, but you haven't answered yet: what
> > application would require such a high efficiency?
>
> Umm ... systemd and udisks2 and about 4 others.
>
> A problem I've had with autofs for years is that using autofs direct
> mount maps of any appreciable size causes several key user space
> applications to consume all available CPU while autofs is starting or
> stopping, which takes a fair while with a very large mount table.  I
> saw a couple of applications affected purely because of the large
> mount table but not as badly as starting or stopping autofs.
>
> Maps of 5,000 to 10,000 map entries can almost be handled, not
> uncommon for heavy autofs users in spite of the problem, but much
> larger than that and you've got a serious problem.
>
> There are problems with expiration as well but that's more an autofs
> problem that I need to fix.
>
> To be clear, it's not autofs that needs the improvement (I need to
> deal with this in autofs itself), it's the effect that these large
> mount tables have on the rest of user space, and that's quite
> significant.

According to dhowells' measurements, processing 100k mounts would take
a few seconds of system time (that's the time spent by the kernel to
retrieve the data; obviously the userspace processing would add to
that, but that's independent of the kernel patchset).

I think that sort of time spent by the kernel is entirely reasonable
and is probably not worth heavy optimization, since userspace is
probably going to spend as much, if not more, time with each mount
entry.

> I can't even think about resolving my autofs problem until this
> problem is resolved, and handling very large numbers of mounts as
> efficiently as possible must be part of that solution for me, and I
> think for the OS overall too.

The key to that is allowing userspace to retrieve individual mount
entries instead of having to parse the complete mount table on every
change.

Thanks,
Miklos
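[Illustrative sketch] The rescanning problem Miklos refers to is the existing,
documented behaviour of /proc/self/mountinfo: a poll() wakeup only says that the
table changed, so a watcher such as systemd has to rewind and re-read the whole
thing on every change.  A minimal sketch of that status quo, using only current
interfaces:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/self/mountinfo", O_RDONLY | O_CLOEXEC);
	char buf[65536];
	ssize_t n;

	if (fd < 0)
		return 1;

	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLPRI };

		poll(&pfd, 1, -1);	/* wakes when any mount in the namespace changes */

		/* The notification carries no detail, so rewind and re-read the
		 * whole table; a real consumer would re-parse every line here. */
		lseek(fd, 0, SEEK_SET);
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			;
		fprintf(stderr, "mount table changed, re-read the whole table\n");
	}
}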
Miklos Szeredi <miklos@szeredi.hu> wrote:

> According to dhowells' measurements, processing 100k mounts would take
> a few seconds of system time (that's the time spent by the kernel to
> retrieve the data;

But the inefficiency of mountfs - at least as currently implemented - scales
up with the number of individual values you want to retrieve, both in terms
of memory usage and time taken.

With fsinfo(), I've tried to batch values together where it makes sense - and
there's no lingering memory overhead - no extra inodes, dentries and files
required.

David
On Wed, Apr 1, 2020 at 10:27 AM David Howells <dhowells@redhat.com> wrote:
>
> Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> > According to dhowells' measurements, processing 100k mounts would take
> > a few seconds of system time (that's the time spent by the kernel to
> > retrieve the data;
>
> But the inefficiency of mountfs - at least as currently implemented - scales
> up with the number of individual values you want to retrieve, both in terms
> of memory usage and time taken.

I've taken that into account when guesstimating a "few seconds per
100k entries".  My guess is that there's probably an order of
magnitude difference between the performance of a fs based interface
and a binary syscall based interface.  That could be reduced somewhat
with a readfile(2) type API.

But the point is: this does not matter.  Whether it's .5s or 5s is
completely irrelevant, as neither is going to take down the system,
and userspace processing is probably going to take as much, if not
more, time.  And remember, we are talking about stopping and starting
the automount daemon, which is something that happens, but it should
not happen often by any measure.

> With fsinfo(), I've tried to batch values together where it makes sense - and
> there's no lingering memory overhead - no extra inodes, dentries and files
> required.

The dentries, inodes and files in your test are single use (except the
root dentry) and can be made ephemeral if that turns out to be better.
My guess is that dentries belonging to individual attributes should be
deleted on final put, while the dentries belonging to the mount
directory can be reclaimed normally.

Thanks,
Miklos
On Wed, Apr 1, 2020 at 10:37 AM Miklos Szeredi <miklos@szeredi.hu> wrote:
>
> On Wed, Apr 1, 2020 at 10:27 AM David Howells <dhowells@redhat.com> wrote:
> >
> > Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > > According to dhowells' measurements, processing 100k mounts would take
> > > a few seconds of system time (that's the time spent by the kernel to
> > > retrieve the data;
> >
> > But the inefficiency of mountfs - at least as currently implemented - scales
> > up with the number of individual values you want to retrieve, both in terms
> > of memory usage and time taken.
>
> I've taken that into account when guesstimating a "few seconds per
> 100k entries".  My guess is that there's probably an order of
> magnitude difference between the performance of a fs based interface
> and a binary syscall based interface.  That could be reduced somewhat
> with a readfile(2) type API.

And to show that I'm not completely off base, attached is a patch that
adds a limited readfile(2) syscall and uses it in the p2 method.
Results are promising:

./test-fsinfo-perf /tmp/a 30000
--- make mounts ---
--- test fsinfo by path ---
sum(mnt_id) = 930000
--- test fsinfo by mnt_id ---
sum(mnt_id) = 930000
--- test /proc/fdinfo ---
sum(mnt_id) = 930000
--- test mountfs ---
sum(mnt_id) = 930000
For 30000 mounts, f= 146400us f2= 136766us p= 1406569us p2= 221669us; p=9.6*f p=10.3*f2 p=6.3*p2
--- umount ---

This is about a 2-fold increase in speed compared to open + read + close.

Is someone still worried about performance, or can we move on to more
interesting parts of the design?

Thanks,
Miklos
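[Illustrative sketch] For reference, this is roughly how the "limited
readfile(2)" being benchmarked above might be called: one syscall instead of
open + read + close.  readfile(2) is not in mainline, so the prototype and the
syscall number below are placeholders modelled on the proposal, not a real ABI;
the file read here is just an ordinary procfs file used as an example:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define __NR_readfile_assumed	601	/* placeholder, never allocated */

static ssize_t readfile_assumed(int dfd, const char *path,
				char *buf, size_t count, int flags)
{
	/* One trip into the kernel: open, read and close happen internally. */
	return syscall(__NR_readfile_assumed, dfd, path, buf, count, flags);
}

int main(void)
{
	char buf[4096];
	ssize_t n = readfile_assumed(AT_FDCWD, "/proc/self/mountinfo",
				     buf, sizeof(buf) - 1, 0);

	if (n >= 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	return n < 0;
}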
Miklos Szeredi <miklos@szeredi.hu> wrote:

> For 30000 mounts, f= 146400us f2= 136766us p= 1406569us p2= 221669us;
> p=9.6*f p=10.3*f2 p=6.3*p2

f = 146400us
f2= 136766us
p = 1406569us  <--- Order of magnitude slower
p2= 221669us

And more memory used because it's added a whole bunch of inodes and dentries
to the cache.  For each mount that's a pair for each dir and a pair for each
file within the dir.  So for the two files my test is reading, for 30000
mounts, that's 90000 dentries and 90000 inodes in mountfs alone.

(gdb) p sizeof(struct dentry)
$1 = 216
(gdb) p sizeof(struct inode)
$2 = 696
(gdb) p (216*696)*30000*3/1024/1024
$3 = 615

so 615 MiB of RAM added to the caches in an extreme case.  We're seeing
customers with 10000+ mounts - that would be 205 MiB, just to read two values
from each mount.

I presume you're not going through /proc/fdinfo each time as that would add
another d+i - for >1GiB added to the caches for 30000 mounts.

David
On Wed, 2020-04-01 at 10:37 +0200, Miklos Szeredi wrote:
> On Wed, Apr 1, 2020 at 10:27 AM David Howells <dhowells@redhat.com> wrote:
> > Miklos Szeredi <miklos@szeredi.hu> wrote:
> >
> > > According to dhowells' measurements, processing 100k mounts would take
> > > a few seconds of system time (that's the time spent by the kernel to
> > > retrieve the data;
> >
> > But the inefficiency of mountfs - at least as currently implemented - scales
> > up with the number of individual values you want to retrieve, both in terms
> > of memory usage and time taken.
>
> I've taken that into account when guesstimating a "few seconds per
> 100k entries".  My guess is that there's probably an order of
> magnitude difference between the performance of a fs based interface
> and a binary syscall based interface.  That could be reduced somewhat
> with a readfile(2) type API.
>
> But the point is: this does not matter.  Whether it's .5s or 5s is
> completely irrelevant, as neither is going to take down the system,
> and userspace processing is probably going to take as much, if not
> more, time.  And remember, we are talking about stopping and starting
> the automount daemon, which is something that happens, but it should
> not happen often by any measure.

Yes, but don't forget, I'm reporting what I saw when testing during
development.

From previous discussion we know systemd (and probably the other apps
like udisks2, et al.) gets notified on mount and umount activity, so
it's not going to be just starting and stopping autofs that's a
problem with very large mount tables.

To get a feel for the real difference we'd need to make the libmount
changes for both and then compare the two and check behaviour.  The
mount and umount lookup case that Karel (and I) talked about should be
sufficient.

The biggest problem I had with fsinfo() when I was working with
earlier series was getting fs specific options, in particular the need
to use the sb op ->fsinfo().  With this latest series David has made
that part of the generic code, and your patch also covers it.  So the
thing that was holding me up is done and we should be getting on with
the libmount improvements; we need to settle this.

I prefer the system call interface and I'm not offering justification
for that other than a general dislike (and on occasion outright
frustration) of pretty much every proc implementation I have had to
look at.

> > With fsinfo(), I've tried to batch values together where it makes sense - and
> > there's no lingering memory overhead - no extra inodes, dentries and files
> > required.
>
> The dentries, inodes and files in your test are single use (except the
> root dentry) and can be made ephemeral if that turns out to be better.
> My guess is that dentries belonging to individual attributes should be
> deleted on final put, while the dentries belonging to the mount
> directory can be reclaimed normally.
>
> Thanks,
> Miklos
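[Illustrative sketch] The libmount lookup case Ian mentions can be seen in the
existing API: even a question about a single mount point currently goes through
a full parse of the mountinfo table.  The calls below are real libmount
functions (link with -lmount); only the example target path is made up:

#include <libmount.h>
#include <stdio.h>

int main(void)
{
	struct libmnt_table *tb = mnt_new_table();
	struct libmnt_fs *fs;
	int ret = 1;

	/* Parses every line of /proc/self/mountinfo, however many there are. */
	if (tb && mnt_table_parse_mtab(tb, NULL) == 0) {
		fs = mnt_table_find_target(tb, "/mnt/data", MNT_ITER_BACKWARD);
		if (fs) {
			printf("%s is %s on %s\n", "/mnt/data",
			       mnt_fs_get_fstype(fs), mnt_fs_get_source(fs));
			ret = 0;
		}
	}
	mnt_unref_table(tb);
	return ret;
}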
On Thu, Apr 02, 2020 at 09:38:20AM +0800, Ian Kent wrote:
> I prefer the system call interface and I'm not offering justification
> for that other than a general dislike (and on occasion outright
> frustration) of pretty much every proc implementation I have had to
> look at.

Frankly, I'm modest: what about having both interfaces in the kernel --
fsinfo() as well as mountfs?  It's nothing unusual, for example, for block
devices to have attributes accessible via /sys as well as via ioctl().

I can imagine that for complex or performance sensitive tasks it's better
to use fsinfo(), but in other simple use-cases (for example converting a
mountpoint to a device name in shell) it's better to read /proc/.../<attr>.

Karel
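[Illustrative sketch] Karel's block-device analogy made concrete with interfaces
that already exist: the same size value is available both as a binary ioctl and
as a sysfs text attribute, which is the kind of duality he is proposing for
mount information.  BLKGETSIZE64 and /sys/class/block/<dev>/size are real,
current interfaces; /dev/sda is just an example device:

#include <fcntl.h>
#include <linux/fs.h>		/* BLKGETSIZE64 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	unsigned long long bytes = 0, sectors = 0;
	char line[64];
	int fd;
	FILE *f;

	/* Programmatic path: binary ioctl, no text parsing. */
	fd = open("/dev/sda", O_RDONLY | O_CLOEXEC);
	if (fd >= 0) {
		if (ioctl(fd, BLKGETSIZE64, &bytes) == 0)
			printf("ioctl:  %llu bytes\n", bytes);
		close(fd);
	}

	/* Shell-friendly path: sysfs attribute, reported in 512-byte sectors. */
	f = fopen("/sys/class/block/sda/size", "re");
	if (f) {
		if (fgets(line, sizeof(line), f) &&
		    sscanf(line, "%llu", &sectors) == 1)
			printf("sysfs:  %llu bytes\n", sectors * 512);
		fclose(f);
	}
	return 0;
}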