Message ID: 20220623220613.3014268-2-kaleshsingh@google.com (mailing list archive)
State: New, archived
Series: procfs: Add file path and size to /proc/<pid>/fdinfo
On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote:
> To be able to account the amount of memory a process is keeping pinned
> by open file descriptors add a 'size' field to fdinfo output.
>
> dmabufs fds already expose a 'size' field for this reason, remove this
> and make it a common field for all fds. This allows tracking of
> other types of memory (e.g. memfd and ashmem in Android).
>
> Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
> Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
>
> Changes in v2:
> - Add Christian's Reviewed-by
>
> Changes from rfc:
> - Split adding 'size' and 'path' into a separate patches, per Christian
> - Split fdinfo seq_printf into separate lines, per Christian
> - Fix indentation (use tabs) in documentaion, per Randy
>
> Documentation/filesystems/proc.rst | 12 ++++++++++--
> drivers/dma-buf/dma-buf.c          |  1 -
> fs/proc/fd.c                       |  9 +++++----
> 3 files changed, 15 insertions(+), 7 deletions(-)
>
...
> diff --git a/fs/proc/fd.c b/fs/proc/fd.c
> index 913bef0d2a36..464bc3f55759 100644
> --- a/fs/proc/fd.c
> +++ b/fs/proc/fd.c
> @@ -54,10 +54,11 @@ static int seq_show(struct seq_file *m, void *v)
>  	if (ret)
>  		return ret;
>
> -	seq_printf(m, "pos:\t%lli\nflags:\t0%o\nmnt_id:\t%i\nino:\t%lu\n",
> -		   (long long)file->f_pos, f_flags,
> -		   real_mount(file->f_path.mnt)->mnt_id,
> -		   file_inode(file)->i_ino);
> +	seq_printf(m, "pos:\t%lli\n", (long long)file->f_pos);
> +	seq_printf(m, "flags:\t0%o\n", f_flags);
> +	seq_printf(m, "mnt_id:\t%i\n", real_mount(file->f_path.mnt)->mnt_id);
> +	seq_printf(m, "ino:\t%lu\n", file_inode(file)->i_ino);
> +	seq_printf(m, "size:\t%lli\n", (long long)file_inode(file)->i_size);

Hi Kalesh,

Any reason not to use i_size_read() here?

Also not sure if it matters that much for your use case, but something
worth noting at least with shmem is that one can do something like:

# cat /proc/meminfo | grep Shmem:
Shmem:               764 kB
# xfs_io -fc "falloc -k 0 10m" ./file
# ls -alh file
-rw-------. 1 root root 0 Jun 28 07:22 file
# stat file
  File: file
  Size: 0          Blocks: 20480      IO Block: 4096   regular empty file
# cat /proc/meminfo | grep Shmem:
Shmem:             11004 kB

... where the resulting memory usage isn't reflected in i_size (but it
is in i_blocks/bytes).

Brian

>
>  	/* show_fd_locks() never deferences files so a stale value is safe */
>  	show_fd_locks(m, file, files);
> -- 
> 2.37.0.rc0.161.g10f37bed90-goog
>
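For reference, the change Brian is suggesting would look roughly like the sketch below. i_size_read() is the existing VFS helper (it samples i_size under a seqcount/preemption guard so a 32-bit SMP reader cannot observe a torn 64-bit value); the show_fd_size() wrapper is only an illustrative name, not code from the posted patch:

/*
 * Sketch only: the fdinfo size line using i_size_read(), as suggested
 * above.  show_fd_size() is a made-up helper for illustration.
 */
#include <linux/fs.h>
#include <linux/seq_file.h>

static void show_fd_size(struct seq_file *m, struct file *file)
{
	struct inode *inode = file_inode(file);

	/* i_size_read() avoids torn reads of i_size on 32-bit SMP */
	seq_printf(m, "size:\t%lli\n", (long long)i_size_read(inode));
}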
On Tue, Jun 28, 2022 at 4:54 AM Brian Foster <bfoster@redhat.com> wrote: > > On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote: > > To be able to account the amount of memory a process is keeping pinned > > by open file descriptors add a 'size' field to fdinfo output. > > > > dmabufs fds already expose a 'size' field for this reason, remove this > > and make it a common field for all fds. This allows tracking of > > other types of memory (e.g. memfd and ashmem in Android). > > > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com> > > Reviewed-by: Christian König <christian.koenig@amd.com> > > --- > > > > Changes in v2: > > - Add Christian's Reviewed-by > > > > Changes from rfc: > > - Split adding 'size' and 'path' into a separate patches, per Christian > > - Split fdinfo seq_printf into separate lines, per Christian > > - Fix indentation (use tabs) in documentaion, per Randy > > > > Documentation/filesystems/proc.rst | 12 ++++++++++-- > > drivers/dma-buf/dma-buf.c | 1 - > > fs/proc/fd.c | 9 +++++---- > > 3 files changed, 15 insertions(+), 7 deletions(-) > > > ... > > diff --git a/fs/proc/fd.c b/fs/proc/fd.c > > index 913bef0d2a36..464bc3f55759 100644 > > --- a/fs/proc/fd.c > > +++ b/fs/proc/fd.c > > @@ -54,10 +54,11 @@ static int seq_show(struct seq_file *m, void *v) > > if (ret) > > return ret; > > > > - seq_printf(m, "pos:\t%lli\nflags:\t0%o\nmnt_id:\t%i\nino:\t%lu\n", > > - (long long)file->f_pos, f_flags, > > - real_mount(file->f_path.mnt)->mnt_id, > > - file_inode(file)->i_ino); > > + seq_printf(m, "pos:\t%lli\n", (long long)file->f_pos); > > + seq_printf(m, "flags:\t0%o\n", f_flags); > > + seq_printf(m, "mnt_id:\t%i\n", real_mount(file->f_path.mnt)->mnt_id); > > + seq_printf(m, "ino:\t%lu\n", file_inode(file)->i_ino); > > + seq_printf(m, "size:\t%lli\n", (long long)file_inode(file)->i_size); > > Hi Kalesh, > > Any reason not to use i_size_read() here? Hi Brian. Thanks for pointing this out. You are right, we should use i_size_read() here. I'll update in the next version. > > Also not sure if it matters that much for your use case, but something > worth noting at least with shmem is that one can do something like: > > # cat /proc/meminfo | grep Shmem: > Shmem: 764 kB > # xfs_io -fc "falloc -k 0 10m" ./file > # ls -alh file > -rw-------. 1 root root 0 Jun 28 07:22 file > # stat file > File: file > Size: 0 Blocks: 20480 IO Block: 4096 regular empty file > # cat /proc/meminfo | grep Shmem: > Shmem: 11004 kB > > ... where the resulting memory usage isn't reflected in i_size (but is > is in i_blocks/bytes). I tried a similar experiment a few times, but I don't see the same results. In my case, there is not any change in shmem. IIUC the fallocate is allocating the disk space not shared memory. cat /proc/meminfo > meminfo.start xfs_io -fc "falloc -k 0 50m" ./xfs_file cat /proc/meminfo > meminfo.stop tail -n +1 meminfo.st* | grep -i '==\|Shmem:' ==> meminfo.start <== Shmem: 484 kB ==> meminfo.stop <== Shmem: 484 kB ls -lh xfs_file -rw------- 1 root root 0 Jun 28 15:12 xfs_file stat xfs_file File: xfs_file Size: 0 Blocks: 102400 IO Block: 4096 regular empty file Thanks, Kalesh > > Brian > > > > > /* show_fd_locks() never deferences files so a stale value is safe */ > > show_fd_locks(m, file, files); > > -- > > 2.37.0.rc0.161.g10f37bed90-goog > > >
On Tue, Jun 28, 2022 at 03:38:02PM -0700, Kalesh Singh wrote: > On Tue, Jun 28, 2022 at 4:54 AM Brian Foster <bfoster@redhat.com> wrote: > > > > On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote: > > > To be able to account the amount of memory a process is keeping pinned > > > by open file descriptors add a 'size' field to fdinfo output. > > > > > > dmabufs fds already expose a 'size' field for this reason, remove this > > > and make it a common field for all fds. This allows tracking of > > > other types of memory (e.g. memfd and ashmem in Android). > > > > > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com> > > > Reviewed-by: Christian König <christian.koenig@amd.com> > > > --- > > > > > > Changes in v2: > > > - Add Christian's Reviewed-by > > > > > > Changes from rfc: > > > - Split adding 'size' and 'path' into a separate patches, per Christian > > > - Split fdinfo seq_printf into separate lines, per Christian > > > - Fix indentation (use tabs) in documentaion, per Randy > > > > > > Documentation/filesystems/proc.rst | 12 ++++++++++-- > > > drivers/dma-buf/dma-buf.c | 1 - > > > fs/proc/fd.c | 9 +++++---- > > > 3 files changed, 15 insertions(+), 7 deletions(-) > > > ... > > > > Also not sure if it matters that much for your use case, but something > > worth noting at least with shmem is that one can do something like: > > > > # cat /proc/meminfo | grep Shmem: > > Shmem: 764 kB > > # xfs_io -fc "falloc -k 0 10m" ./file > > # ls -alh file > > -rw-------. 1 root root 0 Jun 28 07:22 file > > # stat file > > File: file > > Size: 0 Blocks: 20480 IO Block: 4096 regular empty file > > # cat /proc/meminfo | grep Shmem: > > Shmem: 11004 kB > > > > ... where the resulting memory usage isn't reflected in i_size (but is > > is in i_blocks/bytes). > > I tried a similar experiment a few times, but I don't see the same > results. In my case, there is not any change in shmem. IIUC the > fallocate is allocating the disk space not shared memory. > Sorry, it was implied in my previous test was that I was running against tmpfs. So regardless of fs, the fallocate keep_size semantics shown in both cases is as expected: the underlying blocks are allocated and the inode size is unchanged. What wasn't totally clear to me when I read this patch was 1. whether tmpfs refers to Shmem and 2. whether tmpfs allowed this sort of operation. The test above seems to confirm both, however, right? E.g., a more detailed example: # mount | grep /tmp tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64) # cat /proc/meminfo | grep Shmem: Shmem: 5300 kB # xfs_io -fc "falloc -k 0 1g" /tmp/file # stat /tmp/file File: /tmp/file Size: 0 Blocks: 2097152 IO Block: 4096 regular empty file Device: 22h/34d Inode: 45 Links: 1 Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) Context: unconfined_u:object_r:user_tmp_t:s0 Access: 2022-06-29 08:04:01.301307154 -0400 Modify: 2022-06-29 08:04:01.301307154 -0400 Change: 2022-06-29 08:04:01.451312834 -0400 Birth: 2022-06-29 08:04:01.301307154 -0400 # cat /proc/meminfo | grep Shmem: Shmem: 1053876 kB # rm -f /tmp/file # cat /proc/meminfo | grep Shmem: Shmem: 5300 kB So clearly this impacts Shmem.. was your test run against tmpfs or some other (disk based) fs? FWIW, I don't have any objection to exposing inode size if it's commonly useful information. 
My feedback was more just an fyi that i_size doesn't necessarily reflect underlying space consumption (whether it's memory or disk space) in more generic cases, because it sounds like that is really what you're after here. The opposite example to the above would be something like an 'xfs_io -fc "truncate 1t" /tmp/file', which shows a 1TB inode size with zero additional shmem usage. Brian > cat /proc/meminfo > meminfo.start > xfs_io -fc "falloc -k 0 50m" ./xfs_file > cat /proc/meminfo > meminfo.stop > tail -n +1 meminfo.st* | grep -i '==\|Shmem:' > > ==> meminfo.start <== > Shmem: 484 kB > ==> meminfo.stop <== > Shmem: 484 kB > > ls -lh xfs_file > -rw------- 1 root root 0 Jun 28 15:12 xfs_file > > stat xfs_file > File: xfs_file > Size: 0 Blocks: 102400 IO Block: 4096 regular empty file > > Thanks, > Kalesh > > > > > Brian > > > > > > > > /* show_fd_locks() never deferences files so a stale value is safe */ > > > show_fd_locks(m, file, files); > > > -- > > > 2.37.0.rc0.161.g10f37bed90-goog > > > > > >
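The divergence Brian describes can also be reproduced without xfs_io. The sketch below (illustrative only, error handling kept minimal) performs the same "falloc -k" and "truncate" steps against a tmpfs-backed memfd and prints st_size next to st_blocks, which is exactly the gap between i_size and i_blocks being discussed:

/* Userspace illustration of i_size vs. i_blocks on a tmpfs-backed memfd. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void report(int fd, const char *when)
{
	struct stat st;

	if (fstat(fd, &st) == 0)
		printf("%-22s st_size=%lld st_blocks=%lld (~%lld KiB backed)\n",
		       when, (long long)st.st_size, (long long)st.st_blocks,
		       (long long)st.st_blocks * 512 / 1024);
}

int main(void)
{
	int fd = memfd_create("demo", 0);

	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}

	/* like 'xfs_io -fc "falloc -k 0 10m"': pages allocated, i_size stays 0 */
	if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 10 << 20) < 0)
		perror("fallocate");
	report(fd, "after falloc -k 10m:");

	/* like 'xfs_io -fc "truncate 1g"': i_size grows, no extra pages allocated */
	if (ftruncate(fd, 1024L * 1024 * 1024) < 0)
		perror("ftruncate");
	report(fd, "after truncate 1g:");

	return 0;
}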
On Wed, Jun 29, 2022 at 5:23 AM Brian Foster <bfoster@redhat.com> wrote: > > On Tue, Jun 28, 2022 at 03:38:02PM -0700, Kalesh Singh wrote: > > On Tue, Jun 28, 2022 at 4:54 AM Brian Foster <bfoster@redhat.com> wrote: > > > > > > On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote: > > > > To be able to account the amount of memory a process is keeping pinned > > > > by open file descriptors add a 'size' field to fdinfo output. > > > > > > > > dmabufs fds already expose a 'size' field for this reason, remove this > > > > and make it a common field for all fds. This allows tracking of > > > > other types of memory (e.g. memfd and ashmem in Android). > > > > > > > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com> > > > > Reviewed-by: Christian König <christian.koenig@amd.com> > > > > --- > > > > > > > > Changes in v2: > > > > - Add Christian's Reviewed-by > > > > > > > > Changes from rfc: > > > > - Split adding 'size' and 'path' into a separate patches, per Christian > > > > - Split fdinfo seq_printf into separate lines, per Christian > > > > - Fix indentation (use tabs) in documentaion, per Randy > > > > > > > > Documentation/filesystems/proc.rst | 12 ++++++++++-- > > > > drivers/dma-buf/dma-buf.c | 1 - > > > > fs/proc/fd.c | 9 +++++---- > > > > 3 files changed, 15 insertions(+), 7 deletions(-) > > > > > ... > > > > > > Also not sure if it matters that much for your use case, but something > > > worth noting at least with shmem is that one can do something like: > > > > > > # cat /proc/meminfo | grep Shmem: > > > Shmem: 764 kB > > > # xfs_io -fc "falloc -k 0 10m" ./file > > > # ls -alh file > > > -rw-------. 1 root root 0 Jun 28 07:22 file > > > # stat file > > > File: file > > > Size: 0 Blocks: 20480 IO Block: 4096 regular empty file > > > # cat /proc/meminfo | grep Shmem: > > > Shmem: 11004 kB > > > > > > ... where the resulting memory usage isn't reflected in i_size (but is > > > is in i_blocks/bytes). > > > > I tried a similar experiment a few times, but I don't see the same > > results. In my case, there is not any change in shmem. IIUC the > > fallocate is allocating the disk space not shared memory. > > > > Sorry, it was implied in my previous test was that I was running against > tmpfs. So regardless of fs, the fallocate keep_size semantics shown in > both cases is as expected: the underlying blocks are allocated and the > inode size is unchanged. > > What wasn't totally clear to me when I read this patch was 1. whether > tmpfs refers to Shmem and 2. whether tmpfs allowed this sort of > operation. The test above seems to confirm both, however, right? E.g., a > more detailed example: > > # mount | grep /tmp > tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64) > # cat /proc/meminfo | grep Shmem: > Shmem: 5300 kB > # xfs_io -fc "falloc -k 0 1g" /tmp/file > # stat /tmp/file > File: /tmp/file > Size: 0 Blocks: 2097152 IO Block: 4096 regular empty file > Device: 22h/34d Inode: 45 Links: 1 > Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) > Context: unconfined_u:object_r:user_tmp_t:s0 > Access: 2022-06-29 08:04:01.301307154 -0400 > Modify: 2022-06-29 08:04:01.301307154 -0400 > Change: 2022-06-29 08:04:01.451312834 -0400 > Birth: 2022-06-29 08:04:01.301307154 -0400 > # cat /proc/meminfo | grep Shmem: > Shmem: 1053876 kB > # rm -f /tmp/file > # cat /proc/meminfo | grep Shmem: > Shmem: 5300 kB > > So clearly this impacts Shmem.. was your test run against tmpfs or some > other (disk based) fs? Hi Brian, Thanks for clarifying. 
My issue was that tmpfs was not mounted at /tmp on my system: ==> meminfo.start <== Shmem: 572 kB ==> meminfo.stop <== Shmem: 51688 kB > > FWIW, I don't have any objection to exposing inode size if it's commonly > useful information. My feedback was more just an fyi that i_size doesn't > necessarily reflect underlying space consumption (whether it's memory or > disk space) in more generic cases, because it sounds like that is really > what you're after here. The opposite example to the above would be > something like an 'xfs_io -fc "truncate 1t" /tmp/file', which shows a > 1TB inode size with zero additional shmem usage. From these cases, it seems the more generic way to do this is by calculating the actual size consumed using the blocks (i_blocks * 512). So in the latter example 'xfs_io -fc "truncate 1t" /tmp/file' the size consumed would be zero. Let me know if it sounds ok to you and I can repost the updated version. Thanks, Kalesh > > Brian > > > cat /proc/meminfo > meminfo.start > > xfs_io -fc "falloc -k 0 50m" ./xfs_file > > cat /proc/meminfo > meminfo.stop > > tail -n +1 meminfo.st* | grep -i '==\|Shmem:' > > > > ==> meminfo.start <== > > Shmem: 484 kB > > ==> meminfo.stop <== > > Shmem: 484 kB > > > > ls -lh xfs_file > > -rw------- 1 root root 0 Jun 28 15:12 xfs_file > > > > stat xfs_file > > File: xfs_file > > Size: 0 Blocks: 102400 IO Block: 4096 regular empty file > > > > Thanks, > > Kalesh > > > > > > > > Brian > > > > > > > > > > > /* show_fd_locks() never deferences files so a stale value is safe */ > > > > show_fd_locks(m, file, files); > > > > -- > > > > 2.37.0.rc0.161.g10f37bed90-goog > > > > > > > -- > To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com. >
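A minimal sketch of the i_blocks-based calculation Kalesh proposes, assuming inode->i_blocks can be used directly (the helper name is made up for illustration; see the ->getattr() caveat in the next reply):

#include <linux/fs.h>

/* Sketch: report allocated bytes instead of i_size.  i_blocks counts
 * 512-byte units irrespective of the filesystem block size. */
static u64 fdinfo_size_consumed(struct file *file)
{
	return (u64)file_inode(file)->i_blocks << 9;
}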
On Wed, Jun 29, 2022 at 01:43:11PM -0700, Kalesh Singh wrote: > On Wed, Jun 29, 2022 at 5:23 AM Brian Foster <bfoster@redhat.com> wrote: > > > > On Tue, Jun 28, 2022 at 03:38:02PM -0700, Kalesh Singh wrote: > > > On Tue, Jun 28, 2022 at 4:54 AM Brian Foster <bfoster@redhat.com> wrote: > > > > > > > > On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote: > > > > > To be able to account the amount of memory a process is keeping pinned > > > > > by open file descriptors add a 'size' field to fdinfo output. > > > > > > > > > > dmabufs fds already expose a 'size' field for this reason, remove this > > > > > and make it a common field for all fds. This allows tracking of > > > > > other types of memory (e.g. memfd and ashmem in Android). > > > > > > > > > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com> > > > > > Reviewed-by: Christian König <christian.koenig@amd.com> > > > > > --- > > > > > > > > > > Changes in v2: > > > > > - Add Christian's Reviewed-by > > > > > > > > > > Changes from rfc: > > > > > - Split adding 'size' and 'path' into a separate patches, per Christian > > > > > - Split fdinfo seq_printf into separate lines, per Christian > > > > > - Fix indentation (use tabs) in documentaion, per Randy > > > > > > > > > > Documentation/filesystems/proc.rst | 12 ++++++++++-- > > > > > drivers/dma-buf/dma-buf.c | 1 - > > > > > fs/proc/fd.c | 9 +++++---- > > > > > 3 files changed, 15 insertions(+), 7 deletions(-) > > > > > > > ... > > > > > > > > Also not sure if it matters that much for your use case, but something > > > > worth noting at least with shmem is that one can do something like: > > > > > > > > # cat /proc/meminfo | grep Shmem: > > > > Shmem: 764 kB > > > > # xfs_io -fc "falloc -k 0 10m" ./file > > > > # ls -alh file > > > > -rw-------. 1 root root 0 Jun 28 07:22 file > > > > # stat file > > > > File: file > > > > Size: 0 Blocks: 20480 IO Block: 4096 regular empty file > > > > # cat /proc/meminfo | grep Shmem: > > > > Shmem: 11004 kB > > > > > > > > ... where the resulting memory usage isn't reflected in i_size (but is > > > > is in i_blocks/bytes). > > > > > > I tried a similar experiment a few times, but I don't see the same > > > results. In my case, there is not any change in shmem. IIUC the > > > fallocate is allocating the disk space not shared memory. > > > > > > > Sorry, it was implied in my previous test was that I was running against > > tmpfs. So regardless of fs, the fallocate keep_size semantics shown in > > both cases is as expected: the underlying blocks are allocated and the > > inode size is unchanged. > > > > What wasn't totally clear to me when I read this patch was 1. whether > > tmpfs refers to Shmem and 2. whether tmpfs allowed this sort of > > operation. The test above seems to confirm both, however, right? 
E.g., a > > more detailed example: > > > > # mount | grep /tmp > > tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64) > > # cat /proc/meminfo | grep Shmem: > > Shmem: 5300 kB > > # xfs_io -fc "falloc -k 0 1g" /tmp/file > > # stat /tmp/file > > File: /tmp/file > > Size: 0 Blocks: 2097152 IO Block: 4096 regular empty file > > Device: 22h/34d Inode: 45 Links: 1 > > Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) > > Context: unconfined_u:object_r:user_tmp_t:s0 > > Access: 2022-06-29 08:04:01.301307154 -0400 > > Modify: 2022-06-29 08:04:01.301307154 -0400 > > Change: 2022-06-29 08:04:01.451312834 -0400 > > Birth: 2022-06-29 08:04:01.301307154 -0400 > > # cat /proc/meminfo | grep Shmem: > > Shmem: 1053876 kB > > # rm -f /tmp/file > > # cat /proc/meminfo | grep Shmem: > > Shmem: 5300 kB > > > > So clearly this impacts Shmem.. was your test run against tmpfs or some > > other (disk based) fs? > > Hi Brian, > > Thanks for clarifying. My issue was tmpfs not mounted at /tmp in my system: > > ==> meminfo.start <== > Shmem: 572 kB > ==> meminfo.stop <== > Shmem: 51688 kB > Ok, makes sense. > > > > FWIW, I don't have any objection to exposing inode size if it's commonly > > useful information. My feedback was more just an fyi that i_size doesn't > > necessarily reflect underlying space consumption (whether it's memory or > > disk space) in more generic cases, because it sounds like that is really > > what you're after here. The opposite example to the above would be > > something like an 'xfs_io -fc "truncate 1t" /tmp/file', which shows a > > 1TB inode size with zero additional shmem usage. > > From these cases, it seems the more generic way to do this is by > calculating the actual size consumed using the blocks. (i_blocks * > 512). So in the latter example 'xfs_io -fc "truncate 1t" /tmp/file' > the size consumed would be zero. Let me know if it sounds ok to you > and I can repost the updated version. > That sounds a bit more useful to me if you're interested in space usage, or at least I don't have a better idea for you. ;) One thing to note is that I'm not sure whether all fs' use i_blocks reliably. E.g., XFS populates stat->blocks via a separate block counter in the XFS specific inode structure (see xfs_vn_getattr()). A bunch of other fs' seem to touch it so perhaps that is just an outlier. You could consider fixing that up, perhaps make a ->getattr() call to avoid it, or just use the field directly if it's useful enough as is and there are no other objections. Something to think about anyways.. Brian > Thanks, > Kalesh > > > > > Brian > > > > > cat /proc/meminfo > meminfo.start > > > xfs_io -fc "falloc -k 0 50m" ./xfs_file > > > cat /proc/meminfo > meminfo.stop > > > tail -n +1 meminfo.st* | grep -i '==\|Shmem:' > > > > > > ==> meminfo.start <== > > > Shmem: 484 kB > > > ==> meminfo.stop <== > > > Shmem: 484 kB > > > > > > ls -lh xfs_file > > > -rw------- 1 root root 0 Jun 28 15:12 xfs_file > > > > > > stat xfs_file > > > File: xfs_file > > > Size: 0 Blocks: 102400 IO Block: 4096 regular empty file > > > > > > Thanks, > > > Kalesh > > > > > > > > > > > Brian > > > > > > > > > > > > > > /* show_fd_locks() never deferences files so a stale value is safe */ > > > > > show_fd_locks(m, file, files); > > > > > -- > > > > > 2.37.0.rc0.161.g10f37bed90-goog > > > > > > > > > > > > > > > > -- > > To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com. > > >
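A rough sketch of the ->getattr() route Brian mentions, so that a filesystem like XFS reports blocks from its own accounting rather than from raw inode->i_blocks. The helper name is hypothetical and this is untested; vfs_getattr(), STATX_BLOCKS and AT_STATX_DONT_SYNC are the existing VFS interfaces:

#include <linux/fcntl.h>
#include <linux/fs.h>
#include <linux/stat.h>

/* Sketch: ask the fs for its block count via ->getattr() and convert
 * the 512-byte units of kstat.blocks to bytes. */
static int fdinfo_size_via_getattr(struct file *file, u64 *bytes)
{
	struct kstat stat;
	int ret;

	ret = vfs_getattr(&file->f_path, &stat, STATX_BLOCKS,
			  AT_STATX_DONT_SYNC);
	if (ret)
		return ret;

	*bytes = (u64)stat.blocks << 9;
	return 0;
}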
On Thu, Jun 30, 2022 at 07:48:46AM -0400, Brian Foster wrote: > On Wed, Jun 29, 2022 at 01:43:11PM -0700, Kalesh Singh wrote: > > On Wed, Jun 29, 2022 at 5:23 AM Brian Foster <bfoster@redhat.com> wrote: > > > > > > On Tue, Jun 28, 2022 at 03:38:02PM -0700, Kalesh Singh wrote: > > > > On Tue, Jun 28, 2022 at 4:54 AM Brian Foster <bfoster@redhat.com> wrote: > > > > > > > > > > On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote: > > > > > > To be able to account the amount of memory a process is keeping pinned > > > > > > by open file descriptors add a 'size' field to fdinfo output. > > > > > > > > > > > > dmabufs fds already expose a 'size' field for this reason, remove this > > > > > > and make it a common field for all fds. This allows tracking of > > > > > > other types of memory (e.g. memfd and ashmem in Android). > > > > > > > > > > > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com> > > > > > > Reviewed-by: Christian König <christian.koenig@amd.com> > > > > > > --- > > > > > > > > > > > > Changes in v2: > > > > > > - Add Christian's Reviewed-by > > > > > > > > > > > > Changes from rfc: > > > > > > - Split adding 'size' and 'path' into a separate patches, per Christian > > > > > > - Split fdinfo seq_printf into separate lines, per Christian > > > > > > - Fix indentation (use tabs) in documentaion, per Randy > > > > > > > > > > > > Documentation/filesystems/proc.rst | 12 ++++++++++-- > > > > > > drivers/dma-buf/dma-buf.c | 1 - > > > > > > fs/proc/fd.c | 9 +++++---- > > > > > > 3 files changed, 15 insertions(+), 7 deletions(-) > > > > > > > > > ... > > > > > > > > > > Also not sure if it matters that much for your use case, but something > > > > > worth noting at least with shmem is that one can do something like: > > > > > > > > > > # cat /proc/meminfo | grep Shmem: > > > > > Shmem: 764 kB > > > > > # xfs_io -fc "falloc -k 0 10m" ./file > > > > > # ls -alh file > > > > > -rw-------. 1 root root 0 Jun 28 07:22 file > > > > > # stat file > > > > > File: file > > > > > Size: 0 Blocks: 20480 IO Block: 4096 regular empty file > > > > > # cat /proc/meminfo | grep Shmem: > > > > > Shmem: 11004 kB > > > > > > > > > > ... where the resulting memory usage isn't reflected in i_size (but is > > > > > is in i_blocks/bytes). > > > > > > > > I tried a similar experiment a few times, but I don't see the same > > > > results. In my case, there is not any change in shmem. IIUC the > > > > fallocate is allocating the disk space not shared memory. > > > > > > > > > > Sorry, it was implied in my previous test was that I was running against > > > tmpfs. So regardless of fs, the fallocate keep_size semantics shown in > > > both cases is as expected: the underlying blocks are allocated and the > > > inode size is unchanged. > > > > > > What wasn't totally clear to me when I read this patch was 1. whether > > > tmpfs refers to Shmem and 2. whether tmpfs allowed this sort of > > > operation. The test above seems to confirm both, however, right? 
E.g., a > > > more detailed example: > > > > > > # mount | grep /tmp > > > tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64) > > > # cat /proc/meminfo | grep Shmem: > > > Shmem: 5300 kB > > > # xfs_io -fc "falloc -k 0 1g" /tmp/file > > > # stat /tmp/file > > > File: /tmp/file > > > Size: 0 Blocks: 2097152 IO Block: 4096 regular empty file > > > Device: 22h/34d Inode: 45 Links: 1 > > > Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) > > > Context: unconfined_u:object_r:user_tmp_t:s0 > > > Access: 2022-06-29 08:04:01.301307154 -0400 > > > Modify: 2022-06-29 08:04:01.301307154 -0400 > > > Change: 2022-06-29 08:04:01.451312834 -0400 > > > Birth: 2022-06-29 08:04:01.301307154 -0400 > > > # cat /proc/meminfo | grep Shmem: > > > Shmem: 1053876 kB > > > # rm -f /tmp/file > > > # cat /proc/meminfo | grep Shmem: > > > Shmem: 5300 kB > > > > > > So clearly this impacts Shmem.. was your test run against tmpfs or some > > > other (disk based) fs? > > > > Hi Brian, > > > > Thanks for clarifying. My issue was tmpfs not mounted at /tmp in my system: > > > > ==> meminfo.start <== > > Shmem: 572 kB > > ==> meminfo.stop <== > > Shmem: 51688 kB > > > > Ok, makes sense. > > > > > > > FWIW, I don't have any objection to exposing inode size if it's commonly > > > useful information. My feedback was more just an fyi that i_size doesn't > > > necessarily reflect underlying space consumption (whether it's memory or > > > disk space) in more generic cases, because it sounds like that is really > > > what you're after here. The opposite example to the above would be > > > something like an 'xfs_io -fc "truncate 1t" /tmp/file', which shows a > > > 1TB inode size with zero additional shmem usage. > > > > From these cases, it seems the more generic way to do this is by > > calculating the actual size consumed using the blocks. (i_blocks * > > 512). So in the latter example 'xfs_io -fc "truncate 1t" /tmp/file' > > the size consumed would be zero. Let me know if it sounds ok to you > > and I can repost the updated version. > > > > That sounds a bit more useful to me if you're interested in space usage, > or at least I don't have a better idea for you. ;) > > One thing to note is that I'm not sure whether all fs' use i_blocks > reliably. E.g., XFS populates stat->blocks via a separate block counter > in the XFS specific inode structure (see xfs_vn_getattr()). A bunch of > other fs' seem to touch it so perhaps that is just an outlier. You could > consider fixing that up, perhaps make a ->getattr() call to avoid it, or > just use the field directly if it's useful enough as is and there are no > other objections. Something to think about anyways.. > Oh, I wonder if you're looking for similar "file rss" information this series wants to collect/expose..? 
https://lore.kernel.org/linux-fsdevel/20220624080444.7619-1-christian.koenig@amd.com/#r Brian > Brian > > > Thanks, > > Kalesh > > > > > > > > Brian > > > > > > > cat /proc/meminfo > meminfo.start > > > > xfs_io -fc "falloc -k 0 50m" ./xfs_file > > > > cat /proc/meminfo > meminfo.stop > > > > tail -n +1 meminfo.st* | grep -i '==\|Shmem:' > > > > > > > > ==> meminfo.start <== > > > > Shmem: 484 kB > > > > ==> meminfo.stop <== > > > > Shmem: 484 kB > > > > > > > > ls -lh xfs_file > > > > -rw------- 1 root root 0 Jun 28 15:12 xfs_file > > > > > > > > stat xfs_file > > > > File: xfs_file > > > > Size: 0 Blocks: 102400 IO Block: 4096 regular empty file > > > > > > > > Thanks, > > > > Kalesh > > > > > > > > > > > > > > Brian > > > > > > > > > > > > > > > > > /* show_fd_locks() never deferences files so a stale value is safe */ > > > > > > show_fd_locks(m, file, files); > > > > > > -- > > > > > > 2.37.0.rc0.161.g10f37bed90-goog > > > > > > > > > > > > > > > > > > > > > -- > > > To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com. > > > > >
On Thu, Jun 30, 2022 at 5:03 AM Brian Foster <bfoster@redhat.com> wrote: > > On Thu, Jun 30, 2022 at 07:48:46AM -0400, Brian Foster wrote: > > On Wed, Jun 29, 2022 at 01:43:11PM -0700, Kalesh Singh wrote: > > > On Wed, Jun 29, 2022 at 5:23 AM Brian Foster <bfoster@redhat.com> wrote: > > > > > > > > On Tue, Jun 28, 2022 at 03:38:02PM -0700, Kalesh Singh wrote: > > > > > On Tue, Jun 28, 2022 at 4:54 AM Brian Foster <bfoster@redhat.com> wrote: > > > > > > > > > > > > On Thu, Jun 23, 2022 at 03:06:06PM -0700, Kalesh Singh wrote: > > > > > > > To be able to account the amount of memory a process is keeping pinned > > > > > > > by open file descriptors add a 'size' field to fdinfo output. > > > > > > > > > > > > > > dmabufs fds already expose a 'size' field for this reason, remove this > > > > > > > and make it a common field for all fds. This allows tracking of > > > > > > > other types of memory (e.g. memfd and ashmem in Android). > > > > > > > > > > > > > > Signed-off-by: Kalesh Singh <kaleshsingh@google.com> > > > > > > > Reviewed-by: Christian König <christian.koenig@amd.com> > > > > > > > --- > > > > > > > > > > > > > > Changes in v2: > > > > > > > - Add Christian's Reviewed-by > > > > > > > > > > > > > > Changes from rfc: > > > > > > > - Split adding 'size' and 'path' into a separate patches, per Christian > > > > > > > - Split fdinfo seq_printf into separate lines, per Christian > > > > > > > - Fix indentation (use tabs) in documentaion, per Randy > > > > > > > > > > > > > > Documentation/filesystems/proc.rst | 12 ++++++++++-- > > > > > > > drivers/dma-buf/dma-buf.c | 1 - > > > > > > > fs/proc/fd.c | 9 +++++---- > > > > > > > 3 files changed, 15 insertions(+), 7 deletions(-) > > > > > > > > > > > ... > > > > > > > > > > > > Also not sure if it matters that much for your use case, but something > > > > > > worth noting at least with shmem is that one can do something like: > > > > > > > > > > > > # cat /proc/meminfo | grep Shmem: > > > > > > Shmem: 764 kB > > > > > > # xfs_io -fc "falloc -k 0 10m" ./file > > > > > > # ls -alh file > > > > > > -rw-------. 1 root root 0 Jun 28 07:22 file > > > > > > # stat file > > > > > > File: file > > > > > > Size: 0 Blocks: 20480 IO Block: 4096 regular empty file > > > > > > # cat /proc/meminfo | grep Shmem: > > > > > > Shmem: 11004 kB > > > > > > > > > > > > ... where the resulting memory usage isn't reflected in i_size (but is > > > > > > is in i_blocks/bytes). > > > > > > > > > > I tried a similar experiment a few times, but I don't see the same > > > > > results. In my case, there is not any change in shmem. IIUC the > > > > > fallocate is allocating the disk space not shared memory. > > > > > > > > > > > > > Sorry, it was implied in my previous test was that I was running against > > > > tmpfs. So regardless of fs, the fallocate keep_size semantics shown in > > > > both cases is as expected: the underlying blocks are allocated and the > > > > inode size is unchanged. > > > > > > > > What wasn't totally clear to me when I read this patch was 1. whether > > > > tmpfs refers to Shmem and 2. whether tmpfs allowed this sort of > > > > operation. The test above seems to confirm both, however, right? 
E.g., a > > > > more detailed example: > > > > > > > > # mount | grep /tmp > > > > tmpfs on /tmp type tmpfs (rw,nosuid,nodev,seclabel,nr_inodes=1048576,inode64) > > > > # cat /proc/meminfo | grep Shmem: > > > > Shmem: 5300 kB > > > > # xfs_io -fc "falloc -k 0 1g" /tmp/file > > > > # stat /tmp/file > > > > File: /tmp/file > > > > Size: 0 Blocks: 2097152 IO Block: 4096 regular empty file > > > > Device: 22h/34d Inode: 45 Links: 1 > > > > Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root) > > > > Context: unconfined_u:object_r:user_tmp_t:s0 > > > > Access: 2022-06-29 08:04:01.301307154 -0400 > > > > Modify: 2022-06-29 08:04:01.301307154 -0400 > > > > Change: 2022-06-29 08:04:01.451312834 -0400 > > > > Birth: 2022-06-29 08:04:01.301307154 -0400 > > > > # cat /proc/meminfo | grep Shmem: > > > > Shmem: 1053876 kB > > > > # rm -f /tmp/file > > > > # cat /proc/meminfo | grep Shmem: > > > > Shmem: 5300 kB > > > > > > > > So clearly this impacts Shmem.. was your test run against tmpfs or some > > > > other (disk based) fs? > > > > > > Hi Brian, > > > > > > Thanks for clarifying. My issue was tmpfs not mounted at /tmp in my system: > > > > > > ==> meminfo.start <== > > > Shmem: 572 kB > > > ==> meminfo.stop <== > > > Shmem: 51688 kB > > > > > > > Ok, makes sense. > > > > > > > > > > FWIW, I don't have any objection to exposing inode size if it's commonly > > > > useful information. My feedback was more just an fyi that i_size doesn't > > > > necessarily reflect underlying space consumption (whether it's memory or > > > > disk space) in more generic cases, because it sounds like that is really > > > > what you're after here. The opposite example to the above would be > > > > something like an 'xfs_io -fc "truncate 1t" /tmp/file', which shows a > > > > 1TB inode size with zero additional shmem usage. > > > > > > From these cases, it seems the more generic way to do this is by > > > calculating the actual size consumed using the blocks. (i_blocks * > > > 512). So in the latter example 'xfs_io -fc "truncate 1t" /tmp/file' > > > the size consumed would be zero. Let me know if it sounds ok to you > > > and I can repost the updated version. > > > > > > > That sounds a bit more useful to me if you're interested in space usage, > > or at least I don't have a better idea for you. ;) > > > > One thing to note is that I'm not sure whether all fs' use i_blocks > > reliably. E.g., XFS populates stat->blocks via a separate block counter > > in the XFS specific inode structure (see xfs_vn_getattr()). A bunch of > > other fs' seem to touch it so perhaps that is just an outlier. You could > > consider fixing that up, perhaps make a ->getattr() call to avoid it, or > > just use the field directly if it's useful enough as is and there are no > > other objections. Something to think about anyways.. > > Hi Brian, Thanks for pointing it out. Let me take a look into the xfs case. > > Oh, I wonder if you're looking for similar "file rss" information this > series wants to collect/expose..? > > https://lore.kernel.org/linux-fsdevel/20220624080444.7619-1-christian.koenig@amd.com/#r Christian's series seems to have some overlap with what we want to achieve here. IIUC it exposes the information on the per process granularity. Perhaps if that approach is agreed on, I think we can use the file_rss() f_op to expose the per file size in the fdinfo for the cases where the i_blocks are unreliable. 
Thanks, Kalesh > > Brian > > > Brian > > > > > Thanks, > > > Kalesh > > > > > > > > > > > Brian > > > > > > > > > cat /proc/meminfo > meminfo.start > > > > > xfs_io -fc "falloc -k 0 50m" ./xfs_file > > > > > cat /proc/meminfo > meminfo.stop > > > > > tail -n +1 meminfo.st* | grep -i '==\|Shmem:' > > > > > > > > > > ==> meminfo.start <== > > > > > Shmem: 484 kB > > > > > ==> meminfo.stop <== > > > > > Shmem: 484 kB > > > > > > > > > > ls -lh xfs_file > > > > > -rw------- 1 root root 0 Jun 28 15:12 xfs_file > > > > > > > > > > stat xfs_file > > > > > File: xfs_file > > > > > Size: 0 Blocks: 102400 IO Block: 4096 regular empty file > > > > > > > > > > Thanks, > > > > > Kalesh > > > > > > > > > > > > > > > > > Brian > > > > > > > > > > > > > > > > > > > > /* show_fd_locks() never deferences files so a stale value is safe */ > > > > > > > show_fd_locks(m, file, files); > > > > > > > -- > > > > > > > 2.37.0.rc0.161.g10f37bed90-goog > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > > To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com. > > > > > > > >
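Purely as an illustration of the idea Kalesh floats here: the callback name below is taken from the discussion of Christian's series and is not a merged f_op, so this does not build against mainline; it is only meant to show the shape of the fallback logic, and whether such a callback would return pages or bytes is left open in the thread:

/* Hypothetical: prefer a per-file memory callback when the fs/driver
 * provides one, fall back to the block count otherwise. */
static u64 fdinfo_mem_bytes(struct file *file)
{
	if (file->f_op->file_rss)		/* hypothetical f_op member */
		return file->f_op->file_rss(file);

	return (u64)file_inode(file)->i_blocks << 9;
}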
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 1bc91fb8c321..779c05528e87 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -1886,13 +1886,14 @@ if precise results are needed.
 3.8	/proc/<pid>/fdinfo/<fd> - Information about opened file
 ---------------------------------------------------------------
 This file provides information associated with an opened file. The regular
-files have at least four fields -- 'pos', 'flags', 'mnt_id' and 'ino'.
+files have at least five fields -- 'pos', 'flags', 'mnt_id', 'ino', and 'size'.
+
 The 'pos' represents the current offset of the opened file in decimal
 form [see lseek(2) for details], 'flags' denotes the octal O_xxx mask the
 file has been created with [see open(2) for details] and 'mnt_id' represents
 mount ID of the file system containing the opened file [see 3.5
 /proc/<pid>/mountinfo for details]. 'ino' represents the inode number of
-the file.
+the file, and 'size' represents the size of the file in bytes.
 
 A typical output is::
 
@@ -1900,6 +1901,7 @@ A typical output is::
 	flags:	0100002
 	mnt_id:	19
 	ino:	63107
+	size:	0
 
 All locks associated with a file descriptor are shown in its fdinfo too::
 
@@ -1917,6 +1919,7 @@ Eventfd files
 	flags:	04002
 	mnt_id:	9
 	ino:	63107
+	size:	0
 	eventfd-count:	5a
 
 where 'eventfd-count' is hex value of a counter.
@@ -1930,6 +1933,7 @@ Signalfd files
 	flags:	04002
 	mnt_id:	9
 	ino:	63107
+	size:	0
 	sigmask:	0000000000000200
 
 where 'sigmask' is hex value of the signal mask associated
@@ -1944,6 +1948,7 @@ Epoll files
 	flags:	02
 	mnt_id:	9
 	ino:	63107
+	size:	0
 	tfd:        5 events:       1d data: ffffffffffffffff pos:0 ino:61af sdev:7
 
 where 'tfd' is a target file descriptor number in decimal form,
@@ -1962,6 +1967,7 @@ For inotify files the format is the following::
 	flags:	02000000
 	mnt_id:	9
 	ino:	63107
+	size:	0
 	inotify wd:3 ino:9e7e sdev:800013 mask:800afce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:7e9e0000640d1b6d
 
 where 'wd' is a watch descriptor in decimal form, i.e. a target file
@@ -1985,6 +1991,7 @@ For fanotify files the format is::
 	flags:	02
 	mnt_id:	9
 	ino:	63107
+	size:	0
 	fanotify flags:10 event-flags:0
 	fanotify mnt_id:12 mflags:40 mask:38 ignored_mask:40000003
 	fanotify ino:4f969 sdev:800013 mflags:0 mask:3b ignored_mask:40000000 fhandle-bytes:8 fhandle-type:1 f_handle:69f90400c275b5b4
@@ -2010,6 +2017,7 @@ Timerfd files
 	flags:	02
 	mnt_id:	9
 	ino:	63107
+	size:	0
 	clockid: 0
 	ticks: 0
 	settime flags: 01
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 32f55640890c..5f2ae38c960f 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -378,7 +378,6 @@ static void dma_buf_show_fdinfo(struct seq_file *m, struct file *file)
 {
 	struct dma_buf *dmabuf = file->private_data;
 
-	seq_printf(m, "size:\t%zu\n", dmabuf->size);
 	/* Don't count the temporary reference taken inside procfs seq_show */
 	seq_printf(m, "count:\t%ld\n", file_count(dmabuf->file) - 1);
 	seq_printf(m, "exp_name:\t%s\n", dmabuf->exp_name);
diff --git a/fs/proc/fd.c b/fs/proc/fd.c
index 913bef0d2a36..464bc3f55759 100644
--- a/fs/proc/fd.c
+++ b/fs/proc/fd.c
@@ -54,10 +54,11 @@ static int seq_show(struct seq_file *m, void *v)
 	if (ret)
 		return ret;
 
-	seq_printf(m, "pos:\t%lli\nflags:\t0%o\nmnt_id:\t%i\nino:\t%lu\n",
-		   (long long)file->f_pos, f_flags,
-		   real_mount(file->f_path.mnt)->mnt_id,
-		   file_inode(file)->i_ino);
+	seq_printf(m, "pos:\t%lli\n", (long long)file->f_pos);
+	seq_printf(m, "flags:\t0%o\n", f_flags);
+	seq_printf(m, "mnt_id:\t%i\n", real_mount(file->f_path.mnt)->mnt_id);
+	seq_printf(m, "ino:\t%lu\n", file_inode(file)->i_ino);
+	seq_printf(m, "size:\t%lli\n", (long long)file_inode(file)->i_size);
 
 	/* show_fd_locks() never deferences files so a stale value is safe */
 	show_fd_locks(m, file, files);
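For completeness, a sketch of the kind of consumer the commit message has in mind: walk a process's fdinfo entries and sum the new 'size' field. Illustrative only, and it assumes a kernel with the patch above applied:

/* Sum the 'size' field across /proc/<pid>/fdinfo/<fd>. */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *pid = argc > 1 ? argv[1] : "self";
	char path[64], line[256];
	long long total = 0, size;
	struct dirent *de;
	DIR *dir;

	snprintf(path, sizeof(path), "/proc/%s/fdinfo", pid);
	dir = opendir(path);
	if (!dir) {
		perror(path);
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		char fdinfo[128];
		FILE *f;

		if (de->d_name[0] == '.')
			continue;
		snprintf(fdinfo, sizeof(fdinfo), "%s/%s", path, de->d_name);
		f = fopen(fdinfo, "r");
		if (!f)
			continue;
		while (fgets(line, sizeof(line), f)) {
			/* the blank in the format also matches the '\t' separator */
			if (sscanf(line, "size: %lld", &size) == 1) {
				total += size;
				break;
			}
		}
		fclose(f);
	}
	closedir(dir);
	printf("total size across open fds: %lld bytes\n", total);
	return 0;
}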