Message ID | 20190805160307.5418-3-sergey.senozhatsky@gmail.com (mailing list archive)
State      | New, archived
Series     | convert i915 to new mount API
On Tue, Aug 06, 2019 at 01:03:06AM +0900, Sergey Senozhatsky wrote:
> tmpfs does not set ->remount_fs() anymore and its users need
> to be converted to new mount API.

Could you explain why the devil do you bother with remount at all?
Why not pass the right options when mounting the damn thing?
On Mon, Aug 05, 2019 at 07:12:55PM +0100, Al Viro wrote:
> On Tue, Aug 06, 2019 at 01:03:06AM +0900, Sergey Senozhatsky wrote:
> > tmpfs does not set ->remount_fs() anymore and its users need
> > to be converted to new mount API.
>
> Could you explain why the devil do you bother with remount at all?
> Why not pass the right options when mounting the damn thing?

... and while we are at it, I really wonder what's going on with
that gemfs thing - among the other things, this is the only
user of shmem_file_setup_with_mnt().  Sure, you want your own
options, but that brings another question - is there any reason
for having the huge=... per-superblock rather than per-file?

After all, the readers of ->huge in mm/shmem.c are

mm/shmem.c:582:	    (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) &&
	is_huge_enabled(), sbinfo is an explicit argument

mm/shmem.c:1799:	switch (sbinfo->huge) {
	shmem_getpage_gfp(), sbinfo comes from inode

mm/shmem.c:2113:	if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
	shmem_get_unmapped_area(), sb comes from file

mm/shmem.c:3531:	if (sbinfo->huge)
mm/shmem.c:3532:		seq_printf(seq, ",huge=%s", shmem_format_huge(sbinfo->huge));
	->show_options()

mm/shmem.c:3880:	switch (sbinfo->huge) {
	shmem_huge_enabled(), sbinfo comes from an inode

And the only caller of is_huge_enabled() is shmem_getattr(), with sbinfo
picked from inode.

So is there any reason why the hugepage policy can't be per-file, with
the current being overridable default?
On Mon, Aug 05, 2019 at 07:12:55PM +0100, Al Viro wrote:
> On Tue, Aug 06, 2019 at 01:03:06AM +0900, Sergey Senozhatsky wrote:
> > tmpfs does not set ->remount_fs() anymore and its users need
> > to be converted to new mount API.
>
> Could you explain why the devil do you bother with remount at all?
> Why not pass the right options when mounting the damn thing?

Incidentally, the only remaining modular user of get_fs_type() is the
same i915_gemfs.c.  And I wonder if we should aim for unexporting the
damn thing instead of exporting put_filesystem()...

Note that users in tomoyo and apparmor are bogus - they are in the
instances of ill-defined method that needs to be split and moved, with
the lookups (fs type included) replaced with callers passing the values
they look up and will end up using.  IOW, outside of core VFS we have
very few legitimate users, and the one in kernel/trace might be better
off as vfs_submount_by_name().
On (08/05/19 19:12), Al Viro wrote:
[..]
> On Tue, Aug 06, 2019 at 01:03:06AM +0900, Sergey Senozhatsky wrote:
> > tmpfs does not set ->remount_fs() anymore and its users need
> > to be converted to new mount API.
>
> Could you explain why the devil do you bother with remount at all?

I would redirect this question to i915 developers. As far as I know
i915 performance suffers with huge pages enabled.

> Why not pass the right options when mounting the damn thing?

vfs_kern_mount()? It still requires struct file_system_type,
which we need to get and put.

	-ss
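[Editor's note: the pattern under discussion, in condensed form.  This is a kernel-side sketch only, not the actual i915 code and not compilable standalone; get_fs_type(), vfs_kern_mount() and put_filesystem() are real VFS interfaces, but the helper name and error handling are illustrative.]

	/* Sketch: mount a private tmpfs instance with the desired options
	 * passed up front, instead of remounting afterwards. */
	static struct vfsmount *gemfs_mount_sketch(void)
	{
		struct file_system_type *type;
		struct vfsmount *mnt;

		type = get_fs_type("tmpfs");	/* takes a reference on the type */
		if (!type)
			return ERR_PTR(-ENODEV);

		/* the last argument is the mount data, i.e. the option string */
		mnt = vfs_kern_mount(type, SB_KERNMOUNT, type->name,
				     (void *)"huge=never");

		/* ...and this is the "put" half Sergey mentions:
		 * put_filesystem() drops the reference, but it is not
		 * exported to modules - the wrinkle under discussion. */
		put_filesystem(type);
		return mnt;
	}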
On Mon, 5 Aug 2019, Al Viro wrote:
> On Mon, Aug 05, 2019 at 07:12:55PM +0100, Al Viro wrote:
> > On Tue, Aug 06, 2019 at 01:03:06AM +0900, Sergey Senozhatsky wrote:
> > > tmpfs does not set ->remount_fs() anymore and its users need
> > > to be converted to new mount API.
> >
> > Could you explain why the devil do you bother with remount at all?
> > Why not pass the right options when mounting the damn thing?
>
> ... and while we are at it, I really wonder what's going on with
> that gemfs thing - among the other things, this is the only
> user of shmem_file_setup_with_mnt().  Sure, you want your own
> options, but that brings another question - is there any reason
> for having the huge=... per-superblock rather than per-file?

Yes: we want a default for how files of that superblock are to allocate
their pages, without people having to fcntl or advise each of their files.

Setting aside the weirder options (within_size, advise) and emergency/
testing override (shmem_huge), we want files on an ordinary default tmpfs
(huge=never) to be allocated with small pages (so users with access to
that filesystem will not consume, and will not waste time and space on
consuming, the more valuable huge pages); but files on a huge=always
tmpfs to be allocated with huge pages whenever possible.

Or am I missing your point?

Yes, hugeness can certainly be decided differently per-file, or even
per-extent of file.  That is already made possible through "judicious"
use of madvise MADV_HUGEPAGE and MADV_NOHUGEPAGE on mmaps of the file,
carried over from anon THP.

Though personally I'm averse to managing "f"objects through
"m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works
on the virtual address of a mapping, but the huge-or-not alignment of
that mapping must have been decided previously).  In Google we do use
fcntls F_HUGEPAGE and F_NOHUGEPAGE to override on a per-file basis -
one day I'll get to upstreaming those.
Hugh

>
> After all, the readers of ->huge in mm/shmem.c are
> mm/shmem.c:582:	    (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) &&
> 	is_huge_enabled(), sbinfo is an explicit argument
>
> mm/shmem.c:1799:	switch (sbinfo->huge) {
> 	shmem_getpage_gfp(), sbinfo comes from inode
>
> mm/shmem.c:2113:	if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
> 	shmem_get_unmapped_area(), sb comes from file
>
> mm/shmem.c:3531:	if (sbinfo->huge)
> mm/shmem.c:3532:		seq_printf(seq, ",huge=%s", shmem_format_huge(sbinfo->huge));
> 	->show_options()
> mm/shmem.c:3880:	switch (sbinfo->huge) {
> 	shmem_huge_enabled(), sbinfo comes from an inode
>
> And the only caller of is_huge_enabled() is shmem_getattr(), with sbinfo
> picked from inode.
>
> So is there any reason why the hugepage policy can't be per-file, with
> the current being overridable default?
On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
> that mapping must have been decided previously).  In Google we do use
> fcntls F_HUGEPAGE and F_NOHUGEPAGE to override on a per-file basis -
> one day I'll get to upstreaming those.

That'd be nice - we could kill the weird i915 private shmem instance,
along with some kludges in mm/shmem.c.
On Wed, Aug 07, 2019 at 08:30:02AM +0200, Christoph Hellwig wrote:
> On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
> > Though personally I'm averse to managing "f"objects through
> > "m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works
> > on the virtual address of a mapping, but the huge-or-not alignment of
> > that mapping must have been decided previously).  In Google we do use
> > fcntls F_HUGEPAGE and F_NOHUGEPAGE to override on a per-file basis -
> > one day I'll get to upstreaming those.
>
> Such an interface seems very useful, although the two fcntls seem a bit
> odd.
>
> But I think the point here is that the i915 has its own somewhat odd
> instance of tmpfs.  If we could pass the equivalent of the huge=*
> options to shmem_file_setup all that garbage (including the
> shmem_file_setup_with_mnt function) could go away.

... or follow shmem_file_super() with whatever that fcntl maps to
internally.  I would really love to get rid of that i915 kludge.
On Thu, 8 Aug 2019, Al Viro wrote:
> On Wed, Aug 07, 2019 at 08:30:02AM +0200, Christoph Hellwig wrote:
> > On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
> > > Though personally I'm averse to managing "f"objects through
> > > "m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works
> > > on the virtual address of a mapping, but the huge-or-not alignment of
> > > that mapping must have been decided previously).  In Google we do use
> > > fcntls F_HUGEPAGE and F_NOHUGEPAGE to override on a per-file basis -
> > > one day I'll get to upstreaming those.
> >
> > Such an interface seems very useful, although the two fcntls seem a bit
> > odd.
> >
> > But I think the point here is that the i915 has its own somewhat odd
> > instance of tmpfs.  If we could pass the equivalent of the huge=*
> > options to shmem_file_setup all that garbage (including the
> > shmem_file_setup_with_mnt function) could go away.
>
> ... or follow shmem_file_super() with whatever that fcntl maps to
> internally.  I would really love to get rid of that i915 kludge.

As to the immediate problem of i915_gemfs using remount_fs on linux-next,
IIUC, all that is necessary at the moment is the deletions patch below
(but I'd prefer that to come from the i915 folks).  Since gemfs has no
need to change the huge option from its default to its default.

As to the future of when they get back to wanting huge pages in gemfs,
yes, that can probably best be arranged by using the internals of an
fcntl F_HUGEPAGE on those objects that would benefit from it.

Though my intention there was that the "huge=never" default ought
to continue to refuse to give huge pages, even when asked by fcntl.
So a little hackery may still be required, to allow the i915_gemfs
internal mount to get huge pages when a user mount would not.
As to whether shmem_file_setup_with_mnt() needs to live: I've given
that no thought, but accept that shm_mnt is such a ragbag of different
usages, that i915 is right to prefer their own separate gemfs mount.

Hugh

--- mmotm/drivers/gpu/drm/i915/gem/i915_gemfs.c	2019-07-21 19:40:16.573703780 -0700
+++ linux/drivers/gpu/drm/i915/gem/i915_gemfs.c	2019-08-08 07:19:23.967689058 -0700
@@ -24,28 +24,6 @@ int i915_gemfs_init(struct drm_i915_priv
 	if (IS_ERR(gemfs))
 		return PTR_ERR(gemfs);
 
-	/*
-	 * Enable huge-pages for objects that are at least HPAGE_PMD_SIZE, most
-	 * likely 2M. Note that within_size may overallocate huge-pages, if say
-	 * we allocate an object of size 2M + 4K, we may get 2M + 2M, but under
-	 * memory pressure shmem should split any huge-pages which can be
-	 * shrunk.
-	 */
-
-	if (has_transparent_hugepage()) {
-		struct super_block *sb = gemfs->mnt_sb;
-		/* FIXME: Disabled until we get W/A for read BW issue. */
-		char options[] = "huge=never";
-		int flags = 0;
-		int err;
-
-		err = sb->s_op->remount_fs(sb, &flags, options);
-		if (err) {
-			kern_unmount(gemfs);
-			return err;
-		}
-	}
-
 	i915->mm.gemfs = gemfs;
 
 	return 0;
Quoting Hugh Dickins (2019-08-08 16:54:16)
> On Thu, 8 Aug 2019, Al Viro wrote:
> > On Wed, Aug 07, 2019 at 08:30:02AM +0200, Christoph Hellwig wrote:
> > > On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
> > > > Though personally I'm averse to managing "f"objects through
> > > > "m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works
> > > > on the virtual address of a mapping, but the huge-or-not alignment of
> > > > that mapping must have been decided previously).  In Google we do use
> > > > fcntls F_HUGEPAGE and F_NOHUGEPAGE to override on a per-file basis -
> > > > one day I'll get to upstreaming those.
> > >
> > > Such an interface seems very useful, although the two fcntls seem a bit
> > > odd.
> > >
> > > But I think the point here is that the i915 has its own somewhat odd
> > > instance of tmpfs.  If we could pass the equivalent of the huge=*
> > > options to shmem_file_setup all that garbage (including the
> > > shmem_file_setup_with_mnt function) could go away.
> >
> > ... or follow shmem_file_super() with whatever that fcntl maps to
> > internally.  I would really love to get rid of that i915 kludge.
>
> As to the immediate problem of i915_gemfs using remount_fs on linux-next,
> IIUC, all that is necessary at the moment is the deletions patch below
> (but I'd prefer that to come from the i915 folks).  Since gemfs has no
> need to change the huge option from its default to its default.
>
> As to the future of when they get back to wanting huge pages in gemfs,
> yes, that can probably best be arranged by using the internals of an
> fcntl F_HUGEPAGE on those objects that would benefit from it.
>
> Though my intention there was that the "huge=never" default ought
> to continue to refuse to give huge pages, even when asked by fcntl.
> So a little hackery may still be required, to allow the i915_gemfs
> internal mount to get huge pages when a user mount would not.
>
> As to whether shmem_file_setup_with_mnt() needs to live: I've given
> that no thought, but accept that shm_mnt is such a ragbag of different
> usages, that i915 is right to prefer their own separate gemfs mount.
>
> Hugh
>
> --- mmotm/drivers/gpu/drm/i915/gem/i915_gemfs.c	2019-07-21 19:40:16.573703780 -0700
> +++ linux/drivers/gpu/drm/i915/gem/i915_gemfs.c	2019-08-08 07:19:23.967689058 -0700
> @@ -24,28 +24,6 @@ int i915_gemfs_init(struct drm_i915_priv
>  	if (IS_ERR(gemfs))
>  		return PTR_ERR(gemfs);
>  
> -	/*
> -	 * Enable huge-pages for objects that are at least HPAGE_PMD_SIZE, most
> -	 * likely 2M. Note that within_size may overallocate huge-pages, if say
> -	 * we allocate an object of size 2M + 4K, we may get 2M + 2M, but under
> -	 * memory pressure shmem should split any huge-pages which can be
> -	 * shrunk.
> -	 */
> -
> -	if (has_transparent_hugepage()) {
> -		struct super_block *sb = gemfs->mnt_sb;
> -		/* FIXME: Disabled until we get W/A for read BW issue. */
> -		char options[] = "huge=never";
> -		int flags = 0;
> -		int err;
> -
> -		err = sb->s_op->remount_fs(sb, &flags, options);
> -		if (err) {
> -			kern_unmount(gemfs);
> -			return err;
> -		}
> -	}

That's perfectly fine; we should probably leave a hint as to why gemfs
exists and include the suggestion of looking at per-file hugepage
controls.

Matthew, how does this affect your current plans? If at all?
-Chris
On 08/08/2019 17:23, Chris Wilson wrote:
> Quoting Hugh Dickins (2019-08-08 16:54:16)
>> On Thu, 8 Aug 2019, Al Viro wrote:
>>> On Wed, Aug 07, 2019 at 08:30:02AM +0200, Christoph Hellwig wrote:
>>>> On Tue, Aug 06, 2019 at 12:50:10AM -0700, Hugh Dickins wrote:
>>>>> Though personally I'm averse to managing "f"objects through
>>>>> "m"interfaces, which can get ridiculous (notably, MADV_HUGEPAGE works
>>>>> on the virtual address of a mapping, but the huge-or-not alignment of
>>>>> that mapping must have been decided previously).  In Google we do use
>>>>> fcntls F_HUGEPAGE and F_NOHUGEPAGE to override on a per-file basis -
>>>>> one day I'll get to upstreaming those.
>>>>
>>>> Such an interface seems very useful, although the two fcntls seem a bit
>>>> odd.
>>>>
>>>> But I think the point here is that the i915 has its own somewhat odd
>>>> instance of tmpfs.  If we could pass the equivalent of the huge=*
>>>> options to shmem_file_setup all that garbage (including the
>>>> shmem_file_setup_with_mnt function) could go away.
>>>
>>> ... or follow shmem_file_super() with whatever that fcntl maps to
>>> internally.  I would really love to get rid of that i915 kludge.
>>
>> As to the immediate problem of i915_gemfs using remount_fs on linux-next,
>> IIUC, all that is necessary at the moment is the deletions patch below
>> (but I'd prefer that to come from the i915 folks).  Since gemfs has no
>> need to change the huge option from its default to its default.
>>
>> As to the future of when they get back to wanting huge pages in gemfs,
>> yes, that can probably best be arranged by using the internals of an
>> fcntl F_HUGEPAGE on those objects that would benefit from it.
>>
>> Though my intention there was that the "huge=never" default ought
>> to continue to refuse to give huge pages, even when asked by fcntl.
>> So a little hackery may still be required, to allow the i915_gemfs
>> internal mount to get huge pages when a user mount would not.
>>
>> As to whether shmem_file_setup_with_mnt() needs to live: I've given
>> that no thought, but accept that shm_mnt is such a ragbag of different
>> usages, that i915 is right to prefer their own separate gemfs mount.
>>
>> Hugh
>>
>> --- mmotm/drivers/gpu/drm/i915/gem/i915_gemfs.c	2019-07-21 19:40:16.573703780 -0700
>> +++ linux/drivers/gpu/drm/i915/gem/i915_gemfs.c	2019-08-08 07:19:23.967689058 -0700
>> @@ -24,28 +24,6 @@ int i915_gemfs_init(struct drm_i915_priv
>>  	if (IS_ERR(gemfs))
>>  		return PTR_ERR(gemfs);
>>  
>> -	/*
>> -	 * Enable huge-pages for objects that are at least HPAGE_PMD_SIZE, most
>> -	 * likely 2M. Note that within_size may overallocate huge-pages, if say
>> -	 * we allocate an object of size 2M + 4K, we may get 2M + 2M, but under
>> -	 * memory pressure shmem should split any huge-pages which can be
>> -	 * shrunk.
>> -	 */
>> -
>> -	if (has_transparent_hugepage()) {
>> -		struct super_block *sb = gemfs->mnt_sb;
>> -		/* FIXME: Disabled until we get W/A for read BW issue. */
>> -		char options[] = "huge=never";
>> -		int flags = 0;
>> -		int err;
>> -
>> -		err = sb->s_op->remount_fs(sb, &flags, options);
>> -		if (err) {
>> -			kern_unmount(gemfs);
>> -			return err;
>> -		}
>> -	}
>
> That's perfectly fine; we should probably leave a hint as to why gemfs
> exists and include the suggestion of looking at per-file hugepage
> controls.
>
> Matthew, how does this affect your current plans? If at all?

Fine with me.

> -Chris
diff --git a/drivers/gpu/drm/i915/gem/i915_gemfs.c b/drivers/gpu/drm/i915/gem/i915_gemfs.c
index 099f3397aada..feedc9242072 100644
--- a/drivers/gpu/drm/i915/gem/i915_gemfs.c
+++ b/drivers/gpu/drm/i915/gem/i915_gemfs.c
@@ -7,14 +7,17 @@
 #include <linux/fs.h>
 #include <linux/mount.h>
 #include <linux/pagemap.h>
+#include <linux/fs_context.h>
 
 #include "i915_drv.h"
 #include "i915_gemfs.h"
 
 int i915_gemfs_init(struct drm_i915_private *i915)
 {
+	struct fs_context *fc = NULL;
 	struct file_system_type *type;
 	struct vfsmount *gemfs;
+	bool ok = true;
 
 	type = get_fs_type("tmpfs");
 	if (!type)
@@ -36,18 +39,29 @@ int i915_gemfs_init(struct drm_i915_private *i915)
 		struct super_block *sb = gemfs->mnt_sb;
 		/* FIXME: Disabled until we get W/A for read BW issue. */
 		char options[] = "huge=never";
-		int flags = 0;
-		int err;
-
-		err = sb->s_op->remount_fs(sb, &flags, options);
-		if (err) {
-			kern_unmount(gemfs);
-			return err;
-		}
+
+		ok = false;
+		fc = fs_context_for_reconfigure(sb->s_root, 0, 0);
+		if (IS_ERR(fc))
+			goto out;
+
+		if (!fc->ops->parse_monolithic ||
+		    fc->ops->parse_monolithic(fc, options))
+			goto out;
+
+		if (fc->ops->reconfigure && !fc->ops->reconfigure(fc))
+			ok = true;
 	}
+out:
+	if (!ok)
+		dev_err(i915->drm.dev,
+			"Unable to reconfigure %s. %s\n",
+			"shmemfs for preferred allocation strategy",
+			"Continuing, but performance may suffer");
+	if (!IS_ERR_OR_NULL(fc))
+		put_fs_context(fc);
 
 	i915->mm.gemfs = gemfs;
-
 	return 0;
 }
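[Editor's note: the reconfigure sequence in the patch above, condensed for reference.  A kernel-side sketch under stated assumptions - the helper name tmpfs_set_options is hypothetical, and error handling is trimmed; fs_context_for_reconfigure(), put_fs_context() and the fs_context_operations hooks are the real new-mount-API interfaces.]

	/* Sketch: build an fs_context for an existing superblock, feed it
	 * a mount(2)-style "a=b,c=d" option string, and apply it. */
	static int tmpfs_set_options(struct vfsmount *mnt, char *options)
	{
		struct fs_context *fc;
		int err = -EINVAL;

		fc = fs_context_for_reconfigure(mnt->mnt_sb->s_root, 0, 0);
		if (IS_ERR(fc))
			return PTR_ERR(fc);

		/* ->parse_monolithic parses the whole option string;
		 * ->reconfigure then applies it to the superblock. */
		if (fc->ops->parse_monolithic &&
		    !fc->ops->parse_monolithic(fc, options) &&
		    fc->ops->reconfigure)
			err = fc->ops->reconfigure(fc);

		put_fs_context(fc);
		return err;
	}

Note the patch calls the fs_context operations directly rather than going through the generic reconfigure path, which is why it must check each hook for NULL.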