
[08/16] huge tmpfs: fcntl(fd, F_HUGEPAGE) and fcntl(fd, F_NOHUGEPAGE)

Message ID 1c32c75b-095-22f0-aee3-30a44d4a4744@google.com (mailing list archive)
State New
Series: tmpfs: HUGEPAGE and MEM_LOCK fcntls and memfds

Commit Message

Hugh Dickins July 30, 2021, 7:48 a.m. UTC
Add support for fcntl(fd, F_HUGEPAGE) and fcntl(fd, F_NOHUGEPAGE), to
select hugeness per file: useful to override the default hugeness of the
shmem mount, when occasionally needing to store a hugepage file in a
smallpage mount or vice versa.

These fcntls just specify whether or not to try for huge pages when
allocating to the object later: F_HUGEPAGE does not touch small pages
already allocated (though khugepaged may do so when the file is mapped
afterwards), F_NOHUGEPAGE does not split huge pages already allocated.

Why fcntl?  Because it's already in use (for sealing) on memfds; and I'm
anxious to keep this simple, just applying it to whole files: fallocate,
madvise and posix_fadvise each involve a range, which would need a new
kind of tree attached to the inode for proper support.  Any application
needing range support should be able to provide that from userspace, by
issuing the respective fcntl prior to instantiating each range.
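
For illustration, a minimal userspace sketch of that pattern, assuming
the F_HUGEPAGE/F_NOHUGEPAGE values proposed by this patch
(F_LINUX_SPECIFIC_BASE + 15 and + 16, not in any released header); the
memfd name and the sizes are invented:

#define _GNU_SOURCE
#include <fcntl.h>          /* fcntl(), and fallocate() with _GNU_SOURCE */
#include <stdio.h>
#include <sys/mman.h>       /* memfd_create(), glibc 2.27+ */
#include <unistd.h>

#ifndef F_HUGEPAGE
#define F_HUGEPAGE      (1024 + 15)  /* F_LINUX_SPECIFIC_BASE + 15, proposed here */
#define F_NOHUGEPAGE    (1024 + 16)  /* F_LINUX_SPECIFIC_BASE + 16, proposed here */
#endif

#define MB (1024 * 1024)

int main(void)
{
    int fd = memfd_create("example", 0);

    if (fd < 0) {
        perror("memfd_create");
        return 1;
    }

    /* Try for huge pages in whatever is instantiated next... */
    if (fcntl(fd, F_HUGEPAGE) < 0)
        perror("fcntl(F_HUGEPAGE)");    /* e.g. EPERM if THP disabled */
    if (fallocate(fd, 0, 0, 4 * MB) < 0)
        perror("fallocate");

    /*
     * ...then switch back to small pages for the tail of the file:
     * huge pages already allocated above are not split by this.
     */
    if (fcntl(fd, F_NOHUGEPAGE) < 0)
        perror("fcntl(F_NOHUGEPAGE)");
    if (fallocate(fd, 0, 4 * MB, 1 * MB) < 0)
        perror("fallocate");

    close(fd);
    return 0;
}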

Do not allow it when the file is open read-only (EBADF).  Do not permit
a PR_SET_THP_DISABLE (MMF_DISABLE_THP) task to interfere with the flags,
and do not let VM_HUGEPAGE be set if THPs are not allowed at all (EPERM).

Note that transparent_hugepage_allowed(), used to validate F_HUGEPAGE,
accepts (anon) transparent_hugepage_flags in addition to mount option.
This is to overcome the limitation of the "huge=advise" option, which
applies hugepage alignment (reducing ASLR) to all mappings, because
madvise(address,len,MADV_HUGEPAGE) needs an address before it can be used.
So mount option "huge=never" gives a default which can be overridden by
fcntl(fd, F_HUGEPAGE) when /sys/kernel/mm/transparent_hugepage/enabled
is not "never" too.  (We could instead add a "huge=fcntl" mount option
between "never" and "advise", but I lack the enthusiasm for that.)

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 fs/fcntl.c                 |  5 +++
 include/linux/shmem_fs.h   |  8 +++++
 include/uapi/linux/fcntl.h |  9 +++++
 mm/shmem.c                 | 70 ++++++++++++++++++++++++++++++++++----
 4 files changed, 85 insertions(+), 7 deletions(-)

Comments

Kirill A. Shutemov Aug. 4, 2021, 2:08 p.m. UTC | #1
On Fri, Jul 30, 2021 at 12:48:33AM -0700, Hugh Dickins wrote:
> Add support for fcntl(fd, F_HUGEPAGE) and fcntl(fd, F_NOHUGEPAGE), to
> select hugeness per file: useful to override the default hugeness of the
> shmem mount, when occasionally needing to store a hugepage file in a
> smallpage mount or vice versa.

Hm. But why is the new MFD_* needed if the fcntl() can do the same?

> These fcntls just specify whether or not to try for huge pages when
> allocating to the object later: F_HUGEPAGE does not touch small pages
> already allocated (though khugepaged may do so when the file is mapped
> afterwards), F_NOHUGEPAGE does not split huge pages already allocated.
> 
> Why fcntl?  Because it's already in use (for sealing) on memfds; and I'm
> anxious to keep this simple, just applying it to whole files: fallocate,
> madvise and posix_fadvise each involve a range, which would need a new
> kind of tree attached to the inode for proper support.

Most fadvise() operations ignore the range. I like fadvise() because
it's less prescriptive: the kernel is free to ignore it.

Hugh Dickins Aug. 6, 2021, 4:34 a.m. UTC | #2
On Wed, 4 Aug 2021, Kirill A. Shutemov wrote:
> On Fri, Jul 30, 2021 at 12:48:33AM -0700, Hugh Dickins wrote:
> > Add support for fcntl(fd, F_HUGEPAGE) and fcntl(fd, F_NOHUGEPAGE), to
> > select hugeness per file: useful to override the default hugeness of the
> > shmem mount, when occasionally needing to store a hugepage file in a
> > smallpage mount or vice versa.
> 
> Hm. But why is the new MFD_* needed if the fcntl() can do the same?

That I've just addressed in the MFD_HUGEPAGE 07/16 thread.

> 
> > These fcntls just specify whether or not to try for huge pages when
> > allocating to the object later: F_HUGEPAGE does not touch small pages
> > already allocated (though khugepaged may do so when the file is mapped
> > afterwards), F_NOHUGEPAGE does not split huge pages already allocated.
> > 
> > Why fcntl?  Because it's already in use (for sealing) on memfds; and I'm
> > anxious to keep this simple, just applying it to whole files: fallocate,
> > madvise and posix_fadvise each involve a range, which would need a new
> > kind of tree attached to the inode for proper support.
> 
> Most fadvise() operations ignore the range. I like fadvise() because
> it's less prescriptive: the kernel is free to ignore it.

As to ignoring the range, yes, I see now that some do; and I'm relieved
to see "Len == 0 means as much as possible": that's great, I was afraid
of compat bugs over 0xffy numbers for the len.  And we would want not
to ignore the range, but to insist on offset 0, len 0 for now, if there's
any intention (not mine) of extending it to ranges in the future.

As to ignoring the prescription, that's just a matter of how we describe
it in the manpage, no matter whether it's fadvise() or fcntl().

And in the 07/16 thread you also said:

> 
> If a tunable needed, I would rather go with fadvise(). It would operate on
> a couple of bits per struct file and they get translated into VM_HUGEPAGE
> and VM_NOHUGEPAGE on mmap().

Not so sure about that detail: the point here is to decide what kind
of allocations to try for, before the file is mmap()ed; and it is the
file (the underlying object) that I want to condition here, rather than
the struct file of whoever has it open at the time, or their mmap()s.

But adding the flags into the vm_flags on mmap(): that's an interesting
idea, I haven't played with that at all.  Offhand, I don't think it will
give different allocation results from what I'm already doing, but might
affect what is shown by default in /proc/<pid>/smaps.

> 
> Later if needed fadvise() implementation may be extended to track
> requested ranges. But initially it can be simple.

I still prefer fcntl() myself, but we can go with either: what I'd
like to hear is the preference of linux-fsdevel and linux-api people.

Aside from the unused offset+len, my main problem with fadvise()
is that... it doesn't exist.  It's posix_fadvise() or fadvise64() or
fadvise64_64(), and all its good advices are POSIX_FADV_whatever.

Are we comfortable now adding LINUX_FADV_HUGEPAGE, LINUX_FADV_NOHUGEPAGE?

I find myself singing 64 64 Zoo Lane.

Hugh

Patch

diff --git a/fs/fcntl.c b/fs/fcntl.c
index f946bec8f1f1..9cfff87c3332 100644
--- a/fs/fcntl.c
+++ b/fs/fcntl.c
@@ -23,6 +23,7 @@ 
 #include <linux/rcupdate.h>
 #include <linux/pid_namespace.h>
 #include <linux/user_namespace.h>
+#include <linux/shmem_fs.h>
 #include <linux/memfd.h>
 #include <linux/compat.h>
 #include <linux/mount.h>
@@ -434,6 +435,10 @@  static long do_fcntl(int fd, unsigned int cmd, unsigned long arg,
 	case F_SET_FILE_RW_HINT:
 		err = fcntl_rw_hint(filp, cmd, arg);
 		break;
+	case F_HUGEPAGE:
+	case F_NOHUGEPAGE:
+		err = shmem_fcntl(filp, cmd, arg);
+		break;
 	default:
 		break;
 	}
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 3b05a28e34c4..51b75d74ce89 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -67,6 +67,14 @@  extern int shmem_zero_setup(struct vm_area_struct *);
 extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 extern int shmem_lock(struct file *file, int lock, struct ucounts *ucounts);
+#ifdef CONFIG_TMPFS
+extern long shmem_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
+#else
+static inline long shmem_fcntl(struct file *f, unsigned int c, unsigned long a)
+{
+	return -EINVAL;
+}
+#endif /* CONFIG_TMPFS */
 #ifdef CONFIG_SHMEM
 extern const struct address_space_operations shmem_aops;
 static inline bool shmem_mapping(struct address_space *mapping)
diff --git a/include/uapi/linux/fcntl.h b/include/uapi/linux/fcntl.h
index 2f86b2ad6d7e..10f82b223642 100644
--- a/include/uapi/linux/fcntl.h
+++ b/include/uapi/linux/fcntl.h
@@ -73,6 +73,15 @@ 
  */
 #define RWF_WRITE_LIFE_NOT_SET	RWH_WRITE_LIFE_NOT_SET
 
+/*
+ * Allocate hugepages when available: useful on a tmpfs which was not mounted
+ * with the "huge=always" option, as for memfds.  And, do not allocate hugepages
+ * even when available: useful to cancel the above request, or make an exception
+ * on a tmpfs mounted with "huge=always" (without splitting existing hugepages).
+ */
+#define F_HUGEPAGE		(F_LINUX_SPECIFIC_BASE + 15)
+#define F_NOHUGEPAGE		(F_LINUX_SPECIFIC_BASE + 16)
+
 /*
  * Types of directory notifications that may be requested.
  */
diff --git a/mm/shmem.c b/mm/shmem.c
index e2bcf3313686..67a4b7a4849b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -448,9 +448,9 @@  static bool shmem_confirm_swap(struct address_space *mapping,
  *	enables huge pages for the mount;
  * SHMEM_HUGE_WITHIN_SIZE:
  *	only allocate huge pages if the page will be fully within i_size,
- *	also respect fadvise()/madvise() hints;
+ *	also respect fcntl()/madvise() hints;
  * SHMEM_HUGE_ADVISE:
- *	only allocate huge pages if requested with fadvise()/madvise();
+ *	only allocate huge pages if requested with fcntl()/madvise().
  */
 
 #define SHMEM_HUGE_NEVER	0
@@ -477,13 +477,13 @@  static bool shmem_confirm_swap(struct address_space *mapping,
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
 /*
- * Does either /sys/kernel/mm/transparent_hugepage/shmem_enabled or
+ * Does either tmpfs mount option (or transparent_hugepage/shmem_enabled) or
  * /sys/kernel/mm/transparent_hugepage/enabled allow transparent hugepages?
  * (Can only return true when the machine has_transparent_hugepage() too.)
  */
-static bool transparent_hugepage_allowed(void)
+static bool transparent_hugepage_allowed(struct shmem_sb_info *sbinfo)
 {
-	return	shmem_huge > SHMEM_HUGE_NEVER ||
+	return	sbinfo->huge > SHMEM_HUGE_NEVER ||
 		test_bit(TRANSPARENT_HUGEPAGE_FLAG,
 			&transparent_hugepage_flags) ||
 		test_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
@@ -500,6 +500,8 @@  bool shmem_is_huge(struct vm_area_struct *vma,
 	if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) ||
 	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
 		return false;
+	if (SHMEM_I(inode)->flags & VM_NOHUGEPAGE)
+		return false;
 	if (SHMEM_I(inode)->flags & VM_HUGEPAGE)
 		return true;
 	if (shmem_huge == SHMEM_HUGE_FORCE)
@@ -692,7 +694,7 @@  static long shmem_unused_huge_count(struct super_block *sb,
 
 #define shmem_huge SHMEM_HUGE_DENY
 
-bool transparent_hugepage_allowed(void)
+bool transparent_hugepage_allowed(struct shmem_sb_info *sbinfo)
 {
 	return false;
 }
@@ -2197,6 +2199,8 @@  unsigned long shmem_get_unmapped_area(struct file *file,
 		if (file) {
 			VM_BUG_ON(file->f_op != &shmem_file_operations);
 			inode = file_inode(file);
+			if (SHMEM_I(inode)->flags & VM_NOHUGEPAGE)
+				return addr;
 			if (SHMEM_I(inode)->flags & VM_HUGEPAGE)
 				goto huge;
 			sb = inode->i_sb;
@@ -2211,6 +2215,11 @@  unsigned long shmem_get_unmapped_area(struct file *file,
 		}
 		if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
 			return addr;
+		/*
+		 * Note that SHMEM_HUGE_ADVISE has to give out huge-aligned
+		 * addresses to everyone, because madvise(,,MADV_HUGEPAGE)
+		 * needs the address-chicken on which to advise if huge-egg.
+		 */
 	}
 huge:
 	offset = (pgoff << PAGE_SHIFT) & (HPAGE_PMD_SIZE-1);
@@ -2334,7 +2343,7 @@  static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
 		info->seals = F_SEAL_SEAL;
 		info->flags = flags & VM_NORESERVE;
 		if ((flags & VM_HUGEPAGE) &&
-		    transparent_hugepage_allowed() &&
+		    transparent_hugepage_allowed(sbinfo) &&
 		    !test_bit(MMF_DISABLE_THP, &current->mm->flags))
 			info->flags |= VM_HUGEPAGE;
 		INIT_LIST_HEAD(&info->shrinklist);
@@ -2674,6 +2683,53 @@  static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
 	return offset;
 }
 
+static int shmem_huge_fcntl(struct file *file, unsigned int cmd)
+{
+	struct inode *inode = file_inode(file);
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	if (!(file->f_mode & FMODE_WRITE))
+		return -EBADF;
+	if (test_bit(MMF_DISABLE_THP, &current->mm->flags))
+		return -EPERM;
+	if (cmd == F_HUGEPAGE &&
+	    !transparent_hugepage_allowed(SHMEM_SB(inode->i_sb)))
+		return -EPERM;
+
+	inode_lock(inode);
+	if (cmd == F_HUGEPAGE) {
+		info->flags &= ~VM_NOHUGEPAGE;
+		info->flags |= VM_HUGEPAGE;
+	} else {
+		info->flags &= ~VM_HUGEPAGE;
+		info->flags |= VM_NOHUGEPAGE;
+	}
+	inode_unlock(inode);
+	return 0;
+}
+
+long shmem_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	long error = -EINVAL;
+
+	if (file->f_op != &shmem_file_operations)
+		return error;
+
+	switch (cmd) {
+	/*
+	 * case F_ADD_SEALS:
+	 * case F_GET_SEALS:
+	 *	are handled by memfd_fcntl().
+	 */
+	case F_HUGEPAGE:
+	case F_NOHUGEPAGE:
+		error = shmem_huge_fcntl(file, cmd);
+		break;
+	}
+
+	return error;
+}
+
 static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 							 loff_t len)
 {