From patchwork Fri Jul 30 07:45:49 2021
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12410597
Date: Fri, 30 Jul 2021 00:45:49 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
cc: Hugh Dickins, Shakeel Butt, "Kirill A. Shutemov", Yang Shi,
    Miaohe Lin, Mike Kravetz, Michal Hocko, Rik van Riel,
    Christoph Hellwig, Matthew Wilcox, "Eric W. Biederman",
    Alexey Gladkov, Chris Wilson, Matthew Auld,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-api@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 07/16] memfd: memfd_create(name, MFD_HUGEPAGE) for shmem huge pages
In-Reply-To: <2862852d-badd-7486-3a8e-c5ea9666d6fb@google.com>
References: <2862852d-badd-7486-3a8e-c5ea9666d6fb@google.com>
MIME-Version: 1.0

Commit 749df87bd7be ("mm/shmem: add hugetlbfs support to memfd_create()")
in 4.14 added the MFD_HUGETLB flag to memfd_create(), to use hugetlbfs
pages instead of tmpfs pages: now add the MFD_HUGEPAGE flag, to use tmpfs
Transparent Huge Pages when they can be allocated (flag named to follow
the precedent of madvise's MADV_HUGEPAGE for THPs).
/sys/kernel/mm/transparent_hugepage/shmem_enabled "always" or "force"
already made this possible: but that is much too blunt an instrument,
affecting all the very different kinds of files on the internal shmem
mount, and was intended just for ease of testing hugepage loads.

MFD_HUGEPAGE is implemented internally by VM_HUGEPAGE in the shmem inode
flags: do not permit a PR_SET_THP_DISABLE (MMF_DISABLE_THP) task to set
this flag, and do not set it if THPs are not allowed at all; but let the
memfd_create() succeed even in those cases - the caller wants to create a
memfd, just hinting how it's best allocated if huge pages are available.

shmem_is_huge() (at allocation time or khugepaged time) applies its
SHMEM_HUGE_DENY and vma VM_NOHUGEPAGE and vm_mm MMF_DISABLE_THP checks
first, and only then allows the memfd's MFD_HUGEPAGE to take effect.

Signed-off-by: Hugh Dickins
Reported-by: kernel test robot
---
 include/uapi/linux/memfd.h |  3 ++-
 mm/memfd.c                 | 24 ++++++++++++++++++------
 mm/shmem.c                 | 33 +++++++++++++++++++++++++++++++--
 3 files changed, 51 insertions(+), 9 deletions(-)

diff --git a/include/uapi/linux/memfd.h b/include/uapi/linux/memfd.h
index 7a8a26751c23..8358a69e78cc 100644
--- a/include/uapi/linux/memfd.h
+++ b/include/uapi/linux/memfd.h
@@ -7,7 +7,8 @@
 /* flags for memfd_create(2) (unsigned int) */
 #define MFD_CLOEXEC		0x0001U
 #define MFD_ALLOW_SEALING	0x0002U
-#define MFD_HUGETLB		0x0004U
+#define MFD_HUGETLB		0x0004U	/* Use hugetlbfs */
+#define MFD_HUGEPAGE		0x0008U	/* Use huge tmpfs */
 
 /*
  * Huge page size encoding when MFD_HUGETLB is specified, and a huge page
diff --git a/mm/memfd.c b/mm/memfd.c
index 081dd33e6a61..0d1a504d2fc9 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -245,7 +245,10 @@ long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg)
 #define MFD_NAME_PREFIX_LEN (sizeof(MFD_NAME_PREFIX) - 1)
 #define MFD_NAME_MAX_LEN (NAME_MAX - MFD_NAME_PREFIX_LEN)
 
-#define MFD_ALL_FLAGS (MFD_CLOEXEC | MFD_ALLOW_SEALING | MFD_HUGETLB)
+#define MFD_ALL_FLAGS (MFD_CLOEXEC | \
+		       MFD_ALLOW_SEALING | \
+		       MFD_HUGETLB | \
+		       MFD_HUGEPAGE)
 
 SYSCALL_DEFINE2(memfd_create,
 		const char __user *, uname,
@@ -257,14 +260,17 @@ SYSCALL_DEFINE2(memfd_create,
 	char *name;
 	long len;
 
-	if (!(flags & MFD_HUGETLB)) {
-		if (flags & ~(unsigned int)MFD_ALL_FLAGS)
+	if (flags & MFD_HUGETLB) {
+		/* Disallow huge tmpfs when choosing hugetlbfs */
+		if (flags & MFD_HUGEPAGE)
 			return -EINVAL;
-	} else {
 		/* Allow huge page size encoding in flags. */
 		if (flags & ~(unsigned int)(MFD_ALL_FLAGS |
 				(MFD_HUGE_MASK << MFD_HUGE_SHIFT)))
 			return -EINVAL;
+	} else {
+		if (flags & ~(unsigned int)MFD_ALL_FLAGS)
+			return -EINVAL;
 	}
 
 	/* length includes terminating zero */
@@ -303,8 +309,14 @@ SYSCALL_DEFINE2(memfd_create,
 					HUGETLB_ANONHUGE_INODE,
 					(flags >> MFD_HUGE_SHIFT) &
 					MFD_HUGE_MASK);
-	} else
-		file = shmem_file_setup(name, 0, VM_NORESERVE);
+	} else {
+		unsigned long vm_flags = VM_NORESERVE;
+
+		if (flags & MFD_HUGEPAGE)
+			vm_flags |= VM_HUGEPAGE;
+		file = shmem_file_setup(name, 0, vm_flags);
+	}
+
 	if (IS_ERR(file)) {
 		error = PTR_ERR(file);
 		goto err_fd;
diff --git a/mm/shmem.c b/mm/shmem.c
index 6def7391084c..e2bcf3313686 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -476,6 +476,20 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
+/*
+ * Does either /sys/kernel/mm/transparent_hugepage/shmem_enabled or
+ * /sys/kernel/mm/transparent_hugepage/enabled allow transparent hugepages?
+ * (Can only return true when the machine has_transparent_hugepage() too.)
+ */
+static bool transparent_hugepage_allowed(void)
+{
+	return shmem_huge > SHMEM_HUGE_NEVER ||
+		test_bit(TRANSPARENT_HUGEPAGE_FLAG,
+			 &transparent_hugepage_flags) ||
+		test_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+			 &transparent_hugepage_flags);
+}
+
 bool shmem_is_huge(struct vm_area_struct *vma,
 		   struct inode *inode, pgoff_t index)
 {
@@ -486,6 +500,8 @@ bool shmem_is_huge(struct vm_area_struct *vma,
 	if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) ||
 	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
 		return false;
+	if (SHMEM_I(inode)->flags & VM_HUGEPAGE)
+		return true;
 	if (shmem_huge == SHMEM_HUGE_FORCE)
 		return true;
@@ -676,6 +692,11 @@ static long shmem_unused_huge_count(struct super_block *sb,
 
 #define shmem_huge SHMEM_HUGE_DENY
 
+bool transparent_hugepage_allowed(void)
+{
+	return false;
+}
+
 bool shmem_is_huge(struct vm_area_struct *vma,
 		   struct inode *inode, pgoff_t index)
 {
@@ -2171,10 +2192,14 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 
 	if (shmem_huge != SHMEM_HUGE_FORCE) {
 		struct super_block *sb;
+		struct inode *inode;
 
 		if (file) {
 			VM_BUG_ON(file->f_op != &shmem_file_operations);
-			sb = file_inode(file)->i_sb;
+			inode = file_inode(file);
+			if (SHMEM_I(inode)->flags & VM_HUGEPAGE)
+				goto huge;
+			sb = inode->i_sb;
 		} else {
 			/*
 			 * Called directly from mm/mmap.c, or drivers/char/mem.c
@@ -2187,7 +2212,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 		if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
 			return addr;
 	}
-
+huge:
 	offset = (pgoff << PAGE_SHIFT) & (HPAGE_PMD_SIZE-1);
 	if (offset && offset + len < 2 * HPAGE_PMD_SIZE)
 		return addr;
@@ -2308,6 +2333,10 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
 		atomic_set(&info->stop_eviction, 0);
 		info->seals = F_SEAL_SEAL;
 		info->flags = flags & VM_NORESERVE;
+		if ((flags & VM_HUGEPAGE) &&
+		    transparent_hugepage_allowed() &&
+		    !test_bit(MMF_DISABLE_THP, &current->mm->flags))
+			info->flags |= VM_HUGEPAGE;
 		INIT_LIST_HEAD(&info->shrinklist);
 		INIT_LIST_HEAD(&info->swaplist);
 		simple_xattrs_init(&info->xattrs);