From patchwork Thu Sep 22 22:40:37 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12985904
Date: Thu, 22 Sep 2022 15:40:37 -0700
In-Reply-To: <20220922224046.1143204-1-zokeefe@google.com>
References: <20220922224046.1143204-1-zokeefe@google.com>
X-Mailer: git-send-email 2.37.3.998.g577e59143f-goog
Message-ID: <20220922224046.1143204-2-zokeefe@google.com>
Subject: [PATCH mm-unstable v4 01/10] mm/shmem: add flag to enforce shmem THP in hugepage_vma_check()
From: "Zach O'Keefe"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-api@vger.kernel.org, Axel Rasmussen,
	James Houghton, Hugh Dickins, Yang Shi, Miaohe Lin, David Hildenbrand,
	David Rientjes, Matthew Wilcox, Pasha Tatashin, Peter Xu, Rongwei Wang,
	SeongJae Park, Song Liu, Vlastimil Babka, Chris Kennelly,
	"Kirill A. Shutemov", Minchan Kim, Patrick Xia

Extend 'mm/thp: add flag to enforce sysfs THP in hugepage_vma_check()' to
shmem, allowing callers to ignore
/sys/kernel/mm/transparent_hugepage/shmem_enabled and tmpfs huge= mount.

This is intended to be used by MADV_COLLAPSE, and the rationale is
analogous to the anon/file case: MADV_COLLAPSE is not coupled to
directives that advise the kernel's decisions on when THPs should be
considered eligible.  shmem/tmpfs always claims large folio support,
regardless of sysfs or mount options.

[shy828301@gmail.com: test shmem_huge_force explicitly]
  Link: https://lore.kernel.org/linux-mm/CAHbLzko3A5-TpS0BgBeKkx5cuOkWgLvWXQH=TdgW-baO4rPtdg@mail.gmail.com/
Link: https://lkml.kernel.org/r/20220907144521.3115321-2-zokeefe@google.com
Signed-off-by: Zach O'Keefe
Reviewed-by: Yang Shi
Cc: Axel Rasmussen
Cc: Chris Kennelly
Cc: David Hildenbrand
Cc: David Rientjes
Cc: Hugh Dickins
Cc: James Houghton
Cc: "Kirill A. Shutemov"
Cc: Matthew Wilcox
Cc: Miaohe Lin
Cc: Minchan Kim
Cc: Pasha Tatashin
Cc: Peter Xu
Cc: Rongwei Wang
Cc: SeongJae Park
Cc: Song Liu
Cc: Vlastimil Babka
---
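For reviewers who want to see the intended user-visible flow, a rough
userspace sketch (not part of this patch) follows. It assumes a kernel and
uapi headers where MADV_COLLAPSE is available; the tmpfs path, file name
and size below are made up for the example:

	/* Collapse a tmpfs-backed mapping into a THP via MADV_COLLAPSE,
	 * even when shmem_enabled/huge= would not have produced THPs at
	 * fault time -- the case the new shmem_huge_force flag covers.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#ifndef MADV_COLLAPSE
	#define MADV_COLLAPSE 25	/* value proposed by the MADV_COLLAPSE series */
	#endif

	int main(void)
	{
		const size_t len = 2UL << 20;	/* one PMD-sized range */
		int fd = open("/dev/shm/collapse-test", O_CREAT | O_RDWR, 0600);
		char *buf;

		if (fd < 0 || ftruncate(fd, len))
			return 1;

		buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (buf == MAP_FAILED)
			return 1;

		memset(buf, 1, len);	/* fault in small pages first */

		/* With this patch, collapse is attempted regardless of
		 * /sys/kernel/mm/transparent_hugepage/shmem_enabled.
		 */
		if (madvise(buf, len, MADV_COLLAPSE))
			perror("madvise(MADV_COLLAPSE)");

		return 0;
	}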
Shutemov" Cc: Matthew Wilcox Cc: Miaohe Lin Cc: Minchan Kim Cc: Pasha Tatashin Cc: Peter Xu Cc: Rongwei Wang Cc: SeongJae Park Cc: Song Liu Cc: Vlastimil Babka --- include/linux/shmem_fs.h | 10 ++++++---- mm/huge_memory.c | 2 +- mm/shmem.c | 18 ++++++++++-------- 3 files changed, 17 insertions(+), 13 deletions(-) diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h index f24071e3c826..d500ea967dc7 100644 --- a/include/linux/shmem_fs.h +++ b/include/linux/shmem_fs.h @@ -92,11 +92,13 @@ extern struct page *shmem_read_mapping_page_gfp(struct address_space *mapping, extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end); int shmem_unuse(unsigned int type); -extern bool shmem_is_huge(struct vm_area_struct *vma, - struct inode *inode, pgoff_t index); -static inline bool shmem_huge_enabled(struct vm_area_struct *vma) +extern bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode, + pgoff_t index, bool shmem_huge_force); +static inline bool shmem_huge_enabled(struct vm_area_struct *vma, + bool shmem_huge_force) { - return shmem_is_huge(vma, file_inode(vma->vm_file), vma->vm_pgoff); + return shmem_is_huge(vma, file_inode(vma->vm_file), vma->vm_pgoff, + shmem_huge_force); } extern unsigned long shmem_swap_usage(struct vm_area_struct *vma); extern unsigned long shmem_partial_swap_usage(struct address_space *mapping, diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 4938defe4e73..1cc4a5f4791e 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -119,7 +119,7 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags, * own flags. */ if (!in_pf && shmem_file(vma->vm_file)) - return shmem_huge_enabled(vma); + return shmem_huge_enabled(vma, !enforce_sysfs); /* Enforce sysfs THP requirements as necessary */ if (enforce_sysfs && diff --git a/mm/shmem.c b/mm/shmem.c index 305dbca9ceef..fd15b7ca1d7e 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -462,20 +462,22 @@ static bool shmem_confirm_swap(struct address_space *mapping, static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER; -bool shmem_is_huge(struct vm_area_struct *vma, - struct inode *inode, pgoff_t index) +bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode, + pgoff_t index, bool shmem_huge_force) { loff_t i_size; if (!S_ISREG(inode->i_mode)) return false; - if (shmem_huge == SHMEM_HUGE_DENY) - return false; if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) || test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))) return false; + if (shmem_huge_force) + return true; if (shmem_huge == SHMEM_HUGE_FORCE) return true; + if (shmem_huge == SHMEM_HUGE_DENY) + return false; switch (SHMEM_SB(inode->i_sb)->huge) { case SHMEM_HUGE_ALWAYS: @@ -670,8 +672,8 @@ static long shmem_unused_huge_count(struct super_block *sb, #define shmem_huge SHMEM_HUGE_DENY -bool shmem_is_huge(struct vm_area_struct *vma, - struct inode *inode, pgoff_t index) +bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode, + pgoff_t index, bool shmem_huge_force) { return false; } @@ -1058,7 +1060,7 @@ static int shmem_getattr(struct user_namespace *mnt_userns, STATX_ATTR_NODUMP); generic_fillattr(&init_user_ns, inode, stat); - if (shmem_is_huge(NULL, inode, 0)) + if (shmem_is_huge(NULL, inode, 0, false)) stat->blksize = HPAGE_PMD_SIZE; if (request_mask & STATX_BTIME) { @@ -1900,7 +1902,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, return 0; } - if (!shmem_is_huge(vma, inode, index)) + if (!shmem_is_huge(vma, inode, index, false)) goto alloc_nohuge; huge_gfp = vma_thp_gfp_mask(vma);