From patchwork Wed Sep 7 14:45:12 2022
X-Patchwork-Submitter: Zach O'Keefe
X-Patchwork-Id: 12969070
Date: Wed, 7 Sep 2022 07:45:12 -0700
In-Reply-To: <20220907144521.3115321-1-zokeefe@google.com>
References: <20220907144521.3115321-1-zokeefe@google.com>
X-Mailer: git-send-email 2.37.2.789.g6183377224-goog
Message-ID: <20220907144521.3115321-2-zokeefe@google.com>
Subject: [PATCH mm-unstable v3 01/10] mm/shmem: add flag to enforce shmem THP in hugepage_vma_check()
From: "Zach O'Keefe"
To: linux-mm@kvack.org
Cc: Andrew Morton, linux-api@vger.kernel.org, Axel Rasmussen,
 James Houghton, Hugh Dickins, Yang Shi, Miaohe Lin, David Hildenbrand,
 David Rientjes, Matthew Wilcox, Pasha Tatashin, Peter Xu, Rongwei Wang,
 SeongJae Park, Song Liu, Vlastimil Babka, Chris Kennelly,
 "Kirill A. Shutemov", Minchan Kim, Patrick Xia, "Zach O'Keefe"

Extend 'mm/thp: add flag to enforce sysfs THP in hugepage_vma_check()'
to shmem, allowing callers to ignore
/sys/kernel/transparent_hugepage/shmem_enabled and tmpfs huge= mount.

This is intended to be used by MADV_COLLAPSE, and the rationale is
analogous to the anon/file case: MADV_COLLAPSE is not coupled to
directives that advise the kernel's decisions on when THPs should be
considered eligible.  shmem/tmpfs always claims large folio support,
regardless of sysfs or mount options.
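As a rough, illustrative sketch (not part of this patch): a
MADV_COLLAPSE-style caller clears enforce_sysfs in hugepage_vma_check(),
which for shmem-backed VMAs is now forwarded as shmem_huge_force, so the
sysfs shmem_enabled setting and tmpfs huge= mount option are bypassed
while VM_NOHUGEPAGE and MMF_DISABLE_THP are still honored.  The helper
name below is hypothetical; hugepage_vma_check() and its enforce_sysfs
parameter come from the patch referenced above.

    /* Hypothetical caller, for illustration only -- not in this patch. */
    static bool collapse_allowed(struct vm_area_struct *vma)
    {
            /*
             * enforce_sysfs=false reaches shmem_huge_enabled(vma, true)
             * for shmem VMAs, i.e. shmem_huge_force=true: sysfs
             * shmem_enabled and tmpfs huge= are ignored, but
             * VM_NOHUGEPAGE and MMF_DISABLE_THP still disallow THP.
             */
            return hugepage_vma_check(vma, vma->vm_flags,
                                      false /* smaps */,
                                      false /* in_pf */,
                                      false /* enforce_sysfs */);
    }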
Signed-off-by: Zach O'Keefe
Reviewed-by: Yang Shi
---
 include/linux/shmem_fs.h | 10 ++++++----
 mm/huge_memory.c         |  2 +-
 mm/shmem.c               | 18 +++++++++---------
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index f24071e3c826..d500ea967dc7 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -92,11 +92,13 @@ extern struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
 extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
 
-extern bool shmem_is_huge(struct vm_area_struct *vma,
-                          struct inode *inode, pgoff_t index);
-static inline bool shmem_huge_enabled(struct vm_area_struct *vma)
+extern bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode,
+                          pgoff_t index, bool shmem_huge_force);
+static inline bool shmem_huge_enabled(struct vm_area_struct *vma,
+                                      bool shmem_huge_force)
 {
-        return shmem_is_huge(vma, file_inode(vma->vm_file), vma->vm_pgoff);
+        return shmem_is_huge(vma, file_inode(vma->vm_file), vma->vm_pgoff,
+                             shmem_huge_force);
 }
 extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
 extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7fa74b9749a6..53d170dac332 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -119,7 +119,7 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
          * own flags.
          */
         if (!in_pf && shmem_file(vma->vm_file))
-                return shmem_huge_enabled(vma);
+                return shmem_huge_enabled(vma, !enforce_sysfs);
 
         /* Enforce sysfs THP requirements as necessary */
         if (enforce_sysfs &&
diff --git a/mm/shmem.c b/mm/shmem.c
index 99b7341bd0bf..47c42c566fd1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -461,20 +461,20 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
-bool shmem_is_huge(struct vm_area_struct *vma,
-                   struct inode *inode, pgoff_t index)
+bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode,
+                   pgoff_t index, bool shmem_huge_force)
 {
         loff_t i_size;
 
         if (!S_ISREG(inode->i_mode))
                 return false;
-        if (shmem_huge == SHMEM_HUGE_DENY)
-                return false;
         if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) ||
             test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
                 return false;
-        if (shmem_huge == SHMEM_HUGE_FORCE)
+        if (shmem_huge == SHMEM_HUGE_FORCE || shmem_huge_force)
                 return true;
+        if (shmem_huge == SHMEM_HUGE_DENY)
+                return false;
 
         switch (SHMEM_SB(inode->i_sb)->huge) {
         case SHMEM_HUGE_ALWAYS:
@@ -669,8 +669,8 @@ static long shmem_unused_huge_count(struct super_block *sb,
 
 #define shmem_huge SHMEM_HUGE_DENY
 
-bool shmem_is_huge(struct vm_area_struct *vma,
-                   struct inode *inode, pgoff_t index)
+bool shmem_is_huge(struct vm_area_struct *vma, struct inode *inode,
+                   pgoff_t index, bool shmem_huge_force)
 {
         return false;
 }
@@ -1056,7 +1056,7 @@ static int shmem_getattr(struct user_namespace *mnt_userns,
                                   STATX_ATTR_NODUMP);
         generic_fillattr(&init_user_ns, inode, stat);
 
-        if (shmem_is_huge(NULL, inode, 0))
+        if (shmem_is_huge(NULL, inode, 0, false))
                 stat->blksize = HPAGE_PMD_SIZE;
 
         if (request_mask & STATX_BTIME) {
@@ -1888,7 +1888,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
                 return 0;
         }
 
-        if (!shmem_is_huge(vma, inode, index))
+        if (!shmem_is_huge(vma, inode, index, false))
                 goto alloc_nohuge;
 
         huge_gfp = vma_thp_gfp_mask(vma);