From patchwork Thu Jul 11 05:42:50 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13730035
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, ziy@nvidia.com, ioworker0@gmail.com,
 baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] mm: shmem: move shmem_huge_global_enabled() into
 shmem_allowable_huge_orders()
Date: Thu, 11 Jul 2024 13:42:50 +0800
Message-Id: <6e5858d345304d0428c1c2c2a25c289c062b4ea8.1720668581.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3

Move shmem_huge_global_enabled() into the shmem_allowable_huge_orders()
function, so that shmem_allowable_huge_orders() can also help to find the
allowable huge orders for tmpfs. Moreover, shmem_huge_global_enabled()
can become static. No functional changes.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/shmem_fs.h | 12 ++----------
 mm/huge_memory.c         | 12 +++---------
 mm/shmem.c               | 34 ++++++++++++++++++++--------------
 3 files changed, 25 insertions(+), 33 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 405ee8d3589a..1564d7d3ca61 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -111,21 +111,13 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
-				      struct mm_struct *mm, unsigned long vm_flags);
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
-				bool global_huge);
+				bool shmem_huge_force);
 #else
-static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
-						      bool shmem_huge_force, struct mm_struct *mm,
-						      unsigned long vm_flags)
-{
-	return false;
-}
 static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
-				bool global_huge)
+				bool shmem_huge_force)
 {
 	return 0;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cc9bad12be75..f69980b5b5fc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -151,16 +151,10 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	 * Must be done before hugepage flags check since shmem has its
 	 * own flags.
 	 */
-	if (!in_pf && shmem_file(vma->vm_file)) {
-		bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
-							     vma->vm_pgoff, !enforce_sysfs,
-							     vma->vm_mm, vm_flags);
-
-		if (!vma_is_anon_shmem(vma))
-			return global_huge ? orders : 0;
+	if (!in_pf && shmem_file(vma->vm_file))
 		return shmem_allowable_huge_orders(file_inode(vma->vm_file),
-						   vma, vma->vm_pgoff, global_huge);
-	}
+						   vma, vma->vm_pgoff,
+						   !enforce_sysfs);
 
 	if (!vma_is_anonymous(vma)) {
 		/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 1445dcd39b6f..e9610e2b2a43 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -581,7 +581,7 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 	}
 }
 
-bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 			 bool shmem_huge_force, struct mm_struct *mm,
 			 unsigned long vm_flags)
 {
@@ -1625,27 +1625,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
-				bool global_huge)
+				bool shmem_huge_force)
 {
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
-	unsigned long vm_flags = vma->vm_flags;
+	unsigned long vm_flags = vma ? vma->vm_flags : 0;
+	struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
 	/*
 	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
 	 * are enabled for this vma.
 	 */
 	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+	bool global_huge;
 	loff_t i_size;
 	int order;
 
-	if ((vm_flags & VM_NOHUGEPAGE) ||
-	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+	if (vma && ((vm_flags & VM_NOHUGEPAGE) ||
+	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
 		return 0;
 
 	/* If the hardware/firmware marked hugepage support disabled. */
 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
 		return 0;
 
+	global_huge = shmem_huge_global_enabled(inode, index, shmem_huge_force,
+						fault_mm, vm_flags);
+	if (!vma || !vma_is_anon_shmem(vma)) {
+		/*
+		 * For tmpfs, we now only support PMD sized THP if huge page
+		 * is enabled, otherwise fallback to order 0.
+		 */
+		return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
+	}
+
 	/*
 	 * Following the 'deny' semantics of the top level, force the huge
 	 * option off from all mounts.
@@ -2081,7 +2093,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	struct mm_struct *fault_mm;
 	struct folio *folio;
 	int error;
-	bool alloced, huge;
+	bool alloced;
 	unsigned long orders = 0;
 
 	if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
@@ -2154,14 +2166,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}
 
-	huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
-					 vma ? vma->vm_flags : 0);
-	/* Find hugepage orders that are allowed for anonymous shmem. */
-	if (vma && vma_is_anon_shmem(vma))
-		orders = shmem_allowable_huge_orders(inode, vma, index, huge);
-	else if (huge)
-		orders = BIT(HPAGE_PMD_ORDER);
-
+	/* Find hugepage orders that are allowed for anonymous shmem and tmpfs. */
+	orders = shmem_allowable_huge_orders(inode, vma, index, false);
 	if (orders > 0) {
 		gfp_t huge_gfp;
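
---

A note for readers skimming the diff: the net effect is that callers no
longer compute the global huge-page policy themselves. After this patch,
shmem_allowable_huge_orders() derives the policy internally and then
splits the tmpfs case (PMD order only) from the anonymous shmem case
(per-order sysfs mask). Below is a minimal userspace sketch of that flow;
the model_* names, the placeholder policy, and the constants are
illustrative stand-ins, not kernel code.

/*
 * Hypothetical userspace model of the consolidated decision flow after
 * this patch -- not kernel code. Names mirror the kernel's, but the
 * model_* helpers and constants here are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define MODEL_PMD_ORDER	9		/* assume 4K pages, 2M PMD */
#define BIT(n)		(1UL << (n))

/* Stand-in for shmem_huge_global_enabled(): the mount/sysfs policy. */
static bool model_global_huge_enabled(bool shmem_huge_force)
{
	return shmem_huge_force;	/* placeholder policy */
}

/*
 * Sketch of the new shmem_allowable_huge_orders() shape: one entry point
 * that first evaluates the global policy, then handles tmpfs (no anon
 * shmem vma) before falling through to the anonymous shmem path, which
 * keeps the per-order sysfs mask.
 */
static unsigned long model_allowable_huge_orders(bool is_anon_shmem_vma,
						 bool shmem_huge_force,
						 unsigned long sysfs_order_mask)
{
	bool global_huge = model_global_huge_enabled(shmem_huge_force);

	if (!is_anon_shmem_vma) {
		/* tmpfs: only PMD-sized THP when the global policy allows. */
		return global_huge ? BIT(MODEL_PMD_ORDER) : 0;
	}

	/* anonymous shmem: the per-order mask still applies. */
	return sysfs_order_mask & (BIT(MODEL_PMD_ORDER + 1) - 1);
}

int main(void)
{
	printf("tmpfs, huge on:  %#lx\n",
	       model_allowable_huge_orders(false, true, 0));
	printf("tmpfs, huge off: %#lx\n",
	       model_allowable_huge_orders(false, false, 0));
	printf("anon shmem:      %#lx\n",
	       model_allowable_huge_orders(true, false,
					   BIT(4) | BIT(MODEL_PMD_ORDER)));
	return 0;
}

Compiled with a plain cc invocation, this prints 0x200 for tmpfs with the
global policy on, 0 with it off, and the masked per-order bits for
anonymous shmem, mirroring the three outcomes the patch consolidates into
one function.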