From patchwork Thu Jul 11 05:42:48 2024
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13730032
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, ziy@nvidia.com, ioworker0@gmail.com,
 baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] mm: shmem: simplify the suitable huge orders validation
 for tmpfs
Date: Thu, 11 Jul 2024 13:42:48 +0800
X-Mailer: git-send-email 2.39.3

Move the suitable huge orders validation into shmem_suitable_orders() for
tmpfs, which reuses the existing code and simplifies the logic.

In addition, tmpfs has no special handling for the -E2BIG error code when
checking for conflicts with PMD-sized THP in the page cache; callers simply
fall back to order-0 allocations, which is exactly what this patch does, so
this simplification does not introduce functional changes.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 39 +++++++++++++++------------------------
 1 file changed, 15 insertions(+), 24 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index f24dfbd387ba..db7e9808830f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1685,19 +1685,29 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault *vmf,
 		struct address_space *mapping, pgoff_t index,
 		unsigned long orders)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
 	unsigned long pages;
 	int order;
 
-	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
-	if (!orders)
-		return 0;
+	if (vma) {
+		orders = thp_vma_suitable_orders(vma, vmf->address, orders);
+		if (!orders)
+			return 0;
+	}
 
 	/* Find the highest order that can add into the page cache */
 	order = highest_order(orders);
 	while (orders) {
 		pages = 1UL << order;
 		index = round_down(index, pages);
+		/*
+		 * Check for conflict before waiting on a huge allocation.
+		 * Conflict might be that a huge page has just been allocated
+		 * and added to page cache by a racing thread, or that there
+		 * is already at least one small page in the huge extent.
+		 * Be careful to retry when appropriate, but not forever!
+		 * Elsewhere -EEXIST would be the right code, but not here.
+		 */
 		if (!xa_find(&mapping->i_pages, &index,
 			     index + pages - 1, XA_PRESENT))
 			break;
@@ -1735,7 +1745,6 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 {
 	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
 	unsigned long suitable_orders = 0;
 	struct folio *folio = NULL;
 	long pages;
@@ -1745,26 +1754,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 		orders = 0;
 
 	if (orders > 0) {
-		if (vma && vma_is_anon_shmem(vma)) {
-			suitable_orders = shmem_suitable_orders(inode, vmf,
+		suitable_orders = shmem_suitable_orders(inode, vmf,
 							mapping, index, orders);
-		} else if (orders & BIT(HPAGE_PMD_ORDER)) {
-			pages = HPAGE_PMD_NR;
-			suitable_orders = BIT(HPAGE_PMD_ORDER);
-			index = round_down(index, HPAGE_PMD_NR);
-
-			/*
-			 * Check for conflict before waiting on a huge allocation.
-			 * Conflict might be that a huge page has just been allocated
-			 * and added to page cache by a racing thread, or that there
-			 * is already at least one small page in the huge extent.
-			 * Be careful to retry when appropriate, but not forever!
-			 * Elsewhere -EEXIST would be the right code, but not here.
-			 */
-			if (xa_find(&mapping->i_pages, &index,
-					index + HPAGE_PMD_NR - 1, XA_PRESENT))
-				return ERR_PTR(-E2BIG);
-		}
 
 		order = highest_order(suitable_orders);
 		while (suitable_orders) {
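To illustrate the order walk that shmem_suitable_orders() relies on (try the
highest enabled folio order first, then fall back to smaller ones), here is a
minimal standalone C sketch. highest_order() and next_order() are
re-implemented here purely for the example; they are stand-ins for the kernel
helpers, not their actual definitions:

/*
 * Standalone sketch of the order-bitmask walk: try the highest
 * enabled folio order first, then fall back to the next lower
 * enabled order until the mask is exhausted.
 */
#include <stdio.h>

static int highest_order(unsigned long orders)
{
	return 8 * (int)sizeof(orders) - 1 - __builtin_clzl(orders);
}

static int next_order(unsigned long *orders, int prev)
{
	*orders &= ~(1UL << prev);
	return *orders ? highest_order(*orders) : 0;
}

int main(void)
{
	unsigned long orders = (1UL << 9) | (1UL << 4) | (1UL << 2);
	unsigned long index = 1000;
	int order = highest_order(orders);

	while (orders) {
		unsigned long pages = 1UL << order;		/* folio size */
		unsigned long aligned = index & ~(pages - 1);	/* round_down() */

		printf("order %d: %4lu pages, index %lu -> %lu\n",
		       order, pages, index, aligned);
		order = next_order(&orders, order);
	}
	return 0;
}

Each iteration aligns the index to the candidate folio size, exactly the shape
of the loop above; the kernel version additionally probes the xarray for
conflicting entries before committing to an order.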
From patchwork Thu Jul 11 05:42:49 2024
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13730033
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, ziy@nvidia.com, ioworker0@gmail.com,
 baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] mm: shmem: rename shmem_is_huge() to
 shmem_huge_global_enabled()
Date: Thu, 11 Jul 2024 13:42:49 +0800
Message-Id: <26dfca33f394b5cfa68e4dbda60bf5f54e41c534.1720668581.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3

shmem_is_huge() is now used to check whether the top-level huge page is
enabled, so rename it to reflect its usage.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/shmem_fs.h |  9 +++++----
 mm/huge_memory.c         |  5 +++--
 mm/shmem.c               | 15 ++++++++-------
 3 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 1d06b1e5408a..405ee8d3589a 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -111,14 +111,15 @@ extern void shmem_truncate_range(struct inode *inode,
 					loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
-			  struct mm_struct *mm, unsigned long vm_flags);
+extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
+				      struct mm_struct *mm, unsigned long vm_flags);
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
 				bool global_huge);
 #else
-static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
-					  struct mm_struct *mm, unsigned long vm_flags)
+static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+						      bool shmem_huge_force, struct mm_struct *mm,
+						      unsigned long vm_flags)
 {
 	return false;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f9696c94e211..cc9bad12be75 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -152,8 +152,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	 * own flags.
 	 */
 	if (!in_pf && shmem_file(vma->vm_file)) {
-		bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
-						 !enforce_sysfs, vma->vm_mm, vm_flags);
+		bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
+							vma->vm_pgoff, !enforce_sysfs,
+							vma->vm_mm, vm_flags);
 
 		if (!vma_is_anon_shmem(vma))
 			return global_huge ? orders : 0;
diff --git a/mm/shmem.c b/mm/shmem.c
index db7e9808830f..1445dcd39b6f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -548,9 +548,9 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
-static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
-			    bool shmem_huge_force, struct mm_struct *mm,
-			    unsigned long vm_flags)
+static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+					bool shmem_huge_force, struct mm_struct *mm,
+					unsigned long vm_flags)
 {
 	loff_t i_size;
 
@@ -581,14 +581,15 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
 	}
 }
 
-bool shmem_is_huge(struct inode *inode, pgoff_t index,
+bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 		   bool shmem_huge_force, struct mm_struct *mm,
 		   unsigned long vm_flags)
 {
 	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
 		return false;
 
-	return __shmem_is_huge(inode, index, shmem_huge_force, mm, vm_flags);
+	return __shmem_huge_global_enabled(inode, index, shmem_huge_force,
+					   mm, vm_flags);
 }
 
 #if defined(CONFIG_SYSFS)
@@ -1156,7 +1157,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
 			 STATX_ATTR_NODUMP);
 	generic_fillattr(idmap, request_mask, inode, stat);
 
-	if (shmem_is_huge(inode, 0, false, NULL, 0))
+	if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
 		stat->blksize = HPAGE_PMD_SIZE;
 
 	if (request_mask & STATX_BTIME) {
@@ -2153,7 +2154,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}
 
-	huge = shmem_is_huge(inode, index, false, fault_mm,
+	huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
 			     vma ? vma->vm_flags : 0);
 
 	/* Find hugepage orders that are allowed for anonymous shmem. */
 	if (vma && vma_is_anon_shmem(vma))
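The renamed helper keeps the same two-level shape as before: a cheap global
feasibility gate in front of the real policy decision in
__shmem_huge_global_enabled(). A minimal sketch of that shape, where the
constants and the policy body are illustrative stand-ins rather than the
kernel's definitions:

/*
 * Sketch of the entry point's structure: first ask whether the page
 * cache can hold a PMD-sized folio at all, only then consult policy.
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER     9
#define MAX_PAGECACHE_ORDER 8	/* assume a config too small for PMD folios */

static bool policy_check(void)
{
	return true;	/* stand-in for __shmem_huge_global_enabled() */
}

static bool huge_global_enabled(void)
{
	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
		return false;
	return policy_check();
}

int main(void)
{
	printf("huge globally enabled: %d\n", huge_global_enabled());
	return 0;
}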
From patchwork Thu Jul 11 05:42:50 2024
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13730035
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, ziy@nvidia.com, ioworker0@gmail.com,
 baolin.wang@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] mm: shmem: move shmem_huge_global_enabled() into
 shmem_allowable_huge_orders()
Date: Thu, 11 Jul 2024 13:42:50 +0800
Message-Id: <6e5858d345304d0428c1c2c2a25c289c062b4ea8.1720668581.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3

Move shmem_huge_global_enabled() into shmem_allowable_huge_orders(), so that
shmem_allowable_huge_orders() can also find the allowable huge orders for
tmpfs. Moreover, shmem_huge_global_enabled() can become static. No functional
changes.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/shmem_fs.h | 12 ++----------
 mm/huge_memory.c         | 12 +++---------
 mm/shmem.c               | 34 ++++++++++++++++++++--------------
 3 files changed, 25 insertions(+), 33 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 405ee8d3589a..1564d7d3ca61 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -111,21 +111,13 @@ extern void shmem_truncate_range(struct inode *inode,
 					loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
-				      struct mm_struct *mm, unsigned long vm_flags);
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
-				bool global_huge);
+				bool shmem_huge_force);
 #else
-static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
-						      bool shmem_huge_force, struct mm_struct *mm,
-						      unsigned long vm_flags)
-{
-	return false;
-}
 static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
-				bool global_huge)
+				bool shmem_huge_force)
 {
 	return 0;
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cc9bad12be75..f69980b5b5fc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -151,16 +151,10 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	 * Must be done before hugepage flags check since shmem has its
 	 * own flags.
 	 */
-	if (!in_pf && shmem_file(vma->vm_file)) {
-		bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
-							vma->vm_pgoff, !enforce_sysfs,
-							vma->vm_mm, vm_flags);
-
-		if (!vma_is_anon_shmem(vma))
-			return global_huge ? orders : 0;
+	if (!in_pf && shmem_file(vma->vm_file))
 		return shmem_allowable_huge_orders(file_inode(vma->vm_file),
-						   vma, vma->vm_pgoff, global_huge);
-	}
+						   vma, vma->vm_pgoff,
+						   !enforce_sysfs);
 
 	if (!vma_is_anonymous(vma)) {
 		/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 1445dcd39b6f..e9610e2b2a43 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -581,7 +581,7 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 	}
 }
 
-bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 		   bool shmem_huge_force, struct mm_struct *mm,
 		   unsigned long vm_flags)
 {
@@ -1625,27 +1625,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
-				bool global_huge)
+				bool shmem_huge_force)
 {
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
-	unsigned long vm_flags = vma->vm_flags;
+	unsigned long vm_flags = vma ? vma->vm_flags : 0;
+	struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
 	/*
 	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
 	 * are enabled for this vma.
 	 */
 	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+	bool global_huge;
 	loff_t i_size;
 	int order;
 
-	if ((vm_flags & VM_NOHUGEPAGE) ||
-	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+	if (vma && ((vm_flags & VM_NOHUGEPAGE) ||
+	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
 		return 0;
 
 	/* If the hardware/firmware marked hugepage support disabled. */
 	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
 		return 0;
 
+	global_huge = shmem_huge_global_enabled(inode, index, shmem_huge_force,
+						fault_mm, vm_flags);
+	if (!vma || !vma_is_anon_shmem(vma)) {
+		/*
+		 * For tmpfs, we now only support PMD sized THP if huge page
+		 * is enabled, otherwise fallback to order 0.
+		 */
+		return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
+	}
+
 	/*
 	 * Following the 'deny' semantics of the top level, force the huge
 	 * option off from all mounts.
@@ -2081,7 +2093,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	struct mm_struct *fault_mm;
 	struct folio *folio;
 	int error;
-	bool alloced, huge;
+	bool alloced;
 	unsigned long orders = 0;
 
 	if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
@@ -2154,14 +2166,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}
 
-	huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
-					 vma ? vma->vm_flags : 0);
-
-	/* Find hugepage orders that are allowed for anonymous shmem. */
-	if (vma && vma_is_anon_shmem(vma))
-		orders = shmem_allowable_huge_orders(inode, vma, index, huge);
-	else if (huge)
-		orders = BIT(HPAGE_PMD_ORDER);
+	/* Find hugepage orders that are allowed for anonymous shmem and tmpfs. */
+	orders = shmem_allowable_huge_orders(inode, vma, index, false);
 
 	if (orders > 0) {
 		gfp_t huge_gfp;
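After this series, the per-fault order selection reduces to a single call into
shmem_allowable_huge_orders(). A minimal userspace sketch of the resulting
policy split between tmpfs and anonymous shmem (all names and values here are
illustrative stand-ins, not the kernel code):

/*
 * Sketch of the order-selection policy: tmpfs mappings get either the
 * PMD order or nothing, while anonymous shmem honours the per-size
 * enabled mask from sysfs.
 */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER 9
#define BIT(n) (1UL << (n))

static unsigned long allowable_orders(bool is_anon_shmem, bool global_huge,
				      unsigned long enabled_mask)
{
	if (!is_anon_shmem)
		/* tmpfs: PMD-sized THP only when the global knob allows it */
		return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;

	/* anonymous shmem: per-size sysfs configuration decides */
	return enabled_mask;
}

int main(void)
{
	printf("tmpfs, huge on : %#lx\n", allowable_orders(false, true, 0));
	printf("tmpfs, huge off: %#lx\n", allowable_orders(false, false, 0));
	printf("anon shmem     : %#lx\n",
	       allowable_orders(true, true, BIT(4) | BIT(9)));
	return 0;
}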