From patchwork Thu Aug 11 23:16:32 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941796
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 01/12] arch: mm: rename FORCE_MAX_ZONEORDER to ARCH_FORCE_MAX_ORDER
Date: Thu, 11 Aug 2022 19:16:32 -0400
Message-Id: <20220811231643.1012912-2-zi.yan@sent.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>
References: <20220811231643.1012912-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0

From: Zi Yan

This Kconfig option is used by individual architectures to set their
desired MAX_ORDER. Rename it to reflect its actual use.
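For context, the rename only changes how the value is supplied, not how it
is consumed; per the include/linux/mmzone.h hunk at the end of this patch,
the consumer side reduces to the sketch below (illustration only, not an
additional change):

	#ifndef CONFIG_ARCH_FORCE_MAX_ORDER
	#define MAX_ORDER 11				/* generic default */
	#else
	#define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER	/* arch-selected value */
	#endif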
Miller" Cc: Chris Zankel Cc: linux-snps-arc@lists.infradead.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-oxnas@groups.io Cc: linux-csky@vger.kernel.org Cc: linux-ia64@vger.kernel.org Cc: linux-m68k@lists.linux-m68k.org Cc: linux-mips@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-sh@vger.kernel.org Cc: sparclinux@vger.kernel.org Cc: linux-xtensa@linux-xtensa.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org Acked-by: Mike Rapoport --- arch/arc/Kconfig | 2 +- arch/arm/Kconfig | 2 +- arch/arm/configs/imx_v6_v7_defconfig | 2 +- arch/arm/configs/milbeaut_m10v_defconfig | 2 +- arch/arm/configs/oxnas_v6_defconfig | 2 +- arch/arm/configs/sama7_defconfig | 2 +- arch/arm64/Kconfig | 2 +- arch/csky/Kconfig | 2 +- arch/ia64/Kconfig | 2 +- arch/ia64/include/asm/sparsemem.h | 6 +++--- arch/m68k/Kconfig.cpu | 2 +- arch/mips/Kconfig | 2 +- arch/nios2/Kconfig | 2 +- arch/powerpc/Kconfig | 2 +- arch/powerpc/configs/85xx/ge_imp3a_defconfig | 2 +- arch/powerpc/configs/fsl-emb-nonhw.config | 2 +- arch/sh/configs/ecovec24_defconfig | 2 +- arch/sh/mm/Kconfig | 2 +- arch/sparc/Kconfig | 2 +- arch/xtensa/Kconfig | 2 +- include/linux/mmzone.h | 4 ++-- 21 files changed, 24 insertions(+), 24 deletions(-) diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 9e3653253ef2..d9a13ccf89a3 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -554,7 +554,7 @@ config ARC_BUILTIN_DTB_NAME endmenu # "ARC Architecture Configuration" -config FORCE_MAX_ZONEORDER +config ARCH_FORCE_MAX_ORDER int "Maximum zone order" default "12" if ARC_HUGEPAGE_16M default "11" diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 87badeae3181..e6c8ee56ac52 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1434,7 +1434,7 @@ config ARM_MODULE_PLTS Disabling this is usually safe for small single-platform configurations. If unsure, say y. 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	default "12" if SOC_AM33XX
 	default "9" if SA1111
diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
index 01012537a9b9..fb283059daa0 100644
--- a/arch/arm/configs/imx_v6_v7_defconfig
+++ b/arch/arm/configs/imx_v6_v7_defconfig
@@ -31,7 +31,7 @@ CONFIG_SOC_VF610=y
 CONFIG_SMP=y
 CONFIG_ARM_PSCI=y
 CONFIG_HIGHMEM=y
-CONFIG_FORCE_MAX_ZONEORDER=14
+CONFIG_ARCH_FORCE_MAX_ORDER=14
 CONFIG_CMDLINE="noinitrd console=ttymxc0,115200"
 CONFIG_KEXEC=y
 CONFIG_CPU_FREQ=y
diff --git a/arch/arm/configs/milbeaut_m10v_defconfig b/arch/arm/configs/milbeaut_m10v_defconfig
index 58810e98de3d..8620061e19a8 100644
--- a/arch/arm/configs/milbeaut_m10v_defconfig
+++ b/arch/arm/configs/milbeaut_m10v_defconfig
@@ -26,7 +26,7 @@ CONFIG_THUMB2_KERNEL=y
 # CONFIG_THUMB2_AVOID_R_ARM_THM_JUMP11 is not set
 # CONFIG_ARM_PATCH_IDIV is not set
 CONFIG_HIGHMEM=y
-CONFIG_FORCE_MAX_ZONEORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=12
 CONFIG_SECCOMP=y
 CONFIG_KEXEC=y
 CONFIG_EFI=y
diff --git a/arch/arm/configs/oxnas_v6_defconfig b/arch/arm/configs/oxnas_v6_defconfig
index 600f78b363dd..5c163a9d1429 100644
--- a/arch/arm/configs/oxnas_v6_defconfig
+++ b/arch/arm/configs/oxnas_v6_defconfig
@@ -12,7 +12,7 @@ CONFIG_ARCH_OXNAS=y
 CONFIG_MACH_OX820=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=16
-CONFIG_FORCE_MAX_ZONEORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=12
 CONFIG_SECCOMP=y
 CONFIG_ARM_APPENDED_DTB=y
 CONFIG_ARM_ATAG_DTB_COMPAT=y
diff --git a/arch/arm/configs/sama7_defconfig b/arch/arm/configs/sama7_defconfig
index 0384030d8b25..8b2cf6ddd568 100644
--- a/arch/arm/configs/sama7_defconfig
+++ b/arch/arm/configs/sama7_defconfig
@@ -19,7 +19,7 @@ CONFIG_ATMEL_CLOCKSOURCE_TCB=y
 # CONFIG_CACHE_L2X0 is not set
 # CONFIG_ARM_PATCH_IDIV is not set
 # CONFIG_CPU_SW_DOMAIN_PAN is not set
-CONFIG_FORCE_MAX_ZONEORDER=15
+CONFIG_ARCH_FORCE_MAX_ORDER=15
 CONFIG_UACCESS_WITH_MEMCPY=y
 # CONFIG_ATAGS is not set
 CONFIG_CMDLINE="console=ttyS0,115200 earlyprintk ignore_loglevel"
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 571cc234d0b3..c6fcd8746f60 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1401,7 +1401,7 @@ config XEN
 	help
 	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int
 	default "14" if ARM64_64K_PAGES
 	default "12" if ARM64_16K_PAGES
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 3cbc2dc62baf..adee6ab36862 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -332,7 +332,7 @@ config HIGHMEM
 	select KMAP_LOCAL
 	default y
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	default "11"
 
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 26ac8ea15a9e..c6e06cdc738f 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -200,7 +200,7 @@ config IA64_CYCLONE
 	  Say Y here to enable support for IBM EXA Cyclone time source.
 	  If you're unsure, answer N.
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "MAX_ORDER (11 - 17)" if !HUGETLB_PAGE
 	range 11 17 if !HUGETLB_PAGE
 	default "17" if HUGETLB_PAGE
diff --git a/arch/ia64/include/asm/sparsemem.h b/arch/ia64/include/asm/sparsemem.h
index 42ed5248fae9..84e8ce387b69 100644
--- a/arch/ia64/include/asm/sparsemem.h
+++ b/arch/ia64/include/asm/sparsemem.h
@@ -11,10 +11,10 @@
 #define SECTION_SIZE_BITS	(30)
 #define MAX_PHYSMEM_BITS	(50)
 
-#ifdef CONFIG_FORCE_MAX_ZONEORDER
-#if ((CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
+#ifdef CONFIG_ARCH_FORCE_MAX_ORDER
+#if ((CONFIG_ARCH_FORCE_MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
 #undef SECTION_SIZE_BITS
-#define SECTION_SIZE_BITS (CONFIG_FORCE_MAX_ZONEORDER - 1 + PAGE_SHIFT)
+#define SECTION_SIZE_BITS (CONFIG_ARCH_FORCE_MAX_ORDER - 1 + PAGE_SHIFT)
 #endif
 #endif
diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
index e0e9e31339c1..3b2f39508524 100644
--- a/arch/m68k/Kconfig.cpu
+++ b/arch/m68k/Kconfig.cpu
@@ -399,7 +399,7 @@ config SINGLE_MEMORY_CHUNK
 	  order" to save memory that could be wasted for unused memory map.
 	  Say N if not sure.
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order" if ADVANCED
 	depends on !SINGLE_MEMORY_CHUNK
 	default "11"
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index ec21f8999249..70d28976a40d 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2140,7 +2140,7 @@ config PAGE_SIZE_64KB
 
 endchoice
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	range 14 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB
 	default "14" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 4167f1eb4cd8..a582f72104f3 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -44,7 +44,7 @@ menu "Kernel features"
 
 source "kernel/Kconfig.hz"
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	range 9 20
 	default "11"
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 4c466acdc70d..39d71d7701bd 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -845,7 +845,7 @@ config DATA_SHIFT
 	  in that case. If PIN_TLB is selected, it must be aligned to 8M as
 	  8M pages will be pinned.
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	range 8 9 if PPC64 && PPC_64K_PAGES
 	default "9" if PPC64 && PPC_64K_PAGES
diff --git a/arch/powerpc/configs/85xx/ge_imp3a_defconfig b/arch/powerpc/configs/85xx/ge_imp3a_defconfig
index f29c166998af..e7672c186325 100644
--- a/arch/powerpc/configs/85xx/ge_imp3a_defconfig
+++ b/arch/powerpc/configs/85xx/ge_imp3a_defconfig
@@ -30,7 +30,7 @@ CONFIG_PREEMPT=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=m
 CONFIG_MATH_EMULATION=y
-CONFIG_FORCE_MAX_ZONEORDER=17
+CONFIG_ARCH_FORCE_MAX_ORDER=17
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCI_MSI=y
diff --git a/arch/powerpc/configs/fsl-emb-nonhw.config b/arch/powerpc/configs/fsl-emb-nonhw.config
index f14c6dbd7346..ab8a8c4530d9 100644
--- a/arch/powerpc/configs/fsl-emb-nonhw.config
+++ b/arch/powerpc/configs/fsl-emb-nonhw.config
@@ -41,7 +41,7 @@ CONFIG_FIXED_PHY=y
 CONFIG_FONT_8x16=y
 CONFIG_FONT_8x8=y
 CONFIG_FONTS=y
-CONFIG_FORCE_MAX_ZONEORDER=13
+CONFIG_ARCH_FORCE_MAX_ORDER=13
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_FRAME_WARN=1024
 CONFIG_FTL=y
diff --git a/arch/sh/configs/ecovec24_defconfig b/arch/sh/configs/ecovec24_defconfig
index e699e2e04128..b52e14ccb450 100644
--- a/arch/sh/configs/ecovec24_defconfig
+++ b/arch/sh/configs/ecovec24_defconfig
@@ -8,7 +8,7 @@ CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_CPU_SUBTYPE_SH7724=y
-CONFIG_FORCE_MAX_ZONEORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=12
 CONFIG_MEMORY_SIZE=0x10000000
 CONFIG_FLATMEM_MANUAL=y
 CONFIG_SH_ECOVEC=y
diff --git a/arch/sh/mm/Kconfig b/arch/sh/mm/Kconfig
index ba569cfb4368..411fdc0901f7 100644
--- a/arch/sh/mm/Kconfig
+++ b/arch/sh/mm/Kconfig
@@ -18,7 +18,7 @@ config PAGE_OFFSET
 	default "0x80000000" if MMU
 	default "0x00000000"
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	range 9 64 if PAGE_SIZE_16KB
 	default "9" if PAGE_SIZE_16KB
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 1c852bb530ec..4d3d1af90d52 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -269,7 +269,7 @@ config ARCH_SPARSEMEM_ENABLE
 
 config ARCH_SPARSEMEM_DEFAULT
 	def_bool y if SPARC64
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	default "13"
 	help
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 12ac277282ba..bcb0c5d2abc2 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -771,7 +771,7 @@ config HIGHMEM
 
 	  If unsure, say Y.
 
-config FORCE_MAX_ZONEORDER
+config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
 	default "11"
 	help
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8f571dc7c524..ca285ed3c6e0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -24,10 +24,10 @@
 #include
 
 /* Free memory management - zoned buddy allocator.
  */
-#ifndef CONFIG_FORCE_MAX_ZONEORDER
+#ifndef CONFIG_ARCH_FORCE_MAX_ORDER
 #define MAX_ORDER 11
 #else
-#define MAX_ORDER CONFIG_FORCE_MAX_ZONEORDER
+#define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
 #endif
 #define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))

From patchwork Thu Aug 11 23:16:33 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941800
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 02/12] mm: rectify MAX_ORDER semantics to be the largest page order from buddy allocator
Date: Thu, 11 Aug 2022 19:16:33 -0400
Message-Id: <20220811231643.1012912-3-zi.yan@sent.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>
References: <20220811231643.1012912-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0

From: Zi Yan

MAX_ORDER used to denote the largest page order plus one, which was
confusing and caused several off-by-one errors in the code. Fix it by
setting MAX_ORDER to the largest page order the buddy allocator provides,
as its name says. Add a warning to checkpatch.pl about the semantics
change.
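In practice the new semantics mean that per-order arrays gain one slot and
loop bounds and "too large" checks become inclusive. The stand-alone sketch
below (illustration only, with stand-in names such as nr_free and req_order;
it mirrors the mmzone.h, page_alloc and driver hunks that follow) contrasts
the two conventions:

	#include <stdio.h>

	#define MAX_ORDER 10	/* new meaning: the largest buddy order itself */

	/* was sized MAX_ORDER under the old meaning, like free_area[] */
	static unsigned long nr_free[MAX_ORDER + 1];

	int main(void)
	{
		unsigned long total = 0;
		int order, req_order = MAX_ORDER + 1;

		/* old convention: for (order = 0; order < MAX_ORDER; order++) */
		for (order = 0; order <= MAX_ORDER; order++)
			total += nr_free[order];

		/* checks flip from (order >= MAX_ORDER) to (order > MAX_ORDER) */
		if (req_order > MAX_ORDER)
			printf("order %d is beyond the buddy allocator\n", req_order);

		printf("orders scanned: %d, free blocks: %lu\n", MAX_ORDER + 1, total);
		return 0;
	}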
Signed-off-by: Zi Yan --- .../admin-guide/kdump/vmcoreinfo.rst | 4 +- .../admin-guide/kernel-parameters.txt | 4 +- arch/arc/Kconfig | 4 +- arch/arm/Kconfig | 12 +++--- arch/arm/configs/imx_v6_v7_defconfig | 2 +- arch/arm/configs/milbeaut_m10v_defconfig | 2 +- arch/arm/configs/oxnas_v6_defconfig | 2 +- arch/arm/configs/sama7_defconfig | 2 +- arch/arm64/Kconfig | 16 ++++---- arch/arm64/include/asm/sparsemem.h | 2 +- arch/arm64/kvm/hyp/include/nvhe/gfp.h | 2 +- arch/csky/Kconfig | 2 +- arch/ia64/Kconfig | 8 ++-- arch/ia64/include/asm/sparsemem.h | 4 +- arch/ia64/mm/hugetlbpage.c | 2 +- arch/m68k/Kconfig.cpu | 8 ++-- arch/mips/Kconfig | 22 +++++----- arch/nios2/Kconfig | 10 ++--- arch/powerpc/Kconfig | 30 +++++++------- arch/powerpc/configs/85xx/ge_imp3a_defconfig | 2 +- arch/powerpc/configs/fsl-emb-nonhw.config | 2 +- arch/powerpc/mm/book3s64/iommu_api.c | 2 +- arch/powerpc/mm/hugetlbpage.c | 2 +- arch/powerpc/platforms/powernv/pci-ioda.c | 2 +- arch/sh/configs/ecovec24_defconfig | 2 +- arch/sh/mm/Kconfig | 20 +++++----- arch/sparc/Kconfig | 8 ++-- arch/sparc/kernel/pci_sun4v.c | 2 +- arch/sparc/kernel/traps_64.c | 2 +- arch/xtensa/Kconfig | 8 ++-- drivers/base/regmap/regmap-debugfs.c | 8 ++-- drivers/crypto/hisilicon/sgl.c | 6 +-- .../gpu/drm/i915/gem/selftests/huge_pages.c | 2 +- drivers/gpu/drm/ttm/ttm_pool.c | 22 +++++----- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 +- drivers/irqchip/irq-gic-v3-its.c | 4 +- drivers/md/dm-bufio.c | 2 +- drivers/misc/genwqe/card_utils.c | 2 +- drivers/net/ethernet/ibm/ibmvnic.h | 2 +- drivers/video/fbdev/hyperv_fb.c | 6 +-- drivers/virtio/virtio_balloon.c | 2 +- drivers/virtio/virtio_mem.c | 8 ++-- fs/ramfs/file-nommu.c | 2 +- include/drm/ttm/ttm_pool.h | 2 +- include/linux/hugetlb.h | 2 +- include/linux/mmzone.h | 10 ++--- include/linux/pageblock-flags.h | 4 +- include/linux/slab.h | 8 ++-- kernel/crash_core.c | 2 +- kernel/dma/pool.c | 6 +-- mm/Kconfig | 6 +-- mm/compaction.c | 8 ++-- mm/debug_vm_pgtable.c | 4 +- mm/huge_memory.c | 2 +- mm/hugetlb.c | 4 +- mm/memblock.c | 2 +- mm/memory_hotplug.c | 4 +- mm/page_alloc.c | 40 +++++++++---------- mm/page_isolation.c | 14 +++---- mm/page_owner.c | 6 +-- mm/page_reporting.c | 4 +- mm/shuffle.h | 2 +- mm/slab.c | 2 +- mm/slub.c | 4 +- mm/vmstat.c | 14 +++---- net/smc/smc_ib.c | 2 +- scripts/checkpatch.pl | 8 ++++ security/integrity/ima/ima_crypto.c | 2 +- tools/testing/memblock/linux/mmzone.h | 6 +-- 69 files changed, 208 insertions(+), 218 deletions(-) diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst index 8419019b6a88..c572b5230fe0 100644 --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst @@ -172,7 +172,7 @@ variables. Offset of the free_list's member. This value is used to compute the number of free pages. -Each zone has a free_area structure array called free_area[MAX_ORDER]. +Each zone has a free_area structure array called free_area[MAX_ORDER + 1]. The free_list represents a linked list of free page blocks. (list_head, next|prev) @@ -189,7 +189,7 @@ Offsets of the vmap_area's members. They carry vmalloc-specific information. Makedumpfile gets the start address of the vmalloc region from this. -(zone.free_area, MAX_ORDER) +(zone.free_area, MAX_ORDER + 1) --------------------------- Free areas descriptor. 
User-space tools use this value to iterate the diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index db5de5f0b9d3..ff33971e1630 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -928,7 +928,7 @@ buddy allocator. Bigger value increase the probability of catching random memory corruption, but reduce the amount of memory for normal system use. The maximum - possible value is MAX_ORDER/2. Setting this parameter + possible value is (MAX_ORDER + 1)/2. Setting this parameter to 1 or 2 should be enough to identify most random memory corruption problems caused by bugs in kernel or driver code when a CPU writes to (or reads from) a @@ -3899,7 +3899,7 @@ [KNL] Minimal page reporting order Format: Adjust the minimal page reporting order. The page - reporting is disabled when it exceeds (MAX_ORDER-1). + reporting is disabled when it exceeds MAX_ORDER. panic= [KNL] Kernel behaviour on panic: delay timeout > 0: seconds before rebooting diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index d9a13ccf89a3..ab6d701365bb 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -556,7 +556,7 @@ endmenu # "ARC Architecture Configuration" config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - default "12" if ARC_HUGEPAGE_16M - default "11" + default "11" if ARC_HUGEPAGE_16M + default "10" source "kernel/power/Kconfig" diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index e6c8ee56ac52..c8f2e46cc8c4 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1436,19 +1436,17 @@ config ARM_MODULE_PLTS config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - default "12" if SOC_AM33XX - default "9" if SA1111 - default "11" + default "11" if SOC_AM33XX + default "8" if SA1111 + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. 
config ALIGNMENT_TRAP def_bool CPU_CP15_MMU diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig index fb283059daa0..eeb14499479d 100644 --- a/arch/arm/configs/imx_v6_v7_defconfig +++ b/arch/arm/configs/imx_v6_v7_defconfig @@ -31,7 +31,7 @@ CONFIG_SOC_VF610=y CONFIG_SMP=y CONFIG_ARM_PSCI=y CONFIG_HIGHMEM=y -CONFIG_ARCH_FORCE_MAX_ORDER=14 +CONFIG_ARCH_FORCE_MAX_ORDER=13 CONFIG_CMDLINE="noinitrd console=ttymxc0,115200" CONFIG_KEXEC=y CONFIG_CPU_FREQ=y diff --git a/arch/arm/configs/milbeaut_m10v_defconfig b/arch/arm/configs/milbeaut_m10v_defconfig index 8620061e19a8..22732f19e79b 100644 --- a/arch/arm/configs/milbeaut_m10v_defconfig +++ b/arch/arm/configs/milbeaut_m10v_defconfig @@ -26,7 +26,7 @@ CONFIG_THUMB2_KERNEL=y # CONFIG_THUMB2_AVOID_R_ARM_THM_JUMP11 is not set # CONFIG_ARM_PATCH_IDIV is not set CONFIG_HIGHMEM=y -CONFIG_ARCH_FORCE_MAX_ORDER=12 +CONFIG_ARCH_FORCE_MAX_ORDER=11 CONFIG_SECCOMP=y CONFIG_KEXEC=y CONFIG_EFI=y diff --git a/arch/arm/configs/oxnas_v6_defconfig b/arch/arm/configs/oxnas_v6_defconfig index 5c163a9d1429..7e43aa355467 100644 --- a/arch/arm/configs/oxnas_v6_defconfig +++ b/arch/arm/configs/oxnas_v6_defconfig @@ -12,7 +12,7 @@ CONFIG_ARCH_OXNAS=y CONFIG_MACH_OX820=y CONFIG_SMP=y CONFIG_NR_CPUS=16 -CONFIG_ARCH_FORCE_MAX_ORDER=12 +CONFIG_ARCH_FORCE_MAX_ORDER=11 CONFIG_SECCOMP=y CONFIG_ARM_APPENDED_DTB=y CONFIG_ARM_ATAG_DTB_COMPAT=y diff --git a/arch/arm/configs/sama7_defconfig b/arch/arm/configs/sama7_defconfig index 8b2cf6ddd568..c200de3947e3 100644 --- a/arch/arm/configs/sama7_defconfig +++ b/arch/arm/configs/sama7_defconfig @@ -19,7 +19,7 @@ CONFIG_ATMEL_CLOCKSOURCE_TCB=y # CONFIG_CACHE_L2X0 is not set # CONFIG_ARM_PATCH_IDIV is not set # CONFIG_CPU_SW_DOMAIN_PAN is not set -CONFIG_ARCH_FORCE_MAX_ORDER=15 +CONFIG_ARCH_FORCE_MAX_ORDER=14 CONFIG_UACCESS_WITH_MEMCPY=y # CONFIG_ATAGS is not set CONFIG_CMDLINE="console=ttyS0,115200 earlyprintk ignore_loglevel" diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index c6fcd8746f60..1afcfc9d2dc0 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1403,25 +1403,23 @@ config XEN config ARCH_FORCE_MAX_ORDER int - default "14" if ARM64_64K_PAGES - default "12" if ARM64_16K_PAGES - default "11" + default "13" if ARM64_64K_PAGES + default "11" if ARM64_16K_PAGES + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. We make sure that we can allocate upto a HugePage size for each configuration. Hence we have : - MAX_ORDER = (PMD_SHIFT - PAGE_SHIFT) + 1 => PAGE_SHIFT - 2 + MAX_ORDER = PMD_SHIFT - PAGE_SHIFT = PAGE_SHIFT - 3 - However for 4K, we choose a higher default value, 11 as opposed to 10, giving us + However for 4K, we choose a higher default value, 10 as opposed to 9, giving us 4M allocations matching the default size used by generic code. 
config UNMAP_KERNEL_AT_EL0 diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h index 4b73463423c3..5f5437621029 100644 --- a/arch/arm64/include/asm/sparsemem.h +++ b/arch/arm64/include/asm/sparsemem.h @@ -10,7 +10,7 @@ /* * Section size must be at least 512MB for 64K base * page size config. Otherwise it will be less than - * (MAX_ORDER - 1) and the build process will fail. + * MAX_ORDER and the build process will fail. */ #ifdef CONFIG_ARM64_64K_PAGES #define SECTION_SIZE_BITS 29 diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h index 0a048dc06a7d..fe5472a184a3 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h +++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h @@ -16,7 +16,7 @@ struct hyp_pool { * API at EL2. */ hyp_spinlock_t lock; - struct list_head free_area[MAX_ORDER]; + struct list_head free_area[MAX_ORDER + 1]; phys_addr_t range_start; phys_addr_t range_end; unsigned short max_order; diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig index adee6ab36862..a35fc882e97e 100644 --- a/arch/csky/Kconfig +++ b/arch/csky/Kconfig @@ -334,7 +334,7 @@ config HIGHMEM config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - default "11" + default "10" config DRAM_BASE hex "DRAM start addr (the same with memory-section in dts)" diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig index c6e06cdc738f..d85f6fbd0746 100644 --- a/arch/ia64/Kconfig +++ b/arch/ia64/Kconfig @@ -201,10 +201,10 @@ config IA64_CYCLONE If you're unsure, answer N. config ARCH_FORCE_MAX_ORDER - int "MAX_ORDER (11 - 17)" if !HUGETLB_PAGE - range 11 17 if !HUGETLB_PAGE - default "17" if HUGETLB_PAGE - default "11" + int "MAX_ORDER (10 - 16)" if !HUGETLB_PAGE + range 10 16 if !HUGETLB_PAGE + default "16" if HUGETLB_PAGE + default "10" config SMP bool "Symmetric multi-processing support" diff --git a/arch/ia64/include/asm/sparsemem.h b/arch/ia64/include/asm/sparsemem.h index 84e8ce387b69..04f03a56c166 100644 --- a/arch/ia64/include/asm/sparsemem.h +++ b/arch/ia64/include/asm/sparsemem.h @@ -12,9 +12,9 @@ #define SECTION_SIZE_BITS (30) #define MAX_PHYSMEM_BITS (50) #ifdef CONFIG_ARCH_FORCE_MAX_ORDER -#if ((CONFIG_ARCH_FORCE_MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS) +#if ((CONFIG_ARCH_FORCE_MAX_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS) #undef SECTION_SIZE_BITS -#define SECTION_SIZE_BITS (CONFIG_ARCH_FORCE_MAX_ORDER - 1 + PAGE_SHIFT) +#define SECTION_SIZE_BITS (CONFIG_ARCH_FORCE_MAX_ORDER + PAGE_SHIFT) #endif #endif diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c index f993cb36c062..87cc2e8908b4 100644 --- a/arch/ia64/mm/hugetlbpage.c +++ b/arch/ia64/mm/hugetlbpage.c @@ -185,7 +185,7 @@ static int __init hugetlb_setup_sz(char *str) size = memparse(str, &str); if (*str || !is_power_of_2(size) || !(tr_pages & size) || size <= PAGE_SIZE || - size >= (1UL << PAGE_SHIFT << MAX_ORDER)) { + size > (1UL << PAGE_SHIFT << MAX_ORDER)) { printk(KERN_WARNING "Invalid huge page size specified\n"); return 1; } diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu index 3b2f39508524..d3832e1ca7df 100644 --- a/arch/m68k/Kconfig.cpu +++ b/arch/m68k/Kconfig.cpu @@ -402,22 +402,20 @@ config SINGLE_MEMORY_CHUNK config ARCH_FORCE_MAX_ORDER int "Maximum zone order" if ADVANCED depends on !SINGLE_MEMORY_CHUNK - default "11" + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. 
This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. For systems that have holes in their physical address space this value also defines the minimal size of the hole that allows freeing unused memory map. - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. - config 060_WRITETHROUGH bool "Use write-through caching for 68060 supervisor accesses" depends on ADVANCED && M68060 diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index 70d28976a40d..37116c811e60 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -2142,24 +2142,22 @@ endchoice config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - range 14 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB - default "14" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB - range 13 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB - default "13" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB - range 12 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB - default "12" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB - range 0 64 - default "11" + range 13 63 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB + default "13" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB + range 12 63 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB + default "12" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB + range 11 63 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB + default "11" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB + range 0 63 + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. The page size is not necessarily 4KB. Keep this in mind when choosing a value for this option. diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig index a582f72104f3..0cccaf8b7fdf 100644 --- a/arch/nios2/Kconfig +++ b/arch/nios2/Kconfig @@ -46,18 +46,16 @@ source "kernel/Kconfig.hz" config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - range 9 20 - default "11" + range 8 19 + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. 
endmenu diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 39d71d7701bd..d052cf27883e 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -847,28 +847,26 @@ config DATA_SHIFT config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - range 8 9 if PPC64 && PPC_64K_PAGES - default "9" if PPC64 && PPC_64K_PAGES - range 13 13 if PPC64 && !PPC_64K_PAGES - default "13" if PPC64 && !PPC_64K_PAGES - range 9 64 if PPC32 && PPC_16K_PAGES - default "9" if PPC32 && PPC_16K_PAGES - range 7 64 if PPC32 && PPC_64K_PAGES - default "7" if PPC32 && PPC_64K_PAGES - range 5 64 if PPC32 && PPC_256K_PAGES - default "5" if PPC32 && PPC_256K_PAGES - range 11 64 - default "11" + range 7 8 if PPC64 && PPC_64K_PAGES + default "8" if PPC64 && PPC_64K_PAGES + range 12 12 if PPC64 && !PPC_64K_PAGES + default "12" if PPC64 && !PPC_64K_PAGES + range 8 63 if PPC32 && PPC_16K_PAGES + default "8" if PPC32 && PPC_16K_PAGES + range 6 63 if PPC32 && PPC_64K_PAGES + default "6" if PPC32 && PPC_64K_PAGES + range 4 63 if PPC32 && PPC_256K_PAGES + default "4" if PPC32 && PPC_256K_PAGES + range 10 63 + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 11 means that the largest free memory + block is 2^10 pages. The page size is not necessarily 4KB. For example, on 64-bit systems, 64KB pages can be enabled via CONFIG_PPC_64K_PAGES. 
Keep diff --git a/arch/powerpc/configs/85xx/ge_imp3a_defconfig b/arch/powerpc/configs/85xx/ge_imp3a_defconfig index e7672c186325..b8be8280a200 100644 --- a/arch/powerpc/configs/85xx/ge_imp3a_defconfig +++ b/arch/powerpc/configs/85xx/ge_imp3a_defconfig @@ -30,7 +30,7 @@ CONFIG_PREEMPT=y # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set CONFIG_BINFMT_MISC=m CONFIG_MATH_EMULATION=y -CONFIG_ARCH_FORCE_MAX_ORDER=17 +CONFIG_ARCH_FORCE_MAX_ORDER=16 CONFIG_PCI=y CONFIG_PCIEPORTBUS=y CONFIG_PCI_MSI=y diff --git a/arch/powerpc/configs/fsl-emb-nonhw.config b/arch/powerpc/configs/fsl-emb-nonhw.config index ab8a8c4530d9..3009b0efaf34 100644 --- a/arch/powerpc/configs/fsl-emb-nonhw.config +++ b/arch/powerpc/configs/fsl-emb-nonhw.config @@ -41,7 +41,7 @@ CONFIG_FIXED_PHY=y CONFIG_FONT_8x16=y CONFIG_FONT_8x8=y CONFIG_FONTS=y -CONFIG_ARCH_FORCE_MAX_ORDER=13 +CONFIG_ARCH_FORCE_MAX_ORDER=12 CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAME_WARN=1024 CONFIG_FTL=y diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c index 7fcfba162e0d..81d7185e2ae8 100644 --- a/arch/powerpc/mm/book3s64/iommu_api.c +++ b/arch/powerpc/mm/book3s64/iommu_api.c @@ -97,7 +97,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua, } mmap_read_lock(mm); - chunk = (1UL << (PAGE_SHIFT + MAX_ORDER - 1)) / + chunk = (1UL << (PAGE_SHIFT + MAX_ORDER)) / sizeof(struct vm_area_struct *); chunk = min(chunk, entries); for (entry = 0; entry < entries; entry += chunk) { diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c index bc84a594ca62..8d63934783dc 100644 --- a/arch/powerpc/mm/hugetlbpage.c +++ b/arch/powerpc/mm/hugetlbpage.c @@ -652,7 +652,7 @@ void __init gigantic_hugetlb_cma_reserve(void) order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT; if (order) { - VM_WARN_ON(order < MAX_ORDER); + VM_WARN_ON(order <= MAX_ORDER); hugetlb_cma_reserve(order); } } diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c index 9de9b2fb163d..8e29a57924ef 100644 --- a/arch/powerpc/platforms/powernv/pci-ioda.c +++ b/arch/powerpc/platforms/powernv/pci-ioda.c @@ -1740,7 +1740,7 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe) * DMA window can be larger than available memory, which will * cause errors later. */ - const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_ORDER - 1); + const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_ORDER); /* * We create the default window as big as we can. 
The constraint is diff --git a/arch/sh/configs/ecovec24_defconfig b/arch/sh/configs/ecovec24_defconfig index b52e14ccb450..4d655e8d4d74 100644 --- a/arch/sh/configs/ecovec24_defconfig +++ b/arch/sh/configs/ecovec24_defconfig @@ -8,7 +8,7 @@ CONFIG_MODULES=y CONFIG_MODULE_UNLOAD=y # CONFIG_BLK_DEV_BSG is not set CONFIG_CPU_SUBTYPE_SH7724=y -CONFIG_ARCH_FORCE_MAX_ORDER=12 +CONFIG_ARCH_FORCE_MAX_ORDER=11 CONFIG_MEMORY_SIZE=0x10000000 CONFIG_FLATMEM_MANUAL=y CONFIG_SH_ECOVEC=y diff --git a/arch/sh/mm/Kconfig b/arch/sh/mm/Kconfig index 411fdc0901f7..e60e77c6edca 100644 --- a/arch/sh/mm/Kconfig +++ b/arch/sh/mm/Kconfig @@ -20,23 +20,21 @@ config PAGE_OFFSET config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - range 9 64 if PAGE_SIZE_16KB - default "9" if PAGE_SIZE_16KB - range 7 64 if PAGE_SIZE_64KB - default "7" if PAGE_SIZE_64KB - range 11 64 - default "14" if !MMU - default "11" + range 8 63 if PAGE_SIZE_16KB + default "8" if PAGE_SIZE_16KB + range 6 63 if PAGE_SIZE_64KB + default "6" if PAGE_SIZE_64KB + range 10 63 + default "13" if !MMU + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. The page size is not necessarily 4KB. Keep this in mind when choosing a value for this option. diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 4d3d1af90d52..099d0b31ea69 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -271,17 +271,15 @@ config ARCH_SPARSEMEM_DEFAULT config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - default "13" + default "12" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 13 means that the largest free memory block is 2^12 pages. + increase this value. A value of 12 means that the largest free memory + block is 2^12 pages. if SPARC64 source "kernel/power/Kconfig" diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c index 384480971805..7d91ca6aa675 100644 --- a/arch/sparc/kernel/pci_sun4v.c +++ b/arch/sparc/kernel/pci_sun4v.c @@ -193,7 +193,7 @@ static void *dma_4v_alloc_coherent(struct device *dev, size_t size, size = IO_PAGE_ALIGN(size); order = get_order(size); - if (unlikely(order >= MAX_ORDER)) + if (unlikely(order > MAX_ORDER)) return NULL; npages = size >> IO_PAGE_SHIFT; diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c index 5b4de4a89dec..08ffd17d5ec3 100644 --- a/arch/sparc/kernel/traps_64.c +++ b/arch/sparc/kernel/traps_64.c @@ -897,7 +897,7 @@ void __init cheetah_ecache_flush_init(void) /* Now allocate error trap reporting scoreboard. 
*/ sz = NR_CPUS * (2 * sizeof(struct cheetah_err_info)); - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { if ((PAGE_SIZE << order) >= sz) break; } diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig index bcb0c5d2abc2..2d1d91718263 100644 --- a/arch/xtensa/Kconfig +++ b/arch/xtensa/Kconfig @@ -773,17 +773,15 @@ config HIGHMEM config ARCH_FORCE_MAX_ORDER int "Maximum zone order" - default "11" + default "10" help The kernel memory allocator divides physically contiguous memory blocks into "zones", where each zone is a power of two number of pages. This option selects the largest power of two that the kernel keeps in the memory allocator. If you need to allocate very large blocks of physically contiguous memory, then you may need to - increase this value. - - This config option is actually maximum order plus one. For example, - a value of 11 means that the largest free memory block is 2^10 pages. + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. endmenu diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c index 817eda2075aa..c491fabe3617 100644 --- a/drivers/base/regmap/regmap-debugfs.c +++ b/drivers/base/regmap/regmap-debugfs.c @@ -226,8 +226,8 @@ static ssize_t regmap_read_debugfs(struct regmap *map, unsigned int from, if (*ppos < 0 || !count) return -EINVAL; - if (count > (PAGE_SIZE << (MAX_ORDER - 1))) - count = PAGE_SIZE << (MAX_ORDER - 1); + if (count > (PAGE_SIZE << MAX_ORDER)) + count = PAGE_SIZE << MAX_ORDER; buf = kmalloc(count, GFP_KERNEL); if (!buf) @@ -373,8 +373,8 @@ static ssize_t regmap_reg_ranges_read_file(struct file *file, if (*ppos < 0 || !count) return -EINVAL; - if (count > (PAGE_SIZE << (MAX_ORDER - 1))) - count = PAGE_SIZE << (MAX_ORDER - 1); + if (count > (PAGE_SIZE << MAX_ORDER)) + count = PAGE_SIZE << MAX_ORDER; buf = kmalloc(count, GFP_KERNEL); if (!buf) diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c index 2b6f2281cfd6..f30cf96b0a41 100644 --- a/drivers/crypto/hisilicon/sgl.c +++ b/drivers/crypto/hisilicon/sgl.c @@ -70,11 +70,11 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev, HISI_ACC_SGL_ALIGN_SIZE); /* - * the pool may allocate a block of memory of size PAGE_SIZE * 2^(MAX_ORDER - 1), + * the pool may allocate a block of memory of size PAGE_SIZE * 2^MAX_ORDER, * block size may exceed 2^31 on ia64, so the max of block size is 2^31 */ - block_size = 1 << (PAGE_SHIFT + MAX_ORDER <= 32 ? - PAGE_SHIFT + MAX_ORDER - 1 : 31); + block_size = 1 << (PAGE_SHIFT + MAX_ORDER <= 31 ? 
+ PAGE_SHIFT + MAX_ORDER : 31); sgl_num_per_block = block_size / sgl_size; block_num = count / sgl_num_per_block; remain_sgl = count % sgl_num_per_block; diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c index 72ce2c9f42fd..84498c7f845d 100644 --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c @@ -111,7 +111,7 @@ static int get_huge_pages(struct drm_i915_gem_object *obj) do { struct page *page; - GEM_BUG_ON(order >= MAX_ORDER); + GEM_BUG_ON(order > MAX_ORDER); page = alloc_pages(GFP | __GFP_ZERO, order); if (!page) goto err; diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 21b61631f73a..85d19f425af6 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -64,11 +64,11 @@ module_param(page_pool_size, ulong, 0644); static atomic_long_t allocated_pages; -static struct ttm_pool_type global_write_combined[MAX_ORDER]; -static struct ttm_pool_type global_uncached[MAX_ORDER]; +static struct ttm_pool_type global_write_combined[MAX_ORDER + 1]; +static struct ttm_pool_type global_uncached[MAX_ORDER + 1]; -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER]; -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER]; +static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER + 1]; +static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1]; static spinlock_t shrinker_lock; static struct list_head shrinker_list; @@ -382,7 +382,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, else gfp_flags |= GFP_HIGHUSER; - for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages)); + for (order = min_t(unsigned int, MAX_ORDER, __fls(num_pages)); num_pages; order = min_t(unsigned int, order, __fls(num_pages))) { bool apply_caching = false; @@ -507,7 +507,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev, if (use_dma_alloc) { for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) - for (j = 0; j < MAX_ORDER; ++j) + for (j = 0; j <= MAX_ORDER; ++j) ttm_pool_type_init(&pool->caching[i].orders[j], pool, i, j); } @@ -527,7 +527,7 @@ void ttm_pool_fini(struct ttm_pool *pool) if (pool->use_dma_alloc) { for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) - for (j = 0; j < MAX_ORDER; ++j) + for (j = 0; j <= MAX_ORDER; ++j) ttm_pool_type_fini(&pool->caching[i].orders[j]); } @@ -581,7 +581,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m) unsigned int i; seq_puts(m, "\t "); - for (i = 0; i < MAX_ORDER; ++i) + for (i = 0; i <= MAX_ORDER; ++i) seq_printf(m, " ---%2u---", i); seq_puts(m, "\n"); } @@ -592,7 +592,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt, { unsigned int i; - for (i = 0; i < MAX_ORDER; ++i) + for (i = 0; i <= MAX_ORDER; ++i) seq_printf(m, " %8u", ttm_pool_type_count(&pt[i])); seq_puts(m, "\n"); } @@ -701,7 +701,7 @@ int ttm_pool_mgr_init(unsigned long num_pages) spin_lock_init(&shrinker_lock); INIT_LIST_HEAD(&shrinker_list); - for (i = 0; i < MAX_ORDER; ++i) { + for (i = 0; i <= MAX_ORDER; ++i) { ttm_pool_type_init(&global_write_combined[i], NULL, ttm_write_combined, i); ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i); @@ -734,7 +734,7 @@ void ttm_pool_mgr_fini(void) { unsigned int i; - for (i = 0; i < MAX_ORDER; ++i) { + for (i = 0; i <= MAX_ORDER; ++i) { ttm_pool_type_fini(&global_write_combined[i]); ttm_pool_type_fini(&global_uncached[i]); diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index cd48590ada30..c5ea361bf757 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -182,7 +182,7 @@ #ifdef CONFIG_CMA_ALIGNMENT #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT) #else -#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_ORDER - 1) +#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_ORDER) #endif /* diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 5ff09de6c48f..c867432919d8 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -2438,8 +2438,8 @@ static bool its_parse_indirect_baser(struct its_node *its, * feature is not supported by hardware. */ new_order = max_t(u32, get_order(esz << ids), new_order); - if (new_order >= MAX_ORDER) { - new_order = MAX_ORDER - 1; + if (new_order > MAX_ORDER) { + new_order = MAX_ORDER; ids = ilog2(PAGE_ORDER_TO_SIZE(new_order) / (int)esz); pr_warn("ITS@%pa: %s Table too large, reduce ids %llu->%u\n", &its->phys_base, its_base_type_string[type], diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index acd6d6b47434..eee05abbc0be 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -407,7 +407,7 @@ static void __cache_size_refresh(void) * If the allocation may fail we use __get_free_pages. Memory fragmentation * won't have a fatal effect here, but it just causes flushes of some other * buffers and more I/O will be performed. Don't use __get_free_pages if it - * always fails (i.e. order >= MAX_ORDER). + * always fails (i.e. order > MAX_ORDER). * * If the allocation shouldn't fail we use __vmalloc. This is only for the * initial reserve allocation, so there's no risk of wasting all vmalloc diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c index 1167463f26fb..361514cd575c 100644 --- a/drivers/misc/genwqe/card_utils.c +++ b/drivers/misc/genwqe/card_utils.c @@ -210,7 +210,7 @@ u32 genwqe_crc32(u8 *buff, size_t len, u32 init) void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size, dma_addr_t *dma_handle) { - if (get_order(size) >= MAX_ORDER) + if (get_order(size) > MAX_ORDER) return NULL; return dma_alloc_coherent(&cd->pci_dev->dev, size, dma_handle, diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h index e5c6ff3d0c47..608f9df67eb8 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.h +++ b/drivers/net/ethernet/ibm/ibmvnic.h @@ -75,7 +75,7 @@ * pool for the 4MB. Thus the 16 Rx and Tx queues require 32 * 5 = 160 * plus 16 for the TSO pools for a total of 176 LTB mappings per VNIC. 
*/ -#define IBMVNIC_ONE_LTB_MAX ((u32)((1 << (MAX_ORDER - 1)) * PAGE_SIZE)) +#define IBMVNIC_ONE_LTB_MAX ((u32)((1 << MAX_ORDER) * PAGE_SIZE)) #define IBMVNIC_ONE_LTB_SIZE min((u32)(8 << 20), IBMVNIC_ONE_LTB_MAX) #define IBMVNIC_LTB_SET_SIZE (38 << 20) diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c index 886c564787f1..a852ab6c1f52 100644 --- a/drivers/video/fbdev/hyperv_fb.c +++ b/drivers/video/fbdev/hyperv_fb.c @@ -944,8 +944,8 @@ static phys_addr_t hvfb_get_phymem(struct hv_device *hdev, if (request_size == 0) return -1; - if (order < MAX_ORDER) { - /* Call alloc_pages if the size is less than 2^MAX_ORDER */ + if (order <= MAX_ORDER) { + /* Call alloc_pages if the size is no greater than 2^MAX_ORDER */ page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); if (!page) return -1; @@ -975,7 +975,7 @@ static void hvfb_release_phymem(struct hv_device *hdev, { unsigned int order = get_order(size); - if (order < MAX_ORDER) + if (order <= MAX_ORDER) __free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order); else dma_free_coherent(&hdev->device, diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 3f78a3a1eb75..5b15936a5214 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -33,7 +33,7 @@ #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \ __GFP_NOMEMALLOC) /* The order of free page blocks to report to host */ -#define VIRTIO_BALLOON_HINT_BLOCK_ORDER (MAX_ORDER - 1) +#define VIRTIO_BALLOON_HINT_BLOCK_ORDER MAX_ORDER /* The size of a free page block in bytes */ #define VIRTIO_BALLOON_HINT_BLOCK_BYTES \ (1 << (VIRTIO_BALLOON_HINT_BLOCK_ORDER + PAGE_SHIFT)) diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index 0c2892ec6817..0e1253e3423a 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -1120,13 +1120,13 @@ static void virtio_mem_clear_fake_offline(unsigned long pfn, */ static void virtio_mem_fake_online(unsigned long pfn, unsigned long nr_pages) { - unsigned long order = MAX_ORDER - 1; + unsigned long order = MAX_ORDER; unsigned long i; /* * We might get called for ranges that don't cover properly aligned - * MAX_ORDER - 1 pages; however, we can only online properly aligned - * pages with an order of MAX_ORDER - 1 at maximum. + * MAX_ORDER pages; however, we can only online properly aligned + * pages with an order of MAX_ORDER at maximum. */ while (!IS_ALIGNED(pfn | nr_pages, 1 << order)) order--; @@ -1237,7 +1237,7 @@ static void virtio_mem_online_page(struct virtio_mem *vm, bool do_online; /* - * We can get called with any order up to MAX_ORDER - 1. If our + * We can get called with any order up to MAX_ORDER. If our * subblock size is smaller than that and we have a mixture of plugged * and unplugged subblocks within such a page, we have to process in * smaller granularity. 
In that case we'll adjust the order exactly once diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c index ba3525ccc27e..b3b7519a6519 100644 --- a/fs/ramfs/file-nommu.c +++ b/fs/ramfs/file-nommu.c @@ -70,7 +70,7 @@ int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize) /* make various checks */ order = get_order(newsize); - if (unlikely(order >= MAX_ORDER)) + if (unlikely(order > MAX_ORDER)) return -EFBIG; ret = inode_newsize_ok(inode, newsize); diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h index ef09b23d29e3..8ce14f9d202a 100644 --- a/include/drm/ttm/ttm_pool.h +++ b/include/drm/ttm/ttm_pool.h @@ -72,7 +72,7 @@ struct ttm_pool { bool use_dma32; struct { - struct ttm_pool_type orders[MAX_ORDER]; + struct ttm_pool_type orders[MAX_ORDER + 1]; } caching[TTM_NUM_CACHING_TYPES]; }; diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 3ec981a0d8b3..68485a264865 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -746,7 +746,7 @@ static inline unsigned huge_page_shift(struct hstate *h) static inline bool hstate_is_gigantic(struct hstate *h) { - return huge_page_order(h) >= MAX_ORDER; + return huge_page_order(h) > MAX_ORDER; } static inline unsigned int pages_per_huge_page(const struct hstate *h) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index ca285ed3c6e0..e93faa3d7f1d 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -25,11 +25,11 @@ /* Free memory management - zoned buddy allocator. */ #ifndef CONFIG_ARCH_FORCE_MAX_ORDER -#define MAX_ORDER 11 +#define MAX_ORDER 10 #else #define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER #endif -#define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1)) +#define MAX_ORDER_NR_PAGES (1 << MAX_ORDER) /* * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed @@ -92,7 +92,7 @@ static inline bool migratetype_is_mergeable(int mt) } #define for_each_migratetype_order(order, type) \ - for (order = 0; order < MAX_ORDER; order++) \ + for (order = 0; order <= MAX_ORDER; order++) \ for (type = 0; type < MIGRATE_TYPES; type++) extern int page_group_by_mobility_disabled; @@ -632,7 +632,7 @@ struct zone { ZONE_PADDING(_pad1_) /* free areas of different sizes */ - struct free_area free_area[MAX_ORDER]; + struct free_area free_area[MAX_ORDER + 1]; /* zone flags, see below */ unsigned long flags; @@ -1379,7 +1379,7 @@ static inline bool movable_only_nodes(nodemask_t *nodes) #define SECTION_BLOCKFLAGS_BITS \ ((1UL << (PFN_SECTION_SHIFT - pageblock_order)) * NR_PAGEBLOCK_BITS) -#if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS +#if (MAX_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS #error Allocator MAX_ORDER exceeds SECTION_SIZE #endif diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h index 83c7248053a1..940efcffd374 100644 --- a/include/linux/pageblock-flags.h +++ b/include/linux/pageblock-flags.h @@ -41,14 +41,14 @@ extern unsigned int pageblock_order; * Huge pages are a constant size, but don't exceed the maximum allocation * granularity. 
*/ -#define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_ORDER - 1) +#define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_ORDER) #endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */ #else /* CONFIG_HUGETLB_PAGE */ /* If huge pages are not used, group by MAX_ORDER_NR_PAGES */ -#define pageblock_order (MAX_ORDER-1) +#define pageblock_order MAX_ORDER #endif /* CONFIG_HUGETLB_PAGE */ diff --git a/include/linux/slab.h b/include/linux/slab.h index 0fefdf528e0d..568b5dfb3bd9 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -251,8 +251,8 @@ static inline unsigned int arch_slab_minalign(void) * to do various tricks to work around compiler limitations in order to * ensure proper constant folding. */ -#define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \ - (MAX_ORDER + PAGE_SHIFT - 1) : 25) +#define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT) <= 25 ? \ + (MAX_ORDER + PAGE_SHIFT) : 25) #define KMALLOC_SHIFT_MAX KMALLOC_SHIFT_HIGH #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW 5 @@ -265,7 +265,7 @@ static inline unsigned int arch_slab_minalign(void) * (PAGE_SIZE*2). Larger requests are passed to the page allocator. */ #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1) -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) +#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT) #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW 3 #endif @@ -278,7 +278,7 @@ static inline unsigned int arch_slab_minalign(void) * be allocated from the same page. */ #define KMALLOC_SHIFT_HIGH PAGE_SHIFT -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) +#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT) #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW 3 #endif diff --git a/kernel/crash_core.c b/kernel/crash_core.c index a0eb4d5cf557..245e2ee20718 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -471,7 +471,7 @@ static int __init crash_save_vmcoreinfo_init(void) VMCOREINFO_OFFSET(list_head, prev); VMCOREINFO_OFFSET(vmap_area, va_start); VMCOREINFO_OFFSET(vmap_area, list); - VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER); + VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER + 1); log_buf_vmcoreinfo_setup(); VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES); VMCOREINFO_NUMBER(NR_FREE_PAGES); diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index 1bf6de398986..e20f168a34c7 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -84,8 +84,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, void *addr; int ret = -ENOMEM; - /* Cannot allocate larger than MAX_ORDER-1 */ - order = min(get_order(pool_size), MAX_ORDER-1); + /* Cannot allocate larger than MAX_ORDER */ + order = min(get_order(pool_size), MAX_ORDER); do { pool_size = 1 << (PAGE_SHIFT + order); @@ -190,7 +190,7 @@ static int __init dma_atomic_pool_init(void) /* * If coherent_pool was not used on the command line, default the pool - * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1. + * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER. */ if (!atomic_pool_size) { unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K); diff --git a/mm/Kconfig b/mm/Kconfig index 0331f1461f81..bbe31e85afee 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -307,7 +307,7 @@ config SHUFFLE_PAGE_ALLOCATOR the presence of a memory-side-cache. 
There are also incidental security benefits as it reduces the predictability of page allocations to compliment SLAB_FREELIST_RANDOM, but the - default granularity of shuffling on the "MAX_ORDER - 1" i.e, + default granularity of shuffling on the "MAX_ORDER" i.e, 10th order of pages is selected based on cache utilization benefits on x86. @@ -621,8 +621,8 @@ config HUGETLB_PAGE_SIZE_VARIABLE HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available on a platform. - Note that the pageblock_order cannot exceed MAX_ORDER - 1 and will be - clamped down to MAX_ORDER - 1. + Note that the pageblock_order cannot exceed MAX_ORDER and will be + clamped down to MAX_ORDER. config CONTIG_ALLOC def_bool (MEMORY_ISOLATION && COMPACTION) || CMA diff --git a/mm/compaction.c b/mm/compaction.c index 640fa76228dd..4a282c658ac4 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -586,7 +586,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, if (PageCompound(page)) { const unsigned int order = compound_order(page); - if (likely(order < MAX_ORDER)) { + if (likely(order <= MAX_ORDER)) { blockpfn += (1UL << order) - 1; cursor += (1UL << order) - 1; } @@ -941,7 +941,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, * a valid page order. Consider only values in the * valid order range to prevent low_pfn overflow. */ - if (freepage_order > 0 && freepage_order < MAX_ORDER) + if (freepage_order > 0 && freepage_order <= MAX_ORDER) low_pfn += (1UL << freepage_order) - 1; continue; } @@ -957,7 +957,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (PageCompound(page) && !cc->alloc_contig) { const unsigned int order = compound_order(page); - if (likely(order < MAX_ORDER)) + if (likely(order <= MAX_ORDER)) low_pfn += (1UL << order) - 1; goto isolate_fail; } @@ -2118,7 +2118,7 @@ static enum compact_result __compact_finished(struct compact_control *cc) /* Direct compactor: Is a suitable page free? 
*/ ret = COMPACT_NO_SUITABLE_PAGE; - for (order = cc->order; order < MAX_ORDER; order++) { + for (order = cc->order; order <= MAX_ORDER; order++) { struct free_area *area = &cc->zone->free_area[order]; bool can_steal; diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index dc7df1254f0a..7e53c4a42047 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -1094,7 +1094,7 @@ debug_vm_pgtable_alloc_huge_page(struct pgtable_debug_args *args, int order) struct page *page = NULL; #ifdef CONFIG_CONTIG_ALLOC - if (order >= MAX_ORDER) { + if (order > MAX_ORDER) { page = alloc_contig_pages((1 << order), GFP_KERNEL, first_online_node, NULL); if (page) { @@ -1104,7 +1104,7 @@ debug_vm_pgtable_alloc_huge_page(struct pgtable_debug_args *args, int order) } #endif - if (order < MAX_ORDER) + if (order <= MAX_ORDER) page = alloc_pages(GFP_KERNEL, order); return page; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 3222b40a0f6d..9b1655950049 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -469,7 +469,7 @@ static int __init hugepage_init(void) /* * hugepages can't be allocated by the buddy allocator */ - MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER >= MAX_ORDER); + MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER > MAX_ORDER); /* * we use page->mapping and page->index in second tail page * as list_head: assuming THP order >= 2 diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 28516881a1b2..15ff582687a3 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1903,7 +1903,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) pgoff_t index = page_index(page_head); unsigned long compound_idx; - if (compound_order(page_head) >= MAX_ORDER) + if (compound_order(page_head) > MAX_ORDER) compound_idx = page_to_pfn(page) - page_to_pfn(page_head); else compound_idx = page - page_head; @@ -4313,7 +4313,7 @@ static int __init default_hugepagesz_setup(char *s) * The number of default huge pages (for this size) could have been * specified as the first hugetlb parameter: hugepages=X. If so, * then default_hstate_max_huge_pages is set. If the default huge - * page size is gigantic (>= MAX_ORDER), then the pages must be + * page size is gigantic (> MAX_ORDER), then the pages must be * allocated here from bootmem allocator. */ if (default_hstate_max_huge_pages) { diff --git a/mm/memblock.c b/mm/memblock.c index b5d3026979fc..d1525463c05e 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -2030,7 +2030,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end) int order; while (start < end) { - order = min(MAX_ORDER - 1UL, __ffs(start)); + order = min_t(unsigned long, MAX_ORDER, __ffs(start)); while (start + (1UL << order) > end) order--; diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index fad6d1f2262a..5540499007ae 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -596,7 +596,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages) unsigned long pfn; /* - * Online the pages in MAX_ORDER - 1 aligned chunks. The callback might + * Online the pages in MAX_ORDER aligned chunks. The callback might * decide to not expose all pages to the buddy (e.g., expose them * later). We account all pages as being online and belonging to this * zone ("present"). @@ -605,7 +605,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages) * this and the first chunk to online will be pageblock_nr_pages. 
*/ for (pfn = start_pfn; pfn < end_pfn;) { - int order = min(MAX_ORDER - 1UL, __ffs(pfn)); + int order = min_t(unsigned long, MAX_ORDER, __ffs(pfn)); (*online_page_callback)(pfn_to_page(pfn), order); pfn += (1UL << order); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7e030d7cac81..07ad8074950f 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -847,7 +847,7 @@ static int __init debug_guardpage_minorder_setup(char *buf) { unsigned long res; - if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) { + if (kstrtoul(buf, 10, &res) < 0 || res > (MAX_ORDER + 1) / 2) { pr_err("Bad debug_guardpage_minorder value\n"); return 0; } @@ -1065,7 +1065,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn, unsigned long higher_page_pfn; struct page *higher_page; - if (order >= MAX_ORDER - 2) + if (order >= MAX_ORDER - 1) return false; higher_page_pfn = buddy_pfn & pfn; @@ -1120,7 +1120,7 @@ static inline void __free_one_page(struct page *page, VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page); VM_BUG_ON_PAGE(bad_range(zone, page), page); - while (order < MAX_ORDER - 1) { + while (order < MAX_ORDER) { if (compaction_capture(capc, page, order, migratetype)) { __mod_zone_freepage_state(zone, -(1 << order), migratetype); @@ -2559,7 +2559,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order, struct page *page; /* Find a page of the appropriate size in the preferred list */ - for (current_order = order; current_order < MAX_ORDER; ++current_order) { + for (current_order = order; current_order <= MAX_ORDER; ++current_order) { area = &(zone->free_area[current_order]); page = get_page_from_free_area(area, migratetype); if (!page) @@ -2934,7 +2934,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac, continue; spin_lock_irqsave(&zone->lock, flags); - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { struct free_area *area = &(zone->free_area[order]); page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC); @@ -3018,7 +3018,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype, * approximates finding the pageblock with the most free pages, which * would be too costly to do exactly. */ - for (current_order = MAX_ORDER - 1; current_order >= min_order; + for (current_order = MAX_ORDER; current_order >= min_order; --current_order) { area = &(zone->free_area[current_order]); fallback_mt = find_suitable_fallback(area, current_order, @@ -3044,7 +3044,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype, return false; find_smallest: - for (current_order = order; current_order < MAX_ORDER; + for (current_order = order; current_order <= MAX_ORDER; current_order++) { area = &(zone->free_area[current_order]); fallback_mt = find_suitable_fallback(area, current_order, @@ -3057,7 +3057,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype, * This should not happen - we already found a suitable fallback * when looking for the largest page. 
*/ - VM_BUG_ON(current_order == MAX_ORDER); + VM_BUG_ON(current_order == MAX_ORDER + 1); do_steal: page = get_page_from_free_area(area, fallback_mt); @@ -4005,7 +4005,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark, return true; /* For a high-order request, check at least one suitable page is free */ - for (o = order; o < MAX_ORDER; o++) { + for (o = order; o <= MAX_ORDER; o++) { struct free_area *area = &z->free_area[o]; int mt; @@ -5480,7 +5480,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, * There are several places where we assume that the order value is sane * so bail out early if the request is out of bound. */ - if (WARN_ON_ONCE_GFP(order >= MAX_ORDER, gfp)) + if (WARN_ON_ONCE_GFP(order > MAX_ORDER, gfp)) return NULL; gfp &= gfp_allowed_mask; @@ -6183,8 +6183,8 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) for_each_populated_zone(zone) { unsigned int order; - unsigned long nr[MAX_ORDER], flags, total = 0; - unsigned char types[MAX_ORDER]; + unsigned long nr[MAX_ORDER + 1], flags, total = 0; + unsigned char types[MAX_ORDER + 1]; if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask)) continue; @@ -6192,7 +6192,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) printk(KERN_CONT "%s: ", zone->name); spin_lock_irqsave(&zone->lock, flags); - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { struct free_area *area = &zone->free_area[order]; int type; @@ -6206,7 +6206,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) } } spin_unlock_irqrestore(&zone->lock, flags); - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { printk(KERN_CONT "%lu*%lukB ", nr[order], K(1UL) << order); if (nr[order]) @@ -7545,7 +7545,7 @@ static inline void setup_usemap(struct zone *zone) {} /* Initialise the number of pages represented by NR_PAGEBLOCK_BITS */ void __init set_pageblock_order(void) { - unsigned int order = MAX_ORDER - 1; + unsigned int order = MAX_ORDER; /* Check that pageblock_nr_pages has not already been setup */ if (pageblock_order) @@ -9051,7 +9051,7 @@ void *__init alloc_large_system_hash(const char *tablename, else table = memblock_alloc_raw(size, SMP_CACHE_BYTES); - } else if (get_order(size) >= MAX_ORDER || hashdist) { + } else if (get_order(size) > MAX_ORDER || hashdist) { table = vmalloc_huge(size, gfp_flags); virt = true; if (table) @@ -9265,7 +9265,7 @@ int alloc_contig_range(unsigned long start, unsigned long end, order = 0; outer_start = start; while (!PageBuddy(pfn_to_page(outer_start))) { - if (++order >= MAX_ORDER) { + if (++order > MAX_ORDER) { outer_start = start; break; } @@ -9524,7 +9524,7 @@ bool is_free_buddy_page(struct page *page) unsigned long pfn = page_to_pfn(page); unsigned int order; - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { struct page *page_head = page - (pfn & ((1 << order) - 1)); if (PageBuddy(page_head) && @@ -9532,7 +9532,7 @@ bool is_free_buddy_page(struct page *page) break; } - return order < MAX_ORDER; + return order <= MAX_ORDER; } EXPORT_SYMBOL(is_free_buddy_page); @@ -9583,7 +9583,7 @@ bool take_page_off_buddy(struct page *page) bool ret = false; spin_lock_irqsave(&zone->lock, flags); - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { struct page *page_head = page - (pfn & ((1 << order) - 1)); int page_order = buddy_order(page_head); diff 
--git a/mm/page_isolation.c b/mm/page_isolation.c index 9d73dc38e3d7..8d33120a81b2 100644 --- a/mm/page_isolation.c +++ b/mm/page_isolation.c @@ -226,7 +226,7 @@ static void unset_migratetype_isolate(struct page *page, int migratetype) */ if (PageBuddy(page)) { order = buddy_order(page); - if (order >= pageblock_order && order < MAX_ORDER - 1) { + if (order >= pageblock_order && order <= MAX_ORDER) { buddy = find_buddy_page_pfn(page, page_to_pfn(page), order, NULL); if (buddy && !is_migrate_isolate_page(buddy)) { @@ -289,11 +289,11 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages) * @skip_isolation: the flag to skip the pageblock isolation in second * isolate_single_pageblock() * - * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one + * Free and in-use pages can be as big as MAX_ORDER and contain more than one * pageblock. When not all pageblocks within a page are isolated at the same * time, free page accounting can go wrong. For example, in the case of - * MAX_ORDER-1 = pageblock_order + 1, a MAX_ORDER-1 page has two pagelbocks. - * [ MAX_ORDER-1 ] + * MAX_ORDER = pageblock_order + 1, a MAX_ORDER page has two pagelbocks. + * [ MAX_ORDER ] * [ pageblock0 | pageblock1 ] * When either pageblock is isolated, if it is a free page, the page is not * split into separate migratetype lists, which is supposed to; if it is an @@ -450,7 +450,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags, * the free page to the right migratetype list. * * head_pfn is not used here as a hugetlb page order - * can be bigger than MAX_ORDER-1, but after it is + * can be bigger than MAX_ORDER, but after it is * freed, the free page order is not. Use pfn within * the range to find the head of the free page. */ @@ -458,7 +458,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags, outer_pfn = pfn; while (!PageBuddy(pfn_to_page(outer_pfn))) { /* stop if we cannot find the free page */ - if (++order >= MAX_ORDER) + if (++order > MAX_ORDER) goto failed; outer_pfn &= ~0UL << order; } @@ -639,7 +639,7 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn, int ret; /* - * Note: pageblock_nr_pages != MAX_ORDER. Then, chunks of free pages + * Note: pageblock_order != MAX_ORDER. Then, chunks of free pages * are not aligned to pageblock_nr_pages. * Then we just check migratetype first. 
*/ diff --git a/mm/page_owner.c b/mm/page_owner.c index 223bbf8674ec..80cf367362c3 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -318,7 +318,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m, unsigned long freepage_order; freepage_order = buddy_order_unsafe(page); - if (freepage_order < MAX_ORDER) + if (freepage_order <= MAX_ORDER) pfn += (1UL << freepage_order) - 1; continue; } @@ -552,7 +552,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos) if (PageBuddy(page)) { unsigned long freepage_order = buddy_order_unsafe(page); - if (freepage_order < MAX_ORDER) + if (freepage_order <= MAX_ORDER) pfn += (1UL << freepage_order) - 1; continue; } @@ -645,7 +645,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone) if (PageBuddy(page)) { unsigned long order = buddy_order_unsafe(page); - if (order > 0 && order < MAX_ORDER) + if (order > 0 && order <= MAX_ORDER) pfn += (1UL << order) - 1; continue; } diff --git a/mm/page_reporting.c b/mm/page_reporting.c index 382958eef8a9..d52a55bca6d5 100644 --- a/mm/page_reporting.c +++ b/mm/page_reporting.c @@ -11,7 +11,7 @@ #include "page_reporting.h" #include "internal.h" -unsigned int page_reporting_order = MAX_ORDER; +unsigned int page_reporting_order = MAX_ORDER + 1; module_param(page_reporting_order, uint, 0644); MODULE_PARM_DESC(page_reporting_order, "Set page reporting order"); @@ -244,7 +244,7 @@ page_reporting_process_zone(struct page_reporting_dev_info *prdev, return err; /* Process each free list starting from lowest order/mt */ - for (order = page_reporting_order; order < MAX_ORDER; order++) { + for (order = page_reporting_order; order <= MAX_ORDER; order++) { for (mt = 0; mt < MIGRATE_TYPES; mt++) { /* We do not pull pages from the isolate free list */ if (is_migrate_isolate(mt)) diff --git a/mm/shuffle.h b/mm/shuffle.h index cec62984f7d3..a6bdf54f96f1 100644 --- a/mm/shuffle.h +++ b/mm/shuffle.h @@ -4,7 +4,7 @@ #define _MM_SHUFFLE_H #include -#define SHUFFLE_ORDER (MAX_ORDER-1) +#define SHUFFLE_ORDER MAX_ORDER #ifdef CONFIG_SHUFFLE_PAGE_ALLOCATOR DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key); diff --git a/mm/slab.c b/mm/slab.c index 10e96137b44f..530f418a4930 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -466,7 +466,7 @@ static int __init slab_max_order_setup(char *str) { get_option(&str, &slab_max_order); slab_max_order = slab_max_order < 0 ? 0 : - min(slab_max_order, MAX_ORDER - 1); + min(slab_max_order, MAX_ORDER); slab_max_order_set = true; return 1; diff --git a/mm/slub.c b/mm/slub.c index 862dbd9af4f5..5acf5407cbc6 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3877,7 +3877,7 @@ static inline int calculate_order(unsigned int size) * Doh this slab cannot be placed using slub_max_order. 
*/ order = calc_slab_order(size, 1, MAX_ORDER, 1); - if (order < MAX_ORDER) + if (order <= MAX_ORDER) return order; return -ENOSYS; } @@ -4388,7 +4388,7 @@ __setup("slub_min_order=", setup_slub_min_order); static int __init setup_slub_max_order(char *str) { get_option(&str, (int *)&slub_max_order); - slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER - 1); + slub_max_order = min_t(unsigned int, slub_max_order, MAX_ORDER); return 1; } diff --git a/mm/vmstat.c b/mm/vmstat.c index 90af9a8572f5..9fc206477fb7 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1068,7 +1068,7 @@ static void fill_contig_page_info(struct zone *zone, info->free_blocks_total = 0; info->free_blocks_suitable = 0; - for (order = 0; order < MAX_ORDER; order++) { + for (order = 0; order <= MAX_ORDER; order++) { unsigned long blocks; /* @@ -1101,7 +1101,7 @@ static int __fragmentation_index(unsigned int order, struct contig_page_info *in { unsigned long requested = 1UL << order; - if (WARN_ON_ONCE(order >= MAX_ORDER)) + if (WARN_ON_ONCE(order > MAX_ORDER)) return 0; if (!info->free_blocks_total) @@ -1474,7 +1474,7 @@ static void frag_show_print(struct seq_file *m, pg_data_t *pgdat, int order; seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name); - for (order = 0; order < MAX_ORDER; ++order) + for (order = 0; order <= MAX_ORDER; ++order) /* * Access to nr_free is lockless as nr_free is used only for * printing purposes. Use data_race to avoid KCSAN warning. @@ -1503,7 +1503,7 @@ static void pagetypeinfo_showfree_print(struct seq_file *m, pgdat->node_id, zone->name, migratetype_names[mtype]); - for (order = 0; order < MAX_ORDER; ++order) { + for (order = 0; order <= MAX_ORDER; ++order) { unsigned long freecount = 0; struct free_area *area; struct list_head *curr; @@ -1543,7 +1543,7 @@ static void pagetypeinfo_showfree(struct seq_file *m, void *arg) /* Print header */ seq_printf(m, "%-43s ", "Free pages count per migrate type at order"); - for (order = 0; order < MAX_ORDER; ++order) + for (order = 0; order <= MAX_ORDER; ++order) seq_printf(m, "%6d ", order); seq_putc(m, '\n'); @@ -2168,7 +2168,7 @@ static void unusable_show_print(struct seq_file *m, seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name); - for (order = 0; order < MAX_ORDER; ++order) { + for (order = 0; order <= MAX_ORDER; ++order) { fill_contig_page_info(zone, order, &info); index = unusable_free_index(order, &info); seq_printf(m, "%d.%03d ", index / 1000, index % 1000); @@ -2220,7 +2220,7 @@ static void extfrag_show_print(struct seq_file *m, seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name); - for (order = 0; order < MAX_ORDER; ++order) { + for (order = 0; order <= MAX_ORDER; ++order) { fill_contig_page_info(zone, order, &info); index = __fragmentation_index(order, &info); seq_printf(m, "%2d.%03d ", index / 1000, index % 1000); diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c index 854772dd52fd..9b66d6aeeb1a 100644 --- a/net/smc/smc_ib.c +++ b/net/smc/smc_ib.c @@ -843,7 +843,7 @@ long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev) goto out; /* the calculated number of cq entries fits to mlx5 cq allocation */ cqe_size_order = cache_line_size() == 128 ? 
7 : 6; - smc_order = MAX_ORDER - cqe_size_order - 1; + smc_order = MAX_ORDER - cqe_size_order; if (SMC_MAX_CQE + 2 > (0x00000001 << smc_order) * PAGE_SIZE) cqattr.cqe = (0x00000001 << smc_order) * PAGE_SIZE - 2; smcibdev->roce_cq_send = ib_create_cq(smcibdev->ibdev, diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 79e759aac543..e736847ef3ac 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -7368,6 +7368,14 @@ sub process { } } +# check for MAX_ORDER uses as its semantics has changed. +# MAX_ORDER now really means the max order of a page that can come out of +# kernel buddy allocator + if ($line =~ /MAX_ORDER/) { + WARN("MAX_ORDER", + "MAX_ORDER has changed its semantics. The max order of a page that can be allocated from buddy allocator is MAX_ORDER instead of MAX_ORDER - 1.") + } + # Mode permission misuses where it seems decimal should be octal # This uses a shortcut match to avoid unnecessary uses of a slow foreach loop # o Ignore module_param*(...) uses with a decimal 0 permission as that has a diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index 64499056648a..51ad29940f05 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -38,7 +38,7 @@ static int param_set_bufsize(const char *val, const struct kernel_param *kp) size = memparse(val, NULL); order = get_order(size); - if (order >= MAX_ORDER) + if (order > MAX_ORDER) return -EINVAL; ima_maxorder = order; ima_bufsize = PAGE_SIZE << order; diff --git a/tools/testing/memblock/linux/mmzone.h b/tools/testing/memblock/linux/mmzone.h index 7c2eb5c9bb54..d79748b263e7 100644 --- a/tools/testing/memblock/linux/mmzone.h +++ b/tools/testing/memblock/linux/mmzone.h @@ -17,10 +17,10 @@ enum zone_type { }; #define MAX_NR_ZONES __MAX_NR_ZONES -#define MAX_ORDER 11 -#define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1)) +#define MAX_ORDER 10 +#define MAX_ORDER_NR_PAGES (1 << MAX_ORDER) -#define pageblock_order (MAX_ORDER - 1) +#define pageblock_order MAX_ORDER #define pageblock_nr_pages BIT(pageblock_order) struct zone { From patchwork Thu Aug 11 23:16:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 12941797 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7CE3AC282E7 for ; Thu, 11 Aug 2022 23:16:50 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id F142B6B0078; Thu, 11 Aug 2022 19:16:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EF95F6B007D; Thu, 11 Aug 2022 19:16:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C138D8E0001; Thu, 11 Aug 2022 19:16:48 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id A66036B0078 for ; Thu, 11 Aug 2022 19:16:48 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 7B3DE1A192C for ; Thu, 11 Aug 2022 23:16:48 +0000 (UTC) X-FDA: 79788873696.04.686E96E Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com [66.111.4.28]) by imf08.hostedemail.com (Postfix) with ESMTP id 003D21601A2 for ; Thu, 11 Aug 2022 23:16:47 
+0000 (UTC)
From: Zi Yan To: linux-mm@kvack.org Cc: David Hildenbrand , Matthew Wilcox , Vlastimil Babka , "Kirill A . Shutemov" , Mike Kravetz , John Hubbard , Yang Shi , David Rientjes , James Houghton , Mike Rapoport , linux-kernel@vger.kernel.org Subject: [RFC PATCH v2 03/12] mm: replace MAX_ORDER when it is used to indicate max physical contiguity.
Date: Thu, 11 Aug 2022 19:16:34 -0400 Message-Id: <20220811231643.1012912-4-zi.yan@sent.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com> References: <20220811231643.1012912-1-zi.yan@sent.com> Reply-To: Zi Yan MIME-Version: 1.0 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID:
From: Zi Yan
MAX_ORDER is limited to a memory section size and is therefore widely used as a variable to indicate the maximum physically contiguous page size. This limitation is no longer necessary, as the kernel only supports the sparse memory model. Add a new variable, MAX_PHYS_CONTIG_ORDER, to replace such uses of MAX_ORDER.
Signed-off-by: Zi Yan --- Documentation/admin-guide/kernel-parameters.txt | 2 +- arch/sparc/mm/tsb.c | 4 ++-- arch/um/kernel/um_arch.c | 4 ++-- include/linux/pageblock-flags.h | 12 ++++++++++++ kernel/dma/pool.c | 8 ++++---- mm/hugetlb.c | 2 +- mm/internal.h | 8 ++++---- mm/memory.c | 4 ++-- mm/memory_hotplug.c | 6 +++--- mm/page_isolation.c | 2 +- mm/page_reporting.c | 4 ++-- 11 files changed, 34 insertions(+), 22 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index ff33971e1630..ec519225b671 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3899,7 +3899,7 @@ [KNL] Minimal page reporting order Format: Adjust the minimal page reporting order. The page - reporting is disabled when it exceeds MAX_ORDER. + reporting is disabled when it exceeds MAX_PHYS_CONTIG_ORDER.
panic= [KNL] Kernel behaviour on panic: delay timeout > 0: seconds before rebooting diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c index 912205787161..15c31d050dab 100644 --- a/arch/sparc/mm/tsb.c +++ b/arch/sparc/mm/tsb.c @@ -402,8 +402,8 @@ void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long rss) unsigned long new_rss_limit; gfp_t gfp_flags; - if (max_tsb_size > (PAGE_SIZE << MAX_ORDER)) - max_tsb_size = (PAGE_SIZE << MAX_ORDER); + if (max_tsb_size > (PAGE_SIZE << MAX_PHYS_CONTIG_ORDER)) + max_tsb_size = (PAGE_SIZE << MAX_PHYS_CONTIG_ORDER); new_cache_index = 0; for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) { diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c index e0de60e503b9..52a474f4f1c7 100644 --- a/arch/um/kernel/um_arch.c +++ b/arch/um/kernel/um_arch.c @@ -368,10 +368,10 @@ int __init linux_main(int argc, char **argv) max_physmem = TASK_SIZE - uml_physmem - iomem_size - MIN_VMALLOC; /* - * Zones have to begin on a 1 << MAX_ORDER page boundary, + * Zones have to begin on a 1 << MAX_PHYS_CONTIG_ORDER page boundary, * so this makes sure that's true for highmem */ - max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER)) - 1); + max_physmem &= ~((1 << (PAGE_SHIFT + MAX_PHYS_CONTIG_ORDER)) - 1); if (physmem_size + iomem_size > max_physmem) { highmem = physmem_size + iomem_size - max_physmem; physmem_size -= highmem; diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h index 940efcffd374..358b871b07ca 100644 --- a/include/linux/pageblock-flags.h +++ b/include/linux/pageblock-flags.h @@ -54,6 +54,18 @@ extern unsigned int pageblock_order; #define pageblock_nr_pages (1UL << pageblock_order) +/* + * memory section is only defined in sparsemem and in flatmem, pages are always + * physically contiguous, but we use MAX_ORDER since all users assume so. + */ +#ifdef CONFIG_FLATMEM +#define MAX_PHYS_CONTIG_ORDER MAX_ORDER +#else /* SPARSEMEM */ +#define MAX_PHYS_CONTIG_ORDER (min(PFN_SECTION_SHIFT, MAX_ORDER)) +#endif /* CONFIG_FLATMEM */ + +#define MAX_PHYS_CONTIG_NR_PAGES (1UL << MAX_PHYS_CONTIG_ORDER) + /* Forward declaration */ struct page; diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index e20f168a34c7..b10f1dd52871 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -84,8 +84,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, void *addr; int ret = -ENOMEM; - /* Cannot allocate larger than MAX_ORDER */ - order = min(get_order(pool_size), MAX_ORDER); + /* Cannot allocate larger than MAX_PHYS_CONTIG_ORDER */ + order = min(get_order(pool_size), MAX_PHYS_CONTIG_ORDER); do { pool_size = 1 << (PAGE_SHIFT + order); @@ -190,11 +190,11 @@ static int __init dma_atomic_pool_init(void) /* * If coherent_pool was not used on the command line, default the pool - * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER. + * sizes to 128KB per 1GB of memory, min 128KB, max MAX_PHYS_CONTIG_ORDER. 
*/ if (!atomic_pool_size) { unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K); - pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES); + pages = min_t(unsigned long, pages, MAX_PHYS_CONTIG_NR_PAGES); atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K); } INIT_WORK(&atomic_pool_work, atomic_pool_work_fn); diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 15ff582687a3..36eedeed1b22 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -1903,7 +1903,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) pgoff_t index = page_index(page_head); unsigned long compound_idx; - if (compound_order(page_head) > MAX_ORDER) + if (compound_order(page_head) > MAX_PHYS_CONTIG_ORDER) compound_idx = page_to_pfn(page) - page_to_pfn(page_head); else compound_idx = page - page_head; diff --git a/mm/internal.h b/mm/internal.h index 4df67b6b8cce..1433e3a6fdd0 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -302,7 +302,7 @@ static inline bool page_is_buddy(struct page *page, struct page *buddy, * satisfies the following equation: * P = B & ~(1 << O) * - * Assumption: *_mem_map is contiguous at least up to MAX_ORDER + * Assumption: *_mem_map is contiguous at least up to MAX_PHYS_CONTIG_ORDER */ static inline unsigned long __find_buddy_pfn(unsigned long page_pfn, unsigned int order) @@ -642,11 +642,11 @@ static inline void vunmap_range_noflush(unsigned long start, unsigned long end) /* * Return the mem_map entry representing the 'offset' subpage within * the maximally aligned gigantic page 'base'. Handle any discontiguity - * in the mem_map at MAX_ORDER_NR_PAGES boundaries. + * in the mem_map at MAX_PHYS_CONTIG_NR_PAGES boundaries. */ static inline struct page *mem_map_offset(struct page *base, int offset) { - if (unlikely(offset >= MAX_ORDER_NR_PAGES)) + if (unlikely(offset >= MAX_PHYS_CONTIG_NR_PAGES)) return nth_page(base, offset); return base + offset; } @@ -658,7 +658,7 @@ static inline struct page *mem_map_offset(struct page *base, int offset) static inline struct page *mem_map_next(struct page *iter, struct page *base, int offset) { - if (unlikely((offset & (MAX_ORDER_NR_PAGES - 1)) == 0)) { + if (unlikely((offset & (MAX_PHYS_CONTIG_NR_PAGES - 1)) == 0)) { unsigned long pfn = page_to_pfn(base) + offset; if (!pfn_valid(pfn)) return NULL; diff --git a/mm/memory.c b/mm/memory.c index bd8e7e79be99..3b82945aaa3d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -5660,7 +5660,7 @@ void clear_huge_page(struct page *page, unsigned long addr = addr_hint & ~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1); - if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) { + if (unlikely(pages_per_huge_page > MAX_PHYS_CONTIG_NR_PAGES)) { clear_gigantic_page(page, addr, pages_per_huge_page); return; } @@ -5713,7 +5713,7 @@ void copy_user_huge_page(struct page *dst, struct page *src, .vma = vma, }; - if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) { + if (unlikely(pages_per_huge_page > MAX_PHYS_CONTIG_NR_PAGES)) { copy_user_gigantic_page(dst, src, addr, vma, pages_per_huge_page); return; diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 5540499007ae..8930823e5067 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -596,16 +596,16 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages) unsigned long pfn; /* - * Online the pages in MAX_ORDER aligned chunks. The callback might + * Online the pages in MAX_PHYS_CONTIG_ORDER aligned chunks. The callback might * decide to not expose all pages to the buddy (e.g., expose them * later). 
We account all pages as being online and belonging to this * zone ("present"). * When using memmap_on_memory, the range might not be aligned to - * MAX_ORDER_NR_PAGES - 1, but pageblock aligned. __ffs() will detect + * MAX_PHYS_CONTIG_NR_PAGES - 1, but pageblock aligned. __ffs() will detect * this and the first chunk to online will be pageblock_nr_pages. */ for (pfn = start_pfn; pfn < end_pfn;) { - int order = min_t(unsigned long, MAX_ORDER, __ffs(pfn)); + int order = min_t(unsigned long, MAX_PHYS_CONTIG_ORDER, __ffs(pfn)); (*online_page_callback)(pfn_to_page(pfn), order); pfn += (1UL << order); diff --git a/mm/page_isolation.c b/mm/page_isolation.c index 8d33120a81b2..801835f91c44 100644 --- a/mm/page_isolation.c +++ b/mm/page_isolation.c @@ -226,7 +226,7 @@ static void unset_migratetype_isolate(struct page *page, int migratetype) */ if (PageBuddy(page)) { order = buddy_order(page); - if (order >= pageblock_order && order <= MAX_ORDER) { + if (order >= pageblock_order && order <= MAX_PHYS_CONTIG_ORDER) { buddy = find_buddy_page_pfn(page, page_to_pfn(page), order, NULL); if (buddy && !is_migrate_isolate_page(buddy)) { diff --git a/mm/page_reporting.c b/mm/page_reporting.c index d52a55bca6d5..b48d6ad82998 100644 --- a/mm/page_reporting.c +++ b/mm/page_reporting.c @@ -11,7 +11,7 @@ #include "page_reporting.h" #include "internal.h" -unsigned int page_reporting_order = MAX_ORDER + 1; +unsigned int page_reporting_order = MAX_PHYS_CONTIG_ORDER + 1; module_param(page_reporting_order, uint, 0644); MODULE_PARM_DESC(page_reporting_order, "Set page reporting order"); @@ -244,7 +244,7 @@ page_reporting_process_zone(struct page_reporting_dev_info *prdev, return err; /* Process each free list starting from lowest order/mt */ - for (order = page_reporting_order; order <= MAX_ORDER; order++) { + for (order = page_reporting_order; order <= MAX_PHYS_CONTIG_ORDER; order++) { for (mt = 0; mt < MIGRATE_TYPES; mt++) { /* We do not pull pages from the isolate free list */ if (is_migrate_isolate(mt)) From patchwork Thu Aug 11 23:16:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 12941798 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32D9CC25B06 for ; Thu, 11 Aug 2022 23:16:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 44BED8E0001; Thu, 11 Aug 2022 19:16:49 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3D4B28E0002; Thu, 11 Aug 2022 19:16:49 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 164658E0001; Thu, 11 Aug 2022 19:16:49 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id E670D6B007B for ; Thu, 11 Aug 2022 19:16:48 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id BF7D8A19BE for ; Thu, 11 Aug 2022 23:16:48 +0000 (UTC) X-FDA: 79788873696.30.A67A1D1 Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com [66.111.4.28]) by imf10.hostedemail.com (Postfix) with ESMTP id 72431C0185 for ; Thu, 11 Aug 2022 23:16:48 +0000 (UTC) Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by 
mailout.nyi.internal (Postfix) with ESMTP id 193645C013A; Thu, 11 Aug 2022 19:16:48 -0400 (EDT)
From: Zi Yan To: linux-mm@kvack.org Cc: David Hildenbrand , Matthew Wilcox , Vlastimil Babka , "Kirill A . Shutemov" , Mike Kravetz , John Hubbard , Yang Shi , David Rientjes , James Houghton , Mike Rapoport , linux-kernel@vger.kernel.org Subject: [RFC PATCH v2 04/12] mm: adapt deferred struct page init to new MAX_ORDER.
Date: Thu, 11 Aug 2022 19:16:35 -0400 Message-Id: <20220811231643.1012912-5-zi.yan@sent.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com> References: <20220811231643.1012912-1-zi.yan@sent.com> Reply-To: Zi Yan MIME-Version: 1.0 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID:
From: Zi Yan
Deferred struct page init only initializes the first section of a zone and defers the rest; the remainder of the zone is then initialized in section-sized chunks. When MAX_ORDER grows beyond a section size, early_page_uninitialised() does not prevent pages beyond the first section from being initialized, since it only checks the starting pfn and assumes MAX_ORDER is smaller than a section. In addition, deferred_init_maxorder() uses MAX_ORDER_NR_PAGES as the initialization unit, which can cause the initialized chunk of memory to overlap with other initialization jobs.
For the first issue, make early_page_uninitialised() clamp the order for non-deferred memory initialization when it would reach beyond the first section. For the second issue, when adjusting the pfn alignment in deferred_init_maxorder(), make sure the alignment is not bigger than a section size.
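To make the first fix concrete, the following is a minimal standalone sketch (plain C, compilable outside the kernel; the pfn values and the ilog2 helper are assumptions made only for illustration, not the actual early_page_uninitialised() implementation) of clamping an order so that a non-deferred free never reaches into the deferred region:

/*
 * Hypothetical example: the deferred region starts 0x2000 pages past pfn,
 * so any order above ilog2(0x2000) = 13 must be clamped.
 */
#include <stdio.h>

static unsigned int ilog2_ul(unsigned long x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	unsigned long pfn = 0x10000;			/* assumed start pfn */
	unsigned long first_deferred_pfn = 0x12000;	/* assumed deferred boundary */
	unsigned int order = 16;			/* larger than the gap */
	unsigned int max_ok = ilog2_ul(first_deferred_pfn - pfn);

	/* Keep [pfn, pfn + 2^order) below first_deferred_pfn. */
	if (order > max_ok)
		order = max_ok;

	printf("clamped order: %u (%lu pages)\n", order, 1UL << order);
	return 0;
}

With these made-up numbers the order is clamped from 16 down to 13, so the chunk ends exactly at the deferred boundary instead of spilling into memory whose struct pages have not been initialized yet.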
Signed-off-by: Zi Yan --- mm/internal.h | 2 +- mm/memblock.c | 6 ++++-- mm/page_alloc.c | 26 +++++++++++++++++++------- 3 files changed, 24 insertions(+), 10 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index 1433e3a6fdd0..cbe745670c6e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -355,7 +355,7 @@ extern int __isolate_free_page(struct page *page, unsigned int order); extern void __putback_isolated_page(struct page *page, unsigned int order, int mt); extern void memblock_free_pages(struct page *page, unsigned long pfn, - unsigned int order); + unsigned int *order); extern void __free_pages_core(struct page *page, unsigned int order); extern void prep_compound_page(struct page *page, unsigned int order); extern void post_alloc_hook(struct page *page, unsigned int order, diff --git a/mm/memblock.c b/mm/memblock.c index d1525463c05e..dc2ce6df8fe3 100644 --- a/mm/memblock.c +++ b/mm/memblock.c @@ -1640,7 +1640,9 @@ void __init memblock_free_late(phys_addr_t base, phys_addr_t size) end = PFN_DOWN(base + size); for (; cursor < end; cursor++) { - memblock_free_pages(pfn_to_page(cursor), cursor, 0); + unsigned int order = 0; + + memblock_free_pages(pfn_to_page(cursor), cursor, &order); totalram_pages_inc(); } } @@ -2035,7 +2037,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end) while (start + (1UL << order) > end) order--; - memblock_free_pages(pfn_to_page(start), start, order); + memblock_free_pages(pfn_to_page(start), start, &order); start += (1UL << order); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 07ad8074950f..3f3af7cd5164 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -463,13 +463,19 @@ static inline bool deferred_pages_enabled(void) } /* Returns true if the struct page for the pfn is uninitialised */ -static inline bool __meminit early_page_uninitialised(unsigned long pfn) +static inline bool __meminit early_page_uninitialised(unsigned long pfn, unsigned int *order) { int nid = early_pfn_to_nid(pfn); if (node_online(nid) && pfn >= NODE_DATA(nid)->first_deferred_pfn) return true; + /* clamp down order to not exceed first_deferred_pfn */ + if (order) + *order = min_t(unsigned int, + *order, + ilog2(NODE_DATA(nid)->first_deferred_pfn - pfn)); + return false; } @@ -515,7 +521,7 @@ static inline bool deferred_pages_enabled(void) return false; } -static inline bool early_page_uninitialised(unsigned long pfn) +static inline bool early_page_uninitialised(unsigned long pfn, unsigned int *order) { return false; } @@ -1644,7 +1650,7 @@ static void __meminit init_reserved_page(unsigned long pfn) pg_data_t *pgdat; int nid, zid; - if (!early_page_uninitialised(pfn)) + if (!early_page_uninitialised(pfn, NULL)) return; nid = early_pfn_to_nid(pfn); @@ -1800,11 +1806,11 @@ int __meminit early_pfn_to_nid(unsigned long pfn) #endif /* CONFIG_NUMA */ void __init memblock_free_pages(struct page *page, unsigned long pfn, - unsigned int order) + unsigned int *order) { - if (early_page_uninitialised(pfn)) + if (early_page_uninitialised(pfn, order)) return; - __free_pages_core(page, order); + __free_pages_core(page, *order); } /* @@ -2030,7 +2036,13 @@ static unsigned long __init deferred_init_maxorder(u64 *i, struct zone *zone, unsigned long *start_pfn, unsigned long *end_pfn) { - unsigned long mo_pfn = ALIGN(*start_pfn + 1, MAX_ORDER_NR_PAGES); + /* + * deferred_init_memmap_chunk gives out jobs with max size to + * PAGES_PER_SECTION. Do not align mo_pfn beyond that. 
+ */ + unsigned long align = min_t(unsigned long, + MAX_ORDER_NR_PAGES, PAGES_PER_SECTION); + unsigned long mo_pfn = ALIGN(*start_pfn + 1, align); unsigned long spfn = *start_pfn, epfn = *end_pfn; unsigned long nr_pages = 0; u64 j = *i; From patchwork Thu Aug 11 23:16:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zi Yan X-Patchwork-Id: 12941799 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0BDB4C25B06 for ; Thu, 11 Aug 2022 23:16:56 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2A4768E0002; Thu, 11 Aug 2022 19:16:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1C0CA8E0006; Thu, 11 Aug 2022 19:16:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CE8818E0002; Thu, 11 Aug 2022 19:16:49 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id B1D8D8E0003 for ; Thu, 11 Aug 2022 19:16:49 -0400 (EDT) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 88E95141921 for ; Thu, 11 Aug 2022 23:16:49 +0000 (UTC) X-FDA: 79788873738.27.866C3B0 Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com [66.111.4.28]) by imf24.hostedemail.com (Postfix) with ESMTP id 0321918006D for ; Thu, 11 Aug 2022 23:16:48 +0000 (UTC) Received: from compute2.internal (compute2.nyi.internal [10.202.2.46]) by mailout.nyi.internal (Postfix) with ESMTP id A85585C016A; Thu, 11 Aug 2022 19:16:48 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute2.internal (MEProxy); Thu, 11 Aug 2022 19:16:48 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sent.com; h=cc :cc:content-transfer-encoding:date:date:from:from:in-reply-to :in-reply-to:message-id:mime-version:references:reply-to :reply-to:sender:subject:subject:to:to; s=fm1; t=1660259808; x= 1660346208; bh=h7KvVoFgGWP9FjrKQ1xzyC233beztE1gWDCxy87SIdw=; b=M S7Yo7dRKmxYgBaUAIPZt6K1dzK+mkREVRbiFyc33eCtEKxHMAai/dK4sMQgi/KjQ aSuIPaMDagxyUtNK0nqGqkrtzmvgCvNUYQ+Ty5dssjYJzuqJ149DqvxG02cJ4vFr uqSAmpried/ZVC7Mu+yt6ogLuOjAauEh2X9vk/PkStbTcwMh2bYhoq1RMYAnXKgs xzMuYU7VzOiiGlNASVJ+xpKEyegvuvVK1o4Hbcy4yJWJCGMFacbYEANSgS58xiSp f1cKkCwWfl7XeViuc9c6zjUAxfqThqGARJngxcJ85V8LZNEZ1IFEqKmQyYj4t5V9 1lNqLh6HH5Dozbu1ZuOIg== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-transfer-encoding:date:date :feedback-id:feedback-id:from:from:in-reply-to:in-reply-to :message-id:mime-version:references:reply-to:reply-to:sender :subject:subject:to:to:x-me-proxy:x-me-proxy:x-me-sender :x-me-sender:x-sasl-enc; s=fm1; t=1660259808; x=1660346208; bh=h 7KvVoFgGWP9FjrKQ1xzyC233beztE1gWDCxy87SIdw=; b=zFmpwpxtxD4EJ/NJ4 iy8+P15+cNAjsKn2vaINhpPO/H5BsFI5QldjHbAA4r7RcpFo88NQrDPt9JVxI5h9 T+yynq60tigfn/M7vXfcCX+u+ps4+SOcoQMs55ZL9+K+qTbn0pCrKDkRamV2pFvK I6eqhqGfcUY7/dQFj1AXb9NAva2zrAfgA47oVmC9IKitjHBje0i9zWH579hb6wkz G9WSzKhMfKrMoJ1GMIbIm332W1nv1cnk0WDtz9NBHVRegRgsaBf+fFbYG5iIC3bM mDepKmzGm06FfWMu7vMFbGo4V6UU/y2FEi4o8y7a8GlO2ZmFKe3aSeMUsfsjKRCV uAALA== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrvdeghedgvddtucetufdoteggodetrfdotf 
From patchwork Thu Aug 11 23:16:36 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941799
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 05/12] mm: prevent pageblock size being larger than section size.
Date: Thu, 11 Aug 2022 19:16:36 -0400
Message-Id: <20220811231643.1012912-6-zi.yan@sent.com>
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>

From: Zi Yan

Only physical pages within a single section are guaranteed to be contiguous, and a pageblock, by design, can only group contiguous physical pages. Set pageblock_order so that a pageblock can never extend beyond section size.
Signed-off-by: Zi Yan
Cc: Wei Yang
Cc: Vlastimil Babka
Cc: "Matthew Wilcox (Oracle)"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
include/linux/pageblock-flags.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h index 358b871b07ca..2679b2b4c079 100644 --- a/include/linux/pageblock-flags.h +++ b/include/linux/pageblock-flags.h @@ -47,8 +47,11 @@ extern unsigned int pageblock_order; #else /* CONFIG_HUGETLB_PAGE */ -/* If huge pages are not used, group by MAX_ORDER_NR_PAGES */ -#define pageblock_order MAX_ORDER +/* + * If huge pages are not used, group by MAX_ORDER_NR_PAGES or + * PAGES_PER_SECTION when MAX_ORDER_NR_PAGES is larger. + */ +#define pageblock_order (min(PFN_SECTION_SHIFT, MAX_ORDER)) #endif /* CONFIG_HUGETLB_PAGE */
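As a back-of-the-envelope check of the new definition, the sketch below computes pageblock_order for a few MAX_ORDER values under an assumed x86_64-like layout (PAGE_SHIFT = 12, SECTION_SIZE_BITS = 27); these constants are illustrative, not taken from the patch.

#include <stdio.h>

#define PAGE_SHIFT		12
#define SECTION_SIZE_BITS	27
#define PFN_SECTION_SHIFT	(SECTION_SIZE_BITS - PAGE_SHIFT)	/* 15 */

static unsigned int pageblock_order_for(unsigned int max_order)
{
	/* mirrors: #define pageblock_order (min(PFN_SECTION_SHIFT, MAX_ORDER)) */
	return max_order < PFN_SECTION_SHIFT ? max_order : PFN_SECTION_SHIFT;
}

int main(void)
{
	unsigned int orders[] = { 10, 15, 20 };	/* candidate MAX_ORDER values */

	for (unsigned int i = 0; i < 3; i++) {
		unsigned int po = pageblock_order_for(orders[i]);

		printf("MAX_ORDER=%2u -> pageblock_order=%2u (%lu pages, section holds %lu)\n",
		       orders[i], po, 1UL << po, 1UL << PFN_SECTION_SHIFT);
	}
	return 0;
}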
From patchwork Thu Aug 11 23:16:37 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941801
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 06/12] fs: proc: use pageblock_nr_pages for reschedule period in read_kcore()
Date: Thu, 11 Aug 2022 19:16:37 -0400
Message-Id: <20220811231643.1012912-7-zi.yan@sent.com>
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>
From: Zi Yan

MAX_ORDER_NR_PAGES can be increased when it becomes a boot time parameter in later commits. To make sure read_kcore() reschedules its work at a constant period, use pageblock_nr_pages as the reschedule period instead, since pageblock_nr_pages is a constant and is either the same as or half of MAX_ORDER_NR_PAGES.

Signed-off-by: Zi Yan
Cc: Mike Rapoport
Cc: David Hildenbrand
Cc: Oscar Salvador
Cc: Ying Chen
Cc: Feng Zhou
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: David Hildenbrand
---
fs/proc/kcore.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c index dff921f7ca33..7dc09d211b48 100644 --- a/fs/proc/kcore.c +++ b/fs/proc/kcore.c @@ -491,7 +491,7 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos) } } - if (page_offline_frozen++ % MAX_ORDER_NR_PAGES == 0) { + if (page_offline_frozen++ % pageblock_nr_pages == 0) { page_offline_thaw(); cond_resched(); page_offline_freeze();
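The change keeps the familiar "yield every N iterations" pattern, only with a period that no longer grows with MAX_ORDER. A minimal userspace sketch of that pattern, with sched_yield() standing in for cond_resched() and 512 standing in for pageblock_nr_pages (an assumed value):

#include <stdio.h>
#include <sched.h>

#define RESCHED_PERIOD (1UL << 9)	/* stand-in for pageblock_nr_pages */

static void do_unit_of_work(unsigned long i) { (void)i; /* e.g. copy one page */ }

int main(void)
{
	for (unsigned long i = 0; i < 4 * RESCHED_PERIOD; i++) {
		do_unit_of_work(i);
		if (i % RESCHED_PERIOD == 0) {
			sched_yield();	/* fixed-period yield point */
			printf("yielded at iteration %lu\n", i);
		}
	}
	return 0;
}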
From patchwork Thu Aug 11 23:16:38 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941802
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 07/12] virtio: virtio_balloon: use pageblock_order instead of MAX_ORDER
Date: Thu, 11 Aug 2022 19:16:38 -0400
Message-Id: <20220811231643.1012912-8-zi.yan@sent.com>
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>
From: Zi Yan

virtio_balloon used MAX_ORDER to report free page blocks to the host. As MAX_ORDER becomes modifiable in later commits, the reported free page block size might become too big. pageblock_order is currently either half of or the same as MAX_ORDER. Use pageblock_order instead so that virtio_balloon keeps a constant free page block report size when MAX_ORDER is changed in later commits.

Signed-off-by: Zi Yan
Cc: "Michael S. Tsirkin"
Cc: David Hildenbrand
Cc: Jason Wang
Cc: virtualization@lists.linux-foundation.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
drivers/virtio/virtio_balloon.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 5b15936a5214..51447737538b 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -33,7 +33,7 @@ #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \ __GFP_NOMEMALLOC) /* The order of free page blocks to report to host */ -#define VIRTIO_BALLOON_HINT_BLOCK_ORDER MAX_ORDER +#define VIRTIO_BALLOON_HINT_BLOCK_ORDER pageblock_order /* The size of a free page block in bytes */ #define VIRTIO_BALLOON_HINT_BLOCK_BYTES \ (1 << (VIRTIO_BALLOON_HINT_BLOCK_ORDER + PAGE_SHIFT))
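To see what the swap means in bytes, the sketch below evaluates the VIRTIO_BALLOON_HINT_BLOCK_BYTES formula for both orders, assuming 4 KiB pages, pageblock_order = 9 and MAX_ORDER = 10 (illustrative values, not taken from a specific config):

#include <stdio.h>

#define PAGE_SHIFT 12

static unsigned long hint_block_bytes(unsigned int order)
{
	/* same shape as: 1 << (VIRTIO_BALLOON_HINT_BLOCK_ORDER + PAGE_SHIFT) */
	return 1UL << (order + PAGE_SHIFT);
}

int main(void)
{
	printf("MAX_ORDER=10      -> %lu MiB per reported block\n",
	       hint_block_bytes(10) >> 20);	/* 4 MiB, grows if MAX_ORDER grows */
	printf("pageblock_order=9 -> %lu MiB per reported block\n",
	       hint_block_bytes(9) >> 20);	/* 2 MiB, stays fixed */
	return 0;
}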
From patchwork Thu Aug 11 23:16:39 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941807
Shutemov" , Mike Kravetz , John Hubbard , Yang Shi , David Rientjes , James Houghton , Mike Rapoport , linux-kernel@vger.kernel.org Subject: [RFC PATCH v2 08/12] mm/page_reporting: set page_reporting_order to -1 to prevent it running Date: Thu, 11 Aug 2022 19:16:39 -0400 Message-Id: <20220811231643.1012912-9-zi.yan@sent.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com> References: <20220811231643.1012912-1-zi.yan@sent.com> Reply-To: Zi Yan MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1660259813; a=rsa-sha256; cv=none; b=4t71/+aOre4b0HAIdOI2fBPQh+59pQ4dOBN9Ft5tXfi7IaeohzZSpZ7NBGWr4QGLE9u6RY UlBTD+TFFuu9rN47tU7ssF5cmkzM50quUlzp6L5+ApjBIEltPiFCOR6lnvpLibK1eVB594 VTAKPUKHbr8UXjlHQGTcUvqtaG97KO0= ARC-Authentication-Results: i=1; imf28.hostedemail.com; dkim=pass header.d=sent.com header.s=fm1 header.b="I xburOz"; dkim=pass header.d=messagingengine.com header.s=fm1 header.b=exTAiZiE; spf=pass (imf28.hostedemail.com: domain of zi.yan@sent.com designates 66.111.4.28 as permitted sender) smtp.mailfrom=zi.yan@sent.com; dmarc=pass (policy=none) header.from=sent.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1660259813; h=from:from:sender:reply-to:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=uhHjqO7fMake/4Yw6e31S9FQXGCwAM7iMxFEnUEtZO8=; b=kly1PIb/btx/T1ULleL0+aocMYs+knF1GRxiLIy2CD1Egip33PelHM7ohahOXP7XuYCjvN J2e2A1f4+dcpL/pz7555dCGm176dlRGPV633Imu4JfhQSAqymyvBXZ73Khapq5KvxaqqtI 6B6wdzC/iNZwhH3lysrbsQKsoADlB/w= X-Stat-Signature: wix6em6odjo5nhpt5ko1hccfwftpic8p X-Rspamd-Queue-Id: CC042C016F X-Rspam-User: X-Rspamd-Server: rspam03 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=sent.com header.s=fm1 header.b="I xburOz"; dkim=pass header.d=messagingengine.com header.s=fm1 header.b=exTAiZiE; spf=pass (imf28.hostedemail.com: domain of zi.yan@sent.com designates 66.111.4.28 as permitted sender) smtp.mailfrom=zi.yan@sent.com; dmarc=pass (policy=none) header.from=sent.com X-HE-Tag: 1660259813-522277 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Zi Yan page_reporting_order was initialized to MAX_ORDER to prevent it running before its value is overwritten. Use -1 instead to remove the dependency on MAX_ORDER. 
Signed-off-by: Zi Yan
Cc: David Hildenbrand
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
mm/page_reporting.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/page_reporting.c b/mm/page_reporting.c index b48d6ad82998..001438f3dbeb 100644 --- a/mm/page_reporting.c +++ b/mm/page_reporting.c @@ -11,7 +11,11 @@ #include "page_reporting.h" #include "internal.h" -unsigned int page_reporting_order = MAX_PHYS_CONTIG_ORDER + 1; +/* + * Set page_reporting_order to (unsigned int)-1 to prevent it running until the + * value is being overwritten + */ +unsigned int page_reporting_order = (unsigned int)-1; module_param(page_reporting_order, uint, 0644); MODULE_PARM_DESC(page_reporting_order, "Set page reporting order");
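The sentinel works because no page order can ever reach (unsigned int)-1, i.e. UINT_MAX, so an "order >= page_reporting_order" style check stays false until the driver writes a real order. A small standalone sketch follows; the threshold check is paraphrased for the demo, not copied from page_reporting.c.

#include <stdio.h>
#include <limits.h>

static unsigned int page_reporting_order = (unsigned int)-1;	/* "off" sentinel */

static int order_is_reported(unsigned int order)
{
	return order >= page_reporting_order;
}

int main(void)
{
	printf("(unsigned int)-1 == UINT_MAX: %d\n", page_reporting_order == UINT_MAX);
	printf("order 10 reported before init? %d\n", order_is_reported(10));

	page_reporting_order = 9;	/* e.g. a driver later picks pageblock_order */
	printf("order 10 reported after init?  %d\n", order_is_reported(10));
	return 0;
}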
From patchwork Thu Aug 11 23:16:40 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941803
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 09/12] mm: Make MAX_ORDER of buddy allocator configurable via Kconfig SET_MAX_ORDER.
Date: Thu, 11 Aug 2022 19:16:40 -0400
Message-Id: <20220811231643.1012912-10-zi.yan@sent.com>
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>
Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Zi Yan With SPARSEMEM_VMEMMAP, all struct page are virtually contigous, thus kernel can manipulate arbitrarily large pages. By checking PFN validity during buddy page merging process, all free pages in buddy allocator's free area have their PFNs contiguous even if the system has several not physically contiguous memory sections. With these two conditions, it is OK to remove the restriction of MAX_ORDER + PAGE_SHIFT < SECTION_SIZE_BITS and change MAX_ORDER freely. Add SET_MAX_ORDER to allow MAX_ORDER adjustment when arch does not set its own MAX_ORDER via ARCH_FORCE_MAX_ORDER. Make it depend on SPARSEMEM_VMEMMAP, when MAX_ORDER is not limited by SECTION_SIZE_BITS. Signed-off-by: Zi Yan Cc: Kees Cook Cc: Peter Zijlstra Cc: Nicholas Piggin Cc: Thomas Gleixner Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org --- arch/Kconfig | 4 ++++ include/linux/mmzone.h | 17 ++++++++++++++--- mm/Kconfig | 14 ++++++++++++++ 3 files changed, 32 insertions(+), 3 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index f330410da63a..24baee6c3feb 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -11,6 +11,10 @@ source "arch/$(SRCARCH)/Kconfig" menu "General architecture-dependent options" +config ARCH_FORCE_MAX_ORDER + int + default "0" + config CRASH_CORE bool diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index e93faa3d7f1d..b83b481e250b 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -24,11 +24,14 @@ #include /* Free memory management - zoned buddy allocator. */ -#ifndef CONFIG_ARCH_FORCE_MAX_ORDER -#define MAX_ORDER 10 -#else +#ifdef CONFIG_SET_MAX_ORDER +#define MAX_ORDER CONFIG_SET_MAX_ORDER +#elif CONFIG_ARCH_FORCE_MAX_ORDER != 0 #define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER +#else +#define MAX_ORDER 10 #endif + #define MAX_ORDER_NR_PAGES (1 << MAX_ORDER) /* @@ -1379,9 +1382,17 @@ static inline bool movable_only_nodes(nodemask_t *nodes) #define SECTION_BLOCKFLAGS_BITS \ ((1UL << (PFN_SECTION_SHIFT - pageblock_order)) * NR_PAGEBLOCK_BITS) +/* + * The MAX_ORDER check is not necessary when CONFIG_SET_MAX_ORDER is set, since + * it depends on CONFIG_SPARSEMEM_VMEMMAP, where all struct page are virtually + * contiguous, thus > section size pages can be allocated and manipulated + * without worrying about non-contiguous struct page. + */ +#ifndef CONFIG_SET_MAX_ORDER #if (MAX_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS #error Allocator MAX_ORDER exceeds SECTION_SIZE #endif +#endif /* CONFIG_SET_MAX_ORDER*/ static inline unsigned long pfn_to_section_nr(unsigned long pfn) { diff --git a/mm/Kconfig b/mm/Kconfig index bbe31e85afee..e558f5679707 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -441,6 +441,20 @@ config SPARSEMEM_VMEMMAP pfn_to_page and page_to_pfn operations. This is the most efficient option when sufficient kernel resources are available. +config SET_MAX_ORDER + int "Set maximum order of buddy allocator" + depends on SPARSEMEM_VMEMMAP && (ARCH_FORCE_MAX_ORDER = 0) + range 10 255 + default "10" + help + The kernel memory allocator divides physically contiguous memory + blocks into "zones", where each zone is a power of two number of + pages. This option selects the largest power of two that the kernel + keeps in the memory allocator. If you need to allocate very large + blocks of physically contiguous memory, then you may need to + increase this value. A value of 10 means that the largest free memory + block is 2^10 pages. 
+ config HAVE_MEMBLOCK_PHYS_MAP bool
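The selection order introduced here is: CONFIG_SET_MAX_ORDER if set, otherwise a non-zero CONFIG_ARCH_FORCE_MAX_ORDER, otherwise 10. The standalone sketch below mimics that precedence with plain macros (a value of 0 plays the role of "not set", and PAGE_SHIFT = 12 is an assumption) and prints the resulting largest buddy block:

#include <stdio.h>

#define PAGE_SHIFT 12

#define CONFIG_SET_MAX_ORDER 0		/* 0 means "not set" in this demo */
#define CONFIG_ARCH_FORCE_MAX_ORDER 0

#if CONFIG_SET_MAX_ORDER != 0
#define MAX_ORDER CONFIG_SET_MAX_ORDER
#elif CONFIG_ARCH_FORCE_MAX_ORDER != 0
#define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
#else
#define MAX_ORDER 10
#endif

#define MAX_ORDER_NR_PAGES (1UL << MAX_ORDER)

int main(void)
{
	printf("MAX_ORDER=%d -> largest buddy block: %lu pages = %lu MiB\n",
	       MAX_ORDER, MAX_ORDER_NR_PAGES,
	       (MAX_ORDER_NR_PAGES << PAGE_SHIFT) >> 20);
	return 0;
}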
From patchwork Thu Aug 11 23:16:41 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941804
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 10/12] mm: convert MAX_ORDER sized static arrays to dynamic ones.
Date: Thu, 11 Aug 2022 19:16:41 -0400
Message-Id: <20220811231643.1012912-11-zi.yan@sent.com>
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>

From: Zi Yan

This prepares for the upcoming changes that make MAX_ORDER a boot time parameter instead of a compile time constant. All static arrays sized by MAX_ORDER are converted to pointers, and their memory is allocated at runtime. The free_area array in struct zone is allocated with memblock_alloc_node() at boot time and with kzalloc() when memory is hot-added.
Signed-off-by: Zi Yan Cc: Dave Young Cc: Jonathan Corbet Cc: Christian Koenig Cc: David Airlie Cc: kexec@lists.infradead.org Cc: linux-doc@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org --- .../admin-guide/kdump/vmcoreinfo.rst | 2 +- drivers/gpu/drm/ttm/ttm_device.c | 7 ++- drivers/gpu/drm/ttm/ttm_pool.c | 58 +++++++++++++++++-- include/drm/ttm/ttm_pool.h | 4 +- include/linux/mmzone.h | 2 +- mm/page_alloc.c | 32 ++++++++-- 6 files changed, 87 insertions(+), 18 deletions(-) diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst index c572b5230fe0..a775462aa7c7 100644 --- a/Documentation/admin-guide/kdump/vmcoreinfo.rst +++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst @@ -172,7 +172,7 @@ variables. Offset of the free_list's member. This value is used to compute the number of free pages. -Each zone has a free_area structure array called free_area[MAX_ORDER + 1]. +Each zone has a free_area structure array called free_area with length of MAX_ORDER + 1. The free_list represents a linked list of free page blocks. (list_head, next|prev) diff --git a/drivers/gpu/drm/ttm/ttm_device.c b/drivers/gpu/drm/ttm/ttm_device.c index e7147e304637..442a77bb5b4f 100644 --- a/drivers/gpu/drm/ttm/ttm_device.c +++ b/drivers/gpu/drm/ttm/ttm_device.c @@ -92,7 +92,9 @@ static int ttm_global_init(void) >> PAGE_SHIFT; num_dma32 = min(num_dma32, 2UL << (30 - PAGE_SHIFT)); - ttm_pool_mgr_init(num_pages); + ret = ttm_pool_mgr_init(num_pages); + if (ret) + goto out; ttm_tt_mgr_init(num_pages, num_dma32); glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); @@ -218,7 +220,8 @@ int ttm_device_init(struct ttm_device *bdev, struct ttm_device_funcs *funcs, bdev->funcs = funcs; ttm_sys_man_init(bdev); - ttm_pool_init(&bdev->pool, dev, use_dma_alloc, use_dma32); + if (ttm_pool_init(&bdev->pool, dev, use_dma_alloc, use_dma32)) + return -ENOMEM; bdev->vma_manager = vma_manager; INIT_DELAYED_WORK(&bdev->wq, ttm_device_delayed_workqueue); diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c index 85d19f425af6..d76f7d476421 100644 --- a/drivers/gpu/drm/ttm/ttm_pool.c +++ b/drivers/gpu/drm/ttm/ttm_pool.c @@ -64,11 +64,11 @@ module_param(page_pool_size, ulong, 0644); static atomic_long_t allocated_pages; -static struct ttm_pool_type global_write_combined[MAX_ORDER + 1]; -static struct ttm_pool_type global_uncached[MAX_ORDER + 1]; +static struct ttm_pool_type *global_write_combined; +static struct ttm_pool_type *global_uncached; -static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER + 1]; -static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1]; +static struct ttm_pool_type *global_dma32_write_combined; +static struct ttm_pool_type *global_dma32_uncached; static spinlock_t shrinker_lock; static struct list_head shrinker_list; @@ -493,8 +493,10 @@ EXPORT_SYMBOL(ttm_pool_free); * @use_dma32: true if GFP_DMA32 should be used * * Initialize the pool and its pool types. 
+ * + * Returns: 0 on successe, negative error code otherwise */ -void ttm_pool_init(struct ttm_pool *pool, struct device *dev, +int ttm_pool_init(struct ttm_pool *pool, struct device *dev, bool use_dma_alloc, bool use_dma32) { unsigned int i, j; @@ -506,11 +508,30 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev, pool->use_dma32 = use_dma32; if (use_dma_alloc) { - for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) + for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { + pool->caching[i].orders = + kvcalloc(MAX_ORDER + 1, sizeof(struct ttm_pool_type), + GFP_KERNEL); + if (!pool->caching[i].orders) { + i--; + goto failed; + } for (j = 0; j <= MAX_ORDER; ++j) ttm_pool_type_init(&pool->caching[i].orders[j], pool, i, j); + + } + return 0; + +failed: + for (; i >= 0; i--) { + for (j = 0; j <= MAX_ORDER; ++j) + ttm_pool_type_fini(&pool->caching[i].orders[j]); + kfree(pool->caching[i].orders); + } + return -ENOMEM; } + return 0; } /** @@ -701,6 +722,31 @@ int ttm_pool_mgr_init(unsigned long num_pages) spin_lock_init(&shrinker_lock); INIT_LIST_HEAD(&shrinker_list); + if (!global_write_combined) { + global_write_combined = kvcalloc(MAX_ORDER + 1, sizeof(struct ttm_pool_type), + GFP_KERNEL); + if (!global_write_combined) + return -ENOMEM; + } + if (!global_uncached) { + global_uncached = kvcalloc(MAX_ORDER + 1, sizeof(struct ttm_pool_type), + GFP_KERNEL); + if (!global_uncached) + return -ENOMEM; + } + if (!global_dma32_write_combined) { + global_dma32_write_combined = kvcalloc(MAX_ORDER + 1, sizeof(struct ttm_pool_type), + GFP_KERNEL); + if (!global_dma32_write_combined) + return -ENOMEM; + } + if (!global_dma32_uncached) { + global_dma32_uncached = kvcalloc(MAX_ORDER + 1, sizeof(struct ttm_pool_type), + GFP_KERNEL); + if (!global_dma32_uncached) + return -ENOMEM; + } + for (i = 0; i <= MAX_ORDER; ++i) { ttm_pool_type_init(&global_write_combined[i], NULL, ttm_write_combined, i); diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h index 8ce14f9d202a..f5ce60f629ae 100644 --- a/include/drm/ttm/ttm_pool.h +++ b/include/drm/ttm/ttm_pool.h @@ -72,7 +72,7 @@ struct ttm_pool { bool use_dma32; struct { - struct ttm_pool_type orders[MAX_ORDER + 1]; + struct ttm_pool_type *orders; } caching[TTM_NUM_CACHING_TYPES]; }; @@ -80,7 +80,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, struct ttm_operation_ctx *ctx); void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt); -void ttm_pool_init(struct ttm_pool *pool, struct device *dev, +int ttm_pool_init(struct ttm_pool *pool, struct device *dev, bool use_dma_alloc, bool use_dma32); void ttm_pool_fini(struct ttm_pool *pool); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index b83b481e250b..60d8cce2aed8 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -635,7 +635,7 @@ struct zone { ZONE_PADDING(_pad1_) /* free areas of different sizes */ - struct free_area free_area[MAX_ORDER + 1]; + struct free_area *free_area; /* zone flags, see below */ unsigned long flags; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 3f3af7cd5164..941a94bb8cf0 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -6195,11 +6195,21 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) for_each_populated_zone(zone) { unsigned int order; - unsigned long nr[MAX_ORDER + 1], flags, total = 0; - unsigned char types[MAX_ORDER + 1]; + unsigned long *nr, flags, total = 0; + unsigned char *types; if (show_mem_node_skip(filter, zone_to_nid(zone), nodemask)) continue; + + nr = kmalloc_array(MAX_ORDER + 1, 
sizeof(unsigned long), GFP_KERNEL); + if (!nr) + break; + types = kmalloc_array(MAX_ORDER + 1, sizeof(unsigned char), GFP_KERNEL); + if (!types) { + kfree(nr); + break; + } + show_node(zone); printk(KERN_CONT "%s: ", zone->name); @@ -7649,8 +7659,8 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat) lruvec_init(&pgdat->__lruvec); } -static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid, - unsigned long remaining_pages) +static void __init zone_init_internals(struct zone *zone, enum zone_type idx, int nid, + unsigned long remaining_pages, bool hotplug) { atomic_long_set(&zone->managed_pages, remaining_pages); zone_set_nid(zone, nid); @@ -7659,6 +7669,16 @@ static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, spin_lock_init(&zone->lock); zone_seqlock_init(zone); zone_pcp_init(zone); + if (hotplug) + zone->free_area = + kcalloc_node(MAX_ORDER + 1, sizeof(struct free_area), + GFP_KERNEL, nid); + else + zone->free_area = + memblock_alloc_node(sizeof(struct free_area) * (MAX_ORDER + 1), + sizeof(struct free_area), nid); + BUG_ON(!zone->free_area); + } /* @@ -7697,7 +7717,7 @@ void __ref free_area_init_core_hotplug(struct pglist_data *pgdat) } for (z = 0; z < MAX_NR_ZONES; z++) - zone_init_internals(&pgdat->node_zones[z], z, nid, 0); + zone_init_internals(&pgdat->node_zones[z], z, nid, 0, true); } #endif @@ -7760,7 +7780,7 @@ static void __init free_area_init_core(struct pglist_data *pgdat) * when the bootmem allocator frees pages into the buddy system. * And all highmem pages will be managed by the buddy system. */ - zone_init_internals(zone, j, nid, freesize); + zone_init_internals(zone, j, nid, freesize, false); if (!size) continue;
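The core pattern of this patch shows up in zone_init_internals() above: once MAX_ORDER is only known at runtime, free_area can no longer be a fixed-size member and must be allocated per zone. A simplified userspace sketch of that pattern, with calloc() standing in for kcalloc_node()/memblock_alloc_node() and a heavily trimmed struct free_area:

#include <stdio.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

struct free_area {
	struct list_head free_list;	/* the real one has one list per migratetype */
	unsigned long nr_free;
};

struct zone {
	struct free_area *free_area;	/* was: struct free_area free_area[MAX_ORDER + 1] */
	unsigned int max_order;
};

static int zone_init_free_area(struct zone *zone, unsigned int max_order)
{
	zone->free_area = calloc(max_order + 1, sizeof(*zone->free_area));
	if (!zone->free_area)
		return -1;
	zone->max_order = max_order;
	for (unsigned int o = 0; o <= max_order; o++) {
		zone->free_area[o].free_list.next = &zone->free_area[o].free_list;
		zone->free_area[o].free_list.prev = &zone->free_area[o].free_list;
	}
	return 0;
}

int main(void)
{
	struct zone z;
	unsigned int boot_max_order = 16;	/* imagine a "max_order=16" boot parameter */

	if (zone_init_free_area(&z, boot_max_order))
		return 1;
	printf("allocated %u free_area slots at runtime\n", z.max_order + 1);
	free(z.free_area);
	return 0;
}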
From patchwork Thu Aug 11 23:16:42 2022
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941806
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 11/12] mm: introduce MIN_MAX_ORDER to replace MAX_ORDER as compile time constant.
Date: Thu, 11 Aug 2022 19:16:42 -0400
Message-Id: <20220811231643.1012912-12-zi.yan@sent.com>
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>

From: Zi Yan

For the remaining MAX_ORDER uses (described below), there is no need, or it is too much hassle, to convert the static arrays to dynamic ones. Add MIN_MAX_ORDER to serve as a compile time constant in place of MAX_ORDER.

The ARM64 hypervisor maintains its own free page list and does not import any core kernel symbols, so the soon-to-be runtime variable MAX_ORDER is not accessible in ARM64 hypervisor code. There is also no need to allocate very large pages there.

In SLAB/SLOB/SLUB, the 2-D array kmalloc_caches uses MAX_ORDER in its second dimension. It is too much hassle to allocate memory for kmalloc_caches before any proper memory allocator is set up.
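The reason a compile-time MIN_MAX_ORDER is still needed is that static array dimensions (kmalloc_caches, the hyp_pool free_area) must be integer constant expressions, which a boot-time variable cannot provide. A small illustration; the names and values here are invented for the demo.

#include <stdio.h>

#define MIN_MAX_ORDER 10			/* compile-time lower bound */
unsigned int max_order = MIN_MAX_ORDER;		/* may be raised at "boot" */

/* OK: sized by a constant, like free_area[MIN_MAX_ORDER + 1] in the patch. */
static struct { unsigned long nr_free; } static_free_area[MIN_MAX_ORDER + 1];

/* Not valid C at file scope (would not compile), shown for contrast:
 * static struct { unsigned long nr_free; } bad[max_order + 1];
 */

int main(void)
{
	max_order = 16;	/* pretend the boot parameter raised it */
	printf("static array keeps %zu slots; runtime max_order is %u\n",
	       sizeof(static_free_area) / sizeof(static_free_area[0]), max_order);
	return 0;
}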
Signed-off-by: Zi Yan
Cc: Marc Zyngier
Cc: Catalin Marinas
Cc: Christoph Lameter
Cc: Vlastimil Babka
Cc: Quentin Perret
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h | 2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c  | 2 +-
 include/linux/mmzone.h                | 3 +++
 include/linux/slab.h                  | 8 ++++----
 mm/slab.c                             | 2 +-
 mm/slub.c                             | 6 +++---
 6 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index fe5472a184a3..29b92f68ab69 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -16,7 +16,7 @@ struct hyp_pool {
	 * API at EL2.
	 */
	hyp_spinlock_t lock;
-	struct list_head free_area[MAX_ORDER + 1];
+	struct list_head free_area[MIN_MAX_ORDER + 1];
	phys_addr_t range_start;
	phys_addr_t range_end;
	unsigned short max_order;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index d40f0b30b534..7ebbac3e2e76 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -241,7 +241,7 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
	int i;

	hyp_spin_lock_init(&pool->lock);
-	pool->max_order = min(MAX_ORDER, get_order((nr_pages + 1) << PAGE_SHIFT));
+	pool->max_order = min(MIN_MAX_ORDER, get_order((nr_pages + 1) << PAGE_SHIFT));
	for (i = 0; i < pool->max_order; i++)
		INIT_LIST_HEAD(&pool->free_area[i]);
	pool->range_start = phys;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 60d8cce2aed8..b5774e4c2700 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -26,10 +26,13 @@
 /* Free memory management - zoned buddy allocator. */
 #ifdef CONFIG_SET_MAX_ORDER
 #define MAX_ORDER CONFIG_SET_MAX_ORDER
+#define MIN_MAX_ORDER CONFIG_SET_MAX_ORDER
 #elif CONFIG_ARCH_FORCE_MAX_ORDER != 0
 #define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
+#define MIN_MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
 #else
 #define MAX_ORDER 10
+#define MIN_MAX_ORDER MAX_ORDER
 #endif

 #define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 568b5dfb3bd9..e34b2c9bda09 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -251,8 +251,8 @@ static inline unsigned int arch_slab_minalign(void)
 * to do various tricks to work around compiler limitations in order to
 * ensure proper constant folding.
 */
-#define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT) <= 25 ? \
-				(MAX_ORDER + PAGE_SHIFT) : 25)
+#define KMALLOC_SHIFT_HIGH ((MIN_MAX_ORDER + PAGE_SHIFT) <= 25 ? \
+				(MIN_MAX_ORDER + PAGE_SHIFT) : 25)
 #define KMALLOC_SHIFT_MAX KMALLOC_SHIFT_HIGH
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW 5
@@ -265,7 +265,7 @@ static inline unsigned int arch_slab_minalign(void)
 * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
 */
 #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX (MIN_MAX_ORDER + PAGE_SHIFT)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW 3
 #endif
@@ -278,7 +278,7 @@ static inline unsigned int arch_slab_minalign(void)
 * be allocated from the same page.
 */
 #define KMALLOC_SHIFT_HIGH PAGE_SHIFT
-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX (MIN_MAX_ORDER + PAGE_SHIFT)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW 3
 #endif
diff --git a/mm/slab.c b/mm/slab.c
index 530f418a4930..23798c32bb38 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -466,7 +466,7 @@ static int __init slab_max_order_setup(char *str)
 {
	get_option(&str, &slab_max_order);
	slab_max_order = slab_max_order < 0 ? 0 :
-				min(slab_max_order, MAX_ORDER);
+				min(slab_max_order, MIN_MAX_ORDER);
	slab_max_order_set = true;

	return 1;
diff --git a/mm/slub.c b/mm/slub.c
index 5acf5407cbc6..940fe48ea298 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3876,8 +3876,8 @@ static inline int calculate_order(unsigned int size)
	/*
	 * Doh this slab cannot be placed using slub_max_order.
	 */
-	order = calc_slab_order(size, 1, MAX_ORDER, 1);
-	if (order <= MAX_ORDER)
+	order = calc_slab_order(size, 1, MIN_MAX_ORDER, 1);
+	if (order <= MIN_MAX_ORDER)
		return order;
	return -ENOSYS;
 }
@@ -4388,7 +4388,7 @@ __setup("slub_min_order=", setup_slub_min_order);
 static int __init setup_slub_max_order(char *str)
 {
	get_option(&str, (int *)&slub_max_order);
-	slub_max_order = min_t(unsigned int, slub_max_order, MAX_ORDER);
+	slub_max_order = min_t(unsigned int, slub_max_order, MIN_MAX_ORDER);

	return 1;
 }

From patchwork Thu Aug 11 23:16:43 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 12941805
From: Zi Yan
To: linux-mm@kvack.org
Cc: David Hildenbrand, Matthew Wilcox, Vlastimil Babka, "Kirill A. Shutemov", Mike Kravetz, John Hubbard, Yang Shi, David Rientjes, James Houghton, Mike Rapoport, linux-kernel@vger.kernel.org
Subject: [RFC PATCH v2 12/12] mm: make MAX_ORDER a kernel boot time parameter.
Date: Thu, 11 Aug 2022 19:16:43 -0400
Message-Id: <20220811231643.1012912-13-zi.yan@sent.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220811231643.1012912-1-zi.yan@sent.com>
References: <20220811231643.1012912-1-zi.yan@sent.com>
Reply-To: Zi Yan
MIME-Version: 1.0

From: Zi Yan

With the new buddy_alloc_max_order boot parameter, users can specify a larger MAX_ORDER than the one set by CONFIG_ARCH_FORCE_MAX_ORDER or CONFIG_SET_MAX_ORDER. It can be set to any value >= CONFIG_ARCH_FORCE_MAX_ORDER or CONFIG_SET_MAX_ORDER, but < 256 (a limit imposed by vmscan's struct scan_control and the per-cpu free page lists).

Signed-off-by: Zi Yan
Cc: Jonathan Corbet
Cc: "Paul E. McKenney"
Cc: Randy Dunlap
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: linux-doc@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 .../admin-guide/kernel-parameters.txt |  5 +++
 include/linux/mmzone.h                |  8 +++++
 mm/Kconfig                            | 13 +++++++
 mm/page_alloc.c                       | 34 ++++++++++++++++++-
 mm/vmscan.c                           |  1 -
 5 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index ec519225b671..0f71233ae396 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -494,6 +494,11 @@
	bttv.pll=	See Documentation/admin-guide/media/bttv.rst
	bttv.tuner=

+	buddy_alloc_max_order= [KNL] This parameter adjusts the size of the largest
+			pages that can be allocated from the kernel buddy allocator.
+			The largest page size is 2^buddy_alloc_max_order * PAGE_SIZE.
+			Format: integer
+
	bulk_remove=off	[PPC]  This parameter disables the use of the pSeries
			firmware feature for flushing multiple hpte entries
			at a time.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b5774e4c2700..90121d25d660 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -35,6 +35,14 @@
 #define MIN_MAX_ORDER MAX_ORDER
 #endif

+/* remap MAX_ORDER to buddy_alloc_max_order for boot time adjustment */
+#ifdef CONFIG_BOOT_TIME_MAX_ORDER
+/* Defined in mm/page_alloc.c */
+extern int buddy_alloc_max_order;
+#undef MAX_ORDER
+#define MAX_ORDER buddy_alloc_max_order
+#endif /* CONFIG_BOOT_TIME_MAX_ORDER */
+
 #define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)

 /*
diff --git a/mm/Kconfig b/mm/Kconfig
index e558f5679707..acccb919d72d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -455,6 +455,19 @@ config SET_MAX_ORDER
	  increase this value. A value of 10 means that the largest free memory
	  block is 2^10 pages.

+config BOOT_TIME_MAX_ORDER
+	bool "Set maximum order of buddy allocator at boot time"
+	depends on SPARSEMEM_VMEMMAP && (ARCH_FORCE_MAX_ORDER != 0 || SET_MAX_ORDER != 0)
+	help
+	  This option lets users set the maximum order of the buddy allocator at
+	  system boot time instead of relying on a static macro fixed at compile
+	  time. Systems with a lot of memory may want to allocate large pages,
+	  whereas that is much less feasible and desirable on systems with less
+	  memory. This option allows different systems to control the largest
+	  page they want to allocate. When the boot time parameter is not set,
+	  MAX_ORDER defaults to ARCH_FORCE_MAX_ORDER or SET_MAX_ORDER, whichever
+	  is non-zero. MAX_ORDER is currently limited to at most 256.
+
 config HAVE_MEMBLOCK_PHYS_MAP
	bool

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 941a94bb8cf0..4c4d68da1922 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1581,7 +1581,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
		order = pindex_to_order(pindex);
		nr_pages = 1 << order;
 - BUILD_BUG_ON(MAX_ORDER >= (1<= (1< MIN_MAX_ORDER && max_order <= S8_MAX && + max_order <= (1< S8_MAX);
	BUILD_BUG_ON(DEF_PRIORITY > S8_MAX);
	BUILD_BUG_ON(MAX_NR_ZONES > S8_MAX);
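For reference, a back-of-the-envelope illustration of what the new parameter controls. This is a standalone user-space sketch, not kernel code; the 4 KiB page size and the example value 16 are assumptions, and the only relationship it demonstrates is the one documented above: the largest buddy block is 2^buddy_alloc_max_order * PAGE_SIZE.

#include <stdio.h>

int main(void)
{
	unsigned int page_shift = 12;	/* assume 4 KiB base pages */
	unsigned int max_order = 16;	/* e.g. booting with buddy_alloc_max_order=16 */

	/* Largest block the buddy allocator could hand out: 2^max_order pages. */
	unsigned long long bytes = 1ULL << (max_order + page_shift);

	printf("order %u => %llu MiB per allocation\n", max_order, bytes >> 20);
	return 0;
}

With these assumed values the program prints "order 16 => 256 MiB per allocation", i.e. raising MAX_ORDER at boot enlarges the biggest contiguous block the buddy allocator can hand out, which is exactly the knob the patch exposes.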