From patchwork Mon Apr 22 09:44:34 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 13638190
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexandre Ghiti, Andrew Morton, Björn Töpel, Catalin Marinas,
	Christophe Leroy, "David S. Miller", Dinh Nguyen, Donald Dutile,
	Eric Chanudet, Heiko Carstens, Helge Deller, Huacai Chen,
	Kent Overstreet, Luis Chamberlain, Mark Rutland, Masami Hiramatsu,
	Michael Ellerman, Mike Rapoport, Nadav Amit, Palmer Dabbelt,
	Peter Zijlstra, Rick Edgecombe, Russell King, Sam Ravnborg,
	Song Liu, Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner,
	Will Deacon, bpf@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linux-mm@kvack.org, linux-modules@vger.kernel.org,
	linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev,
	netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH v5 13/15] powerpc: use CONFIG_EXECMEM instead of CONFIG_MODULES where appropriate
Date: Mon, 22 Apr 2024 12:44:34 +0300
Message-ID: <20240422094436.3625171-14-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240422094436.3625171-1-rppt@kernel.org>
References: <20240422094436.3625171-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)"

There are places where CONFIG_MODULES guards the code that depends on
memory allocation being done with module_alloc().

Replace CONFIG_MODULES with CONFIG_EXECMEM in such places.
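To illustrate the pattern being converted (a sketch only, not code from this
series; the helper name and includes are made up for the example): code that
used to check CONFIG_MODULES because module_alloc() was the only source of
executable memory in vmalloc space now checks CONFIG_EXECMEM, since with
EXECMEM such allocations can also serve kprobes, BPF and ftrace when module
support is disabled.

#include <linux/kconfig.h>
#include <linux/mm.h>

/* Illustrative sketch only; the helper name is hypothetical. */
static bool addr_is_vmalloc_exec(const void *addr)
{
	/* Without EXECMEM, no executable allocations live in vmalloc space. */
	if (!IS_ENABLED(CONFIG_EXECMEM))
		return false;

	return is_vmalloc_or_module_addr(addr);
}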
Signed-off-by: Mike Rapoport (IBM)
---
 arch/powerpc/Kconfig                 | 2 +-
 arch/powerpc/include/asm/kasan.h     | 2 +-
 arch/powerpc/kernel/head_8xx.S       | 4 ++--
 arch/powerpc/kernel/head_book3s_32.S | 6 +++---
 arch/powerpc/lib/code-patching.c     | 2 +-
 arch/powerpc/mm/book3s32/mmu.c       | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1c4be3373686..2e586733a464 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -285,7 +285,7 @@ config PPC
 	select IOMMU_HELPER			if PPC64
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
-	select KASAN_VMALLOC			if KASAN && MODULES
+	select KASAN_VMALLOC			if KASAN && EXECMEM
 	select LOCK_MM_AND_FIND_VMA
 	select MMU_GATHER_PAGE_SIZE
 	select MMU_GATHER_RCU_TABLE_FREE
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 365d2720097c..b5bbb94c51f6 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -19,7 +19,7 @@
 
 #define KASAN_SHADOW_SCALE_SHIFT	3
 
-#if defined(CONFIG_MODULES) && defined(CONFIG_PPC32)
+#if defined(CONFIG_EXECMEM) && defined(CONFIG_PPC32)
 #define KASAN_KERN_START	ALIGN_DOWN(PAGE_OFFSET - SZ_256M, SZ_256M)
 #else
 #define KASAN_KERN_START	PAGE_OFFSET
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 647b0b445e89..edc479a7c2bc 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -199,12 +199,12 @@ instruction_counter:
 	mfspr	r10, SPRN_SRR0	/* Get effective address of fault */
 	INVALIDATE_ADJACENT_PAGES_CPU15(r10, r11)
 	mtspr	SPRN_MD_EPN, r10
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	mfcr	r11
 	compare_to_kernel_boundary r10, r10
 #endif
 	mfspr	r10, SPRN_M_TWB	/* Get level 1 table */
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	blt+	3f
 	rlwinm	r10, r10, 0, 20, 31
 	oris	r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index c1d89764dd22..57196883a00e 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -419,14 +419,14 @@ InstructionTLBMiss:
 	 */
 	/* Get PTE (linux-style) and check access */
 	mfspr	r3,SPRN_IMISS
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	lis	r1, TASK_SIZE@h		/* check if kernel address */
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SDR1
 	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
 	rlwinm	r2, r2, 28, 0xfffff000
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	li	r0, 3
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
@@ -442,7 +442,7 @@ InstructionTLBMiss:
 	andc.	r1,r1,r2		/* check access & ~permission */
 	bne-	InstructionAddressInvalid /* return if access not permitted */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	rlwimi	r2, r0, 0, 31, 31	/* userspace ? -> PP lsb */
 #endif
 	ori	r1, r1, 0xe06		/* clear out reserved bits */
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index c6ab46156cda..7af791446ddf 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -225,7 +225,7 @@ void __init poking_init(void)
 
 static unsigned long get_patch_pfn(void *addr)
 {
-	if (IS_ENABLED(CONFIG_MODULES) && is_vmalloc_or_module_addr(addr))
+	if (IS_ENABLED(CONFIG_EXECMEM) && is_vmalloc_or_module_addr(addr))
 		return vmalloc_to_pfn(addr);
 	else
 		return __pa_symbol(addr) >> PAGE_SHIFT;
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index 100f999871bc..625fe7d08e06 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -184,7 +184,7 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
 
 static bool is_module_segment(unsigned long addr)
 {
-	if (!IS_ENABLED(CONFIG_MODULES))
+	if (!IS_ENABLED(CONFIG_EXECMEM))
 		return false;
 	if (addr < ALIGN_DOWN(MODULES_VADDR, SZ_256M))
 		return false;