From patchwork Sun May 5 16:06:26 2024
X-Patchwork-Submitter: Mike Rapoport <rppt@kernel.org>
X-Patchwork-Id: 13654476
From: Mike Rapoport <rppt@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Alexandre Ghiti, Andrew Morton, Björn Töpel, Catalin Marinas,
 Christophe Leroy, "David S. Miller", Dinh Nguyen, Donald Dutile,
 Eric Chanudet, Heiko Carstens, Helge Deller, Huacai Chen,
 Kent Overstreet, Liviu Dudau, Luis Chamberlain, Mark Rutland,
 Masami Hiramatsu, Michael Ellerman, Mike Rapoport, Nadav Amit,
 Palmer Dabbelt, Peter Zijlstra, Philippe Mathieu-Daudé,
 Rick Edgecombe, Russell King, Sam Ravnborg, Song Liu,
 Steven Rostedt, Thomas Bogendoerfer, Thomas Gleixner, Will Deacon,
 bpf@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
 linux-mm@kvack.org, linux-modules@vger.kernel.org,
 linux-parisc@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, loongarch@lists.linux.dev,
 netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org
Subject: [PATCH RESEND v8 14/16] powerpc: use CONFIG_EXECMEM instead of CONFIG_MODULES where appropriate
Date: Sun, 5 May 2024 19:06:26 +0300
Message-ID: <20240505160628.2323363-15-rppt@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240505160628.2323363-1-rppt@kernel.org>
References: <20240505160628.2323363-1-rppt@kernel.org>

From: "Mike Rapoport (IBM)" <rppt@kernel.org>

There are places where CONFIG_MODULES guards the code that depends on
memory allocation being done with module_alloc().

Replace CONFIG_MODULES with CONFIG_EXECMEM in such places.

Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 arch/powerpc/Kconfig                 | 2 +-
 arch/powerpc/include/asm/kasan.h     | 2 +-
 arch/powerpc/kernel/head_8xx.S       | 4 ++--
 arch/powerpc/kernel/head_book3s_32.S | 6 +++---
 arch/powerpc/lib/code-patching.c     | 2 +-
 arch/powerpc/mm/book3s32/mmu.c       | 2 +-
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1c4be3373686..2e586733a464 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -285,7 +285,7 @@ config PPC
 	select IOMMU_HELPER			if PPC64
 	select IRQ_DOMAIN
 	select IRQ_FORCED_THREADING
-	select KASAN_VMALLOC			if KASAN && MODULES
+	select KASAN_VMALLOC			if KASAN && EXECMEM
 	select LOCK_MM_AND_FIND_VMA
 	select MMU_GATHER_PAGE_SIZE
 	select MMU_GATHER_RCU_TABLE_FREE
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h
index 365d2720097c..b5bbb94c51f6 100644
--- a/arch/powerpc/include/asm/kasan.h
+++ b/arch/powerpc/include/asm/kasan.h
@@ -19,7 +19,7 @@
 
 #define KASAN_SHADOW_SCALE_SHIFT	3
 
-#if defined(CONFIG_MODULES) && defined(CONFIG_PPC32)
+#if defined(CONFIG_EXECMEM) && defined(CONFIG_PPC32)
 #define KASAN_KERN_START	ALIGN_DOWN(PAGE_OFFSET - SZ_256M, SZ_256M)
 #else
 #define KASAN_KERN_START	PAGE_OFFSET
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 647b0b445e89..edc479a7c2bc 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -199,12 +199,12 @@ instruction_counter:
 	mfspr	r10, SPRN_SRR0	/* Get effective address of fault */
 	INVALIDATE_ADJACENT_PAGES_CPU15(r10, r11)
 	mtspr	SPRN_MD_EPN, r10
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	mfcr	r11
 	compare_to_kernel_boundary r10, r10
 #endif
 	mfspr	r10, SPRN_M_TWB	/* Get level 1 table */
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	blt+	3f
 	rlwinm	r10, r10, 0, 20, 31
 	oris	r10, r10, (swapper_pg_dir - PAGE_OFFSET)@ha
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index c1d89764dd22..57196883a00e 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -419,14 +419,14 @@ InstructionTLBMiss:
 	 */
 	/* Get PTE (linux-style) and check access */
 	mfspr	r3,SPRN_IMISS
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	lis	r1, TASK_SIZE@h		/* check if kernel address */
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SDR1
 	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
 	rlwinm	r2, r2, 28, 0xfffff000
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	li	r0, 3
 	bgt-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
@@ -442,7 +442,7 @@ InstructionTLBMiss:
 	andc.	r1,r1,r2		/* check access & ~permission */
 	bne-	InstructionAddressInvalid /* return if access not permitted */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-#ifdef CONFIG_MODULES
+#ifdef CONFIG_EXECMEM
 	rlwimi	r2, r0, 0, 31, 31	/* userspace ? -> PP lsb */
 #endif
 	ori	r1, r1, 0xe06		/* clear out reserved bits */
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index c6ab46156cda..7af791446ddf 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -225,7 +225,7 @@ void __init poking_init(void)
 
 static unsigned long get_patch_pfn(void *addr)
 {
-	if (IS_ENABLED(CONFIG_MODULES) && is_vmalloc_or_module_addr(addr))
+	if (IS_ENABLED(CONFIG_EXECMEM) && is_vmalloc_or_module_addr(addr))
 		return vmalloc_to_pfn(addr);
 	else
 		return __pa_symbol(addr) >> PAGE_SHIFT;
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index 100f999871bc..625fe7d08e06 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -184,7 +184,7 @@ unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
 
 static bool is_module_segment(unsigned long addr)
 {
-	if (!IS_ENABLED(CONFIG_MODULES))
+	if (!IS_ENABLED(CONFIG_EXECMEM))
 		return false;
 	if (addr < ALIGN_DOWN(MODULES_VADDR, SZ_256M))
 		return false;
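
As a quick illustration of the conversion (not part of the patch itself): the
guards now key off CONFIG_EXECMEM because executable memory may be allocated
from the vmalloc range even in !CONFIG_MODULES builds, e.g. for BPF or
kprobes. A minimal sketch mirroring the code-patching.c hunk above, with
explanatory comments added:

	/*
	 * Patching targets are either core kernel text (covered by the
	 * direct map) or executable memory allocated from vmalloc space,
	 * so the vmalloc check depends on EXECMEM rather than MODULES.
	 */
	static unsigned long get_patch_pfn(void *addr)
	{
		if (IS_ENABLED(CONFIG_EXECMEM) && is_vmalloc_or_module_addr(addr))
			return vmalloc_to_pfn(addr);		/* vmalloc/execmem text */
		else
			return __pa_symbol(addr) >> PAGE_SHIFT;	/* core kernel text */
	}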