From patchwork Wed Jun 16 03:21:12 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 12323821
From: Andy Lutomirski
To: x86@kernel.org
Cc: Dave Hansen, LKML, linux-mm@kvack.org, Andrew Morton, Andy Lutomirski,
    Mathieu Desnoyers, Nicholas Piggin, Peter Zijlstra, Russell King,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 7/8] membarrier: Remove arm (32) support for SYNC_CORE
Date: Tue, 15 Jun 2021 20:21:12 -0700
Message-Id: <2142129092ff9aa00e600c42a26c4015b7f5ceec.1623813516.git.luto@kernel.org>
X-Mailer: git-send-email 2.31.1

On arm32, the only way to safely flush icache from usermode is to call
cacheflush(2).  This also handles any required pipeline flushes, so
membarrier's SYNC_CORE feature is useless on arm.  Remove it.

Cc: Mathieu Desnoyers
Cc: Nicholas Piggin
Cc: Peter Zijlstra
Cc: Russell King
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Andy Lutomirski
Acked-by: Russell King (Oracle)
---
 arch/arm/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 24804f11302d..89a885fba724 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -10,7 +10,6 @@ config ARM
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_KEEPINITRD
 	select ARCH_HAS_KCOV
-	select ARCH_HAS_MEMBARRIER_SYNC_CORE
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
 	select ARCH_HAS_PHYS_TO_DMA
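
For context, a minimal userspace sketch of the cacheflush(2) approach the
commit message refers to, assuming a GCC or Clang toolchain.  The
emit_code() helper and its buffer handling are invented for illustration
and are not taken from the patch; __builtin___clear_cache() is the portable
way to reach the cacheflush syscall on Linux/ARM:

	/*
	 * Copy freshly generated code into an executable mapping and make it
	 * safe to run: on arm32 the builtin expands to the cacheflush(2)
	 * syscall, which syncs the I/D caches and the pipeline.
	 */
	#include <stddef.h>
	#include <string.h>
	#include <sys/mman.h>

	typedef int (*jit_fn)(void);

	static jit_fn emit_code(const unsigned char *code, size_t len)
	{
		unsigned char *buf = mmap(NULL, len,
					  PROT_READ | PROT_WRITE | PROT_EXEC,
					  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return NULL;
		memcpy(buf, code, len);
		__builtin___clear_cache((char *)buf, (char *)buf + len);
		return (jit_fn)buf;
	}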
From patchwork Wed Jun 16 03:21:13 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 12323823
From: Andy Lutomirski
To: x86@kernel.org
Cc: Dave Hansen, LKML, linux-mm@kvack.org, Andrew Morton, Andy Lutomirski,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev@lists.ozlabs.org, Nicholas Piggin, Catalin Marinas,
    Will Deacon, linux-arm-kernel@lists.infradead.org, Mathieu Desnoyers,
    Peter Zijlstra, stable@vger.kernel.org
Subject: [PATCH 8/8] membarrier: Rewrite sync_core_before_usermode() and
 improve documentation
Date: Tue, 15 Jun 2021 20:21:13 -0700
Message-Id: <07a8b963002cb955b7516e61bad19514a3acaa82.1623813516.git.luto@kernel.org>
X-Mailer: git-send-email 2.31.1

The old sync_core_before_usermode() comments suggested that a
non-icache-syncing return-to-usermode instruction is x86-specific and that
all other architectures automatically notice cross-modified code on return
to userspace.

This is misleading.  The incantation needed to modify code from one CPU and
execute it on another CPU is highly architecture dependent.  On x86,
according to the SDM, one must modify the code, issue SFENCE if the
modification was WC or nontemporal, and then issue a "serializing
instruction" on the CPU that will execute the code.  membarrier() can do
the latter.

On arm64 and powerpc, one must flush the icache and then flush the pipeline
on the target CPU, although the CPU manuals don't necessarily use this
language.

So let's drop any pretense that we can have a generic way to define or
implement membarrier's SYNC_CORE operation and instead require all
architectures to define the helper and supply their own documentation as to
how to use it.  This means x86, arm64, and powerpc for now.  Let's also
rename the function from sync_core_before_usermode() to
membarrier_sync_core_before_usermode() because the precise flushing details
may very well be specific to membarrier, and even the concept of
"sync_core" in the kernel is mostly an x86-ism.
(It may well be the case that, on real x86 processors, synchronizing the
icache (which requires no action at all) and "flushing the pipeline" is
sufficient, but trying to use this language would be confusing at best.
LFENCE does something awfully like "flushing the pipeline", but the SDM does
not permit LFENCE as an alternative to a "serializing instruction" for this
purpose.)

Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-kernel@lists.infradead.org
Cc: Mathieu Desnoyers
Cc: Nicholas Piggin
Cc: Peter Zijlstra
Cc: x86@kernel.org
Cc: stable@vger.kernel.org
Fixes: 70216e18e519 ("membarrier: Provide core serializing command, *_SYNC_CORE")
Signed-off-by: Andy Lutomirski
Acked-by: Nicholas Piggin
Acked-by: Will Deacon
---
 .../membarrier-sync-core/arch-support.txt     | 68 ++++++-------
 arch/arm64/include/asm/sync_core.h            | 19 ++++++
 arch/powerpc/include/asm/sync_core.h          | 14 ++++
 arch/x86/Kconfig                              |  1 -
 arch/x86/include/asm/sync_core.h              |  7 +-
 arch/x86/kernel/alternative.c                 |  2 +-
 arch/x86/kernel/cpu/mce/core.c                |  2 +-
 arch/x86/mm/tlb.c                             |  3 +-
 drivers/misc/sgi-gru/grufault.c               |  2 +-
 drivers/misc/sgi-gru/gruhandles.c             |  2 +-
 drivers/misc/sgi-gru/grukservices.c           |  2 +-
 include/linux/sched/mm.h                      |  1 -
 include/linux/sync_core.h                     | 21 ------
 init/Kconfig                                  |  3 -
 kernel/sched/membarrier.c                     | 15 ++--
 15 files changed, 75 insertions(+), 87 deletions(-)
 create mode 100644 arch/arm64/include/asm/sync_core.h
 create mode 100644 arch/powerpc/include/asm/sync_core.h
 delete mode 100644 include/linux/sync_core.h

diff --git a/Documentation/features/sched/membarrier-sync-core/arch-support.txt b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
index 883d33b265d6..41c9ebcb275f 100644
--- a/Documentation/features/sched/membarrier-sync-core/arch-support.txt
+++ b/Documentation/features/sched/membarrier-sync-core/arch-support.txt
@@ -5,51 +5,25 @@
 #
 # Architecture requirements
 #
-# * arm/arm64/powerpc
 #
-# Rely on implicit context synchronization as a result of exception return
-# when returning from IPI handler, and when returning to user-space.
-#
-# * x86
-#
-# x86-32 uses IRET as return from interrupt, which takes care of the IPI.
-# However, it uses both IRET and SYSEXIT to go back to user-space.  The IRET
-# instruction is core serializing, but not SYSEXIT.
-#
-# x86-64 uses IRET as return from interrupt, which takes care of the IPI.
-# However, it can return to user-space through either SYSRETL (compat code),
-# SYSRETQ, or IRET.
-#
-# Given that neither SYSRET{L,Q}, nor SYSEXIT, are core serializing, we rely
-# instead on write_cr3() performed by switch_mm() to provide core serialization
-# after changing the current mm, and deal with the special case of kthread ->
-# uthread (temporarily keeping current mm into active_mm) by issuing a
-# sync_core_before_usermode() in that specific case.
-#
-    -----------------------
-    |         arch |status|
-    -----------------------
-    |       alpha: | TODO |
-    |         arc: | TODO |
-    |         arm: |  ok  |
-    |       arm64: |  ok  |
-    |        csky: | TODO |
-    |       h8300: | TODO |
-    |     hexagon: | TODO |
-    |        ia64: | TODO |
-    |        m68k: | TODO |
-    |  microblaze: | TODO |
-    |        mips: | TODO |
-    |       nds32: | TODO |
-    |       nios2: | TODO |
-    |    openrisc: | TODO |
-    |      parisc: | TODO |
-    |     powerpc: |  ok  |
-    |       riscv: | TODO |
-    |        s390: | TODO |
-    |          sh: | TODO |
-    |       sparc: | TODO |
-    |          um: | TODO |
-    |         x86: |  ok  |
-    |      xtensa: | TODO |
-    -----------------------
+# An architecture that wants to support
+# MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE needs to define precisely what it
+# is supposed to do and implement membarrier_sync_core_before_usermode() to
+# make it do that.  Then it can select ARCH_HAS_MEMBARRIER_SYNC_CORE via
+# Kconfig.  Unfortunately, MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE is not a
+# fantastic API and may not make sense on all architectures.  Once an
+# architecture meets these requirements, its guarantees are documented below.
+#
+# On x86, a program can safely modify code, issue
+# MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, and then execute that code, via
+# the modified address or an alias, from any thread in the calling process.
+#
+# On arm64, a program can modify code, flush the icache as needed, and issue
+# MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE to force a "context synchronizing
+# event", aka pipeline flush, on all CPUs that might run the calling process.
+# Then the program can execute the modified code as long as it is executed
+# from an address consistent with the icache flush and the CPU's cache type.
+#
+# On powerpc, a program can use MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE
+# similarly to arm64.  It would be nice if the powerpc maintainers could
+# add a clearer explanation.
diff --git a/arch/arm64/include/asm/sync_core.h b/arch/arm64/include/asm/sync_core.h
new file mode 100644
index 000000000000..74996bf533bb
--- /dev/null
+++ b/arch/arm64/include/asm/sync_core.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_SYNC_CORE_H
+#define _ASM_ARM64_SYNC_CORE_H
+
+#include
+
+/*
+ * On arm64, anyone trying to use membarrier() to handle JIT code is
+ * required to first flush the icache and then do SYNC_CORE.  All that's
+ * needed after the icache flush is to execute a "context synchronization
+ * event".  Right now, ERET does this, and we are guaranteed to ERET before
+ * any user code runs.  If Linux ever programs the CPU to make ERET stop
+ * being a context synchronizing event, then this will need to be adjusted.
+ */
+static inline void membarrier_sync_core_before_usermode(void)
+{
+}
+
+#endif /* _ASM_ARM64_SYNC_CORE_H */
diff --git a/arch/powerpc/include/asm/sync_core.h b/arch/powerpc/include/asm/sync_core.h
new file mode 100644
index 000000000000..589fdb34beab
--- /dev/null
+++ b/arch/powerpc/include/asm/sync_core.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SYNC_CORE_H
+#define _ASM_POWERPC_SYNC_CORE_H
+
+#include
+
+/*
+ * XXX: can a powerpc person put an appropriate comment here?
+ */
+static inline void membarrier_sync_core_before_usermode(void)
+{
+}
+
+#endif /* _ASM_POWERPC_SYNC_CORE_H */
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0045e1b44190..f010897a1e8a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -89,7 +89,6 @@ config X86
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
-	select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAS_DEBUG_WX
diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
index ab7382f92aff..c665b453969a 100644
--- a/arch/x86/include/asm/sync_core.h
+++ b/arch/x86/include/asm/sync_core.h
@@ -89,11 +89,10 @@ static inline void sync_core(void)
 }
 
 /*
- * Ensure that a core serializing instruction is issued before returning
- * to user-mode. x86 implements return to user-space through sysexit,
- * sysrel, and sysretq, which are not core serializing.
+ * Ensure that the CPU notices any instruction changes before the next time
+ * it returns to usermode.
  */
-static inline void sync_core_before_usermode(void)
+static inline void membarrier_sync_core_before_usermode(void)
 {
 	/* With PTI, we unconditionally serialize before running user code. */
 	if (static_cpu_has(X86_FEATURE_PTI))
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 6974b5174495..52ead5f4fcdc 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -17,7 +17,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index bf7fe87a7e88..4a577980d4d1 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -41,12 +41,12 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 59488d663e68..35b622fd2ed1 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -11,6 +11,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -538,7 +539,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	 */
 	if (unlikely(atomic_read(&next->membarrier_state) &
 		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE))
-		sync_core_before_usermode();
+		membarrier_sync_core_before_usermode();
 #endif
 
 	return;
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index 723825524ea0..48fd5b101de1 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -20,8 +20,8 @@
 #include
 #include
 #include
-#include
 #include
+#include
 #include "gru.h"
 #include "grutables.h"
 #include "grulib.h"
diff --git a/drivers/misc/sgi-gru/gruhandles.c b/drivers/misc/sgi-gru/gruhandles.c
index 1d75d5e540bc..c8cba1c1b00f 100644
--- a/drivers/misc/sgi-gru/gruhandles.c
+++ b/drivers/misc/sgi-gru/gruhandles.c
@@ -16,7 +16,7 @@
 #define GRU_OPERATION_TIMEOUT	(((cycles_t) local_cpu_data->itc_freq)*10)
 #define CLKS2NSEC(c)		((c) *1000000000 / local_cpu_data->itc_freq)
 #else
-#include
+#include
 #include
 #define GRU_OPERATION_TIMEOUT	((cycles_t) tsc_khz*10*1000)
 #define CLKS2NSEC(c)		((c) * 1000000 / tsc_khz)
diff --git a/drivers/misc/sgi-gru/grukservices.c b/drivers/misc/sgi-gru/grukservices.c
index 0ea923fe6371..ce03ff3f7c3a 100644
--- a/drivers/misc/sgi-gru/grukservices.c
+++ b/drivers/misc/sgi-gru/grukservices.c
@@ -16,10 +16,10 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
+#include
 #include
"gru.h" #include "grulib.h" diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h index c6eebbafadb0..845db11190cd 100644 --- a/include/linux/sched/mm.h +++ b/include/linux/sched/mm.h @@ -7,7 +7,6 @@ #include #include #include -#include /* * Routines for handling mm_structs diff --git a/include/linux/sync_core.h b/include/linux/sync_core.h deleted file mode 100644 index 013da4b8b327..000000000000 --- a/include/linux/sync_core.h +++ /dev/null @@ -1,21 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _LINUX_SYNC_CORE_H -#define _LINUX_SYNC_CORE_H - -#ifdef CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE -#include -#else -/* - * This is a dummy sync_core_before_usermode() implementation that can be used - * on all architectures which return to user-space through core serializing - * instructions. - * If your architecture returns to user-space through non-core-serializing - * instructions, you need to write your own functions. - */ -static inline void sync_core_before_usermode(void) -{ -} -#endif - -#endif /* _LINUX_SYNC_CORE_H */ - diff --git a/init/Kconfig b/init/Kconfig index 1ea12c64e4c9..e5d552b0823e 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -2377,9 +2377,6 @@ source "kernel/Kconfig.locks" config ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE bool -config ARCH_HAS_SYNC_CORE_BEFORE_USERMODE - bool - # It may be useful for an architecture to override the definitions of the # SYSCALL_DEFINE() and __SYSCALL_DEFINEx() macros in # and the COMPAT_ variants in , in particular to use a diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c index c32c32a2441e..f72a6ab3fac2 100644 --- a/kernel/sched/membarrier.c +++ b/kernel/sched/membarrier.c @@ -5,6 +5,9 @@ * membarrier system call */ #include "sched.h" +#ifdef CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE +#include +#endif /* * The basic principle behind the regular memory barrier mode of membarrier() @@ -221,6 +224,7 @@ static void ipi_mb(void *info) smp_mb(); /* IPIs should be serializing but paranoid. */ } +#ifdef CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE static void ipi_sync_core(void *info) { /* @@ -230,13 +234,14 @@ static void ipi_sync_core(void *info) * the big comment at the top of this file. * * A sync_core() would provide this guarantee, but - * sync_core_before_usermode() might end up being deferred until - * after membarrier()'s smp_mb(). + * membarrier_sync_core_before_usermode() might end up being deferred + * until after membarrier()'s smp_mb(). */ smp_mb(); /* IPIs should be serializing but paranoid. */ - sync_core_before_usermode(); + membarrier_sync_core_before_usermode(); } +#endif static void ipi_rseq(void *info) { @@ -368,12 +373,14 @@ static int membarrier_private_expedited(int flags, int cpu_id) smp_call_func_t ipi_func = ipi_mb; if (flags == MEMBARRIER_FLAG_SYNC_CORE) { - if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE)) +#ifndef CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE return -EINVAL; +#else if (!(atomic_read(&mm->membarrier_state) & MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY)) return -EPERM; ipi_func = ipi_sync_core; +#endif } else if (flags == MEMBARRIER_FLAG_RSEQ) { if (!IS_ENABLED(CONFIG_RSEQ)) return -EINVAL;