From patchwork Tue Jan 23 21:16:16 2024
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 13528145
From: Linus Walleij
Date: Tue, 23 Jan 2024 22:16:16 +0100
Subject: [PATCH 3/4] ARM: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN
Message-Id: <20240123-arm32-lpae-pan-v1-3-7ea98a20514c@linaro.org>
References: <20240123-arm32-lpae-pan-v1-0-7ea98a20514c@linaro.org>
In-Reply-To: <20240123-arm32-lpae-pan-v1-0-7ea98a20514c@linaro.org>
To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren,
    Kees Cook, Geert Uytterhoeven
Cc: linux-arm-kernel@lists.infradead.org, Linus Walleij, Catalin Marinas

From: Catalin Marinas

This is a clean-up patch aimed at reducing the number of checks on
CONFIG_CPU_SW_DOMAIN_PAN, together with some empty lines for better
clarity once CONFIG_CPU_TTBR0_PAN is introduced.

Signed-off-by: Catalin Marinas
Reviewed-by: Kees Cook
Signed-off-by: Linus Walleij
---
 arch/arm/include/asm/uaccess-asm.h | 16 ++++++++++++----
 arch/arm/include/asm/uaccess.h     | 21 +++++++++++++++------
 arch/arm/lib/csumpartialcopyuser.S |  6 +++++-
 3 files changed, 32 insertions(+), 11 deletions(-)
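Editor's note: the clean-up follows the same pattern in all three files.
Rather than hiding an #ifdef inside every function or macro body, each
body is provided twice, the real implementation under
CONFIG_CPU_SW_DOMAIN_PAN and an empty stub in the #else branch, so that
call sites stay unconditional. Below is a minimal sketch of that pattern
in plain, standalone C; FEATURE_PAN and the fake current_domain variable
are illustrative stand-ins for this note, not code from the patch:

#ifndef FEATURE_PAN
#define FEATURE_PAN 1	/* stand-in for CONFIG_CPU_SW_DOMAIN_PAN */
#endif

#include <stdio.h>

static unsigned int current_domain = 0x55;	/* pretend DACR register */

#if FEATURE_PAN

/* The real implementations live together under one #if branch... */
static inline unsigned int uaccess_save_and_enable(void)
{
	unsigned int old_domain = current_domain;

	current_domain |= 0x3;	/* pretend: permit user accesses */
	return old_domain;
}

static inline void uaccess_restore(unsigned int flags)
{
	current_domain = flags;	/* put back the saved access mask */
}

#else

/*
 * ...and the #else branch supplies empty stubs with identical
 * signatures, so no caller ever needs its own #ifdef.
 */
static inline unsigned int uaccess_save_and_enable(void)
{
	return 0;
}

static inline void uaccess_restore(unsigned int flags)
{
	(void)flags;
}

#endif

int main(void)
{
	unsigned int flags = uaccess_save_and_enable();

	printf("domain while enabled: %#x\n", current_domain);
	uaccess_restore(flags);
	printf("domain after restore: %#x\n", current_domain);
	return 0;
}

Compiling with -DFEATURE_PAN=0 swaps in the stubs without touching
main(), which mirrors what the patch guarantees for callers of
uaccess_save_and_enable()/uaccess_restore().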
diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h
index 65da32e1f1c1..ea42ba25920f 100644
--- a/arch/arm/include/asm/uaccess-asm.h
+++ b/arch/arm/include/asm/uaccess-asm.h
@@ -39,8 +39,9 @@
 #endif
 	.endm

-	.macro	uaccess_disable, tmp, isb=1
 #ifdef CONFIG_CPU_SW_DOMAIN_PAN
+
+	.macro	uaccess_disable, tmp, isb=1
 	/*
 	 * Whenever we re-enter userspace, the domains should always be
 	 * set appropriately.
@@ -50,11 +51,9 @@
 	.if	\isb
 	instr_sync
 	.endif
-#endif
 	.endm

 	.macro	uaccess_enable, tmp, isb=1
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	/*
 	 * Whenever we re-enter userspace, the domains should always be
 	 * set appropriately.
@@ -64,9 +63,18 @@
 	.if	\isb
 	instr_sync
 	.endif
-#endif
 	.endm

+#else
+
+	.macro	uaccess_disable, tmp, isb=1
+	.endm
+
+	.macro	uaccess_enable, tmp, isb=1
+	.endm
+
+#endif
+
 #if defined(CONFIG_CPU_SW_DOMAIN_PAN) || defined(CONFIG_CPU_USE_DOMAINS)
 #define	DACR(x...)	x
 #else
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 9556d04387f7..9b9234d1bb6a 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -24,9 +24,10 @@
  * perform such accesses (eg, via list poison values) which could then
  * be exploited for priviledge escalation.
  */
+#if defined(CONFIG_CPU_SW_DOMAIN_PAN)
+
 static __always_inline unsigned int uaccess_save_and_enable(void)
 {
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	unsigned int old_domain = get_domain();

 	/* Set the current domain access to permit user accesses */
@@ -34,19 +35,27 @@ static __always_inline unsigned int uaccess_save_and_enable(void)
 		   domain_val(DOMAIN_USER, DOMAIN_CLIENT));

 	return old_domain;
-#else
-	return 0;
-#endif
 }

 static __always_inline void uaccess_restore(unsigned int flags)
 {
-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
 	/* Restore the user access mask */
 	set_domain(flags);
-#endif
 }

+#else
+
+static inline unsigned int uaccess_save_and_enable(void)
+{
+	return 0;
+}
+
+static inline void uaccess_restore(unsigned int flags)
+{
+}
+
+#endif
+
 /*
  * These two are intentionally not defined anywhere - if the kernel
  * code generates any references to them, that's a bug.
diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S
index 6928781e6bee..04d8d9d741c7 100644
--- a/arch/arm/lib/csumpartialcopyuser.S
+++ b/arch/arm/lib/csumpartialcopyuser.S
@@ -13,7 +13,8 @@

 		.text

-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+#if defined(CONFIG_CPU_SW_DOMAIN_PAN)
+
 		.macro	save_regs
 		mrc	p15, 0, ip, c3, c0, 0
 		stmfd	sp!, {r1, r2, r4 - r8, ip, lr}
@@ -25,7 +26,9 @@
 		mcr	p15, 0, ip, c3, c0, 0
 		ret	lr
 		.endm
+
 #else
+
 		.macro	save_regs
 		stmfd	sp!, {r1, r2, r4 - r8, lr}
 		.endm
@@ -33,6 +36,7 @@
 		.macro	load_regs
 		ldmfd	sp!, {r1, r2, r4 - r8, pc}
 		.endm
+
 #endif

 		.macro	load1b,	reg1