From patchwork Thu Aug 29 01:01:26 2024
X-Patchwork-Submitter: Samuel Holland
X-Patchwork-Id: 13782308
From: Samuel Holland
To: Palmer Dabbelt, linux-riscv@lists.infradead.org
Cc: devicetree@vger.kernel.org, Catalin Marinas, linux-kernel@vger.kernel.org,
    Anup Patel, Conor Dooley, kasan-dev@googlegroups.com, Atish Patra,
    Evgenii Stepanov, Krzysztof Kozlowski, Rob Herring,
    "Kirill A. Shutemov", Samuel Holland
Subject: [PATCH v4 04/10] riscv: Add support for userspace pointer masking
Date: Wed, 28 Aug 2024 18:01:26 -0700
Message-ID: <20240829010151.2813377-5-samuel.holland@sifive.com>
In-Reply-To: <20240829010151.2813377-1-samuel.holland@sifive.com>
References: <20240829010151.2813377-1-samuel.holland@sifive.com>

RISC-V supports pointer masking with a variable number of tag bits
(called "PMLEN" in the specification), which is configured at the next
higher privilege level.

Wire up the PR_SET_TAGGED_ADDR_CTRL and PR_GET_TAGGED_ADDR_CTRL prctls
so userspace can request a lower bound on the number of tag bits and
determine the actual number of tag bits. As with arm64's
PR_TAGGED_ADDR_ENABLE, the pointer masking configuration is
thread-scoped, inherited on clone() and fork(), and cleared on execve().

Signed-off-by: Samuel Holland
Reviewed-by: Charlie Jenkins
Tested-by: Charlie Jenkins
---
Changes in v4:
 - Switch IS_ENABLED back to #ifdef to fix riscv32 build

Changes in v3:
 - Rename CONFIG_RISCV_ISA_POINTER_MASKING to CONFIG_RISCV_ISA_SUPM,
   since it only controls the userspace part of pointer masking
 - Use IS_ENABLED instead of #ifdef when possible
 - Use an enum for the supported PMLEN values
 - Simplify the logic in set_tagged_addr_ctrl()

Changes in v2:
 - Rebase on riscv/linux.git for-next
 - Add and use the envcfg_update_bits() helper function
 - Inline flush_tagged_addr_state()

 arch/riscv/Kconfig                 | 11 ++++
 arch/riscv/include/asm/processor.h |  8 +++
 arch/riscv/include/asm/switch_to.h | 11 ++++
 arch/riscv/kernel/process.c        | 91 ++++++++++++++++++++++++++++++
 include/uapi/linux/prctl.h         |  3 +
 5 files changed, 124 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0f3cd7c3a436..817437157138 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -512,6 +512,17 @@ config RISCV_ISA_C
 
	  If you don't know what to do here, say Y.
 
+config RISCV_ISA_SUPM
+	bool "Supm extension for userspace pointer masking"
+	depends on 64BIT
+	default y
+	help
+	  Add support for pointer masking in userspace (Supm) when the
+	  underlying hardware extension (Smnpm or Ssnpm) is detected at boot.
+
+	  If this option is disabled, userspace will be unable to use
+	  the prctl(PR_{SET,GET}_TAGGED_ADDR_CTRL) API.
+
 config RISCV_ISA_SVNAPOT
	bool "Svnapot extension support for supervisor mode NAPOT pages"
	depends on 64BIT && MMU

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index 586e4ab701c4..5c4d4fb97314 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -200,6 +200,14 @@ extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);
 #define RISCV_SET_ICACHE_FLUSH_CTX(arg1, arg2)	riscv_set_icache_flush_ctx(arg1, arg2)
 extern int riscv_set_icache_flush_ctx(unsigned long ctx, unsigned long per_thread);
 
+#ifdef CONFIG_RISCV_ISA_SUPM
+/* PR_{SET,GET}_TAGGED_ADDR_CTRL prctl */
+long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg);
+long get_tagged_addr_ctrl(struct task_struct *task);
+#define SET_TAGGED_ADDR_CTRL(arg)	set_tagged_addr_ctrl(current, arg)
+#define GET_TAGGED_ADDR_CTRL()		get_tagged_addr_ctrl(current)
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PROCESSOR_H */

diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index 9685cd85e57c..94e33216b2d9 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -70,6 +70,17 @@ static __always_inline bool has_fpu(void) { return false; }
 #define __switch_to_fpu(__prev, __next) do { } while (0)
 #endif
 
+static inline void envcfg_update_bits(struct task_struct *task,
+				      unsigned long mask, unsigned long val)
+{
+	unsigned long envcfg;
+
+	envcfg = (task->thread.envcfg & ~mask) | val;
+	task->thread.envcfg = envcfg;
+	if (task == current)
+		csr_write(CSR_ENVCFG, envcfg);
+}
+
 static inline void __switch_to_envcfg(struct task_struct *next)
 {
	asm volatile (ALTERNATIVE("nop", "csrw " __stringify(CSR_ENVCFG) ", %0",

diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index e4bc61c4e58a..f39221ab5ddd 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -7,6 +7,7 @@
  * Copyright (C) 2017 SiFive
  */
 
+#include <linux/bitfield.h>
 #include <linux/cpu.h>
 #include <linux/kernel.h>
 #include <linux/sched.h>
@@ -171,6 +172,10 @@ void flush_thread(void)
	memset(&current->thread.vstate, 0, sizeof(struct __riscv_v_ext_state));
	clear_tsk_thread_flag(current, TIF_RISCV_V_DEFER_RESTORE);
 #endif
+#ifdef CONFIG_RISCV_ISA_SUPM
+	if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
+		envcfg_update_bits(current, ENVCFG_PMM, ENVCFG_PMM_PMLEN_0);
+#endif
 }
 
 void arch_release_task_struct(struct task_struct *tsk)
@@ -233,3 +238,89 @@ void __init arch_task_cache_init(void)
 {
	riscv_v_setup_ctx_cache();
 }
+
+#ifdef CONFIG_RISCV_ISA_SUPM
+enum {
+	PMLEN_0 = 0,
+	PMLEN_7 = 7,
+	PMLEN_16 = 16,
+};
+
+static bool have_user_pmlen_7;
+static bool have_user_pmlen_16;
+
+long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
+{
+	unsigned long valid_mask = PR_PMLEN_MASK;
+	struct thread_info *ti = task_thread_info(task);
+	unsigned long pmm;
+	u8 pmlen;
+
+	if (is_compat_thread(ti))
+		return -EINVAL;
+
+	if (arg & ~valid_mask)
+		return -EINVAL;
+
+	/*
+	 * Prefer the smallest PMLEN that satisfies the user's request,
+	 * in case choosing a larger PMLEN has a performance impact.
+	 */
+	pmlen = FIELD_GET(PR_PMLEN_MASK, arg);
+	if (pmlen == PMLEN_0)
+		pmm = ENVCFG_PMM_PMLEN_0;
+	else if (pmlen <= PMLEN_7 && have_user_pmlen_7)
+		pmm = ENVCFG_PMM_PMLEN_7;
+	else if (pmlen <= PMLEN_16 && have_user_pmlen_16)
+		pmm = ENVCFG_PMM_PMLEN_16;
+	else
+		return -EINVAL;
+
+	envcfg_update_bits(task, ENVCFG_PMM, pmm);
+
+	return 0;
+}
+
+long get_tagged_addr_ctrl(struct task_struct *task)
+{
+	struct thread_info *ti = task_thread_info(task);
+	long ret = 0;
+
+	if (is_compat_thread(ti))
+		return -EINVAL;
+
+	switch (task->thread.envcfg & ENVCFG_PMM) {
+	case ENVCFG_PMM_PMLEN_7:
+		ret = FIELD_PREP(PR_PMLEN_MASK, PMLEN_7);
+		break;
+	case ENVCFG_PMM_PMLEN_16:
+		ret = FIELD_PREP(PR_PMLEN_MASK, PMLEN_16);
+		break;
+	}
+
+	return ret;
+}
+
+static bool try_to_set_pmm(unsigned long value)
+{
+	csr_set(CSR_ENVCFG, value);
+	return (csr_read_clear(CSR_ENVCFG, ENVCFG_PMM) & ENVCFG_PMM) == value;
+}
+
+static int __init tagged_addr_init(void)
+{
+	if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM))
+		return 0;
+
+	/*
+	 * envcfg.PMM is a WARL field. Detect which values are supported.
+	 * Assume the supported PMLEN values are the same on all harts.
+	 */
+	csr_clear(CSR_ENVCFG, ENVCFG_PMM);
+	have_user_pmlen_7 = try_to_set_pmm(ENVCFG_PMM_PMLEN_7);
+	have_user_pmlen_16 = try_to_set_pmm(ENVCFG_PMM_PMLEN_16);
+
+	return 0;
+}
+core_initcall(tagged_addr_init);
+#endif /* CONFIG_RISCV_ISA_SUPM */

diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 35791791a879..6e84c827869b 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -244,6 +244,9 @@ struct prctl_mm_map {
 # define PR_MTE_TAG_MASK		(0xffffUL << PR_MTE_TAG_SHIFT)
 /* Unused; kept only for source compatibility */
 # define PR_MTE_TCF_SHIFT		1
+/* RISC-V pointer masking tag length */
+# define PR_PMLEN_SHIFT			24
+# define PR_PMLEN_MASK			(0x7fUL << PR_PMLEN_SHIFT)
 
 /* Control reclaim behavior when allocating memory */
 #define PR_SET_IO_FLUSHER		57
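
For illustration only, not part of the patch: a minimal userspace sketch
of the new API, assuming a kernel with this series applied. The fallback
macro definitions mirror the uapi values added above, in case the
installed headers predate them. Note that with this patch alone only the
PR_PMLEN field of the prctl argument is accepted; any other bit (such as
PR_TAGGED_ADDR_ENABLE) returns -EINVAL.

/* pmlen-demo.c: request at least 7 tag bits, print the granted PMLEN. */
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
# define PR_SET_TAGGED_ADDR_CTRL 55
# define PR_GET_TAGGED_ADDR_CTRL 56
#endif
#ifndef PR_PMLEN_SHIFT
# define PR_PMLEN_SHIFT 24
# define PR_PMLEN_MASK  (0x7fUL << PR_PMLEN_SHIFT)
#endif

int main(void)
{
	/* Lower bound of 7 tag bits; the kernel may grant more (e.g. 16). */
	unsigned long req = 7UL << PR_PMLEN_SHIFT;
	int ctrl;

	if (prctl(PR_SET_TAGGED_ADDR_CTRL, req, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}

	ctrl = prctl(PR_GET_TAGGED_ADDR_CTRL, 0, 0, 0, 0);
	if (ctrl < 0) {
		perror("PR_GET_TAGGED_ADDR_CTRL");
		return 1;
	}

	/* Extract PMLEN: the number of ignored high address bits. */
	printf("granted PMLEN: %lu\n",
	       ((unsigned long)ctrl & PR_PMLEN_MASK) >> PR_PMLEN_SHIFT);
	return 0;
}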