From patchwork Thu Jan 6 04:26:49 2022
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12705094
Date: Wed, 5 Jan 2022 20:26:49 -0800
In-Reply-To: <20220106042708.2869332-1-reijiw@google.com>
Message-Id: <20220106042708.2869332-8-reijiw@google.com>
References: <20220106042708.2869332-1-reijiw@google.com>
Subject: [RFC PATCH v4 07/26] KVM: arm64: Make ID_AA64ISAR1_EL1 writable
From: Reiji Watanabe <reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
	Will Deacon, Andrew Jones, Peng Liang, Peter Shier,
	Ricardo Koller, Oliver Upton, Jing Zhang, Raghavendra Rao Anata,
	Reiji Watanabe

Add id_reg_info for ID_AA64ISAR1_EL1 to make the register writable by
userspace.

Return an error if userspace tries to set the PTRAUTH-related fields
of the register to values that conflict with the guest's PTRAUTH
configuration, which was set up via KVM_ARM_VCPU_INIT.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 77 +++++++++++++++++++++++++++++++++++----
 1 file changed, 69 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3f1313875be5..2f79997016a4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -265,6 +265,24 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 	return read_zero(vcpu, p);
 }
 
+#define PTRAUTH_MASK	(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | \
+			 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | \
+			 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) | \
+			 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI))
+
+#define aa64isar1_has_apa(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_APA_SHIFT) >= \
+	 ID_AA64ISAR1_APA_ARCHITECTED)
+#define aa64isar1_has_api(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_API_SHIFT) >= \
+	 ID_AA64ISAR1_API_IMP_DEF)
+#define aa64isar1_has_gpa(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPA_SHIFT) >= \
+	 ID_AA64ISAR1_GPA_ARCHITECTED)
+#define aa64isar1_has_gpi(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \
+	 ID_AA64ISAR1_GPI_IMP_DEF)
+
 struct id_reg_info {
 	u32	sys_reg;	/* Register ID */
@@ -397,6 +415,36 @@ static int validate_id_aa64isar0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_info *id_reg, u64 val)
+{
+	bool has_gpi, has_gpa, has_api, has_apa;
+	bool generic, address;
+
+	has_gpi = aa64isar1_has_gpi(val);
+	has_gpa = aa64isar1_has_gpa(val);
+	has_api = aa64isar1_has_api(val);
+	has_apa = aa64isar1_has_apa(val);
+	if ((has_gpi && has_gpa) || (has_api && has_apa))
+		return -EINVAL;
+
+	generic = has_gpi || has_gpa;
+	address = has_api || has_apa;
+	/*
+	 * Since the current KVM guest implementation works by enabling
+	 * both address/generic pointer authentication features,
+	 * return an error if they conflict.
+	 */
+	if (generic ^ address)
+		return -EPERM;
+
+	/* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */
+	if (vcpu_has_ptrauth(vcpu) ^ (generic && address))
+		return -EPERM;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -434,8 +482,14 @@ static void init_id_aa64pfr1_el1_info(struct id_reg_info *id_reg)
 		id_reg->vcpu_limit_val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
 }
 
+static void init_id_aa64isar1_el1_info(struct id_reg_info *id_reg)
+{
+	if (!system_has_full_ptr_auth())
+		id_reg->vcpu_limit_val &= ~PTRAUTH_MASK;
+}
+
 static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu,
-				    const struct id_reg_info *idr)
+				     const struct id_reg_info *idr)
 {
 	return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE);
 }
@@ -446,6 +500,12 @@ static u64 vcpu_mask_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu,
 	return kvm_has_mte(vcpu->kvm) ? 0 : (ARM64_FEATURE_MASK(ID_AA64PFR1_MTE));
 }
 
+static u64 vcpu_mask_id_aa64isar1_el1(const struct kvm_vcpu *vcpu,
+				      const struct id_reg_info *idr)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : PTRAUTH_MASK;
+}
+
 static struct id_reg_info id_aa64pfr0_el1_info = {
 	.sys_reg = SYS_ID_AA64PFR0_EL1,
 	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC),
@@ -469,6 +529,13 @@ static struct id_reg_info id_aa64isar0_el1_info = {
 	.validate = validate_id_aa64isar0_el1,
 };
 
+static struct id_reg_info id_aa64isar1_el1_info = {
+	.sys_reg = SYS_ID_AA64ISAR1_EL1,
+	.init = init_id_aa64isar1_el1_info,
+	.validate = validate_id_aa64isar1_el1,
+	.vcpu_mask = vcpu_mask_id_aa64isar1_el1,
+};
+
 /*
  * An ID register that needs special handling to control the value for the
  * guest must have its own id_reg_info in id_reg_info_table.
@@ -481,6 +548,7 @@ static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	[IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info,
 	[IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info,
 	[IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info,
+	[IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info,
 };
 
 static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val)
@@ -1398,13 +1466,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 		val &= ~(id_reg->vcpu_mask(vcpu, id_reg));
 
 	switch (id) {
-	case SYS_ID_AA64ISAR1_EL1:
-		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI));
-		break;
 	case SYS_ID_AA64DFR0_EL1:
 		/* Limit debug to ARMv8.0 */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
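
The behaviour is easiest to see from the userspace side. Below is a
minimal, illustrative sketch (not part of the patch) of how a VMM could
exercise the new check with KVM_GET_ONE_REG/KVM_SET_ONE_REG. It assumes
an arm64 host running this series, a vCPU initialised without
KVM_ARM_VCPU_PTRAUTH_ADDRESS/KVM_ARM_VCPU_PTRAUTH_GENERIC, and the
standard ID_AA64ISAR1_EL1 encoding (op0=3, op1=0, CRn=0, CRm=6, op2=1);
most error handling is omitted.

/*
 * Illustrative userspace sketch (not part of this patch): poke the
 * ID_AA64ISAR1_EL1 write path on an arm64 host running this series.
 * The vCPU is initialised without the PTRAUTH features, so writing the
 * value back with APA/API/GPA/GPI still zero is expected to succeed,
 * while a write that advertises APA should be rejected with EPERM.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* ID_AA64ISAR1_EL1: op0=3, op1=0, CRn=0, CRm=6, op2=1 */
#define REG_ID_AA64ISAR1_EL1	ARM64_SYS_REG(3, 0, 0, 6, 1)

static int set_isar1(int vcpu, uint64_t val)
{
	struct kvm_one_reg reg = {
		.id	= REG_ID_AA64ISAR1_EL1,
		.addr	= (uint64_t)&val,
	};

	return ioctl(vcpu, KVM_SET_ONE_REG, &reg);
}

int main(void)
{
	struct kvm_vcpu_init init;
	uint64_t val = 0;
	struct kvm_one_reg reg = {
		.id	= REG_ID_AA64ISAR1_EL1,
		.addr	= (uint64_t)&val,
	};
	int kvm, vm, vcpu, ret;

	kvm = open("/dev/kvm", O_RDWR);
	vm = ioctl(kvm, KVM_CREATE_VM, 0);
	vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

	/* No KVM_ARM_VCPU_PTRAUTH_{ADDRESS,GENERIC} in init.features[]. */
	ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
	ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

	/* The PTRAUTH fields already read as zero for this vCPU. */
	ioctl(vcpu, KVM_GET_ONE_REG, &reg);

	ret = set_isar1(vcpu, val);
	printf("write unchanged value: ret=%d errno=%d\n", ret, ret ? errno : 0);

	/* Setting APA (bits [7:4]) conflicts with the vCPU configuration. */
	ret = set_isar1(vcpu, val | (1ULL << 4));
	printf("write with APA set  : ret=%d errno=%d\n", ret, ret ? errno : 0);

	return 0;
}

Either failure mode maps back to the two checks in
validate_id_aa64isar1_el1(): EINVAL when both the architected and
IMP DEF variants of the same capability are set, and EPERM when the
requested value disagrees with vcpu_has_ptrauth() for the vCPU.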