From patchwork Wed Nov 17 06:43:37 2021
X-Patchwork-Submitter: Reiji Watanabe
X-Patchwork-Id: 12623827
Date: Tue, 16 Nov 2021 22:43:37 -0800
In-Reply-To: <20211117064359.2362060-1-reijiw@google.com>
Message-Id: <20211117064359.2362060-8-reijiw@google.com>
References: <20211117064359.2362060-1-reijiw@google.com>
Subject: [RFC PATCH v3 07/29] KVM: arm64: Make ID_AA64ISAR1_EL1 writable
From: Reiji Watanabe
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
    Peng Liang, Peter Shier, Ricardo Koller, Oliver Upton, Jing Zhang,
    Raghavendra Rao Anata, Reiji Watanabe
X-Mailing-List: kvm@vger.kernel.org

This patch adds id_reg_info for ID_AA64ISAR1_EL1 to make it writable by
userspace.

Return an error if userspace tries to set the PTRAUTH-related fields of the
register to values that conflict with the guest's PTRAUTH configuration,
which was set up via KVM_ARM_VCPU_INIT.

Signed-off-by: Reiji Watanabe
---
 arch/arm64/kvm/sys_regs.c | 79 +++++++++++++++++++++++++++++++++++----
 1 file changed, 72 insertions(+), 7 deletions(-)
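
For reference, below is a rough userspace sketch (not part of this patch) of
how the now-writable register could be exercised through the usual
KVM_GET_ONE_REG/KVM_SET_ONE_REG flow. The register encoding and the helper
are illustrative assumptions based on the arm64 KVM UAPI (ARM64_SYS_REG and
struct kvm_one_reg); with this patch applied, clearing the PTRAUTH fields on
a vCPU that was created with pointer authentication enabled is expected to
fail with EPERM from the new validate hook.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Illustrative only: ID_AA64ISAR1_EL1 is op0=3, op1=0, CRn=0, CRm=6, op2=1. */
    #define ID_AA64ISAR1_EL1_ID     ARM64_SYS_REG(3, 0, 0, 6, 1)

    /*
     * Read ID_AA64ISAR1_EL1, clear the APA/API/GPA/GPI fields
     * (bits [11:4] and [31:24]) and try to write the value back.
     */
    static int clear_ptrauth_fields(int vcpu_fd)
    {
            uint64_t val;
            struct kvm_one_reg reg = {
                    .id   = ID_AA64ISAR1_EL1_ID,
                    .addr = (uint64_t)&val,
            };

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
                    return -1;

            val &= ~((0xffULL << 4) | (0xffULL << 24));

            /*
             * Expected to fail with errno == EPERM if the vCPU has PTRAUTH
             * enabled (and to succeed if it does not).
             */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }
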
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fdd707462fa8..5812e39602fe 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -358,6 +358,24 @@ static int arm64_check_features(u64 check_types, u64 val, u64 lim)
 	return 0;
 }
 
+#define PTRAUTH_MASK	(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) | \
+			 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | \
+			 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) | \
+			 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI))
+
+#define aa64isar1_has_apa(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_APA_SHIFT) >= \
+	 ID_AA64ISAR1_APA_ARCHITECTED)
+#define aa64isar1_has_api(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_API_SHIFT) >= \
+	 ID_AA64ISAR1_API_IMP_DEF)
+#define aa64isar1_has_gpa(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPA_SHIFT) >= \
+	 ID_AA64ISAR1_GPA_ARCHITECTED)
+#define aa64isar1_has_gpi(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \
+	 ID_AA64ISAR1_GPI_IMP_DEF)
+
 struct id_reg_info {
 	u32	sys_reg;	/* Register ID */
 
@@ -471,6 +489,36 @@ static int validate_id_aa64isar0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_info *id_reg, u64 val)
+{
+	bool has_gpi, has_gpa, has_api, has_apa;
+	bool generic, address;
+
+	has_gpi = aa64isar1_has_gpi(val);
+	has_gpa = aa64isar1_has_gpa(val);
+	has_api = aa64isar1_has_api(val);
+	has_apa = aa64isar1_has_apa(val);
+	if ((has_gpi && has_gpa) || (has_api && has_apa))
+		return -EINVAL;
+
+	generic = has_gpi || has_gpa;
+	address = has_api || has_apa;
+	/*
+	 * Since the current KVM guest implementation works by enabling
+	 * both address/generic pointer authentication features,
+	 * return an error if they conflict.
+	 */
+	if (generic ^ address)
+		return -EPERM;
+
+	/* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */
+	if (vcpu_has_ptrauth(vcpu) ^ (generic && address))
+		return -EPERM;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -508,6 +556,12 @@ static void init_id_aa64pfr1_el1_info(struct id_reg_info *id_reg)
 		id_reg->vcpu_limit_val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
 }
 
+static void init_id_aa64isar1_el1_info(struct id_reg_info *id_reg)
+{
+	if (!system_has_full_ptr_auth())
+		id_reg->vcpu_limit_val &= ~PTRAUTH_MASK;
+}
+
 static u64 get_reset_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 				     const struct id_reg_info *idr)
 {
@@ -530,6 +584,13 @@ static u64 get_reset_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
 		(idr->vcpu_limit_val & ~(ARM64_FEATURE_MASK(ID_AA64PFR1_MTE)));
 }
 
+static u64 get_reset_id_aa64isar1_el1(struct kvm_vcpu *vcpu,
+				      const struct id_reg_info *idr)
+{
+	return vcpu_has_ptrauth(vcpu) ?
+	       idr->vcpu_limit_val : (idr->vcpu_limit_val & ~PTRAUTH_MASK);
+}
+
 static struct id_reg_info id_aa64pfr0_el1_info = {
 	.sys_reg = SYS_ID_AA64PFR0_EL1,
 	.ftr_check_types = S_FCT(ID_AA64PFR0_ASIMD_SHIFT, FCT_LOWER_SAFE) |
@@ -554,6 +615,16 @@ static struct id_reg_info id_aa64isar0_el1_info = {
 	.validate = validate_id_aa64isar0_el1,
 };
 
+static struct id_reg_info id_aa64isar1_el1_info = {
+	.sys_reg = SYS_ID_AA64ISAR1_EL1,
+	.ftr_check_types =
+		U_FCT(ID_AA64ISAR1_API_SHIFT, FCT_EXACT_OR_ZERO_SAFE) |
+		U_FCT(ID_AA64ISAR1_APA_SHIFT, FCT_EXACT_OR_ZERO_SAFE),
+	.init = init_id_aa64isar1_el1_info,
+	.validate = validate_id_aa64isar1_el1,
+	.get_reset_val = get_reset_id_aa64isar1_el1,
+};
+
 /*
  * An ID register that needs special handling to control the value for the
  * guest must have its own id_reg_info in id_reg_info_table.
@@ -566,6 +637,7 @@ static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	[IDREG_IDX(SYS_ID_AA64PFR0_EL1)] = &id_aa64pfr0_el1_info,
 	[IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info,
 	[IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info,
+	[IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info,
 };
 
 static int validate_id_reg(struct kvm_vcpu *vcpu,
@@ -1414,13 +1486,6 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu, u32 id)
 			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), gic_lim);
 		}
 		break;
-	case SYS_ID_AA64ISAR1_EL1:
-		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI));
-		break;
 	case SYS_ID_AA64DFR0_EL1:
 		/* Limit debug to ARMv8.0 */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);