From patchwork Thu Jan 6 04:26:50 2022
X-Patchwork-Submitter: Reiji Watanabe <reijiw@google.com>
X-Patchwork-Id: 12705095
Date: Wed, 5 Jan 2022 20:26:50 -0800
In-Reply-To: <20220106042708.2869332-1-reijiw@google.com>
Message-Id: <20220106042708.2869332-9-reijiw@google.com>
References: <20220106042708.2869332-1-reijiw@google.com>
Subject: [RFC PATCH v4 08/26] KVM: arm64: Make ID_AA64MMFR0_EL1 writable
From: Reiji Watanabe <reijiw@google.com>
To: Marc Zyngier, kvmarm@lists.cs.columbia.edu
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    James Morse, Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini,
    Will Deacon, Andrew Jones, Peng Liang, Peter Shier, Ricardo Koller,
    Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe
X-Mailing-List: kvm@vger.kernel.org

This patch adds id_reg_info for ID_AA64MMFR0_EL1
to make it writable by userspace.

Since the ID_AA64MMFR0_EL1 stage 2 granule size fields don't follow the
standard ID scheme, we need special handling to validate those fields.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 127 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2f79997016a4..723910267966 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -445,6 +445,118 @@ static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+/*
+ * Check if the requested stage2 translation granule size indicated in
+ * @mmfr0 is also indicated in @mmfr0_lim.
+ * If the TGranX_2 field is zero, the value must be validated based on the
+ * TGranX field, because a zero TGranX_2 indicates that the feature support
+ * is identified by the TGranX field.
+ * This function relies on the fact that the TGranX fields have already
+ * been validated by arm64_check_features.
+ */
+static int aa64mmfr0_tgran2_check(int field, u64 mmfr0, u64 mmfr0_lim)
+{
+	s64 tgran2, lim_tgran2, rtgran1;
+	int f1;
+	bool is_signed;
+
+	tgran2 = cpuid_feature_extract_unsigned_field(mmfr0, field);
+	lim_tgran2 = cpuid_feature_extract_unsigned_field(mmfr0_lim, field);
+	if (tgran2 && lim_tgran2)
+		/*
+		 * We don't need to check the TGranX field. We can simply
+		 * compare tgran2 and lim_tgran2.
+		 */
+		return (tgran2 > lim_tgran2) ? -E2BIG : 0;
+
+	if (tgran2 == lim_tgran2)
+		/*
+		 * Both of them are zero. Since TGranX in @mmfr0 is already
+		 * validated by arm64_check_features, tgran2 must be fine.
+		 */
+		return 0;
+
+	/*
+	 * Either tgran2 or lim_tgran2 is zero.
+	 * Need the stage1 granule size to validate tgran2.
+	 */
+
+	/*
+	 * Get TGranX's bit position by subtracting 12 from TGranX_2's bit
+	 * position.
+	 */
+	f1 = field - 12;
+
+	/* TGran4/TGran64 are signed fields and TGran16 is unsigned. */
+	is_signed = (f1 == ID_AA64MMFR0_TGRAN16_SHIFT) ? false : true;
+
+	/*
+	 * If tgran2 == 0 (&& lim_tgran2 != 0), the requested stage2 granule
+	 * size is indicated in the stage1 granule size field of @mmfr0.
+	 * So, validate the stage1 granule size against the stage2 limit
+	 * granule size.
+	 * If lim_tgran2 == 0 (&& tgran2 != 0), the stage2 limit granule size
+	 * is indicated in the stage1 granule size field of @mmfr0_lim.
+	 * So, validate the requested stage2 granule size against the stage1
+	 * limit granule size.
+	 */
+
+	/* Get the relevant stage1 granule size to validate tgran2 */
+	if (tgran2 == 0)
+		/* The requested stage1 granule size */
+		rtgran1 = cpuid_feature_extract_field(mmfr0, f1, is_signed);
+	else /* lim_tgran2 == 0 */
+		/* The stage1 limit granule size */
+		rtgran1 = cpuid_feature_extract_field(mmfr0_lim, f1, is_signed);
+
+	/*
+	 * Adjust the value of rtgran1 to compare with the stage2 granule
+	 * size, which indicates: 1: Not supported, 2: Supported, etc.
+	 */
+	if (is_signed)
+		/* For signed, -1: Not supported, 0: Supported, etc. */
+		rtgran1 += 0x2;
+	else
+		/* For unsigned, 0: Not supported, 1: Supported, etc. */
+		rtgran1 += 0x1;
+
+	if ((tgran2 == 0) && (rtgran1 > lim_tgran2))
+		/*
+		 * The requested stage1 granule size (== the requested stage2
+		 * granule size) is larger than the stage2 limit granule size.
+		 */
+		return -E2BIG;
+	else if ((lim_tgran2 == 0) && (tgran2 > rtgran1))
+		/*
+		 * The requested stage2 granule size is larger than the stage1
+		 * limit granule size (== the stage2 limit granule size).
+		 */
+		return -E2BIG;
+
+	return 0;
+}
+
+static int validate_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_info *id_reg, u64 val)
+{
+	u64 limit = id_reg->vcpu_limit_val;
+	int ret;
+
+	ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN4_2_SHIFT, val, limit);
+	if (ret)
+		return ret;
+
+	ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN64_2_SHIFT, val, limit);
+	if (ret)
+		return ret;
+
+	ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN16_2_SHIFT, val, limit);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_info(struct id_reg_info *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -536,6 +648,20 @@ static struct id_reg_info id_aa64isar1_el1_info = {
 	.vcpu_mask = vcpu_mask_id_aa64isar1_el1,
 };
 
+static struct id_reg_info id_aa64mmfr0_el1_info = {
+	.sys_reg = SYS_ID_AA64MMFR0_EL1,
+	/*
+	 * When the TGranX_2 value is 0, its validity depends on the TGranX
+	 * value, and the TGranX_2 value must be validated against the TGranX
+	 * value, which is done by validate_id_aa64mmfr0_el1.
+	 * So, skip the regular validity checking for the TGranX_2 fields.
+	 */
+	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4_2) |
+		       ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64_2) |
+		       ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16_2),
+	.validate = validate_id_aa64mmfr0_el1,
+};
+
 /*
  * An ID register that needs special handling to control the value for the
  * guest must have its own id_reg_info in id_reg_info_table.
@@ -549,6 +675,7 @@ static struct id_reg_info *id_reg_info_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	[IDREG_IDX(SYS_ID_AA64PFR1_EL1)] = &id_aa64pfr1_el1_info,
 	[IDREG_IDX(SYS_ID_AA64ISAR0_EL1)] = &id_aa64isar0_el1_info,
 	[IDREG_IDX(SYS_ID_AA64ISAR1_EL1)] = &id_aa64isar1_el1_info,
+	[IDREG_IDX(SYS_ID_AA64MMFR0_EL1)] = &id_aa64mmfr0_el1_info,
 };
 
 static int validate_id_reg(struct kvm_vcpu *vcpu, u32 id, u64 val)
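
For reference, the sketch below shows how a VMM could exercise the
now-writable register through KVM's generic KVM_GET_ONE_REG/KVM_SET_ONE_REG
interface once this series is applied. It is only an illustration, not part
of the patch: the helper name and the choice of clearing TGran16_2 are
hypothetical, and it assumes an already-created vCPU file descriptor. The
register ID uses the ARM64_SYS_REG() encoding from <asm/kvm.h>
(op0=3, op1=0, CRn=0, CRm=7, op2=0 for ID_AA64MMFR0_EL1).

	/* Illustrative userspace sketch, not part of this patch. */
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* ID_AA64MMFR0_EL1: op0=3, op1=0, CRn=0, CRm=7, op2=0 */
	#define ID_AA64MMFR0_EL1_ID	ARM64_SYS_REG(3, 0, 0, 7, 0)

	/* Hypothetical helper: report "16KB granule not supported at stage 2". */
	static int hide_stage2_16k_granule(int vcpu_fd)
	{
		uint64_t val;
		struct kvm_one_reg reg = {
			.id   = ID_AA64MMFR0_EL1_ID,
			.addr = (uint64_t)&val,
		};

		if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
			return -1;

		/* TGran16_2 is bits [35:32]; 0b0001 means not supported at stage 2. */
		val &= ~(UINT64_C(0xf) << 32);
		val |= UINT64_C(0x1) << 32;

		/*
		 * With this series, KVM validates the new value against the
		 * host's limit (validate_id_aa64mmfr0_el1); the ioctl fails
		 * with E2BIG if the value asks for more than the hardware
		 * supports.
		 */
		return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
	}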