From patchwork Thu Oct 27 20:54:27 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022804
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 01/34] Revert "x86/cpu: Add a steppings field to struct x86_cpu_id"
Date: Thu, 27 Oct 2022 13:54:27 -0700
Message-ID: <20221027205430.17122-1-surajjs@amazon.com>
In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

This reverts commit ae585de4296413ae4bbb8f2ac09faa38ff78f4cd.

This is commit e9d7144597b10ff13ff2264c059f7d4a7fbc89ac upstream.

Reverting this commit makes the following patches apply cleanly.
This patch is then reapplied.
Signed-off-by: Suraj Jitindar Singh
---
 arch/x86/include/asm/cpu_device_id.h | 27 ---------------------------
 arch/x86/kernel/cpu/match.c          |  7 +------
 include/linux/mod_devicetable.h      |  6 ------
 3 files changed, 1 insertion(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index 884466592943..baeba0567126 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -9,33 +9,6 @@

 #include

-#define X86_STEPPINGS(mins, maxs)	GENMASK(maxs, mins)
-
-/**
- * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
- * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
- *		The name is expanded to X86_VENDOR_@_vendor
- * @_family:	The family number or X86_FAMILY_ANY
- * @_model:	The model number, model constant or X86_MODEL_ANY
- * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
- * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
- * @_data:	Driver specific data or NULL. The internal storage
- *		format is unsigned long. The supplied value, pointer
- *		etc. is casted to unsigned long internally.
- *
- * Backport version to keep the SRBDS pile consistant. No shorter variants
- * required for this.
- */
-#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
-						     _steppings, _feature, _data) { \
-	.vendor		= X86_VENDOR_##_vendor,		\
-	.family		= _family,			\
-	.model		= _model,			\
-	.steppings	= _steppings,			\
-	.feature	= _feature,			\
-	.driver_data	= (unsigned long) _data		\
-}
-
 extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);

 #endif

diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index 751e59057466..3fed38812eea 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -34,18 +34,13 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;

-	for (m = match;
-	     m->vendor | m->family | m->model | m->steppings | m->feature;
-	     m++) {
+	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
-		if (m->steppings != X86_STEPPING_ANY &&
-		    !(BIT(c->x86_stepping) & m->steppings))
-			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;

diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 6f8eb1238235..f3c631fe076a 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -589,10 +589,6 @@ struct mips_cdmm_device_id {
 /*
  * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
  * Although gcc seems to ignore this error, clang fails without this define.
- *
- * Note: The ordering of the struct is different from upstream because the
- * static initializers in kernels < 5.7 still use C89 style while upstream
- * has been converted to proper C99 initializers.
  */
 #define x86cpu_device_id x86_cpu_id
 struct x86_cpu_id {
@@ -601,7 +597,6 @@ struct x86_cpu_id {
 	__u16 model;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
-	__u16 steppings;
 };

 #define X86_FEATURE_MATCH(x) \
@@ -610,7 +605,6 @@ struct x86_cpu_id {
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY 0
-#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */

 /*

From patchwork Thu Oct 27 20:54:28 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022805
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 02/34] x86/cpufeature: Add facility to check for min microcode revisions
Date: Thu, 27 Oct 2022 13:54:28 -0700
Message-ID: <20221027205430.17122-2-surajjs@amazon.com>
In-Reply-To: <20221027205430.17122-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
 <20221027205430.17122-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org
From: Kan Liang

commit 0f42b790c9ba5ec2f25b7da8b0b6d361082d67b0 upstream

For bug workarounds or checks, it is useful to check for specific
microcode revisions.

Add a new generic function to match the CPU with stepping.
Add another function to check the min microcode revisions for
the matched CPU.

A new table format is introduced to facilitate the quirk to
fill the related information.

This does not change the existing x86_cpu_id because it's an ABI
shared with modules, and also has quite different requirements,
as in no wildcards, but everything has to be matched exactly.

Originally-by: Andi Kleen
Suggested-by: Thomas Gleixner
Signed-off-by: Kan Liang
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Borislav Petkov
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: eranian@google.com
Link: https://lkml.kernel.org/r/1549319013-4522-1-git-send-email-kan.liang@linux.intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/cpu_device_id.h | 28 +++++++++++++++++++++++++
 arch/x86/kernel/cpu/match.c          | 31 ++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index baeba0567126..3417110574c1 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -11,4 +11,32 @@

 extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);

+/*
+ * Match specific microcode revisions.
+ *
+ * vendor/family/model/stepping must be all set.
+ *
+ * Only checks against the boot CPU.  When mixed-stepping configs are
+ * valid for a CPU model, add a quirk for every valid stepping and
+ * do the fine-tuning in the quirk handler.
+ */
+
+struct x86_cpu_desc {
+	__u8	x86_family;
+	__u8	x86_vendor;
+	__u8	x86_model;
+	__u8	x86_stepping;
+	__u32	x86_microcode_rev;
+};
+
+#define INTEL_CPU_DESC(mod, step, rev) {	\
+	.x86_family = 6,			\
+	.x86_vendor = X86_VENDOR_INTEL,		\
+	.x86_model = mod,			\
+	.x86_stepping = step,			\
+	.x86_microcode_rev = rev,		\
+}
+
+extern bool x86_cpu_has_min_microcode_rev(const struct x86_cpu_desc *table);
+
 #endif

diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index 3fed38812eea..6dd78d8235e4 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -48,3 +48,34 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
 	return NULL;
 }
 EXPORT_SYMBOL(x86_match_cpu);
+
+static const struct x86_cpu_desc *
+x86_match_cpu_with_stepping(const struct x86_cpu_desc *match)
+{
+	struct cpuinfo_x86 *c = &boot_cpu_data;
+	const struct x86_cpu_desc *m;
+
+	for (m = match; m->x86_family | m->x86_model; m++) {
+		if (c->x86_vendor != m->x86_vendor)
+			continue;
+		if (c->x86 != m->x86_family)
+			continue;
+		if (c->x86_model != m->x86_model)
+			continue;
+		if (c->x86_stepping != m->x86_stepping)
+			continue;
+		return m;
+	}
+	return NULL;
+}
+
+bool x86_cpu_has_min_microcode_rev(const struct x86_cpu_desc *table)
+{
+	const struct x86_cpu_desc *res = x86_match_cpu_with_stepping(table);
+
+	if (!res || res->x86_microcode_rev > boot_cpu_data.microcode)
+		return false;
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(x86_cpu_has_min_microcode_rev);
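The intended usage pattern is an INTEL_CPU_DESC() table paired with a call to
x86_cpu_has_min_microcode_rev(). A rough sketch of a caller follows; the
model/stepping/revision values below are invented for illustration, not taken
from any real quirk:

    #include <asm/cpu_device_id.h>

    /* Hypothetical: require microcode 0xdc or newer on model 0x5e, stepping 3. */
    static const struct x86_cpu_desc quirk_ucode_table[] = {
    	INTEL_CPU_DESC(0x5e, 3, 0x000000dc),
    	{}	/* all-zero terminator: family|model == 0 ends the walk */
    };

    static bool quirk_safe_to_enable(void)
    {
    	/* True only when the boot CPU matches an entry exactly
    	 * (vendor/family/model/stepping) and boot_cpu_data.microcode
    	 * is at least the revision listed in the table. */
    	return x86_cpu_has_min_microcode_rev(quirk_ucode_table);
    }

Note the exact-match semantics: unlike x86_match_cpu(), there are no wildcards,
so mixed-stepping systems need one table entry per valid stepping.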
From patchwork Thu Oct 27 20:54:29 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022806
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 03/34] x86/cpufeature: Fix various quality problems in the header
Date: Thu, 27 Oct 2022 13:54:29 -0700
Message-ID: <20221027205430.17122-3-surajjs@amazon.com>
In-Reply-To: <20221027205430.17122-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
 <20221027205430.17122-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

From: Ingo Molnar

commit 266d63a7d9d48c6d5dee486378ec0e8c86c4d74a upstream

Thomas noticed that the new arch/x86/include/asm/cpu_device_id.h header is
a train-wreck that didn't incorporate review feedback like not using __u8
in kernel-only headers.

While at it also fix all the *other* problems this header has:

 - Use canonical names for the header guards. It's inexplicable why a
   non-standard guard was used.

 - Don't define the header guard to 1. Plus annotate the closing #endif
   as done absolutely every other header. Again, an inexplicable source
   of noise.

 - Move the kernel API calls provided by this header next to each other,
   there's absolutely no reason to have them spread apart in the header.

 - Align the INTEL_CPU_DESC() macro initializations vertically, this is
   easier to read and it's also the canonical style.
 - Actually name the macro arguments properly: instead of 'mod, step,
   rev', spell out 'model, stepping, revision' - it's not like we have
   a lack of characters in this header.

 - Actually make arguments macro-safe - again it's inexplicable why it
   wasn't done properly to begin with.

Quite amazing how many problems a 41 lines header can contain.

This kind of code quality is unacceptable, and it slipped through the
review net of 2 developers and 2 maintainers, including myself, until
Thomas noticed it. :-/

Reported-by: Thomas Gleixner
Cc: Andi Kleen
Cc: Kan Liang
Cc: Peter Zijlstra (Intel)
Cc: Borislav Petkov
Cc: Linus Torvalds
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/cpu_device_id.h | 31 ++++++++++++++--------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index 3417110574c1..31c379c1da41 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _CPU_DEVICE_ID
-#define _CPU_DEVICE_ID 1
+#ifndef _ASM_X86_CPU_DEVICE_ID
+#define _ASM_X86_CPU_DEVICE_ID

 /*
  * Declare drivers belonging to specific x86 CPUs
@@ -9,8 +9,6 @@

 #include

-extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);
-
 /*
  * Match specific microcode revisions.
  *
@@ -22,21 +20,22 @@ extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);
  */

 struct x86_cpu_desc {
-	__u8	x86_family;
-	__u8	x86_vendor;
-	__u8	x86_model;
-	__u8	x86_stepping;
-	__u32	x86_microcode_rev;
+	u8	x86_family;
+	u8	x86_vendor;
+	u8	x86_model;
+	u8	x86_stepping;
+	u32	x86_microcode_rev;
 };

-#define INTEL_CPU_DESC(mod, step, rev) {	\
-	.x86_family = 6,			\
-	.x86_vendor = X86_VENDOR_INTEL,		\
-	.x86_model = mod,			\
-	.x86_stepping = step,			\
-	.x86_microcode_rev = rev,		\
+#define INTEL_CPU_DESC(model, stepping, revision) {	\
+	.x86_family		= 6,			\
+	.x86_vendor		= X86_VENDOR_INTEL,	\
+	.x86_model		= (model),		\
+	.x86_stepping		= (stepping),		\
+	.x86_microcode_rev	= (revision),		\
 }

+extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);
 extern bool x86_cpu_has_min_microcode_rev(const struct x86_cpu_desc *table);

-#endif
+#endif /* _ASM_X86_CPU_DEVICE_ID */
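The "macro-safe" point is the classic C preprocessor pitfall: an
unparenthesized macro argument can re-associate with neighbouring operators.
A generic userspace illustration, not from the patch itself:

    #include <stdio.h>

    #define DOUBLE_UNSAFE(x) (2 * x)	/* argument used bare */
    #define DOUBLE_SAFE(x)   (2 * (x))	/* argument parenthesized */

    int main(void)
    {
    	printf("%d\n", DOUBLE_UNSAFE(1 + 2));	/* (2 * 1 + 2) == 4 */
    	printf("%d\n", DOUBLE_SAFE(1 + 2));	/* (2 * (1 + 2)) == 6 */
    	return 0;
    }

Wrapping model/stepping/revision in parentheses inside INTEL_CPU_DESC()
closes the same hole for expression arguments.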
From patchwork Thu Oct 27 20:54:30 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022809
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 04/34] x86/devicetable: Move x86 specific macro out of generic code
Date: Thu, 27 Oct 2022 13:54:30 -0700
Message-ID: <20221027205430.17122-4-surajjs@amazon.com>
In-Reply-To: <20221027205430.17122-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
 <20221027205430.17122-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

From: Thomas Gleixner

commit ba5bade4cc0d2013cdf5634dae554693c968a090 upstream

There is no reason that this gunk is in a generic header file. The
wildcard defines need to stay as they are required by file2alias.

Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov
Reviewed-by: Greg Kroah-Hartman
Link: https://lkml.kernel.org/r/20200320131508.736205164@linutronix.de
---
 arch/x86/include/asm/cpu_device_id.h   | 13 ++++++++++++-
 arch/x86/kvm/svm.c                     |  1 +
 arch/x86/kvm/vmx.c                     |  1 +
 drivers/cpufreq/acpi-cpufreq.c         |  1 +
 drivers/cpufreq/amd_freq_sensitivity.c |  1 +
 include/linux/mod_devicetable.h        |  4 +---
 6 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index 31c379c1da41..a28dc6ba5be1 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -6,9 +6,20 @@
  * Declare drivers belonging to specific x86 CPUs
  * Similar in spirit to pci_device_id and related PCI functions
  */
-
 #include

+/*
+ * The wildcard initializers are in mod_devicetable.h because
+ * file2alias needs them. Sigh.
+ */
+
+#define X86_FEATURE_MATCH(x) {			\
+	.vendor		= X86_VENDOR_ANY,	\
+	.family		= X86_FAMILY_ANY,	\
+	.model		= X86_MODEL_ANY,	\
+	.feature	= x,			\
+}
+
 /*
  * Match specific microcode revisions.
  *

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 69056af43a98..7d8670896203 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -47,6 +47,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "trace.h"

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 27d99928a10e..48b40e160e27 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -40,6 +40,7 @@
 #include "x86.h"

 #include
+#include
 #include
 #include
 #include

diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index 8a199b4047c2..6b5304177575 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -47,6 +47,7 @@
 #include
 #include
 #include
+#include

 MODULE_AUTHOR("Paul Diefenbaugh, Dominik Brodowski");
 MODULE_DESCRIPTION("ACPI Processor P-States Driver");

diff --git a/drivers/cpufreq/amd_freq_sensitivity.c b/drivers/cpufreq/amd_freq_sensitivity.c
index 042023bbbf62..b7692861b2f7 100644
--- a/drivers/cpufreq/amd_freq_sensitivity.c
+++ b/drivers/cpufreq/amd_freq_sensitivity.c
@@ -20,6 +20,7 @@
 #include

 #include
+#include

 #include "cpufreq_ondemand.h"

diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index f3c631fe076a..e57cd43989fe 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -599,9 +599,7 @@ struct x86_cpu_id {
 	kernel_ulong_t driver_data;
 };

-#define X86_FEATURE_MATCH(x) \
-	{ X86_VENDOR_ANY, X86_FAMILY_ANY, X86_MODEL_ANY, x }
-
+/* Wild cards for x86_cpu_id::vendor, family, model and feature */
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY 0
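For context, a module consuming the relocated macro looks roughly like the
cpufreq drivers touched above. This sketch is illustrative only; the demo_*
names are invented:

    #include <linux/module.h>
    #include <linux/mod_devicetable.h>
    #include <asm/cpu_device_id.h>
    #include <asm/cpufeatures.h>

    /* Bind to any CPU advertising AVX2; vendor/family/model wildcarded. */
    static const struct x86_cpu_id demo_cpu_ids[] = {
    	X86_FEATURE_MATCH(X86_FEATURE_AVX2),
    	{}
    };
    /* Emits the modalias file2alias needs, so udev can autoload the module. */
    MODULE_DEVICE_TABLE(x86cpu, demo_cpu_ids);

    static int __init demo_init(void)
    {
    	if (!x86_match_cpu(demo_cpu_ids))
    		return -ENODEV;
    	return 0;
    }
    module_init(demo_init);
    MODULE_LICENSE("GPL");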
From patchwork Thu Oct 27 20:54:39 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022810
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 05/34] x86/cpu: Add consistent CPU match macros
Date: Thu, 27 Oct 2022 13:54:39 -0700
Message-ID: <20221027205442.17210-1-surajjs@amazon.com>
In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

From: Thomas Gleixner

commit 20d437447c0089cda46c683db219d3b4e2cde40e upstream

Finding all places which build x86_cpu_id match tables is tedious and the
logic is hidden in lots of differently named macro wrappers.

Most of these initializer macros use plain C89 initializers which rely on
the ordering of the struct members. So new members could only be added at
the end of the struct, but that's ugly as hell and C99 initializers are
really the right thing to use.

Provide a set of macros which:

 - Have a proper naming scheme, starting with X86_MATCH_

 - Use C99 initializers

The set of provided macros are all subsets of the base macro
X86_MATCH_VENDOR_FAM_MODEL_FEATURE() which allows to supply all possible
selection criteria: vendor, family, model, feature

The other macros shorten this to avoid typing all arguments when they are
not needed and would require one of the _ANY constants. They have been
created due to the requirements of the existing usage sites.

Also add a few model constants for Centaur CPUs and QUARK.

Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov
Reviewed-by: Greg Kroah-Hartman
Link: https://lkml.kernel.org/r/20200320131508.826011988@linutronix.de
---
 arch/x86/include/asm/cpu_device_id.h | 140 +++++++++++++++++++++++++--
 arch/x86/include/asm/intel-family.h  |   6 ++
 arch/x86/kernel/cpu/match.c          |  13 ++-
 3 files changed, 146 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index a28dc6ba5be1..f11770fac73a 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -5,21 +5,143 @@
 /*
  * Declare drivers belonging to specific x86 CPUs
  * Similar in spirit to pci_device_id and related PCI functions
- */
-#include
-
-/*
+ *
  * The wildcard initializers are in mod_devicetable.h because
  * file2alias needs them. Sigh.
  */
+#include
+/* Get the INTEL_FAM* model defines */
+#include
+/* And the X86_VENDOR_* ones */
+#include

-#define X86_FEATURE_MATCH(x) {			\
-	.vendor		= X86_VENDOR_ANY,	\
-	.family		= X86_FAMILY_ANY,	\
-	.model		= X86_MODEL_ANY,	\
-	.feature	= x,			\
+/* Centaur FAM6 models */
+#define X86_CENTAUR_FAM6_C7_A		0xa
+#define X86_CENTAUR_FAM6_C7_D		0xd
+#define X86_CENTAUR_FAM6_NANO		0xf
+
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE - Base macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * Use only if you need all selectors. Otherwise use one of the shorter
+ * macros of the X86_MATCH_* family. If there is no matching shorthand
+ * macro, consider to add one. If you really need to wrap one of the macros
+ * into another macro at the usage site for good reasons, then please
+ * start this local macro with X86_MATCH to allow easy grepping.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_FEATURE(_vendor, _family, _model,	\
+					   _feature, _data) {		\
+	.vendor		= X86_VENDOR_##_vendor,				\
+	.family		= _family,					\
+	.model		= _model,					\
+	.feature	= _feature,					\
+	.driver_data	= (unsigned long) _data				\
 }

+/**
+ * X86_MATCH_VENDOR_FAM_FEATURE - Macro for matching vendor, family and CPU feature
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @family:	The family number or X86_FAMILY_ANY
+ * @feature:	A X86_FEATURE bit
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_FEATURE(vendor, family, feature, data)	\
+	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family,		\
+					   X86_MODEL_ANY, feature, data)
+
+/**
+ * X86_MATCH_VENDOR_FEATURE - Macro for matching vendor and CPU feature
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @feature:	A X86_FEATURE bit
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FEATURE(vendor, feature, data)			\
+	X86_MATCH_VENDOR_FAM_FEATURE(vendor, X86_FAMILY_ANY, feature, data)
+
+/**
+ * X86_MATCH_FEATURE - Macro for matching a CPU feature
+ * @feature:	A X86_FEATURE bit
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_FEATURE(feature, data)				\
+	X86_MATCH_VENDOR_FEATURE(ANY, feature, data)
+
+/* Transitional to keep the existing code working */
+#define X86_FEATURE_MATCH(feature)	X86_MATCH_FEATURE(feature, NULL)
+
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL - Match vendor, family and model
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @family:	The family number or X86_FAMILY_ANY
+ * @model:	The model number, model constant or X86_MODEL_ANY
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set to wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, data)	\
+	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(vendor, family, model,	\
+					   X86_FEATURE_ANY, data)
+
+/**
+ * X86_MATCH_VENDOR_FAM - Match vendor and family
+ * @vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@vendor
+ * @family:	The family number or X86_FAMILY_ANY
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * All other missing arguments to X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are
+ * set of wildcards.
+ */
+#define X86_MATCH_VENDOR_FAM(vendor, family, data)			\
+	X86_MATCH_VENDOR_FAM_MODEL(vendor, family, X86_MODEL_ANY, data)
+
+/**
+ * X86_MATCH_INTEL_FAM6_MODEL - Match vendor INTEL, family 6 and model
+ * @model:	The model name without the INTEL_FAM6_ prefix or ANY
+ *		The model name is expanded to INTEL_FAM6_@model internally
+ * @data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * The vendor is set to INTEL, the family to 6 and all other missing
+ * arguments of X86_MATCH_VENDOR_FAM_MODEL_FEATURE() are set to wildcards.
+ *
+ * See X86_MATCH_VENDOR_FAM_MODEL_FEATURE() for further information.
+ */
+#define X86_MATCH_INTEL_FAM6_MODEL(model, data)				\
+	X86_MATCH_VENDOR_FAM_MODEL(INTEL, 6, INTEL_FAM6_##model, data)
+
 /*
  * Match specific microcode revisions.
  *

diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
index 05d2d7169ab8..7811d42e78ef 100644
--- a/arch/x86/include/asm/intel-family.h
+++ b/arch/x86/include/asm/intel-family.h
@@ -16,6 +16,9 @@
  * that group keep the CPUID for the variants sorted by model number.
  */

+/* Wildcard match for FAM6 so X86_MATCH_INTEL_FAM6_MODEL(ANY) works */
+#define INTEL_FAM6_ANY			X86_MODEL_ANY
+
 #define INTEL_FAM6_CORE_YONAH		0x0E

 #define INTEL_FAM6_CORE2_MEROM		0x0F
@@ -103,4 +106,7 @@
 #define INTEL_FAM6_XEON_PHI_KNL		0x57 /* Knights Landing */
 #define INTEL_FAM6_XEON_PHI_KNM		0x85 /* Knights Mill */

+/* Family 5 */
+#define INTEL_FAM5_QUARK_X1000		0x09 /* Quark X1000 SoC */
+
 #endif /* _ASM_X86_INTEL_FAMILY_H */

diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index 6dd78d8235e4..d3482eb43ff3 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -16,12 +16,17 @@
  * respective wildcard entries.
  *
  * A typical table entry would be to match a specific CPU
- * { X86_VENDOR_INTEL, 6, 0x12 }
- * or to match a specific CPU feature
- * { X86_FEATURE_MATCH(X86_FEATURE_FOOBAR) }
+ *
+ * X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_BROADWELL,
+ *				      X86_FEATURE_ANY, NULL);
  *
  * Fields can be wildcarded with %X86_VENDOR_ANY, %X86_FAMILY_ANY,
- * %X86_MODEL_ANY, %X86_FEATURE_ANY or 0 (except for vendor)
+ * %X86_MODEL_ANY, %X86_FEATURE_ANY (except for vendor)
+ *
+ * asm/cpu_device_id.h contains a set of useful macros which are shortcuts
+ * for various common selections. The above can be shortened to:
+ *
+ * X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, NULL);
  *
  * Arrays used to match for this should also be declared using
  * MODULE_DEVICE_TABLE(x86cpu, ...)
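To make the new naming scheme concrete, here is a sketch of a table mixing
the long form with two of the shorthands; the driver data values are
illustrative, and the Broadwell entry mirrors the comment in match.c above:

    #include <asm/cpu_device_id.h>
    #include <asm/intel-family.h>

    static const struct x86_cpu_id demo_match_ids[] = {
    	/* Full selector set spelled out: */
    	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_BROADWELL,
    					   X86_FEATURE_ANY, NULL),
    	/* Common Intel family-6 shorthand: */
    	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_DESKTOP, (void *)1UL),
    	/* Any AMD family 0x17 part, model and feature wildcarded: */
    	X86_MATCH_VENDOR_FAM(AMD, 0x17, (void *)2UL),
    	{}
    };

    static unsigned long demo_lookup(void)
    {
    	const struct x86_cpu_id *id = x86_match_cpu(demo_match_ids);

    	return id ? id->driver_data : 0;	/* 0: no entry matched */
    }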
From patchwork Thu Oct 27 20:54:40 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022808
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 06/34] x86/cpu: Add a steppings field to struct x86_cpu_id
Date: Thu, 27 Oct 2022 13:54:40 -0700
Message-ID: <20221027205442.17210-2-surajjs@amazon.com>
In-Reply-To: <20221027205442.17210-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
 <20221027205442.17210-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

From: Mark Gross

commit e9d7144597b10ff13ff2264c059f7d4a7fbc89ac upstream

Intel uses the same family/model for several CPUs. Sometimes the
stepping must be checked to tell them apart.

On x86 there can be at most 16 steppings. Add a steppings bitmask to
x86_cpu_id and a X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro
and support for matching against family/model/stepping.

[ bp: Massage.
  tglx: Lightweight variant for backporting ]

Signed-off-by: Mark Gross
Signed-off-by: Borislav Petkov
Signed-off-by: Thomas Gleixner
Reviewed-by: Tony Luck
Reviewed-by: Josh Poimboeuf
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/cpu_device_id.h | 27 +++++++++++++++++++++++++++
 arch/x86/kernel/cpu/match.c          |  7 ++++++-
 include/linux/mod_devicetable.h      |  6 ++++++
 3 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
index f11770fac73a..e54babe529c7 100644
--- a/arch/x86/include/asm/cpu_device_id.h
+++ b/arch/x86/include/asm/cpu_device_id.h
@@ -168,6 +168,33 @@ struct x86_cpu_desc {
 	.x86_microcode_rev	= (revision),		\
 }

+#define X86_STEPPINGS(mins, maxs)	GENMASK(maxs, mins)
+
+/**
+ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
+ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
+ *		The name is expanded to X86_VENDOR_@_vendor
+ * @_family:	The family number or X86_FAMILY_ANY
+ * @_model:	The model number, model constant or X86_MODEL_ANY
+ * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
+ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
+ * @_data:	Driver specific data or NULL. The internal storage
+ *		format is unsigned long. The supplied value, pointer
+ *		etc. is casted to unsigned long internally.
+ *
+ * Backport version to keep the SRBDS pile consistant. No shorter variants
+ * required for this.
+ */
+#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
+						     _steppings, _feature, _data) { \
+	.vendor		= X86_VENDOR_##_vendor,		\
+	.family		= _family,			\
+	.model		= _model,			\
+	.steppings	= _steppings,			\
+	.feature	= _feature,			\
+	.driver_data	= (unsigned long) _data		\
+}
+
 extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);
 extern bool x86_cpu_has_min_microcode_rev(const struct x86_cpu_desc *table);

diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
index d3482eb43ff3..ad6776081e60 100644
--- a/arch/x86/kernel/cpu/match.c
+++ b/arch/x86/kernel/cpu/match.c
@@ -39,13 +39,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
 	const struct x86_cpu_id *m;
 	struct cpuinfo_x86 *c = &boot_cpu_data;

-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
+	for (m = match;
+	     m->vendor | m->family | m->model | m->steppings | m->feature;
+	     m++) {
 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
 			continue;
 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
 			continue;
 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
 			continue;
+		if (m->steppings != X86_STEPPING_ANY &&
+		    !(BIT(c->x86_stepping) & m->steppings))
+			continue;
 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
 			continue;
 		return m;

diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index e57cd43989fe..97794823eabd 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -589,6 +589,10 @@ struct mips_cdmm_device_id {
 /*
  * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
  * Although gcc seems to ignore this error, clang fails without this define.
+ *
+ * Note: The ordering of the struct is different from upstream because the
+ * static initializers in kernels < 5.7 still use C89 style while upstream
+ * has been converted to proper C99 initializers.
  */
 #define x86cpu_device_id x86_cpu_id
 struct x86_cpu_id {
@@ -597,12 +601,14 @@ struct x86_cpu_id {
 	__u16 model;
 	__u16 feature;	/* bit index */
 	kernel_ulong_t driver_data;
+	__u16 steppings;
 };

 /* Wild cards for x86_cpu_id::vendor, family, model and feature */
 #define X86_VENDOR_ANY 0xffff
 #define X86_FAMILY_ANY 0
 #define X86_MODEL_ANY 0
+#define X86_STEPPING_ANY 0
 #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */

 /*
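The bitmask semantics are worth spelling out: X86_STEPPINGS(min, max) expands
to GENMASK(max, min), and the match loop tests BIT(c->x86_stepping) against
it. A sketch of an entry matching only steppings 0x0-0x3 of a part (the model
and range are invented for illustration):

    #include <asm/cpu_device_id.h>
    #include <asm/intel-family.h>

    static const struct x86_cpu_id demo_stepping_ids[] = {
    	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,
    			INTEL_FAM6_SKYLAKE_DESKTOP,
    			X86_STEPPINGS(0x0, 0x3),	/* GENMASK(3, 0) == 0xf */
    			X86_FEATURE_ANY, NULL),
    	{}
    };

    static bool demo_stepping_hit(void)
    {
    	/* e.g. stepping 2: BIT(2) & 0xf != 0, so the entry matches. */
    	return x86_match_cpu(demo_stepping_ids) != NULL;
    }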
From patchwork Thu Oct 27 20:54:41 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022811
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 07/34] x86/entry: Remove skip_r11rcx
Date: Thu, 27 Oct 2022 13:54:41 -0700
Message-ID: <20221027205442.17210-3-surajjs@amazon.com>
In-Reply-To: <20221027205442.17210-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
 <20221027205442.17210-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

From: Peter Zijlstra

commit 1b331eeea7b8676fc5dbdf80d0a07e41be226177 upstream.

Yes, r11 and rcx have been restored previously, but since they're being
popped anyway (into rsi) might as well pop them into their own regs --
setting them to the value they already are. Less magical code.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Link: https://lore.kernel.org/r/20220506121631.365070674@infradead.org
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/entry/calling.h  | 10 +---------
 arch/x86/entry/entry_64.S |  3 +--
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 1dbc62a96b85..273b562af8fa 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -146,27 +146,19 @@ For 32-bit we have the following conventions - kernel is built with

 .endm

-.macro POP_REGS pop_rdi=1 skip_r11rcx=0
+.macro POP_REGS pop_rdi=1
 	popq %r15
 	popq %r14
 	popq %r13
 	popq %r12
 	popq %rbp
 	popq %rbx
-	.if \skip_r11rcx
-	popq %rsi
-	.else
 	popq %r11
-	.endif
 	popq %r10
 	popq %r9
 	popq %r8
 	popq %rax
-	.if \skip_r11rcx
-	popq %rsi
-	.else
 	popq %rcx
-	.endif
 	popq %rdx
 	popq %rsi
 	.if \pop_rdi

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index ac389ffb1822..d3472e773b5a 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -301,8 +301,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
 	 * perf profiles. Nothing jumps here.
 	 */
 syscall_return_via_sysret:
-	/* rcx and r11 are already restored (see code above) */
-	POP_REGS pop_rdi=0 skip_r11rcx=1
+	POP_REGS pop_rdi=0

 	/*
 	 * Now all regs are restored except RSP and RDI.
From patchwork Thu Oct 27 20:54:42 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022807
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 08/34] x86/cpufeatures: Move RETPOLINE flags to word 11
Date: Thu, 27 Oct 2022 13:54:42 -0700
Message-ID: <20221027205442.17210-4-surajjs@amazon.com>
In-Reply-To: <20221027205442.17210-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
 <20221027205442.17210-1-surajjs@amazon.com>
X-Mailing-List: kvm@vger.kernel.org

From: Peter Zijlstra

commit a883d624aed463c84c22596006e5a96f5b44db31 upstream.

In order to extend the RETPOLINE features to 4, move them to word 11
where there is still room. This mostly keeps DISABLE_RETPOLINE simple.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/cpufeatures.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index d56634d6b10c..5378dc9aaec9 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -202,8 +202,8 @@
 #define X86_FEATURE_PROC_FEEDBACK	( 7*32+ 9) /* AMD ProcFeedbackInterface */
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
-#define X86_FEATURE_RETPOLINE		( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_LFENCE	( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
+/* FREE!				( 7*32+12) */
+/* FREE!				( 7*32+13) */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
@@ -283,6 +283,14 @@
 #define X86_FEATURE_CQM_MBM_LOCAL	(11*32+ 3) /* LLC Local MBM monitoring */
 #define X86_FEATURE_FENCE_SWAPGS_USER	(11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
 #define X86_FEATURE_FENCE_SWAPGS_KERNEL	(11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+/* FREE!				(11*32+ 6) */
+/* FREE!				(11*32+ 7) */
+/* FREE!				(11*32+ 8) */
+/* FREE!				(11*32+ 9) */
+/* FREE!				(11*32+10) */
+/* FREE!				(11*32+11) */
+#define X86_FEATURE_RETPOLINE		(11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE	(11*32+13) /* "" Use LFENCE for Spectre variant 2 */

 /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
 #define X86_FEATURE_CLZERO		(13*32+ 0) /* CLZERO instruction */
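Each cpufeatures.h entry encodes word*32 + bit, so X86_FEATURE_RETPOLINE
becomes bit 364 (11*32 + 12) in the kernel's capability bitmap; word 11 is a
Linux-defined word, so the move changes no CPUID semantics. Consumers keep
using the usual idiom; a generic sketch, not part of this patch:

    #include <linux/printk.h>
    #include <asm/cpufeature.h>
    #include <asm/cpufeatures.h>

    static void demo_report_retpoline(void)
    {
    	/* boot_cpu_has() indexes the bitmap by the encoded number,
    	 * so callers are unaffected by which word the flag lives in. */
    	if (boot_cpu_has(X86_FEATURE_RETPOLINE))
    		pr_info("retpoline mitigation in use\n");
    }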
Report that AMD x86 CPUs are vulnerable to the RETBleed (Arbitrary Speculative Code Execution with Return Instructions) attack. [peterz: add hygon] [kim: invert parity; fam15h] Co-developed-by: Kim Phillips Signed-off-by: Kim Phillips Signed-off-by: Alexandre Chartre Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust context ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/kernel/cpu/bugs.c | 13 +++++++++++++ arch/x86/kernel/cpu/common.c | 15 +++++++++++++++ drivers/base/cpu.c | 8 ++++++++ include/linux/cpu.h | 2 ++ 5 files changed, 39 insertions(+) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 5378dc9aaec9..2e96129ed68b 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -403,5 +403,6 @@ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ #define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */ +#define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 68056ee5dff9..6706c6af08a1 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1899,6 +1899,11 @@ static ssize_t srbds_show_state(char *buf) return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]); } +static ssize_t retbleed_show_state(char *buf) +{ + return sprintf(buf, "Vulnerable\n"); +} + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, char *buf, unsigned int bug) { @@ -1942,6 +1947,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr case X86_BUG_MMIO_UNKNOWN: return mmio_stale_data_show_state(buf); + case X86_BUG_RETBLEED: + return retbleed_show_state(buf); + default: break; } @@ -2001,4 +2009,9 @@ ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *at else return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA); } + +ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED); +} #endif diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index e72a21c20772..b6fa5fdc3e17 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -970,16 +970,24 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { {} }; +#define VULNBL(vendor, family, model, blacklist) \ + X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, blacklist) + #define VULNBL_INTEL_STEPPINGS(model, steppings, issues) \ X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, \ INTEL_FAM6_##model, steppings, \ X86_FEATURE_ANY, issues) +#define VULNBL_AMD(family, blacklist) \ + VULNBL(AMD, family, X86_MODEL_ANY, blacklist) + #define SRBDS BIT(0) /* CPU is affected by X86_BUG_MMIO_STALE_DATA */ #define MMIO BIT(1) /* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */ #define MMIO_SBDS BIT(2) +/* CPU is affected by RETbleed, speculating where you would not expect it */ +#define RETBLEED BIT(3) static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { 
VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS), @@ -1012,6 +1020,10 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS), + + VULNBL_AMD(0x15, RETBLEED), + VULNBL_AMD(0x16, RETBLEED), + VULNBL_AMD(0x17, RETBLEED), {} }; @@ -1117,6 +1129,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN); } + if (cpu_matches(cpu_vuln_blacklist, RETBLEED)) + setup_force_cpu_bug(X86_BUG_RETBLEED); + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c index ba4e7732e2c7..9ae153124380 100644 --- a/drivers/base/cpu.c +++ b/drivers/base/cpu.c @@ -564,6 +564,12 @@ ssize_t __weak cpu_show_mmio_stale_data(struct device *dev, return sysfs_emit(buf, "Not affected\n"); } +ssize_t __weak cpu_show_retbleed(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sysfs_emit(buf, "Not affected\n"); +} + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); @@ -574,6 +580,7 @@ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL); static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL); static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL); static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL); +static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL); static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_meltdown.attr, @@ -586,6 +593,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_itlb_multihit.attr, &dev_attr_srbds.attr, &dev_attr_mmio_stale_data.attr, + &dev_attr_retbleed.attr, NULL }; diff --git a/include/linux/cpu.h b/include/linux/cpu.h index f958ecc82de9..8c4d21e71774 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -68,6 +68,8 @@ extern ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, extern ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf); +extern ssize_t cpu_show_retbleed(struct device *dev, + struct device_attribute *attr, char *buf); extern __printf(4, 5) struct device *cpu_device_create(struct device *parent, void *drvdata, From patchwork Thu Oct 27 20:54:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022813 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F24A6FA3743 for ; Thu, 27 Oct 2022 21:03:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237024AbiJ0VDP (ORCPT ); Thu, 27 Oct 2022 17:03:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236081AbiJ0VCy (ORCPT ); Thu, 27 Oct 2022 17:02:54 -0400 Received: from smtp-fw-2101.amazon.com (smtp-fw-2101.amazon.com [72.21.196.25]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C31FC1B1D9; Thu, 27 Oct 2022 13:55:03 
-0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904105; x=1698440105; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=J5U38Tk1pvE7lZaGYO3rHFRbdrKKTLtPvUaNDZw34Go=; b=JM0Mq31m8v7Ug5vtcS22kowathxzzSTqIpOaWfDHCWjIo+RyfBrepwhD ozanyVr8Cl0R8Hob42foQ+tLyGbuBlfsgYk8y+Hi0lifc818lVcgQw+V9 eWCx8Fl+mg7dgLdlXoB7aEaqPa6RebjKDXzjAXB0HeaIHjxTaO3ICaYsf g=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="257153072" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-1box-2bm6-32cf6363.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-2101.iad2.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:02 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-1box-2bm6-32cf6363.us-west-2.amazon.com (Postfix) with ESMTPS id 6327281C1A; Thu, 27 Oct 2022 20:55:01 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:00 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:00 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 10/34] x86/bugs: Add AMD retbleed= boot parameter Date: Thu, 27 Oct 2022 13:54:50 -0700 Message-ID: <20221027205452.17271-2-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205452.17271-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205452.17271-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D39UWA003.ant.amazon.com (10.43.160.235) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Alexandre Chartre commit 7fbf47c7ce50b38a64576b150e7011ae73d54669 upstream. Add the "retbleed=" boot parameter to select a mitigation for RETBleed. Possible values are "off", "auto" and "unret" (JMP2RET mitigation). The default value is "auto". Currently, "retbleed=auto" will select the unret mitigation on AMD and Hygon and no mitigation on Intel (JMP2RET is not effective on Intel). [peterz: rebase; add hygon] [jpoimboe: cleanups] Signed-off-by: Alexandre Chartre Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust context - no jmp2ret mitigation exists ] Signed-off-by: Suraj Jitindar Singh --- .../admin-guide/kernel-parameters.txt | 12 ++++ arch/x86/kernel/cpu/bugs.c | 70 ++++++++++++++++++- 2 files changed, 81 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 681d429c6426..4f1269d81f97 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3965,6 +3965,18 @@ retain_initrd [RAM] Keep initrd memory after extraction + retbleed= [X86] Control mitigation of RETBleed (Arbitrary + Speculative Code Execution with Return Instructions) + vulnerability. 
+ + off - unconditionally disable + auto - automatically select a mitigation + + Selecting 'auto' will choose a mitigation method at run time according to the CPU. + + Not specifying this option is equivalent to retbleed=auto. + rfkill.default_state= 0 "airplane mode". All wifi, bluetooth, wimax, gps, fm, etc. communication is blocked by default. diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 6706c6af08a1..9249831fc3bb 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -36,6 +36,7 @@ #include "cpu.h" static void __init spectre_v1_select_mitigation(void); +static void __init retbleed_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); @@ -111,6 +112,12 @@ void __init check_bugs(void) /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); + retbleed_select_mitigation(); + /* + * spectre_v2_select_mitigation() relies on the state set by + * retbleed_select_mitigation(); specifically the STIBP selection is + * forced for UNRET. + */ spectre_v2_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); @@ -705,6 +712,67 @@ static int __init nospectre_v1_cmdline(char *str) } early_param("nospectre_v1", nospectre_v1_cmdline); +#undef pr_fmt +#define pr_fmt(fmt) "RETBleed: " fmt + +enum retbleed_mitigation { + RETBLEED_MITIGATION_NONE +}; + +enum retbleed_mitigation_cmd { + RETBLEED_CMD_OFF, + RETBLEED_CMD_AUTO +}; + +const char * const retbleed_strings[] = { + [RETBLEED_MITIGATION_NONE] = "Vulnerable" +}; + +static enum retbleed_mitigation retbleed_mitigation __ro_after_init = + RETBLEED_MITIGATION_NONE; +static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init = + RETBLEED_CMD_AUTO; + +static int __init retbleed_parse_cmdline(char *str) +{ + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) + retbleed_cmd = RETBLEED_CMD_OFF; + else if (!strcmp(str, "auto")) + retbleed_cmd = RETBLEED_CMD_AUTO; + else + pr_err("Unknown retbleed option (%s).
Defaulting to 'auto'\n", str); + + return 0; +} +early_param("retbleed", retbleed_parse_cmdline); + +static void __init retbleed_select_mitigation(void) +{ + if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) + return; + + switch (retbleed_cmd) { + case RETBLEED_CMD_OFF: + return; + + case RETBLEED_CMD_AUTO: + default: + if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) + break; + break; + } + + switch (retbleed_mitigation) { + default: + break; + } + + pr_info("%s\n", retbleed_strings[retbleed_mitigation]); +} + #undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt @@ -1901,7 +1969,7 @@ static ssize_t srbds_show_state(char *buf) static ssize_t retbleed_show_state(char *buf) { - return sprintf(buf, "Vulnerable\n"); + return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]); } static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, From patchwork Thu Oct 27 20:54:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022814 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CE5EFA3746 for ; Thu, 27 Oct 2022 21:03:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237045AbiJ0VDR (ORCPT ); Thu, 27 Oct 2022 17:03:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53168 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236640AbiJ0VCy (ORCPT ); Thu, 27 Oct 2022 17:02:54 -0400 Received: from smtp-fw-9103.amazon.com (smtp-fw-9103.amazon.com [207.171.188.200]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C3183120A7; Thu, 27 Oct 2022 13:55:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904104; x=1698440104; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=Z7sqfVSVSKh4TNpMhf6nX11hr+gOrxoKTq4f3WRMvmk=; b=JNFOqcmyoy+P4jmIqAAKAA6MXZSjzhtRU2F5Bh6bZCnK+oV8+7XZUg3S oIEi33L4K2xZF6kXKJ0nP02Afbq2svBscmZKhK5GU1lF8p8WuT1KU/YLT Lng5J5ApF2N5rTueR8l2L+QKgaajmMGQwCmbCPLyfP0Djue4b/T5jxlZo Y=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="1068508370" Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO email-inbound-relay-pdx-2b-m6i4x-26a610d2.us-west-2.amazon.com) ([10.25.36.214]) by smtp-border-fw-9103.sea19.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:02 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-2b-m6i4x-26a610d2.us-west-2.amazon.com (Postfix) with ESMTPS id D0A20417B7; Thu, 27 Oct 2022 20:55:01 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:01 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:01 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 11/34] x86/bugs: Keep a per-CPU IA32_SPEC_CTRL value Date: Thu, 27 Oct 2022 13:54:51 -0700 Message-ID: 
<20221027205452.17271-3-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205452.17271-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205452.17271-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D39UWA003.ant.amazon.com (10.43.160.235) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Peter Zijlstra commit caa0ff24d5d0e02abce5e65c3d2b7f20a6617be5 upstream. Due to TIF_SSBD and TIF_SPEC_IB the actual IA32_SPEC_CTRL value can differ from x86_spec_ctrl_base. As such, keep a per-CPU value reflecting the current task's MSR content. [jpoimboe: rename] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 28 +++++++++++++++++++++++----- arch/x86/kernel/process.c | 2 +- 3 files changed, 25 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 8a618fbf569f..6bc5a324dd65 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -291,6 +291,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; +extern void write_spec_ctrl_current(u64 val); /* * With retpoline, we must use IBRS to restrict branch prediction diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 9249831fc3bb..b0768341afbe 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -47,11 +47,29 @@ static void __init taa_select_mitigation(void); static void __init mmio_select_mitigation(void); static void __init srbds_select_mitigation(void); -/* The base value of the SPEC_CTRL MSR that always has to be preserved. */ +/* The base value of the SPEC_CTRL MSR without task-specific bits set */ u64 x86_spec_ctrl_base; EXPORT_SYMBOL_GPL(x86_spec_ctrl_base); + +/* The current value of the SPEC_CTRL MSR with task-specific bits set */ +DEFINE_PER_CPU(u64, x86_spec_ctrl_current); +EXPORT_SYMBOL_GPL(x86_spec_ctrl_current); + static DEFINE_MUTEX(spec_ctrl_mutex); +/* + * Keep track of the SPEC_CTRL MSR value for the current task, which may differ + * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update(). + */ +void write_spec_ctrl_current(u64 val) +{ + if (this_cpu_read(x86_spec_ctrl_current) == val) + return; + + this_cpu_write(x86_spec_ctrl_current, val); + wrmsrl(MSR_IA32_SPEC_CTRL, val); +} + /* * The vendor and possibly platform specific bits which can be modified in * x86_spec_ctrl_base. @@ -1173,7 +1191,7 @@ static void __init spectre_v2_select_mitigation(void) if (spectre_v2_in_eibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } switch (mode) { @@ -1228,7 +1246,7 @@ static void __init spectre_v2_select_mitigation(void) static void update_stibp_msr(void * __unused) { - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } /* Update x86_spec_ctrl_base in case SMT state changed. 
*/ @@ -1471,7 +1489,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) x86_amd_ssb_disable(); } else { x86_spec_ctrl_base |= SPEC_CTRL_SSBD; - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } } @@ -1676,7 +1694,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which) void x86_spec_ctrl_setup_ap(void) { if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) x86_amd_ssb_disable(); diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index a07b09f68e7e..6e000c6ec6be 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -435,7 +435,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp, } if (updmsr) - wrmsrl(MSR_IA32_SPEC_CTRL, msr); + write_spec_ctrl_current(msr); } static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) From patchwork Thu Oct 27 20:54:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022815 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 031D6FA3744 for ; Thu, 27 Oct 2022 21:03:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235359AbiJ0VDU (ORCPT ); Thu, 27 Oct 2022 17:03:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47030 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236366AbiJ0VC4 (ORCPT ); Thu, 27 Oct 2022 17:02:56 -0400 Received: from smtp-fw-33001.amazon.com (smtp-fw-33001.amazon.com [207.171.190.10]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CED3327CCC; Thu, 27 Oct 2022 13:55:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904106; x=1698440106; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=mIJ1UjHUl1LHjcRTu0hF28yrgVye5hZR5zRGYyF7qnQ=; b=BLl/b71lLBqLh4lyDeb3Ciwq76s5CYFaQ2bQ2Z5cwVPdcPA4KKOdJ/I4 TNzE5932hgqr5ckmgO/oZXH+1WNLFHsT14Ilq1YU+CBBH9QEf4HWGdu2t 0wkmISlIk2htEIlqKqfQx76u1Z+9V0oiCLM3m3xro+Bg8WQDpSVLYzJz7 E=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="236704482" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-f7c754c9.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-33001.sea14.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:04 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-f7c754c9.us-west-2.amazon.com (Postfix) with ESMTPS id 3ED56416A0; Thu, 27 Oct 2022 20:55:02 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:01 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:01 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: 
[PATCH 4.14 12/34] x86/entry: Add kernel IBRS implementation Date: Thu, 27 Oct 2022 13:54:52 -0700 Message-ID: <20221027205452.17271-4-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205452.17271-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205452.17271-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D39UWA003.ant.amazon.com (10.43.160.235) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Thadeu Lima de Souza Cascardo commit 2dbb887e875b1de3ca8f40ddf26bcfe55798c609 upstream. Implement Kernel IBRS - currently the only known option to mitigate RSB underflow speculation issues on Skylake hardware. Note: since IBRS_ENTER requires fuller context established than UNTRAIN_RET, it must be placed after it. However, since UNTRAIN_RET itself implies a RET, it must come after IBRS_ENTER. This means IBRS_ENTER needs to also move UNTRAIN_RET. Note 2: KERNEL_IBRS is sub-optimal for XenPV. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflict at arch/x86/entry/entry_64_compat.S] [cascardo: conflict fixups, no ANNOTATE_NOENDBR] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust context ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/entry/calling.h | 58 ++++++++++++++++++++++++++++++ arch/x86/entry/entry_64.S | 33 +++++++++++++++++ arch/x86/entry/entry_64_compat.S | 12 ++++++- arch/x86/include/asm/cpufeatures.h | 2 +- 4 files changed, 103 insertions(+), 2 deletions(-) diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h index 273b562af8fa..ef759951fd0f 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -6,6 +6,8 @@ #include #include #include +#include +#include /* @@ -328,6 +330,62 @@ For 32-bit we have the following conventions - kernel is built with #endif +/* + * IBRS kernel mitigation for Spectre_v2. + * + * Assumes full context is established (PUSH_REGS, CR3 and GS) and it clobbers + * the regs it uses (AX, CX, DX). Must be called before the first RET + * instruction (NOTE! UNTRAIN_RET includes a RET instruction) + * + * The optional argument is used to save/restore the current value, + * which is used on the paranoid paths. + * + * Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set. + */ +.macro IBRS_ENTER save_reg + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS + movl $MSR_IA32_SPEC_CTRL, %ecx + +.ifnb \save_reg + rdmsr + shl $32, %rdx + or %rdx, %rax + mov %rax, \save_reg + test $SPEC_CTRL_IBRS, %eax + jz .Ldo_wrmsr_\@ + lfence + jmp .Lend_\@ +.Ldo_wrmsr_\@: +.endif + + movq PER_CPU_VAR(x86_spec_ctrl_current), %rdx + movl %edx, %eax + shr $32, %rdx + wrmsr +.Lend_\@: +.endm + +/* + * Similar to IBRS_ENTER, requires KERNEL GS,CR3 and clobbers (AX, CX, DX) + * regs. Must be called after the last RET. + */ +.macro IBRS_EXIT save_reg + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS + movl $MSR_IA32_SPEC_CTRL, %ecx + +.ifnb \save_reg + mov \save_reg, %rdx +.else + movq PER_CPU_VAR(x86_spec_ctrl_current), %rdx + andl $(~SPEC_CTRL_IBRS), %edx +.endif + + movl %edx, %eax + shr $32, %rdx + wrmsr +.Lend_\@: +.endm + /* * Mitigate Spectre v1 for conditional swapgs code paths. 
* diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index d3472e773b5a..fee102f5a7d5 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -230,6 +230,10 @@ GLOBAL(entry_SYSCALL_64_after_hwframe) /* IRQs are off. */ movq %rsp, %rdi + + /* clobbers %rax, make sure it is after saving the syscall nr */ + IBRS_ENTER + call do_syscall_64 /* returns with IRQs disabled */ TRACE_IRQS_IRETQ /* we're about to change IF */ @@ -301,6 +305,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe) * perf profiles. Nothing jumps here. */ syscall_return_via_sysret: + IBRS_EXIT POP_REGS pop_rdi=0 /* @@ -590,6 +595,7 @@ GLOBAL(retint_user) TRACE_IRQS_IRETQ GLOBAL(swapgs_restore_regs_and_return_to_usermode) + IBRS_EXIT #ifdef CONFIG_DEBUG_ENTRY /* Assert that pt_regs indicates user mode. */ testb $3, CS(%rsp) @@ -1133,6 +1139,9 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1 * Save all registers in pt_regs, and switch gs if needed. * Use slow, but surefire "are we in kernel?" check. * Return: ebx=0: need swapgs on exit, ebx=1: otherwise + * + * R14 - old CR3 + * R15 - old SPEC_CTRL */ ENTRY(paranoid_entry) UNWIND_HINT_FUNC @@ -1156,6 +1165,12 @@ ENTRY(paranoid_entry) */ FENCE_SWAPGS_KERNEL_ENTRY + /* + * Once we have CR3 and %GS setup save and set SPEC_CTRL. Just like + * CR3 above, keep the old value in a callee saved register. + */ + IBRS_ENTER save_reg=%r15 + ret END(paranoid_entry) @@ -1170,9 +1185,19 @@ END(paranoid_entry) * to try to handle preemption here. * * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it) + * + * R14 - old CR3 + * R15 - old SPEC_CTRL */ ENTRY(paranoid_exit) UNWIND_HINT_REGS + + /* + * Must restore IBRS state before both CR3 and %GS since we need access + * to the per-CPU x86_spec_ctrl_shadow variable. + */ + IBRS_EXIT save_reg=%r15 + DISABLE_INTERRUPTS(CLBR_ANY) TRACE_IRQS_OFF_DEBUG testl %ebx, %ebx /* swapgs needed? */ @@ -1207,8 +1232,10 @@ ENTRY(error_entry) FENCE_SWAPGS_USER_ENTRY /* We have user CR3. Change to kernel CR3. */ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax + IBRS_ENTER .Lerror_entry_from_usermode_after_swapgs: + /* Put us onto the real thread stack. */ popq %r12 /* save return addr in %12 */ movq %rsp, %rdi /* arg0 = pt_regs pointer */ @@ -1271,6 +1298,7 @@ ENTRY(error_entry) SWAPGS FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=%rax + IBRS_ENTER /* * Pretend that the exception came from user mode: set up pt_regs @@ -1376,6 +1404,8 @@ ENTRY(nmi) PUSH_AND_CLEAR_REGS rdx=(%rdx) ENCODE_FRAME_POINTER + IBRS_ENTER + /* * At this point we no longer need to worry about stack damage * due to nesting -- we're on the normal thread stack and we're @@ -1599,6 +1629,9 @@ end_repeat_nmi: movq $-1, %rsi call do_nmi + /* Always restore stashed SPEC_CTRL value (see paranoid_entry) */ + IBRS_EXIT save_reg=%r15 + RESTORE_CR3 scratch_reg=%r15 save_reg=%r14 testl %ebx, %ebx /* swapgs needed? */ diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S index 304e3daf82dd..8cdc3af68c53 100644 --- a/arch/x86/entry/entry_64_compat.S +++ b/arch/x86/entry/entry_64_compat.S @@ -4,7 +4,6 @@ * * Copyright 2000-2002 Andi Kleen, SuSE Labs. */ -#include "calling.h" #include #include #include @@ -17,6 +16,8 @@ #include #include +#include "calling.h" + .section .entry.text, "ax" /* @@ -106,6 +107,8 @@ ENTRY(entry_SYSENTER_compat) xorl %r15d, %r15d /* nospec r15 */ cld + IBRS_ENTER + /* * SYSENTER doesn't filter flags, so we need to clear NT and AC * ourselves. 
To save a few cycles, we can check whether @@ -250,6 +253,8 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe) */ TRACE_IRQS_OFF + IBRS_ENTER + movq %rsp, %rdi call do_fast_syscall_32 /* XEN PV guests always use IRET path */ @@ -259,6 +264,9 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe) /* Opportunistic SYSRET */ sysret32_from_system_call: TRACE_IRQS_ON /* User mode traces as IRQs on. */ + + IBRS_EXIT + movq RBX(%rsp), %rbx /* pt_regs->rbx */ movq RBP(%rsp), %rbp /* pt_regs->rbp */ movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */ @@ -385,6 +393,8 @@ ENTRY(entry_INT80_compat) */ TRACE_IRQS_OFF + IBRS_ENTER + movq %rsp, %rdi call do_int80_syscall_32 .Lsyscall_32_done: diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 2e96129ed68b..b78acf1e70b6 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -202,7 +202,7 @@ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ -/* FREE! ( 7*32+12) */ +#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */ /* FREE! ( 7*32+13) */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ From patchwork Thu Oct 27 20:54:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022816 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D4215ECAAA1 for ; Thu, 27 Oct 2022 21:03:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236792AbiJ0VDW (ORCPT ); Thu, 27 Oct 2022 17:03:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235472AbiJ0VC6 (ORCPT ); Thu, 27 Oct 2022 17:02:58 -0400 Received: from smtp-fw-80006.amazon.com (smtp-fw-80006.amazon.com [99.78.197.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 879C63ECC6; Thu, 27 Oct 2022 13:55:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904113; x=1698440113; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=k1Pb7/lRnW+QeCv2rLsOkMYhyhe2KuOECK5oiw3zV/0=; b=MlXjlH/ZYmDWBQIYR9BTEc9nb1bv9/81qXBtzW+gaLbV1Vg73dM5pKzu Qcb2+SCTOXVFyTiNeFmKskm58BuAUvYIiUpx8R1xKMCmuYl+/c+J2mW9C Wx/V7A5Ii1LqgNpL8HuAaFAUE4yF2Fhscx4fvBCKQtEiwta0IaMS17Sls Q=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="145138234" Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com) ([10.25.36.210]) by smtp-border-fw-80006.pdx80.corp.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:12 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com (Postfix) with ESMTPS id 9CEFA81B31; Thu, 27 Oct 2022 20:55:11 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com 
(10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:11 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:10 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 13/34] x86/bugs: Optimize SPEC_CTRL MSR writes Date: Thu, 27 Oct 2022 13:54:59 -0700 Message-ID: <20221027205502.17426-1-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D08UWC002.ant.amazon.com (10.43.162.168) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Peter Zijlstra commit c779bc1a9002fa474175b80e72b85c9bf628abb0 upstream. When changing SPEC_CTRL for user control, the WRMSR can be delayed until return-to-user when KERNEL_IBRS has been enabled. This avoids an MSR write during context switch. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 2 +- arch/x86/kernel/cpu/bugs.c | 18 ++++++++++++------ arch/x86/kernel/process.c | 2 +- 3 files changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 6bc5a324dd65..884f3472ad4f 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -291,7 +291,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; -extern void write_spec_ctrl_current(u64 val); +extern void write_spec_ctrl_current(u64 val, bool force); /* * With retpoline, we must use IBRS to restrict branch prediction diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index b0768341afbe..f537b831a054 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -61,13 +61,19 @@ static DEFINE_MUTEX(spec_ctrl_mutex); * Keep track of the SPEC_CTRL MSR value for the current task, which may differ * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update(). */ -void write_spec_ctrl_current(u64 val) +void write_spec_ctrl_current(u64 val, bool force) { if (this_cpu_read(x86_spec_ctrl_current) == val) return; this_cpu_write(x86_spec_ctrl_current, val); - wrmsrl(MSR_IA32_SPEC_CTRL, val); + + /* + * When KERNEL_IBRS this MSR is written on return-to-user, unless + * forced the update can be delayed until that time. 
+ */ + if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS)) + wrmsrl(MSR_IA32_SPEC_CTRL, val); } /* @@ -1191,7 +1197,7 @@ static void __init spectre_v2_select_mitigation(void) if (spectre_v2_in_eibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } switch (mode) { @@ -1246,7 +1252,7 @@ static void __init spectre_v2_select_mitigation(void) static void update_stibp_msr(void * __unused) { - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } /* Update x86_spec_ctrl_base in case SMT state changed. */ @@ -1489,7 +1495,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) x86_amd_ssb_disable(); } else { x86_spec_ctrl_base |= SPEC_CTRL_SSBD; - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } } @@ -1694,7 +1700,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which) void x86_spec_ctrl_setup_ap(void) { if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) x86_amd_ssb_disable(); diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 6e000c6ec6be..baa9254149e7 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -435,7 +435,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp, } if (updmsr) - write_spec_ctrl_current(msr); + write_spec_ctrl_current(msr, false); } static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) From patchwork Thu Oct 27 20:55:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022819 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EC0FBFA3744 for ; Thu, 27 Oct 2022 21:03:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236629AbiJ0VDa (ORCPT ); Thu, 27 Oct 2022 17:03:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236871AbiJ0VDB (ORCPT ); Thu, 27 Oct 2022 17:03:01 -0400 Received: from smtp-fw-80006.amazon.com (smtp-fw-80006.amazon.com [99.78.197.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4840842AF7; Thu, 27 Oct 2022 13:55:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904116; x=1698440116; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=JWZXJikU18fFRfHkdr4vFVFAGs4p6LCI7v7TAYK7WrE=; b=ns4F6f2wXLBruDOMc76i9ARldi6TWJWXl+E130+04M9uwZO9i1z2Oxoq 52d94nlGY+NpF2TvVdNtQjgXShzLDnS02sWHMHNZgfVDoGu9fPirjxH82 3aPrauWj8pbjUNZy+lMtA3c2lmoJfosdx5QoJImRodE1W5X6p75X00bZB E=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="145138235" Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com) ([10.25.36.210]) by smtp-border-fw-80006.pdx80.corp.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:12 +0000 Received: from 
EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com (Postfix) with ESMTPS id 27C0681B31; Thu, 27 Oct 2022 20:55:12 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:11 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:11 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 14/34] x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS Date: Thu, 27 Oct 2022 13:55:00 -0700 Message-ID: <20221027205502.17426-2-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205502.17426-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205502.17426-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D08UWC002.ant.amazon.com (10.43.162.168) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Pawan Gupta commit 7c693f54c873691a4b7da05c7e0f74e67745d144 upstream. Extend spectre_v2= boot option with Kernel IBRS. [jpoimboe: no STIBP with IBRS] Signed-off-by: Pawan Gupta Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- .../admin-guide/kernel-parameters.txt | 1 + arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 66 +++++++++++++++---- 3 files changed, 54 insertions(+), 14 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 4f1269d81f97..629d7956ddf1 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4216,6 +4216,7 @@ eibrs - enhanced IBRS eibrs,retpoline - enhanced IBRS + Retpolines eibrs,lfence - enhanced IBRS + LFENCE + ibrs - use IBRS to protect kernel Not specifying this option is equivalent to spectre_v2=auto. 
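[ Usage sketch, illustrative and not part of the diff: the new mode is chosen purely from the kernel command line, e.g. by appending the keyword to the bootloader's kernel line:

    spectre_v2=ibrs

This only sticks on Intel CPUs that advertise IBRS and are not running as Xen PV guests; the checks added below fall back to auto selection for other vendors, for CPUs without X86_FEATURE_IBRS, and for X86_FEATURE_XENPV. On success, boot logs should show the new spectre_v2_strings[] entry, "Spectre V2 : Mitigation: IBRS", and X86_FEATURE_KERNEL_IBRS is force-set so the entry/exit MSR_IA32_SPEC_CTRL writes added earlier in the series take effect. ]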
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 884f3472ad4f..73f592d28396 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -228,6 +228,7 @@ enum spectre_v2_mitigation { SPECTRE_V2_EIBRS, SPECTRE_V2_EIBRS_RETPOLINE, SPECTRE_V2_EIBRS_LFENCE, + SPECTRE_V2_IBRS, }; /* The indirect branch speculation control variants */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index f537b831a054..e39a4f9ca69b 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -873,6 +873,7 @@ enum spectre_v2_mitigation_cmd { SPECTRE_V2_CMD_EIBRS, SPECTRE_V2_CMD_EIBRS_RETPOLINE, SPECTRE_V2_CMD_EIBRS_LFENCE, + SPECTRE_V2_CMD_IBRS, }; enum spectre_v2_user_cmd { @@ -945,11 +946,12 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) return SPECTRE_V2_USER_CMD_AUTO; } -static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode) +static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode) { - return (mode == SPECTRE_V2_EIBRS || - mode == SPECTRE_V2_EIBRS_RETPOLINE || - mode == SPECTRE_V2_EIBRS_LFENCE); + return mode == SPECTRE_V2_IBRS || + mode == SPECTRE_V2_EIBRS || + mode == SPECTRE_V2_EIBRS_RETPOLINE || + mode == SPECTRE_V2_EIBRS_LFENCE; } static void __init @@ -1014,12 +1016,12 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) } /* - * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not - * required. + * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible, + * STIBP is not required. */ if (!boot_cpu_has(X86_FEATURE_STIBP) || !smt_possible || - spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + spectre_v2_in_ibrs_mode(spectre_v2_enabled)) return; /* @@ -1044,6 +1046,7 @@ static const char * const spectre_v2_strings[] = { [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS", [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE", [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines", + [SPECTRE_V2_IBRS] = "Mitigation: IBRS", }; static const struct { @@ -1061,6 +1064,7 @@ static const struct { { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false }, { "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false }, { "auto", SPECTRE_V2_CMD_AUTO, false }, + { "ibrs", SPECTRE_V2_CMD_IBRS, false }, }; static void __init spec_v2_print_cond(const char *reason, bool secure) @@ -1123,6 +1127,24 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) return SPECTRE_V2_CMD_AUTO; } + if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) { + pr_err("%s selected but not Intel CPU. Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + + if (cmd == SPECTRE_V2_CMD_IBRS && !boot_cpu_has(X86_FEATURE_IBRS)) { + pr_err("%s selected but CPU doesn't have IBRS. Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + + if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_has(X86_FEATURE_XENPV)) { + pr_err("%s selected but running as XenPV guest. 
Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + spec_v2_print_cond(mitigation_options[i].option, mitigation_options[i].secure); return cmd; @@ -1162,6 +1184,14 @@ static void __init spectre_v2_select_mitigation(void) break; } + if (boot_cpu_has_bug(X86_BUG_RETBLEED) && + retbleed_cmd != RETBLEED_CMD_OFF && + boot_cpu_has(X86_FEATURE_IBRS) && + boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) { + mode = SPECTRE_V2_IBRS; + break; + } + mode = spectre_v2_select_retpoline(); break; @@ -1178,6 +1208,10 @@ static void __init spectre_v2_select_mitigation(void) mode = spectre_v2_select_retpoline(); break; + case SPECTRE_V2_CMD_IBRS: + mode = SPECTRE_V2_IBRS; + break; + case SPECTRE_V2_CMD_EIBRS: mode = SPECTRE_V2_EIBRS; break; @@ -1194,7 +1228,7 @@ static void __init spectre_v2_select_mitigation(void) if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); - if (spectre_v2_in_eibrs_mode(mode)) { + if (spectre_v2_in_ibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; write_spec_ctrl_current(x86_spec_ctrl_base, true); @@ -1205,6 +1239,10 @@ static void __init spectre_v2_select_mitigation(void) case SPECTRE_V2_EIBRS: break; + case SPECTRE_V2_IBRS: + setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS); + break; + case SPECTRE_V2_LFENCE: case SPECTRE_V2_EIBRS_LFENCE: setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE); @@ -1231,17 +1269,17 @@ static void __init spectre_v2_select_mitigation(void) pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); /* - * Retpoline means the kernel is safe because it has no indirect - * branches. Enhanced IBRS protects firmware too, so, enable restricted - * speculation around firmware calls only when Enhanced IBRS isn't - * supported. + * Retpoline protects the kernel, but doesn't protect firmware. IBRS + * and Enhanced IBRS protect firmware too, so enable IBRS around + * firmware calls only when IBRS / Enhanced IBRS aren't otherwise + * enabled. * * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because * the user might select retpoline on the kernel command line and if * the CPU supports Enhanced IBRS, kernel might un-intentionally not * enable IBRS around firmware calls. 
*/ - if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) { + if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) { setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); pr_info("Enabling Restricted Speculation for firmware calls\n"); } @@ -1935,7 +1973,7 @@ static ssize_t mmio_stale_data_show_state(char *buf) static char *stibp_state(void) { - if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) return ""; switch (spectre_v2_user_stibp) { From patchwork Thu Oct 27 20:55:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022818 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3DB6AFA3740 for ; Thu, 27 Oct 2022 21:03:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236534AbiJ0VD0 (ORCPT ); Thu, 27 Oct 2022 17:03:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52892 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235754AbiJ0VDA (ORCPT ); Thu, 27 Oct 2022 17:03:00 -0400 Received: from smtp-fw-2101.amazon.com (smtp-fw-2101.amazon.com [72.21.196.25]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 394E340E1B; Thu, 27 Oct 2022 13:55:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904116; x=1698440116; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=BP+YseJNNUjBumuODtre8gY1uS2EtWaJKxVtlNGQNC0=; b=ZdnNiHXUpo9FpMeO9kWDESFsDxC8TK/PvwXqhXI9470vGbG9Om0jSRF8 fu/A05sLczPZqGDUViAgzxsI0NOxsMwC2doAuYBY/2P3Y8AELoCyWNEKa BvGL9GI//eJWkOmqQP2yF65kaPMRsd9+QGxQoIM1jXrKbD8ENSfELfDnV 0=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="257153100" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-2101.iad2.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:13 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com (Postfix) with ESMTPS id 9138C81A97; Thu, 27 Oct 2022 20:55:12 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:12 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:11 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 15/34] x86/bugs: Split spectre_v2_select_mitigation() and spectre_v2_user_select_mitigation() Date: Thu, 27 Oct 2022 13:55:01 -0700 Message-ID: <20221027205502.17426-3-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205502.17426-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205502.17426-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D08UWC002.ant.amazon.com (10.43.162.168) To 
EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Peter Zijlstra commit 166115c08a9b0b846b783088808a27d739be6e8d upstream. retbleed will depend on spectre_v2, while spectre_v2_user depends on retbleed. Break this cycle. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 25 +++++++++++++++++-------- 1 file changed, 17 insertions(+), 8 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index e39a4f9ca69b..3ed67d5f1a04 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -36,8 +36,9 @@ #include "cpu.h" static void __init spectre_v1_select_mitigation(void); -static void __init retbleed_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); +static void __init retbleed_select_mitigation(void); +static void __init spectre_v2_user_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); @@ -136,13 +137,19 @@ void __init check_bugs(void) /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); + spectre_v2_select_mitigation(); + /* + * retbleed_select_mitigation() relies on the state set by + * spectre_v2_select_mitigation(); specifically it wants to know about + * spectre_v2=ibrs. + */ retbleed_select_mitigation(); /* - * spectre_v2_select_mitigation() relies on the state set by + * spectre_v2_user_select_mitigation() relies on the state set by * retbleed_select_mitigation(); specifically the STIBP selection is * forced for UNRET. 
*/ - spectre_v2_select_mitigation(); + spectre_v2_user_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); md_clear_select_mitigation(); @@ -914,13 +921,15 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure) pr_info("spectre_v2_user=%s forced on command line.\n", reason); } +static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd; + static enum spectre_v2_user_cmd __init -spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) +spectre_v2_parse_user_cmdline(void) { char arg[20]; int ret, i; - switch (v2_cmd) { + switch (spectre_v2_cmd) { case SPECTRE_V2_CMD_NONE: return SPECTRE_V2_USER_CMD_NONE; case SPECTRE_V2_CMD_FORCE: @@ -955,7 +964,7 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode) } static void __init -spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) +spectre_v2_user_select_mitigation(void) { enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE; bool smt_possible = IS_ENABLED(CONFIG_SMP); @@ -968,7 +977,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) cpu_smt_control == CPU_SMT_NOT_SUPPORTED) smt_possible = false; - cmd = spectre_v2_parse_user_cmdline(v2_cmd); + cmd = spectre_v2_parse_user_cmdline(); switch (cmd) { case SPECTRE_V2_USER_CMD_NONE: goto set_mode; @@ -1285,7 +1294,7 @@ static void __init spectre_v2_select_mitigation(void) } /* Set up IBPB and STIBP depending on the general spectre V2 command */ - spectre_v2_user_select_mitigation(cmd); + spectre_v2_cmd = cmd; } static void update_stibp_msr(void * __unused) From patchwork Thu Oct 27 20:55:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022817 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46F60FA3744 for ; Thu, 27 Oct 2022 21:03:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236597AbiJ0VDY (ORCPT ); Thu, 27 Oct 2022 17:03:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234719AbiJ0VDA (ORCPT ); Thu, 27 Oct 2022 17:03:00 -0400 Received: from smtp-fw-6002.amazon.com (smtp-fw-6002.amazon.com [52.95.49.90]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 399F541D21; Thu, 27 Oct 2022 13:55:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904116; x=1698440116; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=IFP17l3p2vXpRhyrCNJyVeYoanzUemr1XcTJhAIt5X8=; b=CfDGunjtCS7wzwamDjx7ZMiQvxNsmipUt93TJ/2KC0E7EzVFPxhO6iel cZKLQtmCWzqvgkuby6hTR1uQzNtAGsJzbq8zEZk7uLvuZxDNkZTBdZT8f kmMw1M/mB+83E/gORspQnb/gbqG8MPqXdygPyStSwUn+BtkXI3dSBZaHF 0=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="260700264" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2a-m6i4x-83883bdb.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-6002.iad6.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:13 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by 
email-inbound-relay-pdx-2a-m6i4x-83883bdb.us-west-2.amazon.com (Postfix) with ESMTPS id EE19F61323; Thu, 27 Oct 2022 20:55:12 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:12 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:12 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 16/34] x86/bugs: Report Intel retbleed vulnerability Date: Thu, 27 Oct 2022 13:55:02 -0700 Message-ID: <20221027205502.17426-4-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205502.17426-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205502.17426-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D08UWC002.ant.amazon.com (10.43.162.168) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Peter Zijlstra commit 6ad0ad2bf8a67e27d1f9d006a1dabb0e1c360cc3 upstream. Skylake suffers from RSB underflow speculation issues; report this vulnerability and its mitigation (spectre_v2=ibrs). [jpoimboe: cleanups, eibrs] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust to different processor family names ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/kernel/cpu/bugs.c | 42 ++++++++++++++++++++++++++------ arch/x86/kernel/cpu/common.c | 24 +++++++++--------- 3 files changed, 48 insertions(+), 19 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index c090d8e8fbb3..f5341b4de46a 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -73,6 +73,7 @@ #define MSR_IA32_ARCH_CAPABILITIES 0x0000010a #define ARCH_CAP_RDCL_NO BIT(0) /* Not susceptible to Meltdown */ #define ARCH_CAP_IBRS_ALL BIT(1) /* Enhanced IBRS support */ +#define ARCH_CAP_RSBA BIT(2) /* RET may use alternative branch predictors */ #define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH BIT(3) /* Skip L1D flush on vmentry */ #define ARCH_CAP_SSB_NO BIT(4) /* * Not susceptible to Speculative Store Bypass diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 3ed67d5f1a04..01604ea478b0 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -743,11 +743,16 @@ static int __init nospectre_v1_cmdline(char *str) } early_param("nospectre_v1", nospectre_v1_cmdline); +static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = + SPECTRE_V2_NONE; + #undef pr_fmt #define pr_fmt(fmt) "RETBleed: " fmt enum retbleed_mitigation { - RETBLEED_MITIGATION_NONE + RETBLEED_MITIGATION_NONE, + RETBLEED_MITIGATION_IBRS, + RETBLEED_MITIGATION_EIBRS, }; enum retbleed_mitigation_cmd { @@ -756,7 +761,9 @@ enum retbleed_mitigation_cmd { }; const char * const retbleed_strings[] = { - [RETBLEED_MITIGATION_NONE] = "Vulnerable" + [RETBLEED_MITIGATION_NONE] = "Vulnerable", + [RETBLEED_MITIGATION_IBRS] = "Mitigation: IBRS", + [RETBLEED_MITIGATION_EIBRS] = "Mitigation: Enhanced IBRS", }; static enum retbleed_mitigation retbleed_mitigation
__ro_after_init = @@ -780,6 +787,8 @@ static int __init retbleed_parse_cmdline(char *str) } early_param("retbleed", retbleed_parse_cmdline); +#define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!\n" + static void __init retbleed_select_mitigation(void) { if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) @@ -791,8 +800,11 @@ static void __init retbleed_select_mitigation(void) case RETBLEED_CMD_AUTO: default: - if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) - break; + /* + * The Intel mitigation (IBRS) was already selected in + * spectre_v2_select_mitigation(). + */ + break; } @@ -801,15 +813,31 @@ static void __init retbleed_select_mitigation(void) break; } + /* + * Let IBRS trump all on Intel without affecting the effects of the + * retbleed= cmdline option. + */ + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) { + switch (spectre_v2_enabled) { + case SPECTRE_V2_IBRS: + retbleed_mitigation = RETBLEED_MITIGATION_IBRS; + break; + case SPECTRE_V2_EIBRS: + case SPECTRE_V2_EIBRS_RETPOLINE: + case SPECTRE_V2_EIBRS_LFENCE: + retbleed_mitigation = RETBLEED_MITIGATION_EIBRS; + break; + default: + pr_err(RETBLEED_INTEL_MSG); + } + } + pr_info("%s\n", retbleed_strings[retbleed_mitigation]); } #undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt -static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = - SPECTRE_V2_NONE; - static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init = SPECTRE_V2_USER_NONE; static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init = diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index b6fa5fdc3e17..c7202647daeb 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -999,24 +999,24 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(BROADWELL_GT3E, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL_CORE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | - BIT(7) | BIT(0xB), MMIO), - VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), + BIT(7) | BIT(0xB), MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_MOBILE, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(ICELAKE_MOBILE, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS | RETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_XEON_D, X86_STEPPINGS(0x1, 0x1), MMIO), VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | 
BIT(5), MMIO | MMIO_SBDS), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), - VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS), @@ -1129,7 +1129,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN); } - if (cpu_matches(cpu_vuln_blacklist, RETBLEED)) + if ((cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))) setup_force_cpu_bug(X86_BUG_RETBLEED); if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) From patchwork Thu Oct 27 20:55:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022823 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 934E8FA3740 for ; Thu, 27 Oct 2022 21:03:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236438AbiJ0VDj (ORCPT ); Thu, 27 Oct 2022 17:03:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236636AbiJ0VDD (ORCPT ); Thu, 27 Oct 2022 17:03:03 -0400 Received: from smtp-fw-9103.amazon.com (smtp-fw-9103.amazon.com [207.171.188.200]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ECB7E4457C; Thu, 27 Oct 2022 13:55:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904123; x=1698440123; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=WnrVw/qJ+xW6uaQ7KumAASkcr57kUcTZanuevNFCbj4=; b=jFKm4zxnQy/4QVHnjSEZ5dfk+K0DpTLaNjhsL8fb8zVMOImwzL8Xfmfj DusMAJGlFmd/5aHRdSi5g1h97MlY/JPGQZRwkCgFQrP8ELslof/X9fpyl jvC6iw5QGul8AC0F6O23+qZ1MGZx846YEZDJ/ieAgDRA8/eNx0lTXtbNN 8=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="1068508416" Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-5eae960a.us-west-2.amazon.com) ([10.25.36.214]) by smtp-border-fw-9103.sea19.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:23 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-2c-m6i4x-5eae960a.us-west-2.amazon.com (Postfix) with ESMTPS id AA96041740; Thu, 27 Oct 2022 20:55:22 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:22 
+0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:21 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 17/34] intel_idle: Disable IBRS during long idle Date: Thu, 27 Oct 2022 13:55:09 -0700 Message-ID: <20221027205512.17684-1-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D11UWB002.ant.amazon.com (10.43.161.20) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Peter Zijlstra commit bf5835bcdb9635c97f85120dba9bfa21e111130f upstream. Having IBRS enabled while the SMT sibling is idle unnecessarily slows down the running sibling. OTOH, disabling IBRS around idle takes two MSR writes, which will increase the idle latency. Therefore, only disable IBRS around deeper idle states. Shallow idle states are bounded by the tick in duration, since NOHZ is not allowed for them by virtue of their short target residency. Only do this for mwait-driven idle, since that keeps interrupts disabled across idle, which makes disabling IBRS vs IRQ-entry a non-issue. Note: C6 is a random threshold, most importantly C1 probably shouldn't disable IBRS, benchmarking needed. Suggested-by: Tim Chen Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: no CPUIDLE_FLAG_IRQ_ENABLE] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust context ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 6 ++++ drivers/idle/intel_idle.c | 45 ++++++++++++++++++++++++---- 3 files changed, 46 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 73f592d28396..3a1e3062f244 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -293,6 +293,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; extern void write_spec_ctrl_current(u64 val, bool force); +extern u64 spec_ctrl_current(void); /* * With retpoline, we must use IBRS to restrict branch prediction diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 01604ea478b0..43a258e592d0 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -77,6 +77,12 @@ void write_spec_ctrl_current(u64 val, bool force) wrmsrl(MSR_IA32_SPEC_CTRL, val); } +u64 spec_ctrl_current(void) +{ + return this_cpu_read(x86_spec_ctrl_current); +} +EXPORT_SYMBOL_GPL(spec_ctrl_current); + /* * The vendor and possibly platform specific bits which can be modified in * x86_spec_ctrl_base.
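Before the intel_idle diff below: the new enter path saves the cached SPEC_CTRL value, clears the MSR while an SMT sibling is active, idles, and writes the saved value back on wakeup. The following is a minimal userspace sketch of that pattern, not kernel code; wrmsrl(), sched_smt_active(), mwait_idle() and the cached-value accessor are illustrative stubs standing in for the kernel versions.

#include <stdint.h>
#include <stdio.h>

#define MSR_IA32_SPEC_CTRL 0x48
#define SPEC_CTRL_IBRS     (1ULL << 0)

static uint64_t fake_msr = SPEC_CTRL_IBRS;      /* stand-in for the real MSR */

static void wrmsrl(uint32_t msr, uint64_t val) { (void)msr; fake_msr = val; }
static int sched_smt_active(void) { return 1; } /* assume an active sibling */
static uint64_t spec_ctrl_current(void) { return SPEC_CTRL_IBRS; } /* cached host value */
static void mwait_idle(void) { }                /* deep C-state entry would go here */

/* Drop IBRS so the sibling runs at full speed, idle, then restore the saved
 * SPEC_CTRL value on wakeup. The two wrmsrl() calls are exactly the "two MSR
 * writes" of added idle latency the commit message mentions, which is why
 * only the deeper (C6 and up) states get this treatment. */
static void idle_with_ibrs_off(void)
{
	uint64_t saved = spec_ctrl_current();

	if (sched_smt_active())
		wrmsrl(MSR_IA32_SPEC_CTRL, 0);

	mwait_idle();

	if (sched_smt_active())
		wrmsrl(MSR_IA32_SPEC_CTRL, saved);
}

int main(void)
{
	idle_with_ibrs_off();
	printf("SPEC_CTRL after wakeup: %#llx\n", (unsigned long long)fake_msr);
	return 0;
}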
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c index 31f54a334b58..51cc492c2e35 100644 --- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -58,11 +58,13 @@ #include #include #include +#include #include #include #include #include #include +#include #include #include @@ -97,6 +99,8 @@ static const struct idle_cpu *icpu; static struct cpuidle_device __percpu *intel_idle_cpuidle_devices; static int intel_idle(struct cpuidle_device *dev, struct cpuidle_driver *drv, int index); +static int intel_idle_ibrs(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index); static void intel_idle_s2idle(struct cpuidle_device *dev, struct cpuidle_driver *drv, int index); static struct cpuidle_state *cpuidle_state_table; @@ -109,6 +113,12 @@ static struct cpuidle_state *cpuidle_state_table; */ #define CPUIDLE_FLAG_TLB_FLUSHED 0x10000 +/* + * Disable IBRS across idle (when KERNEL_IBRS), is exclusive vs IRQ_ENABLE + * above. + */ +#define CPUIDLE_FLAG_IBRS BIT(16) + /* * MWAIT takes an 8-bit "hint" in EAX "suggesting" * the C-state (top nibble) and sub-state (bottom nibble) @@ -617,7 +627,7 @@ static struct cpuidle_state skl_cstates[] = { { .name = "C6", .desc = "MWAIT 0x20", - .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 85, .target_residency = 200, .enter = &intel_idle, @@ -625,7 +635,7 @@ static struct cpuidle_state skl_cstates[] = { { .name = "C7s", .desc = "MWAIT 0x33", - .flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 124, .target_residency = 800, .enter = &intel_idle, @@ -633,7 +643,7 @@ static struct cpuidle_state skl_cstates[] = { { .name = "C8", .desc = "MWAIT 0x40", - .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 200, .target_residency = 800, .enter = &intel_idle, @@ -641,7 +651,7 @@ static struct cpuidle_state skl_cstates[] = { { .name = "C9", .desc = "MWAIT 0x50", - .flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 480, .target_residency = 5000, .enter = &intel_idle, @@ -649,7 +659,7 @@ static struct cpuidle_state skl_cstates[] = { { .name = "C10", .desc = "MWAIT 0x60", - .flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 890, .target_residency = 5000, .enter = &intel_idle, @@ -678,7 +688,7 @@ static struct cpuidle_state skx_cstates[] = { { .name = "C6", .desc = "MWAIT 0x20", - .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 133, .target_residency = 600, .enter = &intel_idle, @@ -935,6 +945,24 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev, return index; } +static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) +{ + bool smt_active = sched_smt_active(); + u64 spec_ctrl = spec_ctrl_current(); + int ret; + + if (smt_active) + wrmsrl(MSR_IA32_SPEC_CTRL, 0); + + ret = intel_idle(dev, drv, index); + + if (smt_active) + wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl); + + return ret; +} + /** * intel_idle_s2idle - simplified "enter" callback routine for suspend-to-idle * @dev: cpuidle_device @@ -1375,6 +1403,11 @@ static void __init 
intel_idle_cpuidle_driver_init(void) mark_tsc_unstable("TSC halts in idle" " states deeper than C2"); + if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) && + cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IBRS) { + drv->states[drv->state_count].enter = intel_idle_ibrs; + } + drv->states[drv->state_count] = /* structure copy */ cpuidle_state_table[cstate]; From patchwork Thu Oct 27 20:55:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022822 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D9B92ECAAA1 for ; Thu, 27 Oct 2022 21:03:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237038AbiJ0VDh (ORCPT ); Thu, 27 Oct 2022 17:03:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234884AbiJ0VDD (ORCPT ); Thu, 27 Oct 2022 17:03:03 -0400 Received: from smtp-fw-80007.amazon.com (smtp-fw-80007.amazon.com [99.78.197.218]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EFE3944CD5; Thu, 27 Oct 2022 13:55:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904122; x=1698440122; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=KcT1xBEj8Cf8plFULF/rph4MHCZ8tcuhgKVEZbifjV8=; b=jeKnh8UL+Z6mNNdc3tDk6CvvxKPSiPHzZcjo0rtftv8iVXyDLeiyjuGl mXr0DoAIU3xIu9eP7eNFETJdoTqlj+oeYSh/0IXAcAZsGZNe2Wl0ZZdka mb2VBqL5mDTCOcDe+HKTx6wBZRWt1nwqzhOXPGwk04x59BsXtsnE81b0y 0=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="145243695" Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-fad5e78e.us-west-2.amazon.com) ([10.25.36.214]) by smtp-border-fw-80007.pdx80.corp.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:22 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-fad5e78e.us-west-2.amazon.com (Postfix) with ESMTPS id 29E47A10C6; Thu, 27 Oct 2022 20:55:23 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:22 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:22 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 18/34] x86/speculation: Change FILL_RETURN_BUFFER to work with objtool Date: Thu, 27 Oct 2022 13:55:10 -0700 Message-ID: <20221027205512.17684-2-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205512.17684-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205512.17684-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D11UWB002.ant.amazon.com (10.43.161.20) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Peter Zijlstra commit 
089dd8e53126ebaf506e2dc0bf89d652c36bfc12 upstream. Change FILL_RETURN_BUFFER so that objtool groks it and can generate correct ORC unwind information. - Since ORC is alternative-invariant (that is, all alternatives should have the same ORC entries), the __FILL_RETURN_BUFFER body cannot be part of an alternative. Therefore, move it out of the alternative and keep the alternative as a sort of jump_label around it. - Use the ANNOTATE_INTRA_FUNCTION_CALL annotation to white-list these 'funny' call instructions to nowhere. - Use UNWIND_HINT_EMPTY to 'fill' the speculation traps, otherwise objtool will consider them unreachable. - Move the RSP adjustment into the loop, such that the loop has a deterministic stack layout. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Alexandre Chartre Acked-by: Josh Poimboeuf Link: https://lkml.kernel.org/r/20200428191700.032079304@infradead.org [ bp: no intra-function call validation support ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/include/asm/nospec-branch.h | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 3a1e3062f244..652c1159a6f6 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -4,11 +4,13 @@ #define _ASM_X86_NOSPEC_BRANCH_H_ #include +#include #include #include #include #include +#include /* * Fill the CPU return stack buffer. @@ -50,9 +52,9 @@ lfence; \ jmp 775b; \ 774: \ + add $(BITS_PER_LONG/8) * 2, sp; \ dec reg; \ - jnz 771b; \ - add $(BITS_PER_LONG/8) * nr, sp; + jnz 771b; #ifdef __ASSEMBLY__ @@ -142,10 +144,8 @@ */ .macro FILL_RETURN_BUFFER reg:req nr:req ftr:req #ifdef CONFIG_RETPOLINE - ANNOTATE_NOSPEC_ALTERNATIVE - ALTERNATIVE "jmp .Lskip_rsb_\@", \ __stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)) \ \ftr + ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr + __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP) .Lskip_rsb_\@: #endif .endm From patchwork Thu Oct 27 20:55:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022821 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D57E2FA3740 for ; Thu, 27 Oct 2022 21:03:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236894AbiJ0VDe (ORCPT ); Thu, 27 Oct 2022 17:03:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46174 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236666AbiJ0VDD (ORCPT ); Thu, 27 Oct 2022 17:03:03 -0400 Received: from smtp-fw-9102.amazon.com (smtp-fw-9102.amazon.com [207.171.184.29]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5345A44CF3; Thu, 27 Oct 2022 13:55:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904124; x=1698440124; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=derLv7dQEAXbOAKEsypaYAoNRpFKvEhJUqwrKS5o+Xk=; b=J8Pdvyyx9nrWRpeQH951xBhbbLEGV9up44QHLXfIpDj1Wz5/H6YYKAAk BAI0BdzjURoPEjmZsJd4VYB5G+z9wy5cJHM7pZJnnZRkgBaKyGX2h0adf iCj1BSN5FiYfdUmSp2LhVxokoMgLYEW57xQ2SKGadfKpXnqQrI1NvNXwF 8=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="274379660" Received: from
pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-5eae960a.us-west-2.amazon.com) ([10.25.36.210]) by smtp-border-fw-9102.sea19.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:24 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-2c-m6i4x-5eae960a.us-west-2.amazon.com (Postfix) with ESMTPS id 8F3FC41740; Thu, 27 Oct 2022 20:55:23 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:22 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:22 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 19/34] x86/speculation: Add LFENCE to RSB fill sequence Date: Thu, 27 Oct 2022 13:55:11 -0700 Message-ID: <20221027205512.17684-3-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205512.17684-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205512.17684-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D11UWB002.ant.amazon.com (10.43.161.20) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Pawan Gupta commit ba6e31af2be96c4d0536f2152ed6f7b6c11bca47 upstream. The RSB fill sequence does not have any protection against misprediction of the conditional branch at the end of the sequence. The CPU can speculatively execute code immediately after the sequence, while RSB filling hasn't completed yet. #define __FILL_RETURN_BUFFER(reg, nr, sp) \ mov $(nr/2), reg; \ 771: \ ANNOTATE_INTRA_FUNCTION_CALL; \ call 772f; \ 773: /* speculation trap */ \ UNWIND_HINT_EMPTY; \ pause; \ lfence; \ jmp 773b; \ 772: \ ANNOTATE_INTRA_FUNCTION_CALL; \ call 774f; \ 775: /* speculation trap */ \ UNWIND_HINT_EMPTY; \ pause; \ lfence; \ jmp 775b; \ 774: \ add $(BITS_PER_LONG/8) * 2, sp; \ dec reg; \ jnz 771b; <----- CPU can mispredict here. Before RSB is filled, RETs that come in program order after this macro can be executed speculatively, making them vulnerable to RSB-based attacks. Mitigate it by adding an LFENCE after the conditional branch to prevent speculation while RSB is being filled.
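For readers who have not internalized the RSB: it is conceptually a small hardware LIFO of predicted return addresses, pushed by CALL and consumed by RET. The toy userspace model below is illustrative only (the 32-entry depth and the addresses are made-up stand-ins, and real prediction happens in hardware); it shows what "filling" buys and where the misprediction window sits.

#include <stdint.h>
#include <stdio.h>

#define RSB_DEPTH   32          /* typical depth on the affected parts */
#define BENIGN_TRAP 0xdeadULL   /* stands in for the pause;lfence;jmp trap */

static uint64_t rsb[RSB_DEPTH];
static int top;

static void rsb_push(uint64_t ret_addr)     /* what a CALL does */
{
	if (top < RSB_DEPTH)
		rsb[top++] = ret_addr;
}

static uint64_t rsb_predict_ret(void)       /* what a RET consumes */
{
	if (top == 0)
		return 0xbadULL;    /* underflow: fall back to other predictors */
	return rsb[--top];
}

int main(void)
{
	/* The fill sequence is effectively 32 CALLs whose return addresses
	 * all point at a benign speculation trap. Until the loop's final
	 * jnz retires, a mispredicted early exit lets younger RETs consume
	 * entries that are not in place yet -- that is the window the added
	 * LFENCE closes. */
	for (int i = 0; i < RSB_DEPTH; i++)
		rsb_push(BENIGN_TRAP);

	printf("next RET would predict: %#llx\n",
	       (unsigned long long)rsb_predict_ret());
	return 0;
}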
Suggested-by: Andrew Cooper Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov --- arch/x86/include/asm/nospec-branch.h | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 652c1159a6f6..0d474525caec 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -54,7 +54,9 @@ 774: \ add $(BITS_PER_LONG/8) * 2, sp; \ dec reg; \ - jnz 771b; + jnz 771b; \ + /* barrier for jnz misprediction */ \ + lfence; #ifdef __ASSEMBLY__ From patchwork Thu Oct 27 20:55:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022820 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C16C1FA3740 for ; Thu, 27 Oct 2022 21:03:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235608AbiJ0VDc (ORCPT ); Thu, 27 Oct 2022 17:03:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235817AbiJ0VDD (ORCPT ); Thu, 27 Oct 2022 17:03:03 -0400 Received: from smtp-fw-9103.amazon.com (smtp-fw-9103.amazon.com [207.171.188.200]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9DDBB44CFD; Thu, 27 Oct 2022 13:55:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904124; x=1698440124; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=L0G4OfLRb9FPUVwVeUPHnBJ69Nl45YdZRVLqyU7y9fA=; b=i2yXJ9bvgj0OYoTz51YvCg+6zFloMYBWJmIjU+rui5YXZvZ1S0us8G2z Tr+Y/xIkmTBF8ADvLmk16XUZaDLjIvSVduFKtRrUXC7z61Ua7t/tmhhCx dXCP6HkB6dZWkn5nLu0FR6Cj2sfFGFdj/5G/muyLsSe9TFIbhAAxIpqBl 8=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="1068508419" Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO email-inbound-relay-pdx-2a-m6i4x-d47337e0.us-west-2.amazon.com) ([10.25.36.214]) by smtp-border-fw-9103.sea19.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:24 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2a-m6i4x-d47337e0.us-west-2.amazon.com (Postfix) with ESMTPS id CA5D4611AC; Thu, 27 Oct 2022 20:55:23 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:23 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.160.223) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:23 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 20/34] x86/speculation: Fix RSB filling with CONFIG_RETPOLINE=n Date: Thu, 27 Oct 2022 13:55:12 -0700 Message-ID: <20221027205512.17684-4-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205512.17684-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205512.17684-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.160.223] X-ClientProxiedBy: EX13D11UWB002.ant.amazon.com 
(10.43.161.20) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit b2620facef4889fefcbf2e87284f34dcd4189bce upstream. If a kernel is built with CONFIG_RETPOLINE=n, but the user still wants to mitigate Spectre v2 using IBRS or eIBRS, the RSB filling will be silently disabled. There's nothing retpoline-specific about RSB buffer filling. Remove the CONFIG_RETPOLINE guards around it. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/entry_32.S | 2 -- arch/x86/entry/entry_64.S | 2 -- arch/x86/include/asm/nospec-branch.h | 2 -- 3 files changed, 6 deletions(-) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index c19974a49378..dbcea4281c30 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -245,7 +245,6 @@ ENTRY(__switch_to_asm) movl %ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset #endif -#ifdef CONFIG_RETPOLINE /* * When switching from a shallower to a deeper call stack * the RSB may either underflow or use entries populated @@ -254,7 +253,6 @@ ENTRY(__switch_to_asm) * speculative execution to prevent attack. */ FILL_RETURN_BUFFER %ebx, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW -#endif /* restore callee-saved registers */ popfl diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index fee102f5a7d5..637a23d404e9 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -357,7 +357,6 @@ ENTRY(__switch_to_asm) movq %rbx, PER_CPU_VAR(irq_stack_union)+stack_canary_offset #endif -#ifdef CONFIG_RETPOLINE /* * When switching from a shallower to a deeper call stack * the RSB may either underflow or use entries populated @@ -366,7 +365,6 @@ ENTRY(__switch_to_asm) * speculative execution to prevent attack. */ FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW -#endif /* restore callee-saved registers */ popfq diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 0d474525caec..da81adabac94 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -145,11 +145,9 @@ * monstrosity above, manually. 
*/ .macro FILL_RETURN_BUFFER reg:req nr:req ftr:req -#ifdef CONFIG_RETPOLINE ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP) .Lskip_rsb_\@: -#endif .endm #else /* __ASSEMBLY__ */ From patchwork Thu Oct 27 20:55:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022826 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29615FA3740 for ; Thu, 27 Oct 2022 21:03:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237179AbiJ0VDq (ORCPT ); Thu, 27 Oct 2022 17:03:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49762 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235481AbiJ0VDG (ORCPT ); Thu, 27 Oct 2022 17:03:06 -0400 Received: from smtp-fw-2101.amazon.com (smtp-fw-2101.amazon.com [72.21.196.25]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2101D5246B; Thu, 27 Oct 2022 13:55:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904137; x=1698440137; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=3wy24Xd6A4WFckMWpHAx6jmyeBohXvfA67UBPF/Kntg=; b=Vtho8mymFr497R6YbmDRlLA46F5MKTIBw1eGVlXHyd9p662IGWuPpfAr kiqpRhSjwaVGjGa7HuZQEYrE7hu+KomhnFFBFsnmkSuaVivYo3wsOS2P2 28zGyd+iTOnsx3b+1RkWbdMf+YrNWYEiev4y5lOEJYurclhfkKEzJ8IRh w=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="257153183" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2a-m6i4x-3ef535ca.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-2101.iad2.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:35 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-2a-m6i4x-3ef535ca.us-west-2.amazon.com (Postfix) with ESMTPS id B4E6B6132F; Thu, 27 Oct 2022 20:55:34 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:32 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:32 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 21/34] x86/speculation: Fix firmware entry SPEC_CTRL handling Date: Thu, 27 Oct 2022 13:55:20 -0700 Message-ID: <20221027205523.17741-1-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D37UWA004.ant.amazon.com (10.43.160.23) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit e6aa13622ea8283cc699cac5d018cc40a2ba2010 upstream. The firmware entry code may accidentally clear STIBP or SSBD. Fix that. 
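The bug is easiest to see with the two values side by side: x86_spec_ctrl_base holds only the boot-time bits, while the per-CPU current value may have gained STIBP and/or SSBD from the running task, so deriving the firmware-entry write from the base silently drops those bits. A small sketch, assuming the standard SPEC_CTRL MSR bit layout and with everything kernel-side reduced to plain variables:

#include <stdint.h>
#include <stdio.h>

/* SPEC_CTRL MSR bit layout */
#define SPEC_CTRL_IBRS  (1ULL << 0)
#define SPEC_CTRL_STIBP (1ULL << 1)
#define SPEC_CTRL_SSBD  (1ULL << 2)

int main(void)
{
	uint64_t base    = 0;                                /* x86_spec_ctrl_base: boot-time bits */
	uint64_t current = SPEC_CTRL_STIBP | SPEC_CTRL_SSBD; /* per-CPU value with runtime bits */

	/* Old firmware-entry write: derived from the boot-time base, so the
	 * runtime STIBP/SSBD bits vanish for the duration of the call. */
	uint64_t old_val = base | SPEC_CTRL_IBRS;

	/* Fixed write: derived from the live value, forcing IBRS on without
	 * losing whatever else is currently set. */
	uint64_t new_val = current | SPEC_CTRL_IBRS;

	printf("old write: %#llx (STIBP/SSBD lost)\n", (unsigned long long)old_val);
	printf("new write: %#llx (STIBP/SSBD kept)\n", (unsigned long long)new_val);
	return 0;
}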
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index da81adabac94..c7cbad1ec034 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -303,18 +303,16 @@ extern u64 spec_ctrl_current(void); */ #define firmware_restrict_branch_speculation_start() \ do { \ - u64 val = x86_spec_ctrl_base | SPEC_CTRL_IBRS; \ - \ preempt_disable(); \ - alternative_msr_write(MSR_IA32_SPEC_CTRL, val, \ + alternative_msr_write(MSR_IA32_SPEC_CTRL, \ + spec_ctrl_current() | SPEC_CTRL_IBRS, \ X86_FEATURE_USE_IBRS_FW); \ } while (0) #define firmware_restrict_branch_speculation_end() \ do { \ - u64 val = x86_spec_ctrl_base; \ - \ - alternative_msr_write(MSR_IA32_SPEC_CTRL, val, \ + alternative_msr_write(MSR_IA32_SPEC_CTRL, \ + spec_ctrl_current(), \ X86_FEATURE_USE_IBRS_FW); \ preempt_enable(); \ } while (0) From patchwork Thu Oct 27 20:55:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022824 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2BCEECAAA1 for ; Thu, 27 Oct 2022 21:03:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235472AbiJ0VDm (ORCPT ); Thu, 27 Oct 2022 17:03:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46474 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236924AbiJ0VDG (ORCPT ); Thu, 27 Oct 2022 17:03:06 -0400 Received: from smtp-fw-9103.amazon.com (smtp-fw-9103.amazon.com [207.171.188.200]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85B5F520B9; Thu, 27 Oct 2022 13:55:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904136; x=1698440136; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=7t/j7GXSejXesGXC2TSIoxDeu79YiM/nvx3/8jOOoP8=; b=pIyVP0YvdnncZCK0FSHMwBSDSHzPbxQebsiOa34+3CgTRp2PAzm4rlQY gUZzW50qcJI7CZN7CESiNXMcITQzETS4S9LJNg+cXwQvFQlCpu1PkMmfs Jh83PHgyFFtEOrrMI7QdKdqiq0eG+MTktiEiI9CwIqc2nSYlqVIaF2n7u U=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="1068508460" Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com) ([10.25.36.214]) by smtp-border-fw-9103.sea19.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:36 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-b1c0e1d0.us-west-2.amazon.com (Postfix) with ESMTPS id 9A03181A8B; Thu, 27 Oct 2022 20:55:35 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:32 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:32 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 22/34] x86/speculation: Fix SPEC_CTRL write on SMT state change Date: Thu, 27 Oct 2022 13:55:21 -0700 Message-ID: <20221027205523.17741-2-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205523.17741-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205523.17741-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D37UWA004.ant.amazon.com (10.43.160.23) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit 56aa4d221f1ee2c3a49b45b800778ec6e0ab73c5 upstream. If the SMT state changes, SSBD might get accidentally disabled. Fix that. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 43a258e592d0..31aa473717bb 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1333,7 +1333,8 @@ static void __init spectre_v2_select_mitigation(void) static void update_stibp_msr(void * __unused) { - write_spec_ctrl_current(x86_spec_ctrl_base, true); + u64 val = spec_ctrl_current() | (x86_spec_ctrl_base & SPEC_CTRL_STIBP); + write_spec_ctrl_current(val, true); } /* Update x86_spec_ctrl_base in case SMT state changed. */ From patchwork Thu Oct 27 20:55:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022825 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B916FA3743 for ; Thu, 27 Oct 2022 21:03:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236615AbiJ0VDn (ORCPT ); Thu, 27 Oct 2022 17:03:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235514AbiJ0VDG (ORCPT ); Thu, 27 Oct 2022 17:03:06 -0400 Received: from smtp-fw-80007.amazon.com (smtp-fw-80007.amazon.com [99.78.197.218]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2489252805; Thu, 27 Oct 2022 13:55:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904136; x=1698440136; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=0rtwe3IQgcdvPhlnwATiEOKcROEHUG5SOh8hmHcTseE=; b=HHXjZ4X/FcpGWLEdsLdOTLGZB31T+omZK1OPWSbsb5KdNAJVNBnHgOU4 NqgfMkQtfvwBriURqbYlk0PH1AMcYUyAkp2RmPA5QCgGOtbsKH2AOeTh4 wpWd4K0HmxuYFyDV1D3vuJrJvUTnLbr+im53BD1crKVZ+uHcK3TIeureL M=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="145243761" Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO email-inbound-relay-pdx-2a-m6i4x-d47337e0.us-west-2.amazon.com) ([10.25.36.214]) by smtp-border-fw-80007.pdx80.corp.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:35 +0000 Received: from EX13MTAUWB001.ant.amazon.com 
(pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2a-m6i4x-d47337e0.us-west-2.amazon.com (Postfix) with ESMTPS id 8AE2B612F5; Thu, 27 Oct 2022 20:55:36 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:33 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:32 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 23/34] x86/speculation: Use cached host SPEC_CTRL value for guest entry/exit Date: Thu, 27 Oct 2022 13:55:22 -0700 Message-ID: <20221027205523.17741-3-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205523.17741-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205523.17741-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D37UWA004.ant.amazon.com (10.43.160.23) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit bbb69e8bee1bd882784947095ffb2bfe0f7c9470 upstream. There's no need to recalculate the host value for every entry/exit. Just use the cached value in spec_ctrl_current(). Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 31aa473717bb..a0d180e52231 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -198,7 +198,7 @@ void __init check_bugs(void) void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) { - u64 msrval, guestval, hostval = x86_spec_ctrl_base; + u64 msrval, guestval, hostval = spec_ctrl_current(); struct thread_info *ti = current_thread_info(); /* Is MSR_SPEC_CTRL implemented ? */ @@ -211,15 +211,6 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) guestval = hostval & ~x86_spec_ctrl_mask; guestval |= guest_spec_ctrl & x86_spec_ctrl_mask; - /* SSBD controlled in MSR_SPEC_CTRL */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || - static_cpu_has(X86_FEATURE_AMD_SSBD)) - hostval |= ssbd_tif_to_spec_ctrl(ti->flags); - - /* Conditional STIBP enabled? */ - if (static_branch_unlikely(&switch_to_cond_stibp)) - hostval |= stibp_tif_to_spec_ctrl(ti->flags); - if (hostval != guestval) { msrval = setguest ? 
guestval : hostval; wrmsrl(MSR_IA32_SPEC_CTRL, msrval); @@ -1272,7 +1263,6 @@ static void __init spectre_v2_select_mitigation(void) pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); if (spectre_v2_in_ibrs_mode(mode)) { - /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; write_spec_ctrl_current(x86_spec_ctrl_base, true); } From patchwork Thu Oct 27 20:55:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022827 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7CE7ECAAA1 for ; Thu, 27 Oct 2022 21:03:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237276AbiJ0VDs (ORCPT ); Thu, 27 Oct 2022 17:03:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53058 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235548AbiJ0VDG (ORCPT ); Thu, 27 Oct 2022 17:03:06 -0400 Received: from smtp-fw-80006.amazon.com (smtp-fw-80006.amazon.com [99.78.197.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0C15853D10; Thu, 27 Oct 2022 13:55:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904138; x=1698440138; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=Eqkkpxd0M5kslfPuxbklxFeCBsEp3Zu1Rw00hi2OrWg=; b=ETF7tmzlmwnocT3ZvLbppxs9IJOjVqIAVliR8uvN6LzHX2VQlmMVGPzA /kFIR/2Hsc6wuaaWNxIjVxA9LcabdmGQNbEqKySd7cd0D/6gkX+PlDv3j VOSDSNyp2OrxWLhsbf70JABD5kNUXTaMa+fb7IRpsMi/gBNy9qUtdd77N 4=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="145138324" Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-e7094f15.us-west-2.amazon.com) ([10.25.36.210]) by smtp-border-fw-80006.pdx80.corp.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:37 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-pdx-2c-m6i4x-e7094f15.us-west-2.amazon.com (Postfix) with ESMTPS id 440F54187C; Thu, 27 Oct 2022 20:55:37 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:33 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:33 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 24/34] x86/speculation: Remove x86_spec_ctrl_mask Date: Thu, 27 Oct 2022 13:55:23 -0700 Message-ID: <20221027205523.17741-4-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205523.17741-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205523.17741-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D37UWA004.ant.amazon.com (10.43.160.23) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit acac5e98ef8d638a411cfa2ee676c87e1973f126 upstream. 
This mask has been made redundant by kvm_spec_ctrl_test_value(). And it doesn't even work when MSR interception is disabled, as the guest can just write to SPEC_CTRL directly. Signed-off-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Reviewed-by: Paolo Bonzini Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 31 +------------------------------ 1 file changed, 1 insertion(+), 30 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index a0d180e52231..5f805013b7f4 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -83,12 +83,6 @@ u64 spec_ctrl_current(void) } EXPORT_SYMBOL_GPL(spec_ctrl_current); -/* - * The vendor and possibly platform specific bits which can be modified in - * x86_spec_ctrl_base. - */ -static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS; - /* * AMD specific MSR info for Speculative Store Bypass control. * x86_amd_ls_cfg_ssbd_mask is initialized in identify_boot_cpu(). @@ -137,10 +131,6 @@ void __init check_bugs(void) if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); - /* Allow STIBP in MSR_SPEC_CTRL if supported */ - if (boot_cpu_has(X86_FEATURE_STIBP)) - x86_spec_ctrl_mask |= SPEC_CTRL_STIBP; - /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); spectre_v2_select_mitigation(); @@ -198,19 +188,10 @@ void __init check_bugs(void) void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) { - u64 msrval, guestval, hostval = spec_ctrl_current(); + u64 msrval, guestval = guest_spec_ctrl, hostval = spec_ctrl_current(); struct thread_info *ti = current_thread_info(); - /* Is MSR_SPEC_CTRL implemented ? */ if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) { - /* - * Restrict guest_spec_ctrl to supported values. Clear the - * modifiable bits in the host base value and or the - * modifiable bits from the guest value. - */ - guestval = hostval & ~x86_spec_ctrl_mask; - guestval |= guest_spec_ctrl & x86_spec_ctrl_mask; - if (hostval != guestval) { msrval = setguest ? guestval : hostval; wrmsrl(MSR_IA32_SPEC_CTRL, msrval); @@ -1540,16 +1521,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) break; } - /* - * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper - * bit in the mask to allow guests to use the mitigation even in the - * case where the host does not enable it. - */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || - static_cpu_has(X86_FEATURE_AMD_SSBD)) { - x86_spec_ctrl_mask |= SPEC_CTRL_SSBD; - } - /* * We have three CPU feature flags that are in play here: * - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible. 
From patchwork Thu Oct 27 20:55:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022830 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57327ECAAA1 for ; Thu, 27 Oct 2022 21:03:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237280AbiJ0VDx (ORCPT ); Thu, 27 Oct 2022 17:03:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235700AbiJ0VDJ (ORCPT ); Thu, 27 Oct 2022 17:03:09 -0400 Received: from smtp-fw-6002.amazon.com (smtp-fw-6002.amazon.com [52.95.49.90]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C5E6E8113F; Thu, 27 Oct 2022 13:55:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904147; x=1698440147; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=Cs85r/tiZzvWU/gz/JrupVz29DrVtkPA7hqAoJeeRb4=; b=MwEjb6XxqNlgjpqvYws0NzdKUUG7PEZy4TN/uNZWJvrx8TcaaIBD3Qr5 wWFkIb8FIvlNTDR99NpQWByIxoHBfBDhcbw2ewB7xdt6jgJrxXOeY4xGX aOYMpfIs4WobtzxiYcshOWaiQBf39JDv6ZMqZ4vwFRdmH1x/TdroFFM0O w=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="260700395" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2c-m6i4x-dc7c3f8b.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-6002.iad6.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:44 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-2c-m6i4x-dc7c3f8b.us-west-2.amazon.com (Postfix) with ESMTPS id DF0ADA282F; Thu, 27 Oct 2022 20:55:43 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:42 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:42 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 25/34] KVM: VMX: Prevent guest RSB poisoning attacks with eIBRS Date: Thu, 27 Oct 2022 13:55:30 -0700 Message-ID: <20221027205533.17873-1-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D34UWA002.ant.amazon.com (10.43.160.245) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit fc02735b14fff8c6678b521d324ade27b1a3d4cf upstream. On eIBRS systems, the returns in the vmexit return path from __vmx_vcpu_run() to vmx_vcpu_run() are exposed to RSB poisoning attacks. Fix that by moving the post-vmexit spec_ctrl handling to immediately after the vmexit. 
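The fix boils down to an ordering rule: after vmexit, first refill the RSB, then restore the host SPEC_CTRL, and only then run ordinary C code with its unbalanced RETs. Below is a hedged userspace sketch of just the restore step; rdmsr_stub()/wrmsr_stub() and the function name are illustrative stand-ins, and the real helper added in the diff that follows is vmx_spec_ctrl_restore_host():

#include <stdint.h>
#include <stdio.h>

#define MSR_IA32_SPEC_CTRL 0x48
#define SPEC_CTRL_IBRS     (1ULL << 0)

static uint64_t fake_msr;                        /* what the guest left in the MSR */
static uint64_t host_spec_ctrl = SPEC_CTRL_IBRS; /* cached host value */

static uint64_t rdmsr_stub(uint32_t m) { (void)m; return fake_msr; }
static void wrmsr_stub(uint32_t m, uint64_t v) { (void)m; fake_msr = v; }

/* Read the guest's SPEC_CTRL once, switch back to the cached host value if
 * they differ, and hand the guest value back so the caller can save it.
 * The crucial property -- that this runs before any unbalanced RET -- has
 * no expression in portable C, which is why the kernel pins it to a fixed
 * point in the vmexit assembly path. */
static uint64_t spec_ctrl_restore_host(void)
{
	uint64_t guestval = rdmsr_stub(MSR_IA32_SPEC_CTRL);

	if (guestval != host_spec_ctrl)
		wrmsr_stub(MSR_IA32_SPEC_CTRL, host_spec_ctrl);

	return guestval;
}

int main(void)
{
	fake_msr = 0; /* guest ran with IBRS clear */
	uint64_t guest = spec_ctrl_restore_host();
	printf("guest had %#llx, host MSR now %#llx\n",
	       (unsigned long long)guest, (unsigned long long)fake_msr);
	return 0;
}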
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust for the fact that vmexit is in inline assembly ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/include/asm/nospec-branch.h | 3 +- arch/x86/kernel/cpu/bugs.c | 4 +++ arch/x86/kvm/vmx.c | 45 ++++++++++++++++++++++++---- 3 files changed, 45 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index c7cbad1ec034..2d6d5bac4997 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -257,7 +257,7 @@ extern char __indirect_thunk_end[]; * retpoline and IBRS mitigations for Spectre v2 need this; only on future * CPUs with IBRS_ALL *might* it be avoided. */ -static inline void vmexit_fill_RSB(void) +static __always_inline void vmexit_fill_RSB(void) { #ifdef CONFIG_RETPOLINE unsigned long loops; @@ -292,6 +292,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; +extern u64 x86_spec_ctrl_current; extern void write_spec_ctrl_current(u64 val, bool force); extern u64 spec_ctrl_current(void); diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 5f805013b7f4..1fde42e5be6e 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -185,6 +185,10 @@ void __init check_bugs(void) #endif } +/* + * NOTE: For VMX, this function is not called in the vmexit path. + * It uses vmx_spec_ctrl_restore_host() instead. + */ void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) { diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 48b40e160e27..539720a8e094 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -9770,10 +9770,31 @@ static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu) vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, delta_tsc); } +u64 __always_inline vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx) +{ + u64 guestval, hostval = this_cpu_read(x86_spec_ctrl_current); + + if (!cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) + return 0; + + guestval = __rdmsr(MSR_IA32_SPEC_CTRL); + + /* + * If the guest/host SPEC_CTRL values differ, restore the host value. + */ + if (guestval != hostval) + native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval); + + barrier_nospec(); + + return guestval; +} + static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) { struct vcpu_vmx *vmx = to_vmx(vcpu); unsigned long debugctlmsr, cr3, cr4; + u64 spec_ctrl; /* Record the guest's net vcpu time for enforced NMI injections. */ if (unlikely(!cpu_has_virtual_nmis() && @@ -9967,6 +9988,23 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) , "eax", "ebx", "edi", "esi" #endif ); + /* + * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before + * the first unbalanced RET after vmexit! + * + * For retpoline, RSB filling is needed to prevent poisoned RSB entries + * and (in some cases) RSB underflow. + * + * eIBRS has its own protection against poisoned RSB, so it doesn't + * need the RSB filling sequence. But it does need to be enabled + * before the first unbalanced RET. + * + * So no RETs before vmx_spec_ctrl_restore_host() below. + */ + vmexit_fill_RSB(); + + /* Save this for below */ + spec_ctrl = vmx_spec_ctrl_restore_host(vmx); vmx_enable_fb_clear(vmx); @@ -9986,12 +10024,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) * save it. 
*/ if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))) - vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL); - - x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0); - - /* Eliminate branch target predictions from guest mode */ - vmexit_fill_RSB(); + vmx->spec_ctrl = spec_ctrl; /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */ if (debugctlmsr) From patchwork Thu Oct 27 20:55:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Jitindar Singh, Suraj" X-Patchwork-Id: 13022828 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E1159FA3740 for ; Thu, 27 Oct 2022 21:03:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S236502AbiJ0VDu (ORCPT ); Thu, 27 Oct 2022 17:03:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53138 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235659AbiJ0VDH (ORCPT ); Thu, 27 Oct 2022 17:03:07 -0400 Received: from smtp-fw-33001.amazon.com (smtp-fw-33001.amazon.com [207.171.190.10]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C60DA814C1; Thu, 27 Oct 2022 13:55:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1666904148; x=1698440148; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=Hh167AI5Vou0EeJ42nJg7ou8wvGIRupc2Wh3BClfUuQ=; b=Q+tzhmieSqCfZxD6RM1Jv6QYQBLO5kJUl9mfVeg6ge0JxHANDtsh7MY4 TwgsTkzc5gpxPsQNH2UNUnVg3mSO5jyJJ8UShQBq8NsehHyfS5a9q+B6O V6j2rGhdaiSEASZr1OlOuCBdgXWs0QMnfihl4krkjVeEmciuoe136Zke/ g=; X-IronPort-AV: E=Sophos;i="5.95,218,1661817600"; d="scan'208";a="236704604" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-pdx-2a-m6i4x-d40ec5a9.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-33001.sea14.amazon.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 27 Oct 2022 20:55:46 +0000 Received: from EX13MTAUWB001.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194]) by email-inbound-relay-pdx-2a-m6i4x-d40ec5a9.us-west-2.amazon.com (Postfix) with ESMTPS id 0E27641727; Thu, 27 Oct 2022 20:55:44 +0000 (UTC) Received: from EX19D030UWB002.ant.amazon.com (10.13.139.182) by EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS) id 15.0.1497.42; Thu, 27 Oct 2022 20:55:43 +0000 Received: from u3c3f5cfe23135f.ant.amazon.com (10.43.161.14) by EX19D030UWB002.ant.amazon.com (10.13.139.182) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.2.1118.15; Thu, 27 Oct 2022 20:55:42 +0000 From: Suraj Jitindar Singh To: CC: , , , , , , , Subject: [PATCH 4.14 26/34] KVM: VMX: Fix IBRS handling after vmexit Date: Thu, 27 Oct 2022 13:55:31 -0700 Message-ID: <20221027205533.17873-2-surajjs@amazon.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20221027205533.17873-1-surajjs@amazon.com> References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205533.17873-1-surajjs@amazon.com> MIME-Version: 1.0 X-Originating-IP: [10.43.161.14] X-ClientProxiedBy: EX13D34UWA002.ant.amazon.com (10.43.160.245) To EX19D030UWB002.ant.amazon.com (10.13.139.182) Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Josh Poimboeuf commit bea7e31a5caccb6fe8ed989c065072354f0ecb52 upstream. 
For legacy IBRS to work, the IBRS bit needs to be always re-written
after vmexit, even if it's already on.

Signed-off-by: Josh Poimboeuf
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kvm/vmx.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 539720a8e094..e35cb9560eeb 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -9781,8 +9781,13 @@ u64 __always_inline vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx)
 
 	/*
 	 * If the guest/host SPEC_CTRL values differ, restore the host value.
+	 *
+	 * For legacy IBRS, the IBRS bit always needs to be written after
+	 * transitioning from a less privileged predictor mode, regardless of
+	 * whether the guest/host values differ.
 	 */
-	if (guestval != hostval)
+	if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) ||
+	    guestval != hostval)
 		native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval);
 
 	barrier_nospec();

From patchwork Thu Oct 27 20:55:32 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022831
From: Suraj Jitindar Singh
Subject:
[PATCH 4.14 27/34] x86/speculation: Fill RSB on vmexit for IBRS
Date: Thu, 27 Oct 2022 13:55:32 -0700
Message-ID: <20221027205533.17873-3-surajjs@amazon.com>
In-Reply-To: <20221027205533.17873-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205533.17873-1-surajjs@amazon.com>

From: Josh Poimboeuf

commit 9756bba28470722dacb79ffce554336dd1f6a6cd upstream.

Prevent RSB underflow/poisoning attacks with RSB filling. While at it,
add a bunch of comments to attempt to document the current state of
tribal knowledge about RSB attacks and what exactly is being mitigated.

Signed-off-by: Josh Poimboeuf
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
[ bp: Adjust for the fact that vmexit is in inline assembly ]
Signed-off-by: Suraj Jitindar Singh
---
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/nospec-branch.h |  4 +-
 arch/x86/kernel/cpu/bugs.c           | 63 +++++++++++++++++++++++++---
 arch/x86/kvm/vmx.c                   |  4 +-
 4 files changed, 62 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index b78acf1e70b6..fc52b59c3178 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -203,7 +203,7 @@
 #define X86_FEATURE_SME			( 7*32+10) /* AMD Secure Memory Encryption */
 #define X86_FEATURE_PTI			( 7*32+11) /* Kernel Page Table Isolation enabled */
 #define X86_FEATURE_KERNEL_IBRS		( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */
-/* FREE!				( 7*32+13) */
+#define X86_FEATURE_RSB_VMEXIT		( 7*32+13) /* "" Fill RSB on VM-Exit */
 #define X86_FEATURE_INTEL_PPIN		( 7*32+14) /* Intel Processor Inventory Number */
 #define X86_FEATURE_CDP_L2		( 7*32+15) /* Code and Data Prioritization L2 */
 #define X86_FEATURE_MSR_SPEC_CTRL	( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 2d6d5bac4997..dad872462e8a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -259,17 +259,15 @@ extern char __indirect_thunk_end[];
  */
 static __always_inline void vmexit_fill_RSB(void)
 {
-#ifdef CONFIG_RETPOLINE
 	unsigned long loops;
 
 	asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
 		      ALTERNATIVE("jmp 910f",
 				  __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
-				  X86_FEATURE_RETPOLINE)
+				  X86_FEATURE_RSB_VMEXIT)
 		      "910:"
 		      : "=r" (loops), ASM_CALL_CONSTRAINT
 		      : : "memory" );
-#endif
 }
 
 static __always_inline
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1fde42e5be6e..4c491c2f772b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1276,16 +1276,69 @@ static void __init spectre_v2_select_mitigation(void)
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
 	/*
-	 * If spectre v2 protection has been enabled, unconditionally fill
-	 * RSB during a context switch; this protects against two independent
-	 * issues:
+	 * If Spectre v2 protection has been enabled, fill the RSB during a
+	 * context switch. In general there are two types of RSB attacks
+	 * across context switches, for which the CALLs/RETs may be unbalanced.
* - * - RSB underflow (and switch to BTB) on Skylake+ - * - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs + * 1) RSB underflow + * + * Some Intel parts have "bottomless RSB". When the RSB is empty, + * speculated return targets may come from the branch predictor, + * which could have a user-poisoned BTB or BHB entry. + * + * AMD has it even worse: *all* returns are speculated from the BTB, + * regardless of the state of the RSB. + * + * When IBRS or eIBRS is enabled, the "user -> kernel" attack + * scenario is mitigated by the IBRS branch prediction isolation + * properties, so the RSB buffer filling wouldn't be necessary to + * protect against this type of attack. + * + * The "user -> user" attack scenario is mitigated by RSB filling. + * + * 2) Poisoned RSB entry + * + * If the 'next' in-kernel return stack is shorter than 'prev', + * 'next' could be tricked into speculating with a user-poisoned RSB + * entry. + * + * The "user -> kernel" attack scenario is mitigated by SMEP and + * eIBRS. + * + * The "user -> user" scenario, also known as SpectreBHB, requires + * RSB clearing. + * + * So to mitigate all cases, unconditionally fill RSB on context + * switches. + * + * FIXME: Is this pointless for retbleed-affected AMD? */ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); + /* + * Similar to context switches, there are two types of RSB attacks + * after vmexit: + * + * 1) RSB underflow + * + * 2) Poisoned RSB entry + * + * When retpoline is enabled, both are mitigated by filling/clearing + * the RSB. + * + * When IBRS is enabled, while #1 would be mitigated by the IBRS branch + * prediction isolation protections, RSB still needs to be cleared + * because of #2. Note that SMEP provides no protection here, unlike + * user-space-poisoned RSB entries. + * + * eIBRS, on the other hand, has RSB-poisoning protections, so it + * doesn't need RSB clearing after vmexit. + */ + if (boot_cpu_has(X86_FEATURE_RETPOLINE) || + boot_cpu_has(X86_FEATURE_KERNEL_IBRS)) + setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT); + /* * Retpoline protects the kernel, but doesn't protect firmware. IBRS * and Enhanced IBRS protect firmware too, so enable IBRS around diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index e35cb9560eeb..d6ae5237fc38 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -9997,8 +9997,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before * the first unbalanced RET after vmexit! * - * For retpoline, RSB filling is needed to prevent poisoned RSB entries - * and (in some cases) RSB underflow. + * For retpoline or IBRS, RSB filling is needed to prevent poisoned RSB + * entries and (in some cases) RSB underflow. * * eIBRS has its own protection against poisoned RSB, so it doesn't * need the RSB filling sequence. 
But it does need to be enabled

From patchwork Thu Oct 27 20:55:33 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022829
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 28/34] x86/common: Stamp out the stepping madness
Date: Thu, 27 Oct 2022 13:55:33 -0700
Message-ID: <20221027205533.17873-4-surajjs@amazon.com>
In-Reply-To: <20221027205533.17873-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205533.17873-1-surajjs@amazon.com>

From: Peter Zijlstra

commit 7a05bc95ed1c5a59e47aaade9fb4083c27de9e62 upstream.

The whole MMIO/RETBLEED enumeration went overboard on steppings. Get
rid of all that and simply use ANY.

If a future stepping of these models would not be affected, it had
better set the relevant ARCH_CAP_$FOO_NO bit in IA32_ARCH_CAPABILITIES.
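[ ed: For illustration, a hedged sketch of the stepping-mask matching this
  patch removes versus "match any stepping". The helper names mimic the
  kernel's X86_STEPPINGS()/bitmask usage, but the constants and the table
  entry are simplified stand-ins, not the real vulnerability table. ]

/* stepping_match_sketch.c - toy model of stepping-based CPU matching */
#include <stdio.h>
#include <stdint.h>

#define X86_STEPPING_ANY 0xffffu
/* Bitmask covering steppings mins..maxs, like X86_STEPPINGS(mins, maxs) */
#define STEPPINGS(mins, maxs) \
	((uint16_t)(((1u << ((maxs) - (mins) + 1)) - 1u) << (mins)))

static int stepping_matches(uint16_t mask, unsigned int stepping)
{
	/* An entry matches when the CPU's stepping bit is set in the mask. */
	return mask == X86_STEPPING_ANY || (mask & (1u << stepping)) != 0;
}

int main(void)
{
	/* Old-style SKYLAKE_X entry: only steppings 3, 4, 6, 7, 0xB match. */
	uint16_t old_mask = (1u << 3) | (1u << 4) | (1u << 6) |
			    (1u << 7) | (1u << 0xB);

	printf("stepping 5 vs old mask: %d\n", stepping_matches(old_mask, 5));
	printf("stepping 5 vs ANY:      %d\n",
	       stepping_matches(X86_STEPPING_ANY, 5));
	return 0;
}

With the masks gone, an unlisted future stepping is treated as affected
unless the CPU advertises the relevant ARCH_CAP_$FOO_NO bit, which is
exactly the failure mode the change above prefers.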
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Acked-by: Dave Hansen Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/common.c | 38 +++++++++++++++--------------------- 1 file changed, 16 insertions(+), 22 deletions(-) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index c7202647daeb..67eefcb7f925 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -994,32 +994,26 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(HASWELL_CORE, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_ULT, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_GT3E, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(HASWELL_X, BIT(2) | BIT(4), MMIO), - VULNBL_INTEL_STEPPINGS(BROADWELL_XEON_D,X86_STEPPINGS(0x3, 0x5), MMIO), + VULNBL_INTEL_STEPPINGS(HASWELL_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(BROADWELL_XEON_D,X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL_GT3E, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL_CORE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | - BIT(7) | BIT(0xB), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_MOBILE, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(ICELAKE_XEON_D, X86_STEPPINGS(0x1, 0x1), MMIO), - VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPING_ANY, MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(ICELAKE_MOBILE, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(ICELAKE_XEON_D, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | 
RETBLEED),
+	VULNBL_INTEL_STEPPINGS(LAKEFIELD,	X86_STEPPING_ANY,	MMIO | MMIO_SBDS | RETBLEED),
+	VULNBL_INTEL_STEPPINGS(ROCKETLAKE,	X86_STEPPING_ANY,	MMIO | RETBLEED),
+	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT,	X86_STEPPING_ANY,	MMIO | MMIO_SBDS),
 	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_X,	X86_STEPPING_ANY,	MMIO),
-	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPINGS(0x0, 0x0),	MMIO | MMIO_SBDS),
+	VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L,	X86_STEPPING_ANY,	MMIO | MMIO_SBDS),
 
 	VULNBL_AMD(0x15, RETBLEED),
 	VULNBL_AMD(0x16, RETBLEED),

From patchwork Thu Oct 27 20:55:41 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022832
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 29/34] x86/cpu/amd: Enumerate BTC_NO
Date: Thu, 27 Oct 2022 13:55:41 -0700
Message-ID: <20221027205544.17949-1-surajjs@amazon.com>
In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>
Precedence: bulk
List-ID: X-Mailing-List:
kvm@vger.kernel.org From: Andrew Cooper commit 26aae8ccbc1972233afd08fb3f368947c0314265 upstream. BTC_NO indicates that hardware is not susceptible to Branch Type Confusion. Zen3 CPUs don't suffer BTC. Hypervisors are expected to synthesise BTC_NO when it is appropriate given the migration pool, to prevent kernels using heuristics. Signed-off-by: Andrew Cooper Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman [ bp: Adjust context ] Signed-off-by: Suraj Jitindar Singh --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/kernel/cpu/amd.c | 21 +++++++++++++++------ arch/x86/kernel/cpu/common.c | 6 ++++-- 3 files changed, 20 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index fc52b59c3178..c01cc52a9285 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -303,6 +303,7 @@ #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */ +#define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type Confusion */ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */ diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index 3914f9218a6b..0ccd74d37aad 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -857,12 +857,21 @@ static void init_amd_zn(struct cpuinfo_x86 *c) { set_cpu_cap(c, X86_FEATURE_ZEN); - /* - * Fix erratum 1076: CPB feature bit not being set in CPUID. - * Always set it, except when running under a hypervisor. - */ - if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB)) - set_cpu_cap(c, X86_FEATURE_CPB); + /* Fix up CPUID bits, but only if not virtualised. */ + if (!cpu_has(c, X86_FEATURE_HYPERVISOR)) { + + /* Erratum 1076: CPB feature bit not being set in CPUID. */ + if (!cpu_has(c, X86_FEATURE_CPB)) + set_cpu_cap(c, X86_FEATURE_CPB); + + /* + * Zen3 (Fam19 model < 0x10) parts are not susceptible to + * Branch Type Confusion, but predate the allocation of the + * BTC_NO bit. 
+	 */
+	if (c->x86 == 0x19 && !cpu_has(c, X86_FEATURE_BTC_NO))
+		set_cpu_cap(c, X86_FEATURE_BTC_NO);
+	}
 }
 
 static void init_amd(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 67eefcb7f925..44562885fa1d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1123,8 +1123,10 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		setup_force_cpu_bug(X86_BUG_MMIO_UNKNOWN);
 	}
 
-	if ((cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA)))
-		setup_force_cpu_bug(X86_BUG_RETBLEED);
+	if (!cpu_has(c, X86_FEATURE_BTC_NO)) {
+		if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))
+			setup_force_cpu_bug(X86_BUG_RETBLEED);
+	}
 
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;

From patchwork Thu Oct 27 20:55:42 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022833
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 30/34] x86/bugs: Add Cannon lake to RETBleed affected CPU list
Date: Thu, 27 Oct 2022 13:55:42 -0700
Message-ID: <20221027205544.17949-2-surajjs@amazon.com>
In-Reply-To:
<20221027205544.17949-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205544.17949-1-surajjs@amazon.com>

From: Pawan Gupta

commit f54d45372c6ac9c993451de5e51312485f7d10bc upstream.

Cannon lake is also affected by RETBleed, add it to the list.

Fixes: 6ad0ad2bf8a6 ("x86/bugs: Report Intel retbleed vulnerability")
Signed-off-by: Pawan Gupta
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
[ bp: Adjust cpu model name CANNONLAKE_L -> CANNONLAKE_MOBILE ]
Signed-off-by: Suraj Jitindar Singh
---
 arch/x86/kernel/cpu/common.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 44562885fa1d..4cbbe266a2b2 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1004,6 +1004,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP,	X86_STEPPING_ANY,	SRBDS | MMIO | RETBLEED),
 	VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE,	X86_STEPPING_ANY,	SRBDS | MMIO | RETBLEED),
 	VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPING_ANY,	SRBDS | MMIO | RETBLEED),
+	VULNBL_INTEL_STEPPINGS(CANNONLAKE_MOBILE,X86_STEPPING_ANY,	RETBLEED),
 	VULNBL_INTEL_STEPPINGS(ICELAKE_MOBILE,	X86_STEPPING_ANY,	MMIO | MMIO_SBDS | RETBLEED),
 	VULNBL_INTEL_STEPPINGS(ICELAKE_XEON_D,	X86_STEPPING_ANY,	MMIO),
 	VULNBL_INTEL_STEPPINGS(ICELAKE_X,	X86_STEPPING_ANY,	MMIO),

From patchwork Thu Oct 27 20:55:43 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022834
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 31/34] x86/speculation: Disable RRSBA behavior
Date: Thu, 27 Oct 2022 13:55:43 -0700
Message-ID: <20221027205544.17949-3-surajjs@amazon.com>
In-Reply-To: <20221027205544.17949-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205544.17949-1-surajjs@amazon.com>

From: Pawan Gupta

commit 4ad3278df6fe2b0852b00d5757fc2ccd8e92c26e upstream.

Some Intel processors may use alternate predictors for RETs on
RSB-underflow. This condition may be vulnerable to Branch History
Injection (BHI) and intramode-BTI.

Kernel earlier added spectre_v2 mitigation modes (eIBRS+Retpolines,
eIBRS+LFENCE, Retpolines) which protect indirect CALLs and JMPs against
such attacks. However, on RSB-underflow, RET target prediction may
fallback to alternate predictors. As a result, RET's predicted target
may get influenced by branch history.

A new MSR_IA32_SPEC_CTRL bit (RRSBA_DIS_S) controls this fallback
behavior when in kernel mode. When set, RETs will not take predictions
from alternate predictors, hence mitigating RETs as well. Support for
this is enumerated by CPUID.7.2.EDX[RRSBA_CTRL] (bit2).

For spectre v2 mitigation, when a user selects a mitigation that
protects indirect CALLs and JMPs against BHI and intramode-BTI, set
RRSBA_DIS_S also to protect RETs for RSB-underflow case.

Signed-off-by: Pawan Gupta
Signed-off-by: Borislav Petkov
[bwh: Backported to 5.15: adjust context in scattered.c]
Signed-off-by: Ben Hutchings
Signed-off-by: Greg Kroah-Hartman
[sam: Fixed for missing X86_FEATURE_ENTRY_IBPB context]
Signed-off-by: Samuel Mendoza-Jonas
---
 arch/x86/include/asm/cpufeatures.h |  2 +-
 arch/x86/include/asm/msr-index.h   |  9 +++++++++
 arch/x86/kernel/cpu/bugs.c         | 26 ++++++++++++++++++++++++++
 arch/x86/kernel/cpu/scattered.c    |  1 +
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index c01cc52a9285..dcb37f07874d 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -288,7 +288,7 @@
 /* FREE!				(11*32+ 8) */
 /* FREE!				(11*32+ 9) */
 /* FREE!				(11*32+10) */
-/* FREE!
(11*32+11) */ +#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index f5341b4de46a..a1a0b90293aa 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -47,6 +47,8 @@ #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */ #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */ #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ +#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */ +#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT) #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */ @@ -121,6 +123,13 @@ * bit available to control VERW * behavior. */ +#define ARCH_CAP_RRSBA BIT(19) /* + * Indicates RET may use predictors + * other than the RSB. With eIBRS + * enabled predictions in kernel mode + * are restricted to targets in + * kernel. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 4c491c2f772b..742f5d0988d2 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1179,6 +1179,22 @@ static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void) return SPECTRE_V2_RETPOLINE; } +/* Disable in-kernel use of non-RSB RET predictors */ +static void __init spec_ctrl_disable_kernel_rrsba(void) +{ + u64 ia32_cap; + + if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL)) + return; + + ia32_cap = x86_read_arch_cap_msr(); + + if (ia32_cap & ARCH_CAP_RRSBA) { + x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S; + write_spec_ctrl_current(x86_spec_ctrl_base, true); + } +} + static void __init spectre_v2_select_mitigation(void) { enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline(); @@ -1272,6 +1288,16 @@ static void __init spectre_v2_select_mitigation(void) break; } + /* + * Disable alternate RSB predictions in kernel when indirect CALLs and + * JMPs gets protection against BHI and Intramode-BTI, but RET + * prediction from a non-RSB predictor is still a risk. 
+	 */
+	if (mode == SPECTRE_V2_EIBRS_LFENCE ||
+	    mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+	    mode == SPECTRE_V2_RETPOLINE)
+		spec_ctrl_disable_kernel_rrsba();
+
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index 0b9c7150cb23..efdb1decf034 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -21,6 +21,7 @@ struct cpuid_bit {
 static const struct cpuid_bit cpuid_bits[] = {
 	{ X86_FEATURE_APERFMPERF,	CPUID_ECX, 0, 0x00000006, 0 },
 	{ X86_FEATURE_EPB,		CPUID_ECX, 3, 0x00000006, 0 },
+	{ X86_FEATURE_RRSBA_CTRL,	CPUID_EDX, 2, 0x00000007, 2 },
 	{ X86_FEATURE_CQM_LLC,		CPUID_EDX, 1, 0x0000000f, 0 },
 	{ X86_FEATURE_CQM_OCCUP_LLC,	CPUID_EDX, 0, 0x0000000f, 1 },
 	{ X86_FEATURE_CQM_MBM_TOTAL,	CPUID_EDX, 1, 0x0000000f, 1 },

From patchwork Thu Oct 27 20:55:44 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022835
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 32/34] x86/speculation: Use DECLARE_PER_CPU for x86_spec_ctrl_current
Date: Thu, 27 Oct 2022 13:55:44 -0700
Message-ID: <20221027205544.17949-4-surajjs@amazon.com>
In-Reply-To:
<20221027205544.17949-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205544.17949-1-surajjs@amazon.com>

From: Nathan Chancellor

commit db886979683a8360ced9b24ab1125ad0c4d2cf76 upstream.

Clang warns:

  arch/x86/kernel/cpu/bugs.c:58:21: error: section attribute is specified on redeclared variable [-Werror,-Wsection]
  DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
                      ^
  arch/x86/include/asm/nospec-branch.h:283:12: note: previous declaration is here
  extern u64 x86_spec_ctrl_current;
             ^
  1 error generated.

The declaration should be using DECLARE_PER_CPU instead so all
attributes stay in sync.

Cc: stable@vger.kernel.org
Fixes: fc02735b14ff ("KVM: VMX: Prevent guest RSB poisoning attacks with eIBRS")
Reported-by: kernel test robot
Signed-off-by: Nathan Chancellor
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/nospec-branch.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index dad872462e8a..0c71e0b0dc6f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <asm/percpu.h>
 
 /*
  * Fill the CPU return stack buffer.
@@ -290,7 +291,7 @@ static inline void indirect_branch_prediction_barrier(void)
 
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
-extern u64 x86_spec_ctrl_current;
+DECLARE_PER_CPU(u64, x86_spec_ctrl_current);
 
 extern void write_spec_ctrl_current(u64 val, bool force);
 extern u64 spec_ctrl_current(void);

From patchwork Thu Oct 27 20:56:03 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022836
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 33/34] x86/bugs: Warn when "ibrs" mitigation is selected on Enhanced IBRS parts
Date: Thu, 27 Oct 2022 13:56:03 -0700
Message-ID: <20221027205604.18039-1-surajjs@amazon.com>
In-Reply-To: <20221027204801.13146-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com>

From: Pawan Gupta

commit eb23b5ef9131e6d65011de349a4d25ef1b3d4314 upstream.

IBRS mitigation for spectre_v2 forces write to MSR_IA32_SPEC_CTRL at
every kernel entry/exit. On Enhanced IBRS parts setting
MSR_IA32_SPEC_CTRL[IBRS] only once at boot is sufficient. MSR writes at
every kernel entry/exit incur unnecessary performance loss.

When Enhanced IBRS feature is present, print a warning about this
unnecessary performance loss.
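[ ed: A hedged sketch of the cost difference behind this warning. The
  entry/exit hooks are hypothetical stand-ins for the real kernel paths,
  and wrmsrl() is only declared here, not implemented. ]

#define MSR_IA32_SPEC_CTRL	0x00000048
#define SPEC_CTRL_IBRS		(1ULL << 0)

extern void wrmsrl(unsigned int msr, unsigned long long val); /* assumed */

/* Legacy IBRS: the bit is toggled around every user<->kernel transition,
 * so the (slow) MSR write lands on every syscall and interrupt path.
 */
static void kernel_entry_legacy_ibrs(void)
{
	wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);	/* hot path */
}

static void kernel_exit_legacy_ibrs(void)
{
	wrmsrl(MSR_IA32_SPEC_CTRL, 0);			/* hot path */
}

/* Enhanced IBRS: one write at boot is architecturally sufficient, which
 * is why selecting plain "ibrs" on such parts only costs performance.
 */
static void eibrs_enable_once(void)
{
	wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);	/* boot-time only */
}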
Signed-off-by: Pawan Gupta
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Thadeu Lima de Souza Cascardo
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/2a5eaf54583c2bfe0edc4fea64006656256cca17.1657814857.git.pawan.kumar.gupta@linux.intel.com
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/kernel/cpu/bugs.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 742f5d0988d2..0af5e4d6b806 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -849,6 +849,7 @@ static inline const char *spectre_v2_module_string(void) { return ""; }
 #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
 #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
 #define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
+#define SPECTRE_V2_IBRS_PERF_MSG "WARNING: IBRS mitigation selected on Enhanced IBRS CPU, this may cause unnecessary performance loss\n"
 
 #ifdef CONFIG_BPF_SYSCALL
 void unpriv_ebpf_notify(int new_state)
@@ -1275,6 +1276,8 @@ static void __init spectre_v2_select_mitigation(void)
 
 	case SPECTRE_V2_IBRS:
 		setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS);
+		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED))
+			pr_warn(SPECTRE_V2_IBRS_PERF_MSG);
 		break;
 
 	case SPECTRE_V2_LFENCE:

From patchwork Thu Oct 27 20:56:04 2022
X-Patchwork-Submitter: "Jitindar Singh, Suraj"
X-Patchwork-Id: 13022839
From: Suraj Jitindar Singh
Subject: [PATCH 4.14 34/34] x86/speculation: Add RSB VM Exit protections
Date: Thu, 27 Oct 2022 13:56:04 -0700
Message-ID: <20221027205604.18039-2-surajjs@amazon.com>
In-Reply-To: <20221027205604.18039-1-surajjs@amazon.com>
References: <20221027204801.13146-1-surajjs@amazon.com> <20221027205604.18039-1-surajjs@amazon.com>

From: Daniel Sneddon

commit 2b1299322016731d56807aa49254a5ea3080b6b3 upstream.

tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as
documented for RET instructions after VM exits. Mitigate it with a new
one-entry RSB stuffing mechanism and a new LFENCE.

== Background ==

Indirect Branch Restricted Speculation (IBRS) was designed to help
mitigate Branch Target Injection and Speculative Store Bypass, i.e.
Spectre, attacks. IBRS prevents software run in less privileged modes
from affecting branch prediction in more privileged modes. IBRS requires
the MSR to be written on every privilege level change.

To overcome some of the performance issues of IBRS, Enhanced IBRS was
introduced. eIBRS is an "always on" IBRS, in other words, just turn it
on once instead of writing the MSR on every privilege level change. When
eIBRS is enabled, more privileged modes should be protected from less
privileged modes, including protecting VMMs from guests.

== Problem ==

Here's a simplification of how guests are run on Linux' KVM:

void run_kvm_guest(void)
{
	// Prepare to run guest
	VMRESUME();
	// Clean up after guest runs
}

The execution flow for that would look something like this to the
processor:

1. Host-side: call run_kvm_guest()
2. Host-side: VMRESUME
3. Guest runs, does "CALL guest_function"
4. VM exit, host runs again
5. Host might make some "cleanup" function calls
6. Host-side: RET from run_kvm_guest()

Now, when back on the host, there are a couple of possible scenarios of
post-guest activity the host needs to do before executing host code:

* on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not
  touched and Linux has to do a 32-entry stuffing.

* on eIBRS hardware, VM exit with IBRS enabled, or restoring the host
  IBRS=1 shortly after VM exit, has a documented side effect of flushing
  the RSB except in this PBRSB situation where the software needs to
  stuff the last RSB entry "by hand".

IOW, with eIBRS supported, host RET instructions should no longer be
influenced by guest behavior after the host retires a single CALL
instruction.

However, if the RET instructions are "unbalanced" with CALLs after a VM
exit as is the RET in #6, it might speculatively use the address for the
instruction after the CALL in #3 as an RSB prediction. This is a problem
since the (untrusted) guest controls this address.

Balanced CALL/RET instruction pairs such as in step #5 are not affected.
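[ ed: To make the CALL/RET accounting in steps #3-#6 concrete, a toy model
  of the RSB as a LIFO of predicted return targets. This only restates the
  commit text above; it is not a hardware description. ]

#include <stdio.h>

#define RSB_DEPTH 16
static const char *rsb[RSB_DEPTH];
static int rsb_top;

static void on_call(const char *ret_site)
{
	if (rsb_top < RSB_DEPTH)
		rsb[rsb_top++] = ret_site;	/* a CALL pushes its return site */
}

static const char *on_ret(void)
{
	/* A RET consumes the most recent entry; an unbalanced RET consumes
	 * whatever an earlier, possibly guest-controlled, CALL left behind.
	 */
	return rsb_top > 0 ? rsb[--rsb_top] : "non-RSB fallback predictor";
}

int main(void)
{
	on_call("guest-chosen target");		/* step 3: CALL in the guest */
	/* step 4: VM exit; per the text above, restoring IBRS=1 flushes the
	 * RSB except for this one lingering entry (the PBRSB corner case).
	 */
	on_call("host cleanup return site");	/* step 5: balanced host CALL */
	printf("step 5 RET predicts: %s\n", on_ret());	/* safe host entry   */
	printf("step 6 RET predicts: %s\n", on_ret());	/* guest-controlled! */
	return 0;
}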
== Solution == The PBRSB issue affects a wide variety of Intel processors which support eIBRS. But not all of them need mitigation. Today, X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e., eIBRS systems which enable legacy IBRS explicitly. However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT and most of them need a new mitigation. Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT. The lighter-weight mitigation performs a CALL instruction which is immediately followed by a speculative execution barrier (INT3). This steers speculative execution to the barrier -- just like a retpoline -- which ensures that speculation can never reach an unbalanced RET. Then, ensure this CALL is retired before continuing execution with an LFENCE. In other words, the window of exposure is opened at VM exit where RET behavior is troublesome. While the window is open, force RSB predictions sampling for RET targets to a dead end at the INT3. Close the window with the LFENCE. There is a subset of eIBRS systems which are not vulnerable to PBRSB. Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO. [ bp: Massage, incorporate review comments from Andy Cooper. ] Signed-off-by: Daniel Sneddon Co-developed-by: Pawan Gupta Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov [ bp: Adjust patch to account for kvm entry being in c ] Signed-off-by: Suraj Jitindar Singh --- Documentation/admin-guide/hw-vuln/spectre.rst | 8 ++ arch/x86/include/asm/cpufeatures.h | 2 + arch/x86/include/asm/msr-index.h | 4 + arch/x86/include/asm/nospec-branch.h | 15 +++- arch/x86/kernel/cpu/bugs.c | 87 ++++++++++++++----- arch/x86/kernel/cpu/common.c | 12 ++- arch/x86/kvm/vmx.c | 4 +- tools/arch/x86/include/asm/cpufeatures.h | 1 + 8 files changed, 103 insertions(+), 30 deletions(-) diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst index 6bd97cd50d62..7e061ed449aa 100644 --- a/Documentation/admin-guide/hw-vuln/spectre.rst +++ b/Documentation/admin-guide/hw-vuln/spectre.rst @@ -422,6 +422,14 @@ The possible values in this file are: 'RSB filling' Protection of RSB on context switch enabled ============= =========================================== + - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status: + + =========================== ======================================================= + 'PBRSB-eIBRS: SW sequence' CPU is affected and protection of RSB on VMEXIT enabled + 'PBRSB-eIBRS: Vulnerable' CPU is vulnerable + 'PBRSB-eIBRS: Not affected' CPU is not affected by PBRSB + =========================== ======================================================= + Full mitigation might require a microcode update from the CPU vendor. When the necessary microcode is not available, the kernel will report vulnerability. 
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index dcb37f07874d..840d8981567e 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -291,6 +291,7 @@ #define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ +#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */ /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */ #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */ @@ -405,5 +406,6 @@ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ #define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */ #define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */ +#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index a1a0b90293aa..92c6054f0a00 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -130,6 +130,10 @@ * are restricted to targets in * kernel. */ +#define ARCH_CAP_PBRSB_NO BIT(24) /* + * Not susceptible to Post-Barrier + * Return Stack Buffer Predictions. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 0c71e0b0dc6f..118441f53399 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -59,6 +59,13 @@ /* barrier for jnz misprediction */ \ lfence; +#define ISSUE_UNBALANCED_RET_GUARD(sp) \ + call 992f; \ + int3; \ +992: \ + add $(BITS_PER_LONG/8), sp; \ + lfence; + #ifdef __ASSEMBLY__ /* @@ -263,9 +270,11 @@ static __always_inline void vmexit_fill_RSB(void) unsigned long loops; asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE - ALTERNATIVE("jmp 910f", - __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)), - X86_FEATURE_RSB_VMEXIT) + ALTERNATIVE_2("jmp 910f", "", X86_FEATURE_RSB_VMEXIT, + "jmp 911f", X86_FEATURE_RSB_VMEXIT_LITE) + __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)) + "911:" + __stringify(ISSUE_UNBALANCED_RET_GUARD(%1)) "910:" : "=r" (loops), ASM_CALL_CONSTRAINT : : "memory" ); diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 0af5e4d6b806..05dcdb419abd 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1196,6 +1196,54 @@ static void __init spec_ctrl_disable_kernel_rrsba(void) } } +static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode) +{ + /* + * Similar to context switches, there are two types of RSB attacks + * after VM exit: + * + * 1) RSB underflow + * + * 2) Poisoned RSB entry + * + * When retpoline is enabled, both are mitigated by filling/clearing + * the RSB. + * + * When IBRS is enabled, while #1 would be mitigated by the IBRS branch + * prediction isolation protections, RSB still needs to be cleared + * because of #2. Note that SMEP provides no protection here, unlike + * user-space-poisoned RSB entries. 
+	 *
+	 * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB
+	 * bug is present then a LITE version of RSB protection is required,
+	 * just a single call needs to retire before a RET is executed.
+	 */
+	switch (mode) {
+	case SPECTRE_V2_NONE:
+		return;
+
+	case SPECTRE_V2_EIBRS_LFENCE:
+	case SPECTRE_V2_EIBRS:
+		if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB) &&
+		    (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)) {
+			setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE);
+			pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n");
+		}
+		return;
+
+	case SPECTRE_V2_EIBRS_RETPOLINE:
+	case SPECTRE_V2_RETPOLINE:
+	case SPECTRE_V2_LFENCE:
+	case SPECTRE_V2_IBRS:
+		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+		pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n");
+		return;
+	}
+
+	pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit");
+	dump_stack();
+}
+
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -1345,28 +1393,7 @@ static void __init spectre_v2_select_mitigation(void)
 	setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 	pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/*
-	 * Similar to context switches, there are two types of RSB attacks
-	 * after vmexit:
-	 *
-	 * 1) RSB underflow
-	 *
-	 * 2) Poisoned RSB entry
-	 *
-	 * When retpoline is enabled, both are mitigated by filling/clearing
-	 * the RSB.
-	 *
-	 * When IBRS is enabled, while #1 would be mitigated by the IBRS branch
-	 * prediction isolation protections, RSB still needs to be cleared
-	 * because of #2. Note that SMEP provides no protection here, unlike
-	 * user-space-poisoned RSB entries.
-	 *
-	 * eIBRS, on the other hand, has RSB-poisoning protections, so it
-	 * doesn't need RSB clearing after vmexit.
-	 */
-	if (boot_cpu_has(X86_FEATURE_RETPOLINE) ||
-	    boot_cpu_has(X86_FEATURE_KERNEL_IBRS))
-		setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+	spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
 
 	/*
 	 * Retpoline protects the kernel, but doesn't protect firmware. IBRS
@@ -2094,6 +2121,19 @@ static char *ibpb_state(void)
 	return "";
 }
 
+static char *pbrsb_eibrs_state(void)
+{
+	if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) {
+		if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) ||
+		    boot_cpu_has(X86_FEATURE_RSB_VMEXIT))
+			return ", PBRSB-eIBRS: SW sequence";
+		else
+			return ", PBRSB-eIBRS: Vulnerable";
+	} else {
+		return ", PBRSB-eIBRS: Not affected";
+	}
+}
+
 static ssize_t spectre_v2_show_state(char *buf)
 {
 	if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
@@ -2106,12 +2146,13 @@ static ssize_t spectre_v2_show_state(char *buf)
 	    spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
 		return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
 
-	return sprintf(buf, "%s%s%s%s%s%s\n",
+	return sprintf(buf, "%s%s%s%s%s%s%s\n",
 		       spectre_v2_strings[spectre_v2_enabled],
 		       ibpb_state(),
 		       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 		       stibp_state(),
", RSB filling" : "", + pbrsb_eibrs_state(), spectre_v2_module_string()); } diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 4cbbe266a2b2..2ad6d3b02a38 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -906,6 +906,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c) #define NO_SWAPGS BIT(6) #define NO_ITLB_MULTIHIT BIT(7) #define NO_MMIO BIT(8) +#define NO_EIBRS_PBRSB BIT(9) #define VULNWL(_vendor, _family, _model, _whitelist) \ { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist } @@ -947,7 +948,7 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO), VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO), - VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO), + VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO | NO_EIBRS_PBRSB), /* * Technically, swapgs isn't serializing on AMD (despite it previously @@ -957,7 +958,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { * good enough for our purposes. */ - VULNWL_INTEL(ATOM_TREMONT_X, NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_TREMONT, NO_EIBRS_PBRSB), + VULNWL_INTEL(ATOM_TREMONT_L, NO_EIBRS_PBRSB), + VULNWL_INTEL(ATOM_TREMONT_X, NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB), /* AMD Family 0xf - 0x12 */ VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_MMIO), @@ -1129,6 +1132,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) setup_force_cpu_bug(X86_BUG_RETBLEED); } + if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) && + !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) && + !(ia32_cap & ARCH_CAP_PBRSB_NO)) + setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB); + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index d6ae5237fc38..aea4c497da3f 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -10001,8 +10001,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) * entries and (in some cases) RSB underflow. * * eIBRS has its own protection against poisoned RSB, so it doesn't - * need the RSB filling sequence. But it does need to be enabled - * before the first unbalanced RET. + * need the RSB filling sequence. But it does need to be enabled, and a + * single call to retire, before the first unbalanced RET. * * So no RETs before vmx_spec_ctrl_restore_host() below. */ diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h index bb5861adb5a0..8fd46c879348 100644 --- a/tools/arch/x86/include/asm/cpufeatures.h +++ b/tools/arch/x86/include/asm/cpufeatures.h @@ -270,6 +270,7 @@ /* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:0 (EDX), word 11 */ #define X86_FEATURE_CQM_LLC (11*32+ 1) /* LLC QoS if 1 */ +#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */ /* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:1 (EDX), word 12 */ #define X86_FEATURE_CQM_OCCUP_LLC (12*32+ 0) /* LLC occupancy monitoring */