From patchwork Thu Nov 17 09:19:32 2022
X-Patchwork-Submitter: Suleiman Souhlal
X-Patchwork-Id: 13046372
Date: Thu, 17 Nov 2022 18:19:32 +0900
In-Reply-To: <20221117091952.1940850-1-suleiman@google.com>
Message-Id: <20221117091952.1940850-15-suleiman@google.com>
References: <20221117091952.1940850-1-suleiman@google.com>
Subject: [PATCH 4.19 14/34] x86/bugs: Optimize SPEC_CTRL MSR writes
From: Suleiman Souhlal
To: stable@vger.kernel.org
Cc: x86@kernel.org, kvm@vger.kernel.org, bp@alien8.de, pbonzini@redhat.com,
    peterz@infradead.org, jpoimboe@kernel.org, cascardo@canonical.com,
    surajjs@amazon.com, ssouhlal@FreeBSD.org, suleiman@google.com
X-Mailing-List: kvm@vger.kernel.org

From: Peter Zijlstra

commit c779bc1a9002fa474175b80e72b85c9bf628abb0 upstream.

When changing SPEC_CTRL for user control, the WRMSR can be delayed until
return-to-user when KERNEL_IBRS has been enabled. This avoids an MSR
write during context switch.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
Signed-off-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Suleiman Souhlal
---
 arch/x86/include/asm/nospec-branch.h |  2 +-
 arch/x86/kernel/cpu/bugs.c           | 18 ++++++++++++------
 arch/x86/kernel/process.c            |  2 +-
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 6e2ff31584ae..c87ca2596c8a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -307,7 +307,7 @@ static inline void indirect_branch_prediction_barrier(void)
 
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
-extern void write_spec_ctrl_current(u64 val);
+extern void write_spec_ctrl_current(u64 val, bool force);
 
 /*
  * With retpoline, we must use IBRS to restrict branch prediction
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c55570a7c91a..ceb7cf1a1a3c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -61,13 +61,19 @@ static DEFINE_MUTEX(spec_ctrl_mutex);
  * Keep track of the SPEC_CTRL MSR value for the current task, which may differ
  * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update().
  */
-void write_spec_ctrl_current(u64 val)
+void write_spec_ctrl_current(u64 val, bool force)
 {
 	if (this_cpu_read(x86_spec_ctrl_current) == val)
 		return;
 
 	this_cpu_write(x86_spec_ctrl_current, val);
-	wrmsrl(MSR_IA32_SPEC_CTRL, val);
+
+	/*
+	 * When KERNEL_IBRS this MSR is written on return-to-user, unless
+	 * forced the update can be delayed until that time.
+	 */
+	if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS))
+		wrmsrl(MSR_IA32_SPEC_CTRL, val);
 }
 
 /*
@@ -1195,7 +1201,7 @@ static void __init spectre_v2_select_mitigation(void)
 	if (spectre_v2_in_eibrs_mode(mode)) {
 		/* Force it so VMEXIT will restore correctly */
 		x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
-		write_spec_ctrl_current(x86_spec_ctrl_base);
+		write_spec_ctrl_current(x86_spec_ctrl_base, true);
 	}
 
 	switch (mode) {
@@ -1250,7 +1256,7 @@ static void __init spectre_v2_select_mitigation(void)
 
 static void update_stibp_msr(void * __unused)
 {
-	write_spec_ctrl_current(x86_spec_ctrl_base);
+	write_spec_ctrl_current(x86_spec_ctrl_base, true);
 }
 
 /* Update x86_spec_ctrl_base in case SMT state changed. */
@@ -1493,7 +1499,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
 		x86_amd_ssb_disable();
 	} else {
 		x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
-		write_spec_ctrl_current(x86_spec_ctrl_base);
+		write_spec_ctrl_current(x86_spec_ctrl_base, true);
 	}
 }
 
@@ -1698,7 +1704,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
 void x86_spec_ctrl_setup_ap(void)
 {
 	if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
-		write_spec_ctrl_current(x86_spec_ctrl_base);
+		write_spec_ctrl_current(x86_spec_ctrl_base, true);
 
 	if (ssb_mode == SPEC_STORE_BYPASS_DISABLE)
 		x86_amd_ssb_disable();
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index d697ccbf0cd2..a95b9e090f9e 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -434,7 +434,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
 	}
 
 	if (updmsr)
-		write_spec_ctrl_current(msr, false);
 }
 
 static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
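
[Editorial note, not part of the patch] For readers following the backport, the standalone C sketch below restates the control flow that the patched write_spec_ctrl_current() implements: skip the write when the cached value is unchanged, and defer a non-forced update when kernel IBRS is enabled because the MSR is rewritten on return-to-user anyway. It is an illustration only, not kernel code; percpu_spec_ctrl, kernel_ibrs_enabled and wrmsr_spec_ctrl are hypothetical stand-ins for x86_spec_ctrl_current, X86_FEATURE_KERNEL_IBRS and wrmsrl(MSR_IA32_SPEC_CTRL, ...).

/* Illustrative userspace sketch only -- not kernel code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t percpu_spec_ctrl;        /* stand-in for x86_spec_ctrl_current */
static bool kernel_ibrs_enabled = true;  /* stand-in for X86_FEATURE_KERNEL_IBRS */

static void wrmsr_spec_ctrl(uint64_t val)
{
        /* stand-in for wrmsrl(MSR_IA32_SPEC_CTRL, val) */
        printf("WRMSR SPEC_CTRL <- 0x%llx\n", (unsigned long long)val);
}

static void write_spec_ctrl_current(uint64_t val, bool force)
{
        if (percpu_spec_ctrl == val)
                return;                  /* value unchanged: nothing to write */

        percpu_spec_ctrl = val;

        /*
         * With kernel IBRS the MSR is rewritten on return-to-user anyway,
         * so a non-forced update can be deferred until that point.
         */
        if (force || !kernel_ibrs_enabled)
                wrmsr_spec_ctrl(val);
}

int main(void)
{
        write_spec_ctrl_current(0x1, true);  /* e.g. boot-time setup: forced, MSR written now */
        write_spec_ctrl_current(0x3, false); /* e.g. context switch: deferred, cache only */
        return 0;
}

Built as an ordinary userspace program, the two calls in main() show the difference the force flag makes: the forced call prints a WRMSR line immediately, while the deferred call only updates the per-CPU cache.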