From patchwork Tue Jul 20 04:09:01 2021
X-Patchwork-Submitter: Kuppuswamy Sathyanarayanan
X-Patchwork-Id: 12387469
From: Kuppuswamy Sathyanarayanan
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Peter Zijlstra,
    Andy Lutomirski
Cc: Peter H Anvin, Dave Hansen, Tony Luck, Dan Williams, Andi Kleen,
    Kirill Shutemov, Sean Christopherson, Kuppuswamy Sathyanarayanan,
    x86@kernel.org, linux-kernel@vger.kernel.org, Rafael J. Wysocki,
    linux-acpi@vger.kernel.org
Subject: [PATCH v3 5/5] x86: Skip WBINVD instruction for VM guest
Date: Mon, 19 Jul 2021 21:09:01 -0700
Message-Id: <20210720040901.2121268-6-sathyanarayanan.kuppuswamy@linux.intel.com>
In-Reply-To: <20210720040901.2121268-1-sathyanarayanan.kuppuswamy@linux.intel.com>
References: <20210720040901.2121268-1-sathyanarayanan.kuppuswamy@linux.intel.com>
X-Mailing-List: linux-acpi@vger.kernel.org

VM guests that support ACPI use standard ACPI mechanisms to signal
sleep state entry (including reboot) to the host. The ACPI
specification mandates WBINVD on any sleep state entry, with the
expectation that the platform is only responsible for maintaining the
state of memory over sleep states, not for preserving dirty data in
any CPU caches.

ACPI cache flushing requirements pre-date the advent of
virtualization. Since guest sleep state entry does not affect any host
power rails, the guest is not required to flush caches. The host is
responsible for maintaining cache state over its own bare-metal sleep
state transitions that power off the cache. A TDX guest, unlike a
typical guest, will machine check if the CPU cache is powered off.

Cc: Rafael J. Wysocki
Cc: linux-acpi@vger.kernel.org
Reviewed-by: Dan Williams
Acked-by: Rafael J. Wysocki
Signed-off-by: Kuppuswamy Sathyanarayanan
---
 arch/x86/include/asm/acenv.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h
index 9aff97f0de7f..d4162e94bee8 100644
--- a/arch/x86/include/asm/acenv.h
+++ b/arch/x86/include/asm/acenv.h
@@ -10,10 +10,15 @@
 #define _ASM_X86_ACENV_H
 
 #include <asm/special_insns.h>
+#include <asm/cpufeature.h>
 
 /* Asm macros */
 
-#define ACPI_FLUSH_CPU_CACHE()	wbinvd()
+#define ACPI_FLUSH_CPU_CACHE()				\
+do {							\
+	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))	\
+		wbinvd();				\
+} while (0)
 
 int __acpi_acquire_global_lock(unsigned int *lock);
 int __acpi_release_global_lock(unsigned int *lock);
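
[Editor's note, not part of the patch] X86_FEATURE_HYPERVISOR is the
kernel's flag for the CPUID "hypervisor present" bit (CPUID leaf 1,
ECX bit 31), which hypervisors set for their guests. The sketch below
is a hypothetical user-space program, assuming GCC/clang's <cpuid.h>,
that reads that same bit to show when the reworked
ACPI_FLUSH_CPU_CACHE() would skip WBINVD; the file name and printed
messages are illustrative only.

/* hv-check.c - build with: gcc -O2 -o hv-check hv-check.c (x86 only) */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 1: processor feature information. */
	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 1 not available\n");
		return 1;
	}

	/* ECX bit 31 is set when running under a hypervisor. */
	if (ecx & (1u << 31))
		printf("VM guest: ACPI_FLUSH_CPU_CACHE() would skip WBINVD\n");
	else
		printf("Bare metal: ACPI_FLUSH_CPU_CACHE() would execute WBINVD\n");

	return 0;
}

The patch deliberately uses a generic hypervisor check rather than a
TDX-specific one, so the skip applies to all VM guests while bare-metal
behavior is unchanged.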