From patchwork Thu Feb 27 01:48:52 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993515
Reply-To: Sean Christopherson
Date: Wed, 26 Feb 2025 17:48:52 -0800
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
References: <20250227014858.3244505-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Message-ID: <20250227014858.3244505-2-seanjc@google.com>
Subject: [PATCH 1/7] KVM: SVM: Remove wbinvd in sev_vm_destroy()
From: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang

From: Zheyun Shen

Before sev_vm_destroy() is called, kvm_arch_guest_memory_reclaimed()
has been called for SEV and SEV-ES, and kvm_arch_gmem_invalidate() has
been called for SEV-SNP. These functions have already handled flushing
the memory. Therefore, this wbinvd_on_all_cpus() can simply be dropped.

Suggested-by: Sean Christopherson
Signed-off-by: Zheyun Shen
Reviewed-by: Tom Lendacky
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 74525651770a..d934d788ac39 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2879,12 +2879,6 @@ void sev_vm_destroy(struct kvm *kvm)
 		return;
 	}
 
-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();
 
 	/*
 	 * if userspace was terminated before unregistering the memory regions

From patchwork Thu Feb 27 01:48:53 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993516
Reply-To: Sean Christopherson
Date: Wed, 26 Feb 2025 17:48:53 -0800
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
References: <20250227014858.3244505-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Message-ID: <20250227014858.3244505-3-seanjc@google.com>
Subject: [PATCH 2/7] x86, lib: Drop the unused return value from wbinvd_on_all_cpus()
From: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang

Drop wbinvd_on_all_cpus()'s return value; both the "real" version and
the stub always return '0', and none of the callers check the return.
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/smp.h | 5 ++---
 arch/x86/lib/cache-smp.c   | 3 +--
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index ca073f40698f..ee61e322e2a1 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -111,7 +111,7 @@ void __noreturn hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
-int wbinvd_on_all_cpus(void);
+void wbinvd_on_all_cpus(void);
 
 void smp_kick_mwait_play_dead(void);
 
@@ -154,10 +154,9 @@ static inline struct cpumask *cpu_l2c_shared_mask(int cpu)
 #else /* !CONFIG_SMP */
 
 #define wbinvd_on_cpu(cpu)	wbinvd()
-static inline int wbinvd_on_all_cpus(void)
+static inline void wbinvd_on_all_cpus(void)
 {
 	wbinvd();
-	return 0;
 }
 
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)

diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index 7af743bd3b13..079c3f3cd32c 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -14,9 +14,8 @@ void wbinvd_on_cpu(int cpu)
 }
 EXPORT_SYMBOL(wbinvd_on_cpu);
 
-int wbinvd_on_all_cpus(void)
+void wbinvd_on_all_cpus(void)
 {
 	on_each_cpu(__wbinvd, NULL, 1);
-	return 0;
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);

From patchwork Thu Feb 27 01:48:54 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993517
Reply-To: Sean Christopherson
Date: Wed, 26 Feb 2025 17:48:54 -0800
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
References: <20250227014858.3244505-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Message-ID: <20250227014858.3244505-4-seanjc@google.com>
Subject: [PATCH 3/7] x86, lib: Add WBNOINVD helper functions
From: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang

From: Kevin Loughlin

In line with WBINVD usage, add WBNOINVD helper functions. Fall back to
WBINVD (via alternative()) if WBNOINVD isn't supported, as WBINVD
provides a superset of functionality, just more slowly.

Note, alternative() ensures compatibility with early boot code as needed.

Signed-off-by: Kevin Loughlin
Reviewed-by: Tom Lendacky
[sean: massage changelog and comments, use ASM_WBNOINVD and _ASM_BYTES]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/smp.h           |  6 ++++++
 arch/x86/include/asm/special_insns.h | 19 ++++++++++++++++++-
 arch/x86/lib/cache-smp.c             | 11 +++++++++++
 3 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index ee61e322e2a1..d4c50128aa6c 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -112,6 +112,7 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 void wbinvd_on_all_cpus(void);
+void wbnoinvd_on_all_cpus(void);
 
 void smp_kick_mwait_play_dead(void);
 
@@ -159,6 +160,11 @@ static inline void wbinvd_on_all_cpus(void)
 	wbinvd();
 }
 
+static inline void wbnoinvd_on_all_cpus(void)
+{
+	wbnoinvd();
+}
+
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 {
 	return (struct cpumask *)cpumask_of(0);

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 03e7c2d49559..962477a83584 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -117,7 +117,24 @@ static inline void wrpkru(u32 pkru)
 
 static __always_inline void wbinvd(void)
 {
-	asm volatile("wbinvd": : :"memory");
+	asm volatile("wbinvd" : : : "memory");
+}
+
+/* Instruction encoding provided for binutils backwards compatibility. */
+#define ASM_WBNOINVD _ASM_BYTES(0xf3,0x0f,0x09)
+
+/*
+ * Cheaper version of wbinvd(). Call when caches need to be written back but
+ * not invalidated.
+ */
+static __always_inline void wbnoinvd(void)
+{
+	/*
+	 * If WBNOINVD is unavailable, fall back to the compatible but
+	 * more destructive WBINVD (which still writes the caches back
+	 * but also invalidates them).
+	 */
+	alternative("wbinvd", ASM_WBNOINVD, X86_FEATURE_WBNOINVD);
 }
 
 static inline unsigned long __read_cr4(void)

diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index 079c3f3cd32c..1789db5d8825 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -19,3 +19,14 @@ void wbinvd_on_all_cpus(void)
 	on_each_cpu(__wbinvd, NULL, 1);
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);
+
+static void __wbnoinvd(void *dummy)
+{
+	wbnoinvd();
+}
+
+void wbnoinvd_on_all_cpus(void)
+{
+	on_each_cpu(__wbnoinvd, NULL, 1);
+}
+EXPORT_SYMBOL(wbnoinvd_on_all_cpus);
From patchwork Thu Feb 27 01:48:55 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993518
Reply-To: Sean Christopherson
Date: Wed, 26 Feb 2025 17:48:55 -0800
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
References: <20250227014858.3244505-1-seanjc@google.com>
X-Mailing-List: kvm@vger.kernel.org
Message-ID: <20250227014858.3244505-5-seanjc@google.com>
Subject: [PATCH 4/7] KVM: SEV: Prefer WBNOINVD over WBINVD for cache maintenance efficiency
From: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang

From: Kevin Loughlin

AMD CPUs currently execute WBINVD in the host when unregistering SEV
guest memory or when deactivating SEV guests. Such cache maintenance is
performed to prevent data corruption, wherein the encrypted (C=1)
version of a dirty cache line might otherwise only be written back
after the memory is written in a different context (ex: C=0), yielding
corruption. However, WBINVD is performance-costly, especially because
it invalidates processor caches.

Strictly-speaking, unless the SEV ASID is being recycled (meaning the
SNP firmware requires the use of WBINVD prior to DF_FLUSH), the cache
invalidation triggered by WBINVD is unnecessary; only the writeback is
needed to prevent data corruption in remaining scenarios.

To improve performance in these scenarios, use WBNOINVD when available
instead of WBINVD. WBNOINVD still writes back all dirty lines
(preventing host data corruption by SEV guests) but does *not*
invalidate processor caches. Note that the implementation of wbnoinvd()
ensures fall back to WBINVD if WBNOINVD is unavailable.

In anticipation of forthcoming optimizations to limit the WBNOINVD only
to physical CPUs that have executed SEV guests, place the call to
wbnoinvd_on_all_cpus() in a wrapper function sev_writeback_caches().

Signed-off-by: Kevin Loughlin
Reviewed-by: Mingwei Zhang
Reviewed-by: Tom Lendacky
Link: https://lore.kernel.org/r/20250201000259.3289143-3-kevinloughlin@google.com
[sean: tweak comment regarding CLFUSH]
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c | 43 ++++++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index d934d788ac39..4238af23ab1b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -116,6 +116,7 @@ static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
 	 */
 	down_write(&sev_deactivate_lock);
 
+	/* SNP firmware requires use of WBINVD for ASID recycling. */
 	wbinvd_on_all_cpus();
 
 	if (sev_snp_enabled)
@@ -705,6 +706,18 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 	}
 }
 
+static void sev_writeback_caches(void)
+{
+	/*
+	 * Ensure that all dirty guest tagged cache entries are written back
+	 * before releasing the pages back to the system for use. CLFLUSH will
+	 * not do this without SME_COHERENT, and flushing many cache lines
+	 * individually is slower than blasting WBINVD for large VMs, so issue
+	 * WBNOINVD (or WBINVD if the "no invalidate" variant is unsupported).
+	 */
+	wbnoinvd_on_all_cpus();
+}
+
 static unsigned long get_num_contig_pages(unsigned long idx,
 				struct page **inpages, unsigned long npages)
 {
@@ -2753,12 +2766,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 		goto failed;
 	}
 
-	/*
-	 * Ensure that all guest tagged cache entries are flushed before
-	 * releasing the pages back to the system for use. CLFLUSH will
-	 * not do this, so issue a WBINVD.
-	 */
-	wbinvd_on_all_cpus();
+	sev_writeback_caches();
 
 	__unregister_enc_region_locked(kvm, region);
 
@@ -3110,30 +3118,29 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 
 	/*
 	 * VM Page Flush takes a host virtual address and a guest ASID. Fall
-	 * back to WBINVD if this faults so as not to make any problems worse
-	 * by leaving stale encrypted data in the cache.
+	 * back to full writeback of caches if this faults so as not to make
+	 * any problems worse by leaving stale encrypted data in the cache.
 	 */
 	if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH, addr | asid)))
-		goto do_wbinvd;
+		goto do_sev_writeback_caches;
 
 	return;
 
-do_wbinvd:
-	wbinvd_on_all_cpus();
+do_sev_writeback_caches:
+	sev_writeback_caches();
 }
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
 {
 	/*
 	 * With SNP+gmem, private/encrypted memory is unreachable via the
-	 * hva-based mmu notifiers, so these events are only actually
-	 * pertaining to shared pages where there is no need to perform
-	 * the WBINVD to flush associated caches.
+	 * hva-based mmu notifiers, i.e. these events are explicitly scoped to
+	 * shared pages, where there's no need to flush caches.
 	 */
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
-	wbinvd_on_all_cpus();
+	sev_writeback_caches();
 }
 
 void sev_free_vcpu(struct kvm_vcpu *vcpu)
@@ -3856,8 +3863,8 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
 	 * guest-mapped page rather than the initial one allocated
 	 * by KVM in svm->sev_es.vmsa. In theory, svm->sev_es.vmsa
 	 * could be free'd and cleaned up here, but that involves
-	 * cleanups like wbinvd_on_all_cpus() which would ideally
-	 * be handled during teardown rather than guest boot.
+	 * cleanups like flushing caches, which would ideally be
+	 * handled during teardown rather than guest boot.
 	 * Deferring that also allows the existing logic for SEV-ES
 	 * VMSAs to be re-used with minimal SNP-specific changes.
 	 */
@@ -4905,7 +4912,7 @@ void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 
 	/*
 	 * SEV-ES avoids host/guest cache coherency issues through
-	 * WBINVD hooks issued via MMU notifiers during run-time, and
+	 * WBNOINVD hooks issued via MMU notifiers during run-time, and
 	 * KVM's VM destroy path at shutdown. Those MMU notifier events
 	 * don't cover gmem since there is no requirement to map pages
 	 * to a HVA in order to use them for a running guest. While the

From patchwork Thu Feb 27 01:48:56 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993519
98e67ed59e1d1-2fce874088emr34953472a91.31.1740620952821; Wed, 26 Feb 2025 17:49:12 -0800 (PST) Reply-To: Sean Christopherson Date: Wed, 26 Feb 2025 17:48:56 -0800 In-Reply-To: <20250227014858.3244505-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250227014858.3244505-1-seanjc@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250227014858.3244505-6-seanjc@google.com> Subject: [PATCH 5/7] KVM: x86: Use wbinvd_on_cpu() instead of an open-coded equivalent From: Sean Christopherson To: Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, Sean Christopherson , Paolo Bonzini Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen , Tom Lendacky , Kevin Loughlin , Mingwei Zhang Use wbinvd_on_cpu() to target a single CPU instead of open-coding an equivalent. In addition to deduplicating code, this will allow removing KVM's wbinvd_ipi() once the other usage is gone. No functional change intended. 
Signed-off-by: Sean Christopherson
Reviewed-by: Tom Lendacky
---
 arch/x86/kvm/x86.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 58b82d6fd77c..eab1e64a19a2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4983,8 +4983,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		if (kvm_x86_call(has_wbinvd_exit)())
 			cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
 		else if (vcpu->cpu != -1 && vcpu->cpu != cpu)
-			smp_call_function_single(vcpu->cpu,
-						 wbinvd_ipi, NULL, 1);
+			wbinvd_on_cpu(vcpu->cpu);
 	}
 
 	kvm_x86_call(vcpu_load)(vcpu, cpu);

From patchwork Thu Feb 27 01:48:57 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993520
Date: Wed, 26 Feb 2025 17:48:57 -0800
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
Message-ID: <20250227014858.3244505-7-seanjc@google.com>
Subject: [PATCH 6/7] x86, lib: Add wbinvd and wbnoinvd helpers to target multiple CPUs
From: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang

From: Zheyun Shen

Extract KVM's open-coded calls that write back caches on multiple CPUs to
common library helpers for both WBINVD and WBNOINVD (KVM will use both).
Put the onus on the caller to check for a non-empty mask to simplify the
SMP=n implementation, e.g. so that it doesn't need to check that the one
and only CPU in the system is present in the mask.
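[Editor's note: the "caller checks the mask" contract described above is what lets the SMP=n stubs below ignore the mask entirely. A minimal runnable sketch of that contract, with `cpumask` modelled as a plain bitmask and the WBINVD broadcast replaced by a counter; `flushes` and `writeback_caches()` are illustrative names, not kernel APIs.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t cpumask_t;         /* toy cpumask: one bit per CPU */

static int flushes;                 /* counts simulated WBINVD broadcasts */

/* SMP=n-style helper: the mask is ignored, the lone CPU is simply flushed. */
static void wbinvd_on_many_cpus(cpumask_t cpus)
{
	(void)cpus;
	flushes++;
}

/* Caller-side pattern: the empty-mask check lives here, not in the helper. */
static void writeback_caches(cpumask_t have_run_cpus)
{
	if (have_run_cpus == 0)         /* cpumask_empty(): nothing has run, skip */
		return;
	wbinvd_on_many_cpus(have_run_cpus);
}
```

Because the helper never has to ask "is the one and only CPU in the mask?", the UP stub reduces to a bare `wbinvd()`, exactly as in the `asm/smp.h` hunk below.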
Signed-off-by: Zheyun Shen
Reviewed-by: Tom Lendacky
Link: https://lore.kernel.org/r/20250128015345.7929-2-szy0127@sjtu.edu.cn
[sean: move to lib, add SMP=n helpers, clarify usage]
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/smp.h | 12 ++++++++++++
 arch/x86/kvm/x86.c         |  8 +-------
 arch/x86/lib/cache-smp.c   | 12 ++++++++++++
 3 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index d4c50128aa6c..df828b36e33f 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -112,7 +112,9 @@ void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
 void wbinvd_on_all_cpus(void);
+void wbinvd_on_many_cpus(struct cpumask *cpus);
 void wbnoinvd_on_all_cpus(void);
+void wbnoinvd_on_many_cpus(struct cpumask *cpus);
 
 void smp_kick_mwait_play_dead(void);
 
@@ -160,11 +162,21 @@ static inline void wbinvd_on_all_cpus(void)
 	wbinvd();
 }
 
+static inline void wbinvd_on_many_cpus(struct cpumask *cpus)
+{
+	wbinvd();
+}
+
 static inline void wbnoinvd_on_all_cpus(void)
 {
 	wbnoinvd();
 }
 
+static inline void wbnoinvd_on_many_cpus(struct cpumask *cpus)
+{
+	wbnoinvd();
+}
+
 static inline struct cpumask *cpu_llc_shared_mask(int cpu)
 {
 	return (struct cpumask *)cpumask_of(0);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index eab1e64a19a2..8146c3e7eb40 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4957,11 +4957,6 @@ long kvm_arch_dev_ioctl(struct file *filp,
 	return r;
 }
 
-static void wbinvd_ipi(void *garbage)
-{
-	wbinvd();
-}
-
 static bool need_emulate_wbinvd(struct kvm_vcpu *vcpu)
 {
 	return kvm_arch_has_noncoherent_dma(vcpu->kvm);
@@ -8236,8 +8231,7 @@ static int kvm_emulate_wbinvd_noskip(struct kvm_vcpu *vcpu)
 		int cpu = get_cpu();
 
 		cpumask_set_cpu(cpu, vcpu->arch.wbinvd_dirty_mask);
-		on_each_cpu_mask(vcpu->arch.wbinvd_dirty_mask,
-				 wbinvd_ipi, NULL, 1);
+		wbinvd_on_many_cpus(vcpu->arch.wbinvd_dirty_mask);
 		put_cpu();
 		cpumask_clear(vcpu->arch.wbinvd_dirty_mask);
 	} else
diff --git a/arch/x86/lib/cache-smp.c b/arch/x86/lib/cache-smp.c
index 1789db5d8825..ebbc91b3ac67 100644
--- a/arch/x86/lib/cache-smp.c
+++ b/arch/x86/lib/cache-smp.c
@@ -20,6 +20,12 @@ void wbinvd_on_all_cpus(void)
 }
 EXPORT_SYMBOL(wbinvd_on_all_cpus);
 
+void wbinvd_on_many_cpus(struct cpumask *cpus)
+{
+	on_each_cpu_mask(cpus, __wbinvd, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(wbinvd_on_many_cpus);
+
 static void __wbnoinvd(void *dummy)
 {
 	wbnoinvd();
@@ -30,3 +36,9 @@ void wbnoinvd_on_all_cpus(void)
 	on_each_cpu(__wbnoinvd, NULL, 1);
 }
 EXPORT_SYMBOL(wbnoinvd_on_all_cpus);
+
+void wbnoinvd_on_many_cpus(struct cpumask *cpus)
+{
+	on_each_cpu_mask(cpus, __wbnoinvd, NULL, 1);
+}
+EXPORT_SYMBOL_GPL(wbnoinvd_on_many_cpus);

[Editor's note: the posted SMP=n stub read `static inline wbnoinvd_on_many_cpus(...)`, omitting the `void` return type; it is corrected above, as an implicit return type would not compile as kernel C.]

From patchwork Thu Feb 27 01:48:58 2025
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13993521
Date: Wed, 26 Feb 2025 17:48:58 -0800
In-Reply-To: <20250227014858.3244505-1-seanjc@google.com>
Message-ID: <20250227014858.3244505-8-seanjc@google.com>
Subject: [PATCH 7/7] KVM: SVM: Flush cache only on CPUs running SEV guest
From: Sean Christopherson
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
    Sean Christopherson, Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Zheyun Shen,
    Tom Lendacky, Kevin Loughlin, Mingwei Zhang

From: Zheyun Shen

On AMD CPUs that do not themselves ensure cache consistency, each memory
page reclamation in an SEV guest triggers WBNOINVD/WBINVD on all CPUs,
hurting the performance of other programs on the host.  Typically, an AMD
server may have 128 cores or more, while the SEV guest might only use 8 of
those cores.  Meanwhile, the host can use qemu-affinity to bind those 8
vCPUs to specific physical CPUs.
Therefore, keeping a record of the physical cores on which each vCPU has
run avoids flushing the caches of all CPUs every time.

Signed-off-by: Zheyun Shen
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/svm/sev.c | 42 +++++++++++++++++++++++++++++++++++-------
 arch/x86/kvm/svm/svm.h |  1 +
 2 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4238af23ab1b..b7a4cb728fba 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -447,6 +447,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 	ret = sev_platform_init(&init_args);
 	if (ret)
 		goto e_free;
+	if (!zalloc_cpumask_var(&sev->have_run_cpus, GFP_KERNEL_ACCOUNT))
+		goto e_free;
 
 	/* This needs to happen after SEV/SNP firmware initialization. */
 	if (vm_type == KVM_X86_SNP_VM) {
@@ -706,16 +708,31 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
 	}
 }
 
-static void sev_writeback_caches(void)
+static void sev_writeback_caches(struct kvm *kvm)
 {
+	/*
+	 * Note, the caller is responsible for ensuring correctness if the mask
+	 * can be modified, e.g. if a CPU could be doing VMRUN.
+	 */
+	if (cpumask_empty(to_kvm_sev_info(kvm)->have_run_cpus))
+		return;
+
 	/*
 	 * Ensure that all dirty guest tagged cache entries are written back
 	 * before releasing the pages back to the system for use.  CLFLUSH will
 	 * not do this without SME_COHERENT, and flushing many cache lines
 	 * individually is slower than blasting WBINVD for large VMs, so issue
-	 * WBNOINVD (or WBINVD if the "no invalidate" variant is unsupported).
+	 * WBNOINVD (or WBINVD if the "no invalidate" variant is unsupported)
+	 * on CPUs that have done VMRUN, i.e. may have dirtied data using the
+	 * VM's ASID.
+	 *
+	 * For simplicity, never remove CPUs from the bitmap.  Ideally, KVM
+	 * would clear the mask when flushing caches, but doing so requires
+	 * serializing multiple calls and having responding CPUs (to the IPI)
+	 * mark themselves as still running if they are running (or about to
+	 * run) a vCPU for the VM.
 	 */
-	wbnoinvd_on_all_cpus();
+	wbnoinvd_on_many_cpus(to_kvm_sev_info(kvm)->have_run_cpus);
 }
 
 static unsigned long get_num_contig_pages(unsigned long idx,
@@ -2766,7 +2783,7 @@ int sev_mem_enc_unregister_region(struct kvm *kvm,
 		goto failed;
 	}
 
-	sev_writeback_caches();
+	sev_writeback_caches(kvm);
 
 	__unregister_enc_region_locked(kvm, region);
 
@@ -2914,6 +2931,7 @@ void sev_vm_destroy(struct kvm *kvm)
 	}
 
 	sev_asid_free(sev);
+	free_cpumask_var(sev->have_run_cpus);
 }
 
 void __init sev_set_cpu_caps(void)
@@ -3127,7 +3145,7 @@ static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 	return;
 
 do_sev_writeback_caches:
-	sev_writeback_caches();
+	sev_writeback_caches(vcpu->kvm);
 }
 
 void sev_guest_memory_reclaimed(struct kvm *kvm)
@@ -3140,7 +3158,7 @@ void sev_guest_memory_reclaimed(struct kvm *kvm)
 	if (!sev_guest(kvm) || sev_snp_guest(kvm))
 		return;
 
-	sev_writeback_caches();
+	sev_writeback_caches(kvm);
 }
 
 void sev_free_vcpu(struct kvm_vcpu *vcpu)
@@ -3456,7 +3474,17 @@ void sev_es_unmap_ghcb(struct vcpu_svm *svm)
 void pre_sev_run(struct vcpu_svm *svm, int cpu)
 {
 	struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
-	unsigned int asid = sev_get_asid(svm->vcpu.kvm);
+	struct kvm *kvm = svm->vcpu.kvm;
+	unsigned int asid = sev_get_asid(kvm);
+
+	/*
+	 * To optimize cache flushes when memory is reclaimed from an SEV VM,
+	 * track physical CPUs that enter the guest for SEV VMs and thus can
+	 * have encrypted, dirty data in the cache, and flush caches only for
+	 * CPUs that have entered the guest.
+	 */
+	if (!cpumask_test_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus))
+		cpumask_set_cpu(cpu, to_kvm_sev_info(kvm)->have_run_cpus);
 
 	/* Assign the asid allocated with this SEV guest */
 	svm->asid = asid;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 5b159f017055..6ad18ce5a754 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -112,6 +112,7 @@ struct kvm_sev_info {
 	void *guest_req_buf;    /* Bounce buffer for SNP Guest Request input */
 	void *guest_resp_buf;   /* Bounce buffer for SNP Guest Request output */
 	struct mutex guest_req_mutex; /* Must acquire before using bounce buffers */
+	cpumask_var_t have_run_cpus; /* CPUs that have done VMRUN for this VM. */
 };
 
 struct kvm_svm {
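[Editor's note: the tracking scheme this patch adds can be sketched in a few lines of runnable userspace C. `pre_sev_run()` sets the current CPU's bit in `have_run_cpus`, bits are never cleared (trading flush precision for lock-free updates, as the comment in `sev_writeback_caches()` explains), and reclaim then flushes only CPUs whose bit is set. The toy `cpumask_t`, `struct sev_vm`, and the `flushed` counter array below are illustrative stand-ins, not kernel types.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t cpumask_t;         /* toy cpumask: one bit per CPU, max 64 CPUs */

struct sev_vm {
	cpumask_t have_run_cpus;        /* CPUs that have done VMRUN for this VM */
};

static int flushed[64];             /* which CPUs a writeback would target */

/* Called before entering the guest on @cpu; the bit is set but never cleared. */
static void pre_sev_run(struct sev_vm *sev, int cpu)
{
	sev->have_run_cpus |= (cpumask_t)1 << cpu;
}

/* Called on memory reclaim: flush only CPUs that may hold dirty tagged lines. */
static void sev_writeback_caches(const struct sev_vm *sev)
{
	for (int cpu = 0; cpu < 64; cpu++)
		if (sev->have_run_cpus & ((cpumask_t)1 << cpu))
			flushed[cpu]++;         /* real code: WBNOINVD IPI to @cpu */
}
```

Running two vCPU entries on the same core and then reclaiming flushes that one core only, rather than all 64, which is the whole point of the patch.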