From patchwork Sat Aug 31 00:15:16 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785726
Date: Fri, 30 Aug 2024 17:15:16 -0700
Message-ID: <20240831001538.336683-2-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 01/22] KVM: VMX: Set PFERR_GUEST_{FINAL,PAGE}_MASK if and
 only if the GVA is valid
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Set PFERR_GUEST_{FINAL,PAGE}_MASK based on EPT_VIOLATION_GVA_TRANSLATED
if and only if EPT_VIOLATION_GVA_IS_VALID is also set in exit
qualification.  Per the SDM, bit 8 (EPT_VIOLATION_GVA_TRANSLATED) is
valid if and only if bit 7 (EPT_VIOLATION_GVA_IS_VALID) is set, and is
'0' if bit 7 is '0'.

  Bit 7 (a.k.a. EPT_VIOLATION_GVA_IS_VALID)

    Set if the guest linear-address field is valid.  The guest
    linear-address field is valid for all EPT violations except those
    resulting from an attempt to load the guest PDPTEs as part of the
    execution of the MOV CR instruction and those due to trace-address
    pre-translation.

  Bit 8 (a.k.a. EPT_VIOLATION_GVA_TRANSLATED)

    If bit 7 is 1:
      • Set if the access causing the EPT violation is to a
        guest-physical address that is the translation of a linear
        address.
      • Clear if the access causing the EPT violation is to a
        paging-structure entry as part of a page walk or the update of
        an accessed or dirty bit.
    Reserved if bit 7 is 0 (cleared to 0).

Failure to guard the logic on GVA_IS_VALID results in KVM marking the
page fault as PFERR_GUEST_PAGE_MASK when there is no known GVA, which
can put the vCPU into an infinite loop due to kvm_mmu_page_fault()
getting a false positive on its PFERR_NESTED_GUEST_PAGE logic (though
only because that logic is also buggy/flawed).

In practice, this is largely a non-issue because GVA_IS_VALID is almost
always set.  However, when TDX comes along, GVA_IS_VALID will *never* be
set, as the TDX Module deliberately clears bits 12:7 in exit
qualification, e.g. so that the faulting virtual address and other
metadata that aren't practically useful for the hypervisor aren't leaked
to the untrusted host.

  When exit is due to EPT violation, bits 12-7 of the exit qualification
  are cleared to 0.

Fixes: eebed2438923 ("kvm: nVMX: Add support for fast unprotection of nested guest page tables")
Reviewed-by: Yuan Yao
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/vmx.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f9fbc299126c..ad5c3f149fd3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5800,8 +5800,9 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
     error_code |= (exit_qualification & EPT_VIOLATION_RWX_MASK) ?
               PFERR_PRESENT_MASK : 0;
 
-    error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
-              PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
+    if (exit_qualification & EPT_VIOLATION_GVA_IS_VALID)
+        error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
+                  PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
 
     /*
      * Check that the GPA doesn't exceed physical memory limits, as that is
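For readers following the series outside a kernel tree, the check this patch
installs can be summarized with a tiny standalone sketch.  It is illustrative
only and is not part of the patch; the only facts assumed are the two
exit-qualification bit positions (7 and 8) quoted from the SDM above, and the
enum names are invented for the example.

  #include <stdint.h>

  /* Architectural EPT-violation exit-qualification bits (SDM bits 7 and 8). */
  #define EPT_VIOLATION_GVA_IS_VALID   (1ULL << 7)
  #define EPT_VIOLATION_GVA_TRANSLATED (1ULL << 8)

  enum gva_fault_kind {
      GVA_NOT_VALID,              /* e.g. TDX, which clears bits 12:7 */
      FAULT_ON_FINAL_TRANSLATION, /* would map to PFERR_GUEST_FINAL_MASK */
      FAULT_ON_PAGING_STRUCTURE,  /* would map to PFERR_GUEST_PAGE_MASK */
  };

  /* Bit 8 is meaningful only when bit 7 says the guest linear address is valid. */
  static enum gva_fault_kind classify_ept_violation(uint64_t exit_qualification)
  {
      if (!(exit_qualification & EPT_VIOLATION_GVA_IS_VALID))
          return GVA_NOT_VALID;

      return (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) ?
             FAULT_ON_FINAL_TRANSLATION : FAULT_ON_PAGING_STRUCTURE;
  }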
From patchwork Sat Aug 31 00:15:17 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785727
Date: Fri, 30 Aug 2024 17:15:17 -0700
Message-ID: <20240831001538.336683-3-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 02/22] KVM: x86/mmu: Replace PFERR_NESTED_GUEST_PAGE with a
 more descriptive helper
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Drop the globally visible PFERR_NESTED_GUEST_PAGE and replace it with a
more appropriately named is_write_to_guest_page_table().  The macro name
is misleading, because while all nNPT walks match PAGE|WRITE|PRESENT,
the reverse is not true.

No functional change intended.

Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h | 4 ----
 arch/x86/kvm/mmu/mmu.c          | 9 ++++++++-
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 1811a42fa093..62d19403d63c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -280,10 +280,6 @@ enum x86_intercept_stage;
 #define PFERR_PRIVATE_ACCESS   BIT_ULL(49)
 #define PFERR_SYNTHETIC_MASK   (PFERR_IMPLICIT_ACCESS | PFERR_PRIVATE_ACCESS)
 
-#define PFERR_NESTED_GUEST_PAGE (PFERR_GUEST_PAGE_MASK |    \
-                 PFERR_WRITE_MASK |        \
-                 PFERR_PRESENT_MASK)
-
 /* apic attention bits */
 #define KVM_APIC_CHECK_VAPIC    0
 /*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index d25c2b395116..4ca01256143e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5947,6 +5947,13 @@ void kvm_mmu_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
     write_unlock(&vcpu->kvm->mmu_lock);
 }
 
+static bool is_write_to_guest_page_table(u64 error_code)
+{
+    const u64 mask = PFERR_GUEST_PAGE_MASK | PFERR_WRITE_MASK | PFERR_PRESENT_MASK;
+
+    return (error_code & mask) == mask;
+}
+
 int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
                void *insn, int insn_len)
 {
@@ -6010,7 +6017,7 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
      * and resume the guest.
      */
     if (vcpu->arch.mmu->root_role.direct &&
-        (error_code & PFERR_NESTED_GUEST_PAGE) == PFERR_NESTED_GUEST_PAGE) {
+        is_write_to_guest_page_table(error_code)) {
         kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa));
         return 1;
     }
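To make the new helper's semantics concrete, here is a small self-contained
example of the mask check.  It is a sketch, not kernel code; the bit positions
are reproduced from the architectural #PF error-code format and KVM's PFERR_*
definitions as I understand them, and the authoritative values live in
arch/x86/include/asm/kvm_host.h.

  #include <stdbool.h>
  #include <stdint.h>

  #define PFERR_PRESENT_MASK    (1ULL << 0)   /* architectural #PF bit 0 */
  #define PFERR_WRITE_MASK      (1ULL << 1)   /* architectural #PF bit 1 */
  #define PFERR_GUEST_PAGE_MASK (1ULL << 33)  /* KVM: fault during guest page-table walk */

  /* Same predicate as the new helper: all three bits must be set. */
  static bool is_write_to_guest_page_table(uint64_t error_code)
  {
      const uint64_t mask = PFERR_GUEST_PAGE_MASK | PFERR_WRITE_MASK | PFERR_PRESENT_MASK;

      return (error_code & mask) == mask;
  }

  int main(void)
  {
      /* A plain write-protection fault does not qualify... */
      bool plain = is_write_to_guest_page_table(PFERR_WRITE_MASK | PFERR_PRESENT_MASK);
      /* ...but a write that faulted while walking guest page tables does. */
      bool walk = is_write_to_guest_page_table(PFERR_GUEST_PAGE_MASK | PFERR_WRITE_MASK |
                                               PFERR_PRESENT_MASK);

      return (!plain && walk) ? 0 : 1;
  }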
From patchwork Sat Aug 31 00:15:18 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785728
Date: Fri, 30 Aug 2024 17:15:18 -0700
Message-ID: <20240831001538.336683-4-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 03/22] KVM: x86/mmu: Trigger unprotect logic only on
 write-protection page faults
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Trigger KVM's various "unprotect gfn" paths if and only if the page
fault was a write to a write-protected gfn.  To do so, add a new page
fault return code, RET_PF_WRITE_PROTECTED, to explicitly and precisely
track such page faults.

If a page fault requires emulation for any MMIO (or any reason besides
write-protection), trying to unprotect the gfn is pointless and risks
putting the vCPU into an infinite loop.  E.g. KVM will put the vCPU
into an infinite loop if the vCPU manages to trigger MMIO on a page
table walk.

Fixes: 147277540bbc ("kvm: svm: Add support for additional SVM NPF error codes")
Reviewed-by: Yuan Yao
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c          | 75 +++++++++++++++++++--------------
 arch/x86/kvm/mmu/mmu_internal.h |  3 ++
 arch/x86/kvm/mmu/mmutrace.h     |  1 +
 arch/x86/kvm/mmu/paging_tmpl.h  |  2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 +--
 5 files changed, 50 insertions(+), 37 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4ca01256143e..57692d873f76 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2896,10 +2896,8 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
         trace_kvm_mmu_set_spte(level, gfn, sptep);
     }
 
-    if (wrprot) {
-        if (write_fault)
-            ret = RET_PF_EMULATE;
-    }
+    if (wrprot && write_fault)
+        ret = RET_PF_WRITE_PROTECTED;
 
     if (flush)
         kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, level);
@@ -4531,7 +4529,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
         return RET_PF_RETRY;
 
     if (page_fault_handle_page_track(vcpu, fault))
-        return RET_PF_EMULATE;
+        return RET_PF_WRITE_PROTECTED;
 
     r = fast_page_fault(vcpu, fault);
     if (r != RET_PF_INVALID)
@@ -4624,7 +4622,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
     int r;
 
     if (page_fault_handle_page_track(vcpu, fault))
-        return RET_PF_EMULATE;
+        return RET_PF_WRITE_PROTECTED;
 
     r = fast_page_fault(vcpu, fault);
     if (r != RET_PF_INVALID)
@@ -4703,6 +4701,7 @@ static int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code,
     switch (r) {
     case RET_PF_FIXED:
     case RET_PF_SPURIOUS:
+    case RET_PF_WRITE_PROTECTED:
         return 0;
 
     case RET_PF_EMULATE:
@@ -5954,6 +5953,40 @@ static bool is_write_to_guest_page_table(u64 error_code)
     return (error_code & mask) == mask;
 }
 
+static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
+                       u64 error_code, int *emulation_type)
+{
+    bool direct = vcpu->arch.mmu->root_role.direct;
+
+    /*
+     * Before emulating the instruction, check if the error code
+     * was due to a RO violation while translating the guest page.
+     * This can occur when using nested virtualization with nested
+     * paging in both guests. If true, we simply unprotect the page
+     * and resume the guest.
+     */
+    if (direct && is_write_to_guest_page_table(error_code)) {
+        kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa));
+        return RET_PF_RETRY;
+    }
+
+    /*
+     * The gfn is write-protected, but if emulation fails we can still
+     * optimistically try to just unprotect the page and let the processor
+     * re-execute the instruction that caused the page fault.  Do not allow
+     * retrying MMIO emulation, as it's not only pointless but could also
+     * cause us to enter an infinite loop because the processor will keep
+     * faulting on the non-existent MMIO address.  Retrying an instruction
+     * from a nested guest is also pointless and dangerous as we are only
+     * explicitly shadowing L1's page tables, i.e. unprotecting something
+     * for L1 isn't going to magically fix whatever issue cause L2 to fail.
+     */
+    if (!mmio_info_in_cache(vcpu, cr2_or_gpa, direct) && !is_guest_mode(vcpu))
+        *emulation_type |= EMULTYPE_ALLOW_RETRY_PF;
+
+    return RET_PF_EMULATE;
+}
+
 int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 error_code,
                void *insn, int insn_len)
 {
@@ -5999,6 +6032,10 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
     if (r < 0)
         return r;
 
+    if (r == RET_PF_WRITE_PROTECTED)
+        r = kvm_mmu_write_protect_fault(vcpu, cr2_or_gpa, error_code,
+                        &emulation_type);
+
     if (r == RET_PF_FIXED)
         vcpu->stat.pf_fixed++;
     else if (r == RET_PF_EMULATE)
@@ -6009,32 +6046,6 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
     if (r != RET_PF_EMULATE)
         return 1;
 
-    /*
-     * Before emulating the instruction, check if the error code
-     * was due to a RO violation while translating the guest page.
-     * This can occur when using nested virtualization with nested
-     * paging in both guests. If true, we simply unprotect the page
-     * and resume the guest.
-     */
-    if (vcpu->arch.mmu->root_role.direct &&
-        is_write_to_guest_page_table(error_code)) {
-        kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa));
-        return 1;
-    }
-
-    /*
-     * vcpu->arch.mmu.page_fault returned RET_PF_EMULATE, but we can still
-     * optimistically try to just unprotect the page and let the processor
-     * re-execute the instruction that caused the page fault.  Do not allow
-     * retrying MMIO emulation, as it's not only pointless but could also
-     * cause us to enter an infinite loop because the processor will keep
-     * faulting on the non-existent MMIO address.  Retrying an instruction
-     * from a nested guest is also pointless and dangerous as we are only
-     * explicitly shadowing L1's page tables, i.e. unprotecting something
-     * for L1 isn't going to magically fix whatever issue cause L2 to fail.
-     */
-    if (!mmio_info_in_cache(vcpu, cr2_or_gpa, direct) && !is_guest_mode(vcpu))
-        emulation_type |= EMULTYPE_ALLOW_RETRY_PF;
 emulate:
     return x86_emulate_instruction(vcpu, cr2_or_gpa, emulation_type, insn,
                        insn_len);
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 1721d97743e9..50d2624111f8 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -258,6 +258,8 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
  * RET_PF_CONTINUE: So far, so good, keep handling the page fault.
  * RET_PF_RETRY: let CPU fault again on the address.
  * RET_PF_EMULATE: mmio page fault, emulate the instruction directly.
+ * RET_PF_WRITE_PROTECTED: the gfn is write-protected, either unprotect the
+ *                         gfn and retry, or emulate the instruction directly.
  * RET_PF_INVALID: the spte is invalid, let the real page fault path update it.
  * RET_PF_FIXED: The faulting entry has been fixed.
  * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU.
@@ -274,6 +276,7 @@ enum {
     RET_PF_CONTINUE = 0,
     RET_PF_RETRY,
     RET_PF_EMULATE,
+    RET_PF_WRITE_PROTECTED,
     RET_PF_INVALID,
     RET_PF_FIXED,
     RET_PF_SPURIOUS,
diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h
index 195d98bc8de8..f35a830ce469 100644
--- a/arch/x86/kvm/mmu/mmutrace.h
+++ b/arch/x86/kvm/mmu/mmutrace.h
@@ -57,6 +57,7 @@
 TRACE_DEFINE_ENUM(RET_PF_CONTINUE);
 TRACE_DEFINE_ENUM(RET_PF_RETRY);
 TRACE_DEFINE_ENUM(RET_PF_EMULATE);
+TRACE_DEFINE_ENUM(RET_PF_WRITE_PROTECTED);
 TRACE_DEFINE_ENUM(RET_PF_INVALID);
 TRACE_DEFINE_ENUM(RET_PF_FIXED);
 TRACE_DEFINE_ENUM(RET_PF_SPURIOUS);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 69941cebb3a8..a722a3c96af9 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -805,7 +805,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
     if (page_fault_handle_page_track(vcpu, fault)) {
         shadow_page_table_clear_flood(vcpu, fault->addr);
-        return RET_PF_EMULATE;
+        return RET_PF_WRITE_PROTECTED;
     }
 
     r = mmu_topup_memory_caches(vcpu, true);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3c55955bcaf8..3b996c1fdaab 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1046,10 +1046,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
      * protected, emulation is needed. If the emulation was skipped,
      * the vCPU would have the same fault again.
      */
-    if (wrprot) {
-        if (fault->write)
-            ret = RET_PF_EMULATE;
-    }
+    if (wrprot && fault->write)
+        ret = RET_PF_WRITE_PROTECTED;
 
     /* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
     if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
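The net effect of the new return code can be restated as a reduced model of
the dispatch in kvm_mmu_page_fault(): only a write-protection fault is routed
through the unprotect-or-emulate decision, everything else keeps its existing
handling.  The helper below is a stand-in written for this illustration, not
the kernel function.

  #include <stdbool.h>

  /* Return codes copied from the mmu_internal.h hunk above. */
  enum {
      RET_PF_CONTINUE = 0,
      RET_PF_RETRY,
      RET_PF_EMULATE,
      RET_PF_WRITE_PROTECTED,
      RET_PF_INVALID,
      RET_PF_FIXED,
      RET_PF_SPURIOUS,
  };

  static int fixup_write_protected(int r, bool is_direct_write_to_guest_page_table)
  {
      if (r != RET_PF_WRITE_PROTECTED)
          return r;   /* MMIO and other emulation cases are untouched */

      /* Mirrors kvm_mmu_write_protect_fault(): unprotect-and-retry for
       * writes to guest page tables under nested TDP, emulate otherwise. */
      return is_direct_write_to_guest_page_table ? RET_PF_RETRY : RET_PF_EMULATE;
  }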
From patchwork Sat Aug 31 00:15:19 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785729
Date: Fri, 30 Aug 2024 17:15:19 -0700
Message-ID: <20240831001538.336683-5-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 04/22] KVM: x86/mmu: Skip emulation on page fault iff 1+ SPs
 were unprotected
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

When doing "fast unprotection" of nested TDP page tables, skip emulation
if and only if at least one gfn was unprotected, i.e. continue with
emulation if simply resuming is likely to hit the same fault and risk
putting the vCPU into an infinite loop.

Note, it's entirely possible to get a false negative, e.g. if a
different vCPU faults on the same gfn and unprotects the gfn first, but
that's a relatively rare edge case, and emulating is still functionally
ok, i.e. saving a few cycles by avoiding emulation isn't worth the risk
of putting the vCPU into an infinite loop.

Opportunistically rewrite the relevant comment to document in gory
detail exactly what scenario the "fast unprotect" logic is handling.

Fixes: 147277540bbc ("kvm: svm: Add support for additional SVM NPF error codes")
Cc: Yuan Yao
Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/mmu/mmu.c | 37 +++++++++++++++++++++++++++++--------
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 57692d873f76..6b5f80f38a95 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5959,16 +5959,37 @@ static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
     bool direct = vcpu->arch.mmu->root_role.direct;
 
     /*
-     * Before emulating the instruction, check if the error code
-     * was due to a RO violation while translating the guest page.
-     * This can occur when using nested virtualization with nested
-     * paging in both guests. If true, we simply unprotect the page
-     * and resume the guest.
+     * Before emulating the instruction, check to see if the access was due
+     * to a read-only violation while the CPU was walking non-nested NPT
+     * page tables, i.e. for a direct MMU, for _guest_ page tables in L1.
+     * If L1 is sharing (a subset of) its page tables with L2, e.g. by
+     * having nCR3 share lower level page tables with hCR3, then when KVM
+     * (L0) write-protects the nested NPTs, i.e. npt12 entries, KVM is also
+     * unknowingly write-protecting L1's guest page tables, which KVM isn't
+     * shadowing.
+     *
+     * Because the CPU (by default) walks NPT page tables using a write
+     * access (to ensure the CPU can do A/D updates), page walks in L1 can
+     * trigger write faults for the above case even when L1 isn't modifying
+     * PTEs.  As a result, KVM will unnecessarily emulate (or at least, try
+     * to emulate) an excessive number of L1 instructions; because L1's MMU
+     * isn't shadowed by KVM, there is no need to write-protect L1's gPTEs
+     * and thus no need to emulate in order to guarantee forward progress.
+     *
+     * Try to unprotect the gfn, i.e. zap any shadow pages, so that L1 can
+     * proceed without triggering emulation.  If one or more shadow pages
+     * was zapped, skip emulation and resume L1 to let it natively execute
+     * the instruction.  If no shadow pages were zapped, then the write-
+     * fault is due to something else entirely, i.e. KVM needs to emulate,
+     * as resuming the guest will put it into an infinite loop.
+     *
+     * Note, this code also applies to Intel CPUs, even though it is *very*
+     * unlikely that an L1 will share its page tables (IA32/PAE/paging64
+     * format) with L2's page tables (EPT format).
      */
-    if (direct && is_write_to_guest_page_table(error_code)) {
-        kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa));
+    if (direct && is_write_to_guest_page_table(error_code) &&
+        kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa)))
         return RET_PF_RETRY;
-    }
 
     /*
      * The gfn is write-protected, but if emulation fails we can still
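The decision the patch switches to, retry only when unprotecting actually
zapped something, can be shown in isolation.  This is a sketch under the
assumption (stated in the changelog) that kvm_mmu_unprotect_page() returns
non-zero iff at least one shadow page was zapped; the function names below
are invented for the example.

  #include <stdbool.h>

  /* Pretend unprotect primitive: true iff >= 1 shadow page was zapped. */
  static bool unprotect_gfn(unsigned long gfn) { (void)gfn; return false; }

  /* 0 == retry the guest, 1 == emulate (standing in for RET_PF_RETRY/EMULATE). */
  static int fast_unprotect_path(unsigned long gfn, bool direct, bool write_to_gpt)
  {
      /*
       * If nothing was zapped, the write protection is due to something
       * other than shadow paging and resuming would re-take the same fault
       * forever, so fall through to emulation instead.
       */
      if (direct && write_to_gpt && unprotect_gfn(gfn))
          return 0;

      return 1;
  }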
From patchwork Sat Aug 31 00:15:20 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785730
Date: Fri, 30 Aug 2024 17:15:20 -0700
Message-ID: <20240831001538.336683-6-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 05/22] KVM: x86: Retry to-be-emulated insn in "slow"
 unprotect path iff sp is zapped
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Resume the guest and thus skip emulation of a non-PTE-writing
instruction if and only if unprotecting the gfn actually zapped at least
one shadow page.  If the gfn is write-protected for some reason other
than shadow paging, attempting to unprotect the gfn will effectively
fail, and thus retrying the instruction is all but guaranteed to be
pointless.  This bug has existed for a long time, but was effectively
fudged around by the retry RIP+address anti-loop detection.
Signed-off-by: Sean Christopherson
Reviewed-by: Yuan Yao
---
 arch/x86/kvm/x86.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 966fb301d44b..c4cb6c6d605b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8961,14 +8961,14 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
     if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa)
         return false;
 
+    if (!vcpu->arch.mmu->root_role.direct)
+        gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
+
+    if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
+        return false;
+
     vcpu->arch.last_retry_eip = ctxt->eip;
     vcpu->arch.last_retry_addr = cr2_or_gpa;
-
-    if (!vcpu->arch.mmu->root_role.direct)
-        gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
-
-    kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
-
     return true;
 }
From patchwork Sat Aug 31 00:15:21 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785731
Date: Fri, 30 Aug 2024 17:15:21 -0700
Message-ID: <20240831001538.336683-7-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 06/22] KVM: x86: Get RIP from vCPU state when storing it to
 last_retry_eip
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Read RIP from vCPU state instead of pulling it from the emulation
context when filling last_retry_eip, which is part of the
anti-infinite-loop protection used when unprotecting and retrying
instructions that hit a write-protected gfn.

This will allow reusing the anti-infinite-loop protection in flows that
never make it into the emulator.

No functional change intended, as ctxt->eip is set to kvm_rip_read() in
init_emulate_ctxt(), and EMULTYPE_PF emulation is mutually exclusive
with EMULTYPE_NO_DECODE and EMULTYPE_SKIP, i.e. always goes through
x86_decode_emulated_instruction() and hasn't advanced ctxt->eip (yet).
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c4cb6c6d605b..a1f0f4dede55 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8967,7 +8967,7 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
     if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
         return false;
 
-    vcpu->arch.last_retry_eip = ctxt->eip;
+    vcpu->arch.last_retry_eip = kvm_rip_read(vcpu);
     vcpu->arch.last_retry_addr = cr2_or_gpa;
     return true;
 }
From patchwork Sat Aug 31 00:15:22 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785732
Date: Fri, 30 Aug 2024 17:15:22 -0700
Message-ID: <20240831001538.336683-8-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 07/22] KVM: x86: Store gpa as gpa_t, not unsigned long,
 when unprotecting for retry
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Store the gpa used to unprotect the faulting gfn for retry as a gpa_t,
not an unsigned long.  This fixes a bug where 32-bit KVM would
unprotect and retry the wrong gfn if the gpa had bits 63:32!=0.

In practice, this bug is functionally benign, as unprotecting the wrong
gfn is purely a performance issue (thanks to the anti-infinite-loop
logic).  And of course, almost no one runs 32-bit KVM these days.
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a1f0f4dede55..c84f57e1a888 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8928,7 +8928,8 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
                   gpa_t cr2_or_gpa, int emulation_type)
 {
     struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
-    unsigned long last_retry_eip, last_retry_addr, gpa = cr2_or_gpa;
+    unsigned long last_retry_eip, last_retry_addr;
+    gpa_t gpa = cr2_or_gpa;
 
     last_retry_eip = vcpu->arch.last_retry_eip;
     last_retry_addr = vcpu->arch.last_retry_addr;
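The class of bug being fixed is easy to reproduce outside KVM.  The snippet
below is a standalone illustration, not kernel code: on a 32-bit build, where
unsigned long is 32 bits, storing a 64-bit guest physical address in an
unsigned long silently drops bits 63:32, which is exactly the truncation
described above; a 64-bit gpa_t preserves them.  (On a 64-bit build both
variables print the same value.)

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t gpa_t;   /* stand-in for KVM's 64-bit gpa_t */

  int main(void)
  {
      gpa_t cr2_or_gpa = 0x1ffff0000ULL;  /* gpa with bits 63:32 != 0 */

      unsigned long as_ulong = (unsigned long)cr2_or_gpa; /* truncates on 32-bit builds */
      gpa_t as_gpa = cr2_or_gpa;                          /* always keeps the full gpa */

      printf("unsigned long: %#lx, gpa_t: %#llx\n",
             as_ulong, (unsigned long long)as_gpa);
      return 0;
  }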
From patchwork Sat Aug 31 00:15:23 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13785733
Date: Fri, 30 Aug 2024 17:15:23 -0700
Message-ID: <20240831001538.336683-9-seanjc@google.com>
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
References: <20240831001538.336683-1-seanjc@google.com>
Subject: [PATCH v2 08/22] KVM: x86/mmu: Apply retry protection to "fast nTDP
 unprotect" path
From: Sean Christopherson
To: Sean Christopherson, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao, Yuan Yao

Move the anti-infinite-loop protection provided by last_retry_{eip,addr}
into kvm_mmu_write_protect_fault() so that it guards unprotect+retry
that never hits the emulator, as well as reexecute_instruction(), which
is the last ditch "might as well try it" logic that kicks in when
emulation fails on an instruction that faulted on a write-protected gfn.

Add a new helper, kvm_mmu_unprotect_gfn_and_retry(), to set the retry
fields and deduplicate other code (with more to come).
Signed-off-by: Sean Christopherson
---
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu/mmu.c          | 39 ++++++++++++++++++++++++++++++++-
 arch/x86/kvm/x86.c              | 27 +----------------------
 3 files changed, 40 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 62d19403d63c..2c3f28331118 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2135,6 +2135,7 @@ int kvm_get_nr_pending_nmis(struct kvm_vcpu *vcpu);
 void kvm_update_dr7(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn);
+bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa);
 void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu,
             ulong roots_to_free);
 void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6b5f80f38a95..c34c8bbd61c8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2713,6 +2713,22 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn)
     return r;
 }
 
+bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa)
+{
+    gpa_t gpa = cr2_or_gpa;
+    bool r;
+
+    if (!vcpu->arch.mmu->root_role.direct)
+        gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
+
+    r = kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
+    if (r) {
+        vcpu->arch.last_retry_eip = kvm_rip_read(vcpu);
+        vcpu->arch.last_retry_addr = cr2_or_gpa;
+    }
+    return r;
+}
+
 static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva)
 {
     gpa_t gpa;
@@ -5958,6 +5974,27 @@ static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 {
     bool direct = vcpu->arch.mmu->root_role.direct;
 
+    /*
+     * Do not try to unprotect and retry if the vCPU re-faulted on the same
+     * RIP with the same address that was previously unprotected, as doing
+     * so will likely put the vCPU into an infinite loop.  E.g. if the vCPU
+     * uses a non-page-table modifying instruction on the PDE that points
+     * to the instruction, then unprotecting the gfn will unmap the
+     * instruction's code, i.e. make it impossible for the instruction to
+     * ever complete.
+     */
+    if (vcpu->arch.last_retry_eip == kvm_rip_read(vcpu) &&
+        vcpu->arch.last_retry_addr == cr2_or_gpa)
+        return RET_PF_EMULATE;
+
+    /*
+     * Reset the unprotect+retry values that guard against infinite loops.
+     * The values will be refreshed if KVM explicitly unprotects a gfn and
+     * retries, in all other cases it's safe to retry in the future even if
+     * the next page fault happens on the same RIP+address.
+     */
+    vcpu->arch.last_retry_eip = 0;
+    vcpu->arch.last_retry_addr = 0;
+
     /*
      * Before emulating the instruction, check to see if the access was due
      * to a read-only violation while the CPU was walking non-nested NPT
@@ -5988,7 +6025,7 @@ static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
      * format) with L2's page tables (EPT format).
*/ if (direct && is_write_to_guest_page_table(error_code) && - kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(cr2_or_gpa))) + kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa)) return RET_PF_RETRY; /* diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index c84f57e1a888..862eed96cfd5 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8928,27 +8928,13 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt, gpa_t cr2_or_gpa, int emulation_type) { struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); - unsigned long last_retry_eip, last_retry_addr; - gpa_t gpa = cr2_or_gpa; - - last_retry_eip = vcpu->arch.last_retry_eip; - last_retry_addr = vcpu->arch.last_retry_addr; /* * If the emulation is caused by #PF and it is non-page_table * writing instruction, it means the VM-EXIT is caused by shadow * page protected, we can zap the shadow page and retry this * instruction directly. - * - * Note: if the guest uses a non-page-table modifying instruction - * on the PDE that points to the instruction, then we will unmap - * the instruction and go to an infinite loop. So, we cache the - * last retried eip and the last fault address, if we meet the eip - * and the address again, we can break out of the potential infinite - * loop. */ - vcpu->arch.last_retry_eip = vcpu->arch.last_retry_addr = 0; - if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; @@ -8959,18 +8945,7 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt, if (x86_page_table_writing_insn(ctxt)) return false; - if (ctxt->eip == last_retry_eip && last_retry_addr == cr2_or_gpa) - return false; - - if (!vcpu->arch.mmu->root_role.direct) - gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); - - if (!kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa))) - return false; - - vcpu->arch.last_retry_eip = kvm_rip_read(vcpu); - vcpu->arch.last_retry_addr = cr2_or_gpa; - return true; + return kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa); } static int complete_emulated_mmio(struct kvm_vcpu *vcpu); From patchwork Sat Aug 31 00:15:24 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785734 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 141BF2773C for ; Sat, 31 Aug 2024 00:15:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063361; cv=none; b=lEobQd4+XNUirO/GINsB7CTCBUIZdRAdN8zTb4lTvBqsFSyliHCD0WBcRt8p1TwKzKfj/tTqBvOuaolJ3m8x4dgPxTNhGBMuDTv/V7C+v8oxuUaFWHRGSrWuyOKLqGRYsA3jOIOKCPFtDj0qayQMp3D4DotLPzRRcBFvRtnu43E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063361; c=relaxed/simple; bh=DjdRqrZVYaDjDoNjboOK73Yx3wVuE0JYZU3mSEZ1MxI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=aSVLyY7Y6X7ivhH/PIAhWo6cdSCsiTB6vlnn1pL/APOhYD48yCjmfsZ8C0phBQtgpTKZn0Bn2g+gfVNICTaMS7hBpianJl75R13e4FmyNmnvvB7xtjXVdelK1pYGhzePHXpfN7spyBh+GUXma1vMSrezoyQc0tCGwYAr4NHx7tI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=TjxQ0x6m; arc=none 
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:24 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-10-seanjc@google.com>
Subject: [PATCH v2 09/22] KVM: x86/mmu: Try "unprotect for retry" iff there are indirect SPs
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

Try to unprotect shadow pages if and only if indirect_shadow_pages is
non-zero, i.e. iff there is at least one such write-protected shadow page.
Pre-checking indirect_shadow_pages avoids taking mmu_lock for write when
the gfn is write-protected by a third party, i.e. not for KVM shadow
paging, and in the *extremely* unlikely case that a different task has
already unprotected the last shadow page.
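As a rough, purely illustrative sketch of the pattern (condensed from the diff
below):

	/*
	 * Lock-free peek: if there are no indirect (write-protected) shadow
	 * pages at all, there is nothing to unprotect, so skip the path that
	 * would otherwise take mmu_lock for write.  A stale read is benign;
	 * this is strictly an optimization.
	 */
	if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
		return false;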
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c34c8bbd61c8..dd62bd1e7657 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2718,6 +2718,17 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa)
 	gpa_t gpa = cr2_or_gpa;
 	bool r;
 
+	/*
+	 * Bail early if there aren't any write-protected shadow pages to avoid
+	 * unnecessarily taking mmu_lock, e.g. if the gfn is write-tracked
+	 * by a third party.  Reading indirect_shadow_pages without holding
+	 * mmu_lock is safe, as this is purely an optimization, i.e. a false
+	 * positive is benign, and a false negative will simply result in KVM
+	 * skipping the unprotect+retry path, which is also an optimization.
+	 */
+	if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
+		return false;
+
 	if (!vcpu->arch.mmu->root_role.direct)
 		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);

From patchwork Sat Aug 31 00:15:25 2024
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:25 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-11-seanjc@google.com>
Subject: [PATCH v2 10/22] KVM: x86: Move EMULTYPE_ALLOW_RETRY_PF to x86_emulate_instruction()
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

Move the sanity checks for EMULTYPE_ALLOW_RETRY_PF to the top of
x86_emulate_instruction().  In addition to deduplicating a small amount
of code, this makes the connection between EMULTYPE_ALLOW_RETRY_PF and
EMULTYPE_PF even more explicit, and will allow dropping
retry_instruction() entirely.
Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 862eed96cfd5..7ddca8edf91b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8866,10 +8866,6 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; - if (WARN_ON_ONCE(is_guest_mode(vcpu)) || - WARN_ON_ONCE(!(emulation_type & EMULTYPE_PF))) - return false; - if (!vcpu->arch.mmu->root_role.direct) { /* * Write permission should be allowed since only @@ -8938,10 +8934,6 @@ static bool retry_instruction(struct x86_emulate_ctxt *ctxt, if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; - if (WARN_ON_ONCE(is_guest_mode(vcpu)) || - WARN_ON_ONCE(!(emulation_type & EMULTYPE_PF))) - return false; - if (x86_page_table_writing_insn(ctxt)) return false; @@ -9144,6 +9136,11 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt; bool writeback = true; + if ((emulation_type & EMULTYPE_ALLOW_RETRY_PF) && + (WARN_ON_ONCE(is_guest_mode(vcpu)) || + WARN_ON_ONCE(!(emulation_type & EMULTYPE_PF)))) + emulation_type &= ~EMULTYPE_ALLOW_RETRY_PF; + r = kvm_check_emulate_insn(vcpu, emulation_type, insn, insn_len); if (r != X86EMUL_CONTINUE) { if (r == X86EMUL_RETRY_INSTR || r == X86EMUL_PROPAGATE_FAULT) From patchwork Sat Aug 31 00:15:26 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785736 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4CA6679FD for ; Sat, 31 Aug 2024 00:16:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063364; cv=none; b=KmwtM2TDlHGqTUyaIrf/hTmUgoCwZVgHj/2biBxjCQAwiTvQgXm0C4q9GYbMnsxnBMH+Qy6xExBGzL4DNyF8qinCqfJ5DHMuRX+NR4u5zxcQtuLkkC2SZWm1PLPwWMv/Wi5hBoc1QuLjoejT5p+IFf5UwRwC6pRxA5IQMjEMobo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063364; c=relaxed/simple; bh=Gt/hYCvtkqqT087VrypTyzp43782ZGBu82s/aGaFlhM=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Wy9S3fs+vzaTOge++lSyxIywm9p4mLJPI+zuBCj5sWa/HkzeJIiKXhiaK0wr8UT3M978+wG4TUi34AQu1ekfXdAtu6AWCJFqGri04WYpr+dN4+dOEme9janBD7Y1ZPKVH/6v//J7uEzg1iMzr+J6misjYANcVODbzkFlBwTzU7Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=4P17CCHH; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="4P17CCHH" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-2053f4938c7so6194095ad.2 for ; Fri, 30 Aug 2024 17:16:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; 
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:26 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-12-seanjc@google.com>
Subject: [PATCH v2 11/22] KVM: x86: Fold retry_instruction() into x86_emulate_instruction()
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

Now that retry_instruction() is reasonably tiny, fold it into its sole
caller, x86_emulate_instruction().  In addition to getting rid of the
absurdly confusing retry_instruction() name, handling the retry in
x86_emulate_instruction() pairs it back up with the code that resets
last_retry_{eip,addr}.

No functional change intended.
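Roughly, the folded logic in x86_emulate_instruction() ends up looking like
the snippet below (an illustrative condensation of the diff that follows, not
a verbatim excerpt):

	/*
	 * If a write-protection #PF triggered emulation and the instruction
	 * doesn't itself write page tables, unprotect the gfn and let the
	 * vCPU retry the instruction instead of emulating it.
	 */
	if ((emulation_type & EMULTYPE_ALLOW_RETRY_PF) &&
	    !x86_page_table_writing_insn(ctxt) &&
	    kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa))
		return 1;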
Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 30 +++++++++--------------------- 1 file changed, 9 insertions(+), 21 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 7ddca8edf91b..c873a587769a 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8920,26 +8920,6 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, return !(emulation_type & EMULTYPE_WRITE_PF_TO_SP); } -static bool retry_instruction(struct x86_emulate_ctxt *ctxt, - gpa_t cr2_or_gpa, int emulation_type) -{ - struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt); - - /* - * If the emulation is caused by #PF and it is non-page_table - * writing instruction, it means the VM-EXIT is caused by shadow - * page protected, we can zap the shadow page and retry this - * instruction directly. - */ - if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) - return false; - - if (x86_page_table_writing_insn(ctxt)) - return false; - - return kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa); -} - static int complete_emulated_mmio(struct kvm_vcpu *vcpu); static int complete_emulated_pio(struct kvm_vcpu *vcpu); @@ -9219,7 +9199,15 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, return 1; } - if (retry_instruction(ctxt, cr2_or_gpa, emulation_type)) + /* + * If emulation was caused by a write-protection #PF on a non-page_table + * writing instruction, try to unprotect the gfn, i.e. zap shadow pages, + * and retry the instruction, as the vCPU is likely no longer using the + * gfn as a page table. + */ + if ((emulation_type & EMULTYPE_ALLOW_RETRY_PF) && + !x86_page_table_writing_insn(ctxt) && + kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa)) return 1; /* this is needed for vmware backdoor interface to work since it From patchwork Sat Aug 31 00:15:27 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785737 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5EF0641A94 for ; Sat, 31 Aug 2024 00:16:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063366; cv=none; b=MdeGDsbBhYb+9rFpiGq2/kiJ0gUomLbcRsA78rgHBLlaMm9SvIcZV8vTK4bjDWh/unI0Hq1Ycs5yVeOPIxR1aysm4enfOuLihpfS7EQ1nM8Wr1V6Ib0dol4wC7axyzQmY/uPgQ/mj4RAJMkdIeH2ymkbn4Q/ODs2fC1L3bWtz6g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063366; c=relaxed/simple; bh=0ugKPXm7f0Z9tkNBlydIBvCQhLXirirTmaxUHzxFYGc=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=IPBPP2moioMe+mm1OdXIidjR1g0gw+lyxwHrAfOlBVzfM3IzLQ+rtt9zd6uSpqjld+2/9HxmUQyhtoXLdVSyTGaN4XDubfg1RxJ7LGobwi0z5g50dzoOls+mvgMBwkyNnScVCqnLn+XrShTI8BBqBcSX85qjE0CRHoqwE9fLU5Q= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=x6Ud8f+Y; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com 
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:27 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-13-seanjc@google.com>
Subject: [PATCH v2 12/22] KVM: x86/mmu: Don't try to unprotect an INVALID_GPA
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

If getting the gpa for a gva fails, e.g. because the gva isn't mapped in
the guest page tables, don't try to unprotect the invalid gfn.  This is
mostly a performance fix (it avoids unnecessarily taking mmu_lock);
for_each_gfn_valid_sp_with_gptes() won't explode on garbage input, but
unprotecting a bogus gfn is simply pointless.
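Sketch of the added guard (condensed from the diff below, for illustration
only):

	if (!vcpu->arch.mmu->root_role.direct) {
		gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL);
		/* Translation failed, so there is no gfn to unprotect. */
		if (gpa == INVALID_GPA)
			return false;
	}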
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index dd62bd1e7657..ee288f8370de 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2729,8 +2729,11 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa) if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages)) return false; - if (!vcpu->arch.mmu->root_role.direct) + if (!vcpu->arch.mmu->root_role.direct) { gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); + if (gpa == INVALID_GPA) + return false; + } r = kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); if (r) { @@ -2749,6 +2752,8 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva) return 0; gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL); + if (gpa == INVALID_GPA) + return 0; r = kvm_mmu_unprotect_page(vcpu->kvm, gpa >> PAGE_SHIFT); From patchwork Sat Aug 31 00:15:28 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785738 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8E178AD21 for ; Sat, 31 Aug 2024 00:16:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063369; cv=none; b=EVjWlaNOWZP/k4ax2hVmIPQzLTwnt6YY1GGA1XC44Uw1DbZxpxDSybud+D2lwHcxaFpvmv0VmZmVCm0wB1sfRM0fp5bOGcsliZTACwAOkcNZj/zMhCEoG1fsTymyoab/bAXCYGLBV3i4vhU1iqjfpK3HDAxL6gGnj6rkgJj3A70= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063369; c=relaxed/simple; bh=LzpJ/JVED9Xy/kfec+E0sqmDsVm4jZ8PMfDy2LTQRhA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=cA22eJuhMyNRtA+4DXibdSa0t91QxzhKbDpZ8rE1IzR3rRkg41a9du59SEW9whl+LkkfMMba3zkan1A9GzyiajDhrp8x5DCNQYxWaPV7h/bkXEmLEL62U2A3rV5iY4r+zA9CxqjLOJJfBLrf5xWv58befWH9HdYPM+juBNQrFW8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=cGo2+jyr; arc=none smtp.client-ip=209.85.210.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="cGo2+jyr" Received: by mail-pf1-f201.google.com with SMTP id d2e1a72fcca58-7174080fb23so25669b3a.3 for ; Fri, 30 Aug 2024 17:16:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063368; x=1725668168; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=i1F/rHShfrDTgNHmv4cG1toMKdrxBDoSppyZEKY7y+E=; b=cGo2+jyrnJ7A6D0zsP/8TWXvElPKXRdHnVxx4M30gHzMwr6sjOoERCtL/GVa6EJIZt 3q4b4lkWMrPO8CFxRBjC0r6cP2AKMqWGiaEFxDhfPAcqw2+j5Pu94ZiTmXO/ks8THxev 52CFV4LvzYVEmIjaNVeF3eudhcI3fbIdIKYBYKoF7GJEJPTuVGTkReNq3yJj7OaP3OfI 
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:28 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-14-seanjc@google.com>
Subject: [PATCH v2 13/22] KVM: x86/mmu: Always walk guest PTEs with WRITE access when unprotecting
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

When translating a gva to a gpa in order to unprotect the associated gfn
while an event is awaiting reinjection, walk the guest PTEs for WRITE, as
there's no point in unprotecting the gfn if the guest is unable to write
the page, i.e. if write-protection can't trigger emulation.

Note, the entire flow should be guarded on the access being a write, and
better yet should be conditioned on actually triggering a write-protect
fault.  This will be addressed in a future commit.
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index ee288f8370de..b89e2c63b435 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2751,7 +2751,7 @@ static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva) if (vcpu->arch.mmu->root_role.direct) return 0; - gpa = kvm_mmu_gva_to_gpa_read(vcpu, gva, NULL); + gpa = kvm_mmu_gva_to_gpa_write(vcpu, gva, NULL); if (gpa == INVALID_GPA) return 0; From patchwork Sat Aug 31 00:15:29 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785739 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 716D91D131B for ; Sat, 31 Aug 2024 00:16:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063371; cv=none; b=t3hnMVXeua84erizQQn+JEL/18fEsnpsBvjVm2F68HVunUHtVbrLgl9DzgbcM0luU8jKq1lTbXaaOCRzFHUuOmG0A0N+OfXpjNZqsVAYJosUjDeLjjSHYzf3oqgZoZb0kncgn428x4jspD5Hv1Q0mdEhCspe9Rk9i9xXfI+KVrc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063371; c=relaxed/simple; bh=uB8TtNeHx27Ad7EM5biaxlZheZEshJ2mMvs5MEtYNi4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ptvulGhph1n3Cw0gDGJqdVUwo6Z6nZLYBhtIv641N/zwBzSDiD8Fvcg2RcDdOCKduHVMGgkHZNiUA8wJY3SzIosQMZ5895GvluKkPX2rZ6djIKp9R2olcmcfe1VeAJK47EgTr4d3BPdNJVfihbgk1OpnzgTnxWQa253w3R1LTWM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=jHFiApLp; arc=none smtp.client-ip=209.85.210.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="jHFiApLp" Received: by mail-pf1-f201.google.com with SMTP id d2e1a72fcca58-714290c2b34so2321129b3a.1 for ; Fri, 30 Aug 2024 17:16:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063370; x=1725668170; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=9O4VEwldMzhpZYdn76eUeYyG7aK2ZiOdW5UELJTylQg=; b=jHFiApLpCRUo8xRYDgJtKyqNk7XnmPjUX4PCYfQuEKViL5h4HT6fkJwUzJ2Qm9uA61 a+jKdWS8ZREMNT9XJy9Opvm1HTYAnUshHZqLd66vYKp4oJPBLEBfcjr31nL6sKcTn8Df tDceGS7gVkbaEyqxPpOK5pzDNrsSktjEo1ScIp0QqqGsy52m1z8x+iEtKcbKHGZQj9rQ 8WEJTfARdI++iMlO3awEhNJRKcg7JM4w/BgNarqHRcXSI2ymivZF8TTh8x09zVgBy0vx 9hB6fkrIjaIAX5FftWDWy6jCpPsqzOxQBQKTbzcOCmVHm8dbmmTWcJtCJoD4rDNKem/4 W7Kw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063370; x=1725668170; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; 
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:29 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-15-seanjc@google.com>
Subject: [PATCH v2 14/22] KVM: x86/mmu: Move event re-injection unprotect+retry into common path
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

Move the event re-injection unprotect+retry logic into
kvm_mmu_write_protect_fault(), i.e. unprotect and retry if and only if
the #PF actually hit a write-protected gfn.  Note, there is a small
possibility that the gfn was unprotected by a different task between
hitting the #PF and acquiring mmu_lock, but in that case KVM will resume
the guest immediately anyway because KVM will treat the fault as spurious.

As a bonus, unprotecting _after_ handling the page fault also addresses
the case where installing a SPTE to handle the fault encounters a
shadowed PTE, i.e. *creates* a read-only SPTE.

Opportunistically add a comment explaining what on earth the intent of
the code is, based on the changelog of commit 577bdc496614 ("KVM: Avoid
instruction emulation when event delivery is pending").
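The resulting trigger condition in kvm_mmu_write_protect_fault() can be
summarized as follows (an illustrative condensation of the diff below):

	/*
	 * Unprotect+retry if a direct (TDP) fault wrote to a gfn that KVM is
	 * tracking as a guest page table, or if a shadow-MMU fault hit while
	 * an event is awaiting reinjection, so that KVM doesn't emulate
	 * multiple instructions and delay the injection.
	 */
	if (((direct && is_write_to_guest_page_table(error_code)) ||
	     (!direct && kvm_event_needs_reinjection(vcpu))) &&
	    kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa))
		return RET_PF_RETRY;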
Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 28 ++++++++-------------------- 1 file changed, 8 insertions(+), 20 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index b89e2c63b435..4910ac3d7f83 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2743,23 +2743,6 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa) return r; } -static int kvm_mmu_unprotect_page_virt(struct kvm_vcpu *vcpu, gva_t gva) -{ - gpa_t gpa; - int r; - - if (vcpu->arch.mmu->root_role.direct) - return 0; - - gpa = kvm_mmu_gva_to_gpa_write(vcpu, gva, NULL); - if (gpa == INVALID_GPA) - return 0; - - r = kvm_mmu_unprotect_page(vcpu->kvm, gpa >> PAGE_SHIFT); - - return r; -} - static void kvm_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp) { trace_kvm_mmu_unsync_page(sp); @@ -4630,8 +4613,6 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code, if (!flags) { trace_kvm_page_fault(vcpu, fault_address, error_code); - if (kvm_event_needs_reinjection(vcpu)) - kvm_mmu_unprotect_page_virt(vcpu, fault_address); r = kvm_mmu_page_fault(vcpu, fault_address, error_code, insn, insn_len); } else if (flags & KVM_PV_REASON_PAGE_NOT_PRESENT) { @@ -6039,8 +6020,15 @@ static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, * Note, this code also applies to Intel CPUs, even though it is *very* * unlikely that an L1 will share its page tables (IA32/PAE/paging64 * format) with L2's page tables (EPT format). + * + * For indirect MMUs, i.e. if KVM is shadowing the current MMU, try to + * unprotect the gfn and retry if an event is awaiting reinjection. If + * KVM emulates multiple instructions before completing event injection, + * the event could be delayed beyond what is architecturally allowed, + * e.g. KVM could inject an IRQ after the TPR has been raised. 
*/ - if (direct && is_write_to_guest_page_table(error_code) && + if (((direct && is_write_to_guest_page_table(error_code)) || + (!direct && kvm_event_needs_reinjection(vcpu))) && kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa)) return RET_PF_RETRY; From patchwork Sat Aug 31 00:15:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785740 Received: from mail-yw1-f202.google.com (mail-yw1-f202.google.com [209.85.128.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BD8488528F for ; Sat, 31 Aug 2024 00:16:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063374; cv=none; b=GZ8Rrs4Nn5mQVj/peh7frHgTG8yODwKWYfDjeI9IMxnc2/8or+dagWPr6LqT+Yc+g9Fv9tlFV6FmzKynYdIoct7/KHRSOThOXOGFvPepSewo8LVSEuCHKEmTf2jLcoXnhrNbjLqpK+UWA/N1WNfDbfz1j95SiVBDZnQsD1GshQs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063374; c=relaxed/simple; bh=oLv8xcV7npfhCNOsBP8X1Fn7qxQks0etkV0j5AxfzOM=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=fhTzwT3rucV8NBTCSVjLoLp4k3rfgq3qSwjggvS+8gTYNBQWtCjNbEquwAA4KXIo8wKn/JEqNkzrCBopaDzlGFS/e6/zumTt9JU/PX1Y0lE3ZhJr8/OhNjsA/02+cAPJH5jjDYbG99JPaDiPN5g7wZRf7va2nKqS4gBg7kT0tWc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=XyncVTSp; arc=none smtp.client-ip=209.85.128.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="XyncVTSp" Received: by mail-yw1-f202.google.com with SMTP id 00721157ae682-6c3982a0c65so43852157b3.1 for ; Fri, 30 Aug 2024 17:16:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063372; x=1725668172; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=u3jsbE6XP6TTJxd7m+VzZ7IpksW0atkO59oovcuh+cU=; b=XyncVTSp9OOQ1hTBku60LBAwfl6MAAxDKr4rDWj8Ap5Tlwj+RvpIgXkjVvVBLjSNQe leSkZLfcw1Nz4h7kH/iJGvdOjp5Exa2BcGf+kKvZ+jM2ZVGBXBEiKKMz6/lLr4pe8vh6 GunlKsH/MWIMUfoJLkZL0U8JupRljNPFK60flwqEJfSK0YSy66O1YY3QfoWanI9/n+aB 5fKwSZaGxXK72XdmgnfsBdUsj2vcpczy1eGe5yfeJh2Iq0JsAuTaqPle2mqA+v8cHHkO AkElyQX+g+43wdOvNyY0wE6eRbUHnQQ0mgnk8dqNGPkHWcNgKVMnvmXcKG7el3uDYou1 xMxg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063372; x=1725668172; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=u3jsbE6XP6TTJxd7m+VzZ7IpksW0atkO59oovcuh+cU=; b=qfm4O2OELzySUpXpomkupzh9gNHCBANhM/3ffDAJs0zrU1Ha+763nOaL8kdorLZO2m SgKMdA+TQ2RIO2/L1Ov9GhD4lql6ly+tirqQSzT2naV8WKzabIiq4N0yQd5Esmw7WxmJ LcSq1VpAO0qj1MRXlhNN98m5wVvXiN2vY8+j/1y3NgJx5ZdhKIBd/n01KD8W3H9qyeWC lIgd2PCW5JVhLH1yOmngvaORKh+0cZq+H7wc6fPOHbPaZv42u2rC7zOkQ7Zj8+8AzJro 
Reply-To: Sean Christopherson
Date: Fri, 30 Aug 2024 17:15:30 -0700
In-Reply-To: <20240831001538.336683-1-seanjc@google.com>
Message-ID: <20240831001538.336683-16-seanjc@google.com>
Subject: [PATCH v2 15/22] KVM: x86: Remove manual pfn lookup when retrying #PF after failed emulation
From: Sean Christopherson
To: Sean Christopherson , Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao

Drop the manual pfn lookup when retrying an instruction that KVM failed
to emulate in response to a #PF due to a write-protected gfn.  Now that
KVM sets EMULTYPE_ALLOW_RETRY_PF if and only if the page fault hit a
write-protected gfn, i.e. if and only if there's a writable memslot,
there's no need to redo the lookup to avoid retrying an instruction that
failed on emulated MMIO (no slot, or a write to a read-only slot).

I.e. KVM will never attempt to retry an instruction that failed on
emulated MMIO, whereas that was not the case prior to the introduction
of RET_PF_WRITE_PROTECTED.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c873a587769a..23be5384d5a5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8861,7 +8861,6 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 				  int emulation_type)
 {
 	gpa_t gpa = cr2_or_gpa;
-	kvm_pfn_t pfn;
 
 	if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF))
 		return false;
@@ -8881,23 +8880,6 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		return true;
 	}
 
-	/*
-	 * Do not retry the unhandleable instruction if it faults on the
-	 * readonly host memory, otherwise it will goto a infinite loop:
-	 * retry instruction -> write #PF -> emulation fail -> retry
-	 * instruction -> ...
-	 */
-	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
-
-	/*
-	 * If the instruction failed on the error pfn, it can not be fixed,
-	 * report the error to userspace.
- */ - if (is_error_noslot_pfn(pfn)) - return false; - - kvm_release_pfn_clean(pfn); - /* * If emulation may have been triggered by a write to a shadowed page * table, unprotect the gfn (zap any relevant SPTEs) and re-enter the From patchwork Sat Aug 31 00:15:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785741 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CF4B9136E37 for ; Sat, 31 Aug 2024 00:16:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063376; cv=none; b=alEkMRGkoiLjElRKkPmRqH+JCseeZRGdyhs+pf5J0y95M7npS6xvyvSjKAWv3ZGg8uwf2otNOwLlpwXxo8m3m6B4VwuWN+iz14Q6x8Gx7GYLbhqgwtZOT4qPqdTOzWMMMnUe5CXIjO3rm0RKo+CZW1dlu8yX2F+OL6Viwfjc0k4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063376; c=relaxed/simple; bh=03HGP+MpByrZ7kzxrpdb3wvh41fpQldermKrn3cjMQg=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=JmdizYTA4OZj8TYjCOb3oiJtfwmHujtD1mO3F7W4UDP1Kt/gT6LvJRCtJ8cM4V5dCa0Qcp/ILUnw+4sGPyU2B+Xs37kGli9qwjDe0XsKbAm5HJ1etQlW0/c6YQ2DyVuojVfnm+4t2AcIDesGMDpa7hukqoNSxAGSgUA8oMmq9+w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=OLlVQGX4; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="OLlVQGX4" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-71439227092so2565210b3a.3 for ; Fri, 30 Aug 2024 17:16:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063374; x=1725668174; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=NOgdVNwQvUHFMsxLXjZ7EYJ7UIxi1DX2us/omkckfEs=; b=OLlVQGX4/ynkZOD+R7thBD3Acongqwxk1ZaX2reavQ+yWutoQ8sucH+A9BtccsKz26 W5bOfSw7BidX4CyHQ0OGYU07NKxpmMXzNT/8XRELr08HAsBLGgEUk2QYVJonUvuUOssX BW8NHZtmHjhhL/J2t2ORjGZM+tuAj11l/AEGOImE4wyvCh2cR69gGwV02Jnl32zYEF54 WuC39hI3Et1Pt+9W0l+/9x19gjMWB+Qb8H8McniOyv8AkyJsGxLUBV97PJgFt+1Da2UM eA+3j9HlnSsfOuTUU5jy8RZp6qYy6H3s37GlMSB5GSfHtzCESfJ9/jt0ewQWx7Rtfn2f mlpA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063374; x=1725668174; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=NOgdVNwQvUHFMsxLXjZ7EYJ7UIxi1DX2us/omkckfEs=; b=EPTyMqw8CsTKrGTzFCytW70JD7Ay1mmaDexOnr/nz92w2fYEKtZyFaGWZsyQdvvlDX RODG2mYWhEyDDwo1WGidZdh9ud1MX5r/tdmHtuGN+sxJtP97s8lPfa3cfOxfXCjxsZ8m lVgvwnJqbhNg4Sx8skhE1amct4NI6OfMF8hrg2QcPySNmsIA8s01urMHtNuc94XOY4Pv bUy7UVD6tV9YG08ML1Xq5MxxyOlpk9t9seXI5iEtS7EOGLXwbAWH3xVSAi/a4+H8KPVI 
VuMn8V/h8eE2zjECwV0SpIAqY5yBTiHHl6mLaUNqWxUIK7iFlhoYhi9hF+U8WaIfnxY/ CzjQ== X-Gm-Message-State: AOJu0YwTORtU40Pac2pD9UU2ENnH19gK2RWYy8xa0wB5nIHwkMpHAn2J W8gXfVT3bciCq5vBuUqld2gSvXaqp78pqxMrY/D2GaSRoRfWZ3Qmpmsi7czvn0eHxgcTCOxA3Le FOA== X-Google-Smtp-Source: AGHT+IHOxjcI5NDjaRDWXOQy4s+IAV1B1/GPpRSOPNEGZ+Qxs8WEg4mv+/mZzSMwknbgpQWTaR+XmVIYezg= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:6a00:9444:b0:70d:2a6e:31cb with SMTP id d2e1a72fcca58-7173072f5a5mr11018b3a.3.1725063373855; Fri, 30 Aug 2024 17:16:13 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:31 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-17-seanjc@google.com> Subject: [PATCH v2 16/22] KVM: x86: Check EMULTYPE_WRITE_PF_TO_SP before unprotecting gfn From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao Don't bother unprotecting the target gfn if EMULTYPE_WRITE_PF_TO_SP is set, as KVM will simply report the emulation failure to userspace. This will allow converting reexecute_instruction() to use kvm_mmu_unprotect_gfn_instead_retry() instead of kvm_mmu_unprotect_page(). Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 23be5384d5a5..ad457487971c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8865,6 +8865,19 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; + /* + * If the failed instruction faulted on an access to page tables that + * are used to translate any part of the instruction, KVM can't resolve + * the issue by unprotecting the gfn, as zapping the shadow page will + * result in the instruction taking a !PRESENT page fault and thus put + * the vCPU into an infinite loop of page faults. E.g. KVM will create + * a SPTE and write-protect the gfn to resolve the !PRESENT fault, and + * then zap the SPTE to unprotect the gfn, and then do it all over + * again. Report the error to userspace. + */ + if (emulation_type & EMULTYPE_WRITE_PF_TO_SP) + return false; + if (!vcpu->arch.mmu->root_role.direct) { /* * Write permission should be allowed since only @@ -8890,16 +8903,13 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); /* - * If the failed instruction faulted on an access to page tables that - * are used to translate any part of the instruction, KVM can't resolve - * the issue by unprotecting the gfn, as zapping the shadow page will - * result in the instruction taking a !PRESENT page fault and thus put - * the vCPU into an infinite loop of page faults. E.g. KVM will create - * a SPTE and write-protect the gfn to resolve the !PRESENT fault, and - * then zap the SPTE to unprotect the gfn, and then do it all over - * again. Report the error to userspace. + * Retry even if _this_ vCPU didn't unprotect the gfn, as it's possible + * all SPTEs were already zapped by a different task. 
The alternative + * is to report the error to userspace and likely terminate the guest, + * and the last_retry_{eip,addr} checks will prevent retrying the page + * fault indefinitely, i.e. there's nothing to lose by retrying. */ - return !(emulation_type & EMULTYPE_WRITE_PF_TO_SP); + return true; } static int complete_emulated_mmio(struct kvm_vcpu *vcpu); From patchwork Sat Aug 31 00:15:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785742 Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B95613B5B7 for ; Sat, 31 Aug 2024 00:16:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063378; cv=none; b=LcYPr7CnOaXYF2MkMom1l/oAUn6yi2AR4yjdyN7mE1DrpB2Rw4jTFxn47GADaZUIojN/O+QnprRFzVS4JYWmy18Ueox7r1yUUpjNU/cybj4yJjxcBe7/pM/KsgrujDMIUGxi6FV4YaAQPoVL7gCQBKgwlEenwpnhFyQ3imHe1oI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063378; c=relaxed/simple; bh=QaJOICfdwQ30ELkiG5OwkLM3NF9kzLFQgtTBFvcj7SY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=l50dp6EYs1JaUodDnVMg41V+ShZlF+9YWKN+g61JXOx77hpsIBFQ5oH1BxLezDNmei6MIopcNlBZAT9lLKDkRRt0g4xjTLU+AeruG3YSiMUdRpRtFepziy2vefjvgzSSGZbCk4edMelkBRDV6s2SifATzveXLTtvDZhAo3iCbQ8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=AfMQ2ACM; arc=none smtp.client-ip=209.85.219.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="AfMQ2ACM" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-e03b3f48c65so4455477276.0 for ; Fri, 30 Aug 2024 17:16:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063376; x=1725668176; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=WjHitxqWq6khw0o+yUtW8qx1qHM1xiloEtLNGgpGHyQ=; b=AfMQ2ACMQmx1Uc7lboshFETqa+gcvHqtsdxWZSxx1DXbCbX6BEUV4zrKF0vFxNVxIk KuSgRougIoYa8Pw4/+jTXAhttinwklYSNuRgW0fioobIanvSH+wxOv2zL1FV4OB4NUi/ focr48ZselAyGouzsrtnlDjZZk+2UnKjsdSowEAIqfSFVnZRIpjrJvAm6cGTv/2w18qB L0T4dBMgboCKE9unK4I1aBOkfKUAh7HpL++cDe+ex7xFcPhnWqhK9JmZ8tLxosIaXUFx 1A45gIIr0bXxFipPU+gPG3iP6g2Lo0I+QcW3aRcE35l33our+3vQCS6b24V/m+xnHxOW ufUw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063376; x=1725668176; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=WjHitxqWq6khw0o+yUtW8qx1qHM1xiloEtLNGgpGHyQ=; b=g2RvT/3ExPbcFeJIICeCT1xCOqVE2pqO4GWgcb/DCvjBMg6P013gBoFD/AFBJ4Kq8c XrS1v7s+T2yJnsCv3kb3n/13JZV8E/xkXWLUTawYJDb3joe062dPIYn5oqjA5JnPe2dI 
SybbDvnDHcBW6IftHox0hZxoF19JOzERoMkFg7MokREYnnOT51rHz6C0OjtH5t0WeCx3 WmD3LOvfL+zQK1Px8SlFgjWTupZLXi8cXw1n3+0vkAblIE6vbzIA1RV4JR5S0jFCAXEJ Ld3uDp930jJrsF0D+mnyZYUcAwFDwWF9r6GsojNslRg/KYo7kxmbVkFshtMupMpo0VaU qKWg== X-Gm-Message-State: AOJu0Yy8aizQmuC2v3d3fKxhbLE1iGK8JlCoQdSDScsRHjS4eyUhNFXU xq4AXxV1ucY+47mt4y6T2qS+riMgSvnwc6Co+wqmlELhAHekL6tNKwEWa5TPD5c0zyhLQRyb4mk pSg== X-Google-Smtp-Source: AGHT+IGo0xQgoGTC0KvVhguAmOCzE28x29RmlZpo2tFA6rgbVMj8w/GoFAp/vSeEnp4ivUddGUq0H+jFDzk= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a25:7156:0:b0:e11:5da7:337 with SMTP id 3f1490d57ef6-e1a79ff9892mr8887276.3.1725063375993; Fri, 30 Aug 2024 17:16:15 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:32 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-18-seanjc@google.com> Subject: [PATCH v2 17/22] KVM: x86: Apply retry protection to "unprotect on failure" path From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao Use kvm_mmu_unprotect_gfn_and_retry() in reexecute_instruction() to pick up protection against infinite loops, e.g. if KVM somehow manages to encounter an unsupported instruction and unprotecting the gfn doesn't allow the vCPU to make forward progress. Other than that, the retry-on- failure logic is a functionally equivalent, open coded version of kvm_mmu_unprotect_gfn_and_retry(). Note, the emulation failure path still isn't fully protected, as KVM won't update the retry protection fields if no shadow pages are zapped (but this change is still a step forward). That flaw will be addressed in a future patch. Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 20 +------------------- 1 file changed, 1 insertion(+), 19 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ad457487971c..09fc43699b15 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8860,8 +8860,6 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type) static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int emulation_type) { - gpa_t gpa = cr2_or_gpa; - if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; @@ -8878,29 +8876,13 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (emulation_type & EMULTYPE_WRITE_PF_TO_SP) return false; - if (!vcpu->arch.mmu->root_role.direct) { - /* - * Write permission should be allowed since only - * write access need to be emulated. - */ - gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); - - /* - * If the mapping is invalid in guest, let cpu retry - * it to generate fault. - */ - if (gpa == INVALID_GPA) - return true; - } - /* * If emulation may have been triggered by a write to a shadowed page * table, unprotect the gfn (zap any relevant SPTEs) and re-enter the * guest to let the CPU re-execute the instruction in the hope that the * CPU can cleanly execute the instruction that KVM failed to emulate. 
*/ - if (vcpu->kvm->arch.indirect_shadow_pages) - kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); + kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa); /* * Retry even if _this_ vCPU didn't unprotect the gfn, as it's possible From patchwork Sat Aug 31 00:15:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785743 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EDB3A13C3CD for ; Sat, 31 Aug 2024 00:16:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063380; cv=none; b=AhG16eTcBkuc2wKzMfWZhBU2dGEpckfknWALsn25q4fPtuJhOgiE9pLDchftmXE3LmyzgMbEf8XvNliy0OzE3WV/PAp+i2mXGkuuxaNY6mlEV7rRt2WUD3pVXn0xiGkwnLYKvZ999ciH8TIQZbtlNLKFciHzncgNR9qDSzJpBYs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063380; c=relaxed/simple; bh=nHlj+xD/ONShNKxAH4EaHG4dggMKZLPwwDJ8pS5wCL4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=gqXbDTGgmHJsA2MqZxZzOEAnATkHt8rEzCE8H/BZHuRdT8X4LFqpLCJ6dfzU8ljOUq6ok8jkXJHkf0MkRpU7HCgzZjqh9W3gvBsS/u/w8wV5J/oz3cLXlrRjoLAtLMjrPCJ1K/kKDNrrCQsLrX4fydGLWt28EjCtIHgYiAl0YB0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=FrIfmNBw; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="FrIfmNBw" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-2021ab2b5e6so21041465ad.0 for ; Fri, 30 Aug 2024 17:16:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063378; x=1725668178; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=hU77OYTkUMLfRxkahmddxieOilGttKEAZ+XNkDI94lk=; b=FrIfmNBwOJ/nIvezraVFZufKwvOVxQm1l8/QO+M9mFHsqLxlCK9p75qcKmyXEmfgA0 wBSG6N0VwE2b8qshg+LYXO/XWXbWVREW+OinUSKFWDO3flfDWkJG+tWnAKPozOtzbtoS qUUDHH/emk6lvMftMWezRoik4HA8WZUiLPoZb/3FLkq9uqg26uB1VqRobM/a5rrMlCuR u4wifjqarxdc+A+lGa9R58RXuVfPjnEM0FdEo1tUeH3Q8L9wChZ371DlQfe5AIu3jR2J f4kWaKrNtz8vDcmzJyrmDp1itu9+KdFkcBm+VDruDl9UzbukGbjJwJ3Upvxky9bh/H5z 1u0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063378; x=1725668178; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=hU77OYTkUMLfRxkahmddxieOilGttKEAZ+XNkDI94lk=; b=Rr9d7tOFtaKaFal5FMQEy4ilFy1NUdXBUVzMR9TqZ5K2P6X4mvLHVBi9/wLJj9qsqn u2oebDopziVdk3Dmp2IOEns+mzYKcTKatMyw+ZZtJjQkoPDdg8lqEWPgNvd7hgz7SZts qGd1n5MAXbOKkC1m8DaqjDCcw5zsg1Gj6Rxq38WSZps1zmynewev8nIdSNPGOAWqHqlR 5Vg1IOoZr7A+ee1JISNPxM/JvMQTBqeo8so27a7aGYFVfijTvetHRXyMIDDa1J94ZGiL 
rkf6a3juqjEZMI34h0PLR4pZB0CbmgrnmZqlKoopcfY1L38NE0hDsGZ5WQSuVva5yVDJ appg== X-Gm-Message-State: AOJu0YyE5344V3gomr2lWBBmz0RfegmceeSbNkqptoAvuGQCEHFEiYB4 UX+uTgK2krF6XObKakC72VxZJD9HBHD21DyPwXeaLuObzz2+eiQQ36QVs0+RAAjF5sI2NLEka82 Mog== X-Google-Smtp-Source: AGHT+IHblxeHkSJzB63Sh5fAMNGdMNQ2SfoqcaQ/5BRDjPe/OytJTEsD0IQrqIW87OaHEx0tHfAmrT3IG7M= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:903:1d1:b0:202:70f:641a with SMTP id d9443c01a7336-20527626efdmr976465ad.2.1725063378157; Fri, 30 Aug 2024 17:16:18 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:33 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-19-seanjc@google.com> Subject: [PATCH v2 18/22] KVM: x86: Update retry protection fields when forcing retry on emulation failure From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao When retrying the faulting instruction after emulation failure, refresh the infinite loop protection fields even if no shadow pages were zapped, i.e. avoid hitting an infinite loop even when retrying the instruction as a last-ditch effort to avoid terminating the guest. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 10 +++++++++- arch/x86/kvm/mmu/mmu.c | 12 +++++++----- arch/x86/kvm/x86.c | 2 +- 3 files changed, 17 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 2c3f28331118..4aa10db97f6f 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2135,7 +2135,15 @@ int kvm_get_nr_pending_nmis(struct kvm_vcpu *vcpu); void kvm_update_dr7(struct kvm_vcpu *vcpu); int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn); -bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa); +bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, + bool always_retry); + +static inline bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, + gpa_t cr2_or_gpa) +{ + return __kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa, false); +} + void kvm_mmu_free_roots(struct kvm *kvm, struct kvm_mmu *mmu, ulong roots_to_free); void kvm_mmu_free_guest_mode_roots(struct kvm *kvm, struct kvm_mmu *mmu); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4910ac3d7f83..aabed77f35d4 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2713,10 +2713,11 @@ int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn) return r; } -bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa) +bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, + bool always_retry) { gpa_t gpa = cr2_or_gpa; - bool r; + bool r = false; /* * Bail early if there aren't any write-protected shadow pages to avoid @@ -2727,16 +2728,17 @@ bool kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa) * skipping the unprotect+retry path, which is also an optimization. 
*/ if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages)) - return false; + goto out; if (!vcpu->arch.mmu->root_role.direct) { gpa = kvm_mmu_gva_to_gpa_write(vcpu, cr2_or_gpa, NULL); if (gpa == INVALID_GPA) - return false; + goto out; } r = kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); - if (r) { +out: + if (r || always_retry) { vcpu->arch.last_retry_eip = kvm_rip_read(vcpu); vcpu->arch.last_retry_addr = cr2_or_gpa; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 09fc43699b15..081ac4069666 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8882,7 +8882,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, * guest to let the CPU re-execute the instruction in the hope that the * CPU can cleanly execute the instruction that KVM failed to emulate. */ - kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa); + __kvm_mmu_unprotect_gfn_and_retry(vcpu, cr2_or_gpa, true); /* * Retry even if _this_ vCPU didn't unprotect the gfn, as it's possible From patchwork Sat Aug 31 00:15:34 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785744 Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1D03314A4E0 for ; Sat, 31 Aug 2024 00:16:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063382; cv=none; b=GgoKIGIz4lbie+pwVDr8u99dQNHOAQMKXPvvf4HVUPaNLSwipHnjshlqcYG+C7NYxQWQZlvqIlkgro6snVoq77K9yioATUQ9BG3fuI/lDEC6vnsAaRj6r/LbUIK2tZguT8+IJPu/c9mXzKhgRoslAN8lN6UKF4JgrWGzaWR68S8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063382; c=relaxed/simple; bh=ccmI6fgXaS6ULZCMeBWyQaXyCbh7+cmAtQZQ72hgJko=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=OW2ahEw02ZCqEMpsyqCLIerXSnoIj2fUcMEUPTKQNtQCKH1gWHot1akOLrhvib4QwpOPof9x60dGZxZyaEATxtoTkNMOUW4Xvw4lYk3rogRZRRU2e7bni5tj9DczWfqzyRgqV2GcXMplTjHm5GGXla8FfPAdRI8nG4VUkgKgLX0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=fhaWi+E1; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="fhaWi+E1" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-6b41e02c293so46031427b3.0 for ; Fri, 30 Aug 2024 17:16:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063380; x=1725668180; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=heJP/gE8sNgyHlf3GBWAi72kmE/mtNKhFddnYBUVTOE=; b=fhaWi+E106CCFd5rgQtGX4joRjEjC0gmVzUlr+pTI5dSIPC7JrJoYsPFdc4PK8EFVt XKXw6KPkvSrYdFC0bNPtLpxTtE2paLsNrSHNoEBHfzjBpw1IvHmlx75IkqFufOKbUJlI 
yv6wHagtf98IUSXyoXqXnzxkz+mTcwh4Y++lnVlKvXKheb21V9q5uTXhbZuTZdw/tUfI cHZb8vH+y3lkUZGxXWS7YQ8+VLvM1ntiaFdNR5/U+Ofc5CHqNAMyvUnmKSVyD8NkJvVq VszHbjvT4g/Oogn1S/+m2xKGKbUpQmRYML14h48qy2b/DYAGRUTR9/8Jfo5i6nqD4V1A pWeQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063380; x=1725668180; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=heJP/gE8sNgyHlf3GBWAi72kmE/mtNKhFddnYBUVTOE=; b=Tj9h5nPEYYkR0hWsjtf4DgU4ldV7/5PdrKZ6XEkWJWMJy/qTBdVj7IdzFPQ81lZYFb wqlvwgnYT604viu3e2GdRqjZnIgl9Ea5WVfjDFgXgKlqzSO+38jB6o5HjEDYzlUv8sgB 0W5luVEbnDm1A+rm1SJXTxbZJhLHsAAWjtVnmQ9J12OfATnSWF9mcDs8nxdYQ290DLSg O9t1lOk5mM3W4bPMEN9bQP+qw+mn8eJjQlvopEqwWK7xwFZyhg850GzbNfQj3ovp9m4e 9ckGUJeoRM/WlOAjJ/5IegLcUTolMp6Tx+5U2+iKDCB9GgOeW/zf9/5bWwEGc3qDAXTn r//g== X-Gm-Message-State: AOJu0YxR+1fpTKPzOHxXqmnYNpbEXhignVW8ohObJpiD/wbBKpfE1c4/ sgd/nGioB1xB0yTMlkOuxrsptxLDMm2HxcNYnA/bwKOBRlhA4Fe872fnGftvSh+jLk6tFV5geSm 9bA== X-Google-Smtp-Source: AGHT+IHiBKbgkYqyJRDMcIF9+XogmVSPipW/TOpEzV4NmLy2XzHmDX3Zwx+y82tht7UQeTFU4iZAMrMnrmI= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a05:690c:6103:b0:62c:ea0b:a447 with SMTP id 00721157ae682-6d40d88f5d9mr2178957b3.2.1725063380216; Fri, 30 Aug 2024 17:16:20 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:34 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-20-seanjc@google.com> Subject: [PATCH v2 19/22] KVM: x86: Rename reexecute_instruction()=>kvm_unprotect_and_retry_on_failure() From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao Rename reexecute_instruction() to kvm_unprotect_and_retry_on_failure() to make the intent and purpose of the helper much more obvious. No functional change intended. 
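The loop protection that the previous two patches wire into the emulation-failure path comes down to two pieces: record the (RIP, fault address) pair whenever a retry is granted, even if nothing was zapped, and refuse a second retry for the exact same pair. Below is a minimal, self-contained userspace model of that idea; it is not kernel code, the model_* names and the reduced struct are stand-ins for vcpu->arch.last_retry_{eip,addr}, kvm_rip_read() and kvm->arch.indirect_shadow_pages from the diffs, and model_retry_is_allowed() approximates the last_retry_{eip,addr} check referenced in the earlier comments.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical, reduced stand-in for the vCPU state used by the unprotect
 * and retry path; only the fields that matter for loop protection.
 */
struct model_vcpu {
        uint64_t rip;                   /* models kvm_rip_read(vcpu) */
        uint64_t last_retry_eip;
        uint64_t last_retry_addr;
        int indirect_shadow_pages;      /* models kvm->arch.indirect_shadow_pages */
};

/*
 * Mirrors the bookkeeping added to __kvm_mmu_unprotect_gfn_and_retry(): the
 * (RIP, fault address) pair is recorded whenever a retry is requested,
 * including the always_retry "last-ditch effort" case, even if nothing was
 * actually zapped.  The zap itself is reduced to a boolean parameter here.
 */
static bool model_unprotect_and_retry(struct model_vcpu *vcpu,
                                      uint64_t cr2_or_gpa,
                                      bool always_retry, bool zapped)
{
        bool r = false;

        if (vcpu->indirect_shadow_pages)
                r = zapped;             /* stands in for the zap loop */

        if (r || always_retry) {
                vcpu->last_retry_eip = vcpu->rip;
                vcpu->last_retry_addr = cr2_or_gpa;
        }
        return r || always_retry;
}

/*
 * The other half of the protection: refuse a second retry for the exact same
 * instruction and fault address, so a retry that makes no forward progress
 * cannot loop forever.
 */
static bool model_retry_is_allowed(const struct model_vcpu *vcpu,
                                   uint64_t cr2_or_gpa)
{
        return vcpu->rip != vcpu->last_retry_eip ||
               cr2_or_gpa != vcpu->last_retry_addr;
}

int main(void)
{
        struct model_vcpu vcpu = { .rip = 0x1000, .indirect_shadow_pages = 0 };

        /* Emulation failure path: nothing is zapped, but always_retry is
         * set, so the fields are still refreshed and a second identical
         * retry is refused. */
        model_unprotect_and_retry(&vcpu, 0xdead000, true, false);
        printf("second retry allowed: %d\n",
               model_retry_is_allowed(&vcpu, 0xdead000));
        return 0;
}
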
Signed-off-by: Sean Christopherson --- arch/x86/kvm/x86.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 081ac4069666..450db5cec088 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8857,8 +8857,9 @@ static int handle_emulation_failure(struct kvm_vcpu *vcpu, int emulation_type) return 1; } -static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, - int emulation_type) +static bool kvm_unprotect_and_retry_on_failure(struct kvm_vcpu *vcpu, + gpa_t cr2_or_gpa, + int emulation_type) { if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; @@ -9125,8 +9126,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_queue_exception(vcpu, UD_VECTOR); return 1; } - if (reexecute_instruction(vcpu, cr2_or_gpa, - emulation_type)) + if (kvm_unprotect_and_retry_on_failure(vcpu, cr2_or_gpa, + emulation_type)) return 1; if (ctxt->have_exception && @@ -9212,7 +9213,8 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, return 1; if (r == EMULATION_FAILED) { - if (reexecute_instruction(vcpu, cr2_or_gpa, emulation_type)) + if (kvm_unprotect_and_retry_on_failure(vcpu, cr2_or_gpa, + emulation_type)) return 1; return handle_emulation_failure(vcpu, emulation_type); From patchwork Sat Aug 31 00:15:35 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785745 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 754D514E2F6 for ; Sat, 31 Aug 2024 00:16:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063383; cv=none; b=lGfh9e7i6tR5WMncHQDoBanFXnvpQHh6126N5hT/Wbe5i8mCaH/64k1YII53QmrgcrLarx57sK0g1PTBJWtCTn9i3BSwaJ/9PgAM1ymgmos0GeGXl/qzLiEs2KVAR6zRHUJz9YcOArzX2TGbkzyUdjCEwp19CgGlNmtKWVJIJ90= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063383; c=relaxed/simple; bh=jbNYq8nf4cToP9nG3R9NtHXCFO1UUWSTXyZK1MFowyM=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Cj6zOwRXcki+iU9tuBcKS5xOn308e0k46jrC7VO9GjfDneazSVt4zvtBVCCHtfbbUIGWgQzdtTATdP5fDKmSU9N8S/L4H+8MJbfREqlw3UUr+dyYnNUEiA9NiPXVvZuqDQyPxKazZmnWbO6eMv/7nK+zQm7TB1WAPFLVzked7sU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=gqxJs9zS; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="gqxJs9zS" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-2d88116d768so826418a91.0 for ; Fri, 30 Aug 2024 17:16:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063382; x=1725668182; darn=vger.kernel.org; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=cR1WOtXSuetpHOXK2S+96cMV73XM9Whtcc3KrMXdeXM=; b=gqxJs9zS7eFen9ZACMiMMglKRYkC+Xb7ZK7pYkLyKdJTjggbmQvEHqEXPGXI9Nxq41 IdogRgDcL4iawguzYoGtGTKXVLrvIqW/c8GXBXgLxgecgUmmGQYociWuQDHLL6CebDle P3FZhsPvFOyLbtb+CoyYSP1zdWTZ4k+bF0bzf7Kx6Kwix8t2eSd9xD1XRlup0fsrdcju SzBaE9maQ65q7Fc06D6/GP2fxGzJDePck5H/kNgLrT+n48Ju5OBy3QgW5GCztmx1ansu IQjrxRvH6WvaIwSO51tzHxZ8jy37nGqXSXVyGY/y9S6K80UjVL58lPrr0AuO6KaREsLo VTHA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063382; x=1725668182; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=cR1WOtXSuetpHOXK2S+96cMV73XM9Whtcc3KrMXdeXM=; b=AevP8ggLGhINB2giIh60Tv9RUuL5E4+ww+00hKvvOpFMDFuoDmJCEwJX3CmIXfN4r4 BBXUz14b9SCym/MfZDkmXrGGAid9G/a4fCXCa2XfpgsepI1mAR6kEP+9JSDFx/rVGbll tJBQhdEmAtxmVQpwIwLVJt2xXjjI8XkgG3clRdUghZuc6yQyZ6SYQhIvbLBM1/CZCFOq PZBJupQ9lSDLF6VGKRsprVsv3UA7cf2s3eGLkQ/EYLniB5YPqricj5knftgvfkdv/wKE JUwb8PNclI3UNYJYGkCtMsvF9O6KVEozC7PpeB5AAOCQwCuc3ad/Wu/nkWWMxuNJgc7C VFKw== X-Gm-Message-State: AOJu0YzViYIUdg39ROlF0siYkU7TIETzG/fs53gHGgwn2yeIN5ymw2hh E8rEoxTLII1S2QNRDWjSvh2n3i7knkT4vtoXW5d/Mb0NWSnE5rxwm02jxkE1ZktQYLbds6x88x1 4DA== X-Google-Smtp-Source: AGHT+IGfs7gXN6qWH4tGlZzsztZguerRTLfMi+oiBPqhIYFJkNIRpCAXsHN1MLCUuyCoUdzMs9CiDW0dVcI= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:90a:bc85:b0:2d8:94d4:5845 with SMTP id 98e67ed59e1d1-2d894d4658amr1195a91.0.1725063381728; Fri, 30 Aug 2024 17:16:21 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:35 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-21-seanjc@google.com> Subject: [PATCH v2 20/22] KVM: x86/mmu: Subsume kvm_mmu_unprotect_page() into the and_retry() version From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao Fold kvm_mmu_unprotect_page() into kvm_mmu_unprotect_gfn_and_retry() now that all other direct usage is gone. No functional change intended. 
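To make the shape of the folded helper easier to follow before reading the diff, here is a small self-contained userspace model of the two-phase zap it inherits from kvm_mmu_unprotect_page(): collect every shadow page for the gfn onto a local invalid list, commit (free) the list, and report whether anything was zapped. The toy_* names and the plain linked list are stand-ins only; the real code iterates with for_each_gfn_valid_sp_with_gptes() and uses kvm_mmu_prepare_zap_page()/kvm_mmu_commit_zap_page() under mmu_lock.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a write-protected shadow page; only the gfn matters. */
struct toy_sp {
        uint64_t gfn;
        struct toy_sp *next;
};

/*
 * Models the two-phase zap the folded helper inherits: move every shadow
 * page for the gfn onto a local invalid list ("prepare"), then free the
 * list ("commit"), and report whether unprotecting did anything at all,
 * which is what the retry decision ultimately cares about.
 */
static bool toy_unprotect_gfn(struct toy_sp **pages, uint64_t gfn)
{
        struct toy_sp *invalid = NULL, **pp = pages;
        bool zapped = false;

        while (*pp) {
                struct toy_sp *sp = *pp;

                if (sp->gfn == gfn) {
                        *pp = sp->next;         /* unlink, i.e. "prepare zap" */
                        sp->next = invalid;
                        invalid = sp;
                        zapped = true;
                } else {
                        pp = &sp->next;
                }
        }

        while (invalid) {                       /* "commit zap": free the list */
                struct toy_sp *sp = invalid;

                invalid = sp->next;
                free(sp);
        }
        return zapped;
}

int main(void)
{
        struct toy_sp *sp = malloc(sizeof(*sp));
        struct toy_sp *pages = sp;

        sp->gfn = 42;
        sp->next = NULL;

        printf("zapped gfn 42: %d\n", toy_unprotect_gfn(&pages, 42));
        printf("zapped gfn 42 again: %d\n", toy_unprotect_gfn(&pages, 42));
        return 0;
}

Note that the model sets its "zapped" flag while walking the list; the next patch in the series instead checks the invalid list itself, which only works if the check is taken before the commit step empties the list.
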
Signed-off-by: Sean Christopherson --- arch/x86/include/asm/kvm_host.h | 1 - arch/x86/kvm/mmu/mmu.c | 33 +++++++++++++-------------------- 2 files changed, 13 insertions(+), 21 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4aa10db97f6f..0fbde3ca8d1a 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2134,7 +2134,6 @@ int kvm_get_nr_pending_nmis(struct kvm_vcpu *vcpu); void kvm_update_dr7(struct kvm_vcpu *vcpu); -int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn); bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, bool always_retry); diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index aabed77f35d4..d042874b0a3b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2695,27 +2695,12 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long goal_nr_mmu_pages) write_unlock(&kvm->mmu_lock); } -int kvm_mmu_unprotect_page(struct kvm *kvm, gfn_t gfn) -{ - struct kvm_mmu_page *sp; - LIST_HEAD(invalid_list); - int r; - - r = 0; - write_lock(&kvm->mmu_lock); - for_each_gfn_valid_sp_with_gptes(kvm, sp, gfn) { - r = 1; - kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); - } - kvm_mmu_commit_zap_page(kvm, &invalid_list); - write_unlock(&kvm->mmu_lock); - - return r; -} - bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, bool always_retry) { + struct kvm *kvm = vcpu->kvm; + LIST_HEAD(invalid_list); + struct kvm_mmu_page *sp; gpa_t gpa = cr2_or_gpa; bool r = false; @@ -2727,7 +2712,7 @@ bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, * positive is benign, and a false negative will simply result in KVM * skipping the unprotect+retry path, which is also an optimization. 
*/ - if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages)) + if (!READ_ONCE(kvm->arch.indirect_shadow_pages)) goto out; if (!vcpu->arch.mmu->root_role.direct) { @@ -2736,7 +2721,15 @@ bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, goto out; } - r = kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)); + r = false; + write_lock(&kvm->mmu_lock); + for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa)) { + r = true; + kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); + } + kvm_mmu_commit_zap_page(kvm, &invalid_list); + write_unlock(&kvm->mmu_lock); + out: if (r || always_retry) { vcpu->arch.last_retry_eip = kvm_rip_read(vcpu); From patchwork Sat Aug 31 00:15:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785746 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 966E7111A8 for ; Sat, 31 Aug 2024 00:16:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063385; cv=none; b=MfPtQ6LUjo/axYXwLWQxkDJyWSSH6gydzI33U+ZtRVwMzaSQKNpN+amOxrjoPtDvTmfU4Yy5YJ9vyYTsFe/FM/PJZMnnQvTwBhNu6KlvxaA8ztQ9or2SstlETiL1q51RnPamY8SNHOxhC1V/oXfCdoE3kVM0uAKXrRVyb7eQQQM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063385; c=relaxed/simple; bh=0jH6aGUfJ6nXVSowsfjSHGcTxPjzpE2W+Gg/9Z921x8=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=lKv5SEM0x6yBdPqsaRJsAt/stUKp52sPi8czfAFUOeIiAOK6Cs1XcUhloQyBiknuO5ZvgcILJ/vUJxbb3rqLvA6PzEDOI8ZM7NdGIIfLAr73ocbOjyF1m3SnnLRyAjMylpaqCqaBSlvOBch/alzfiMZsitVCTUzqQcjAHXAGolg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Mfp6PDTM; arc=none smtp.client-ip=209.85.215.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Mfp6PDTM" Received: by mail-pg1-f202.google.com with SMTP id 41be03b00d2f7-7b696999c65so2107869a12.3 for ; Fri, 30 Aug 2024 17:16:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063384; x=1725668184; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=DYB5QhEfP12JFKsEt0nbZ8L014oKTm3YJf7/Et6tdYI=; b=Mfp6PDTMjZtwF95ui7Km+FWChJKeWWZT+yE1ZWOqar5RRPsgXglDh7YnKZ8e1tUCdS Z1YKJw9s4gmLmacMPNxmIhhSyT91I3w6EkkD2HKMwaE7/HIzS3PBsPXgR2lT6XllzoQ8 wy3Um6tjd+5x7vh+J+57J0YS7Ktjjb9LAhZ5IoAJTjg8VqN39mTZ1j9F1xHU6dAG6qXj jsHPOd5/6GdpTV0ljohtsWiuZFd4PhIs1NZkBrHGoJznxYKVXI2cB4a3iScfLu5d/IL6 i7fvHI3HuFxjj4PWMUTAVXp2o2vMJlhLhD2c9Ge0hhVi7xnIF5FVkhl4LnqK0njd1jCj 0HKA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063384; x=1725668184; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=DYB5QhEfP12JFKsEt0nbZ8L014oKTm3YJf7/Et6tdYI=; b=E6UoW1kfcOlYCaONYgnsv1JhaHCNnP0Jx1z+jJyEtNz/IIQ0erSGGPTx7b6zvBwUG1 Wh951dNNmd4z0d9CV3CblZYPLtOs4hG52RmMLgaS6SVoAexQgAcERzq1MfghNxbllq1F nYhJKlC2lmtzEhfdYKmW8we1SXbkFpuXT41Q1AApvfQAu7Wr2TseO78kST3S32NtpCkE 1nagC2PVXNRsfGHZdkuE2HP6hQnZPN+fa0uwCWQ4l6BJKqpix974nWPe43t2CmWH6YJ8 WSc5a5ooESC9Po+0+XxjF7lvBceM9yBjPIdUrofLCarZP1F/cs+ucvwNBqgEUJqSE833 ONFA== X-Gm-Message-State: AOJu0YzEJa6QBjl64+S4QXBZKEqK/OgtEQXl+peQDdjDwFslb0Fubf/q mDBg+0JmS8IYBcLxNfy/Iqg1o8HaaBQxm8qYofXalMmNxISD8aAWOHGBassUZSyPMlEDm/uMpNm eVA== X-Google-Smtp-Source: AGHT+IG9Caf/Ygx0zgVnN7lUuRoljiEi/rAgaAN7Ize+jtz+IuGMkKmmeQheo0RyBKXbT93DJK9xVvlhHb4= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:e5c7:b0:1fb:716e:819e with SMTP id d9443c01a7336-20527669412mr2026595ad.4.1725063383865; Fri, 30 Aug 2024 17:16:23 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:36 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-22-seanjc@google.com> Subject: [PATCH v2 21/22] KVM: x86/mmu: Detect if unprotect will do anything based on invalid_list From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao Explicitly query the list of to-be-zapped shadow pages when checking to see if unprotecting a gfn for retry has succeeded, i.e. if KVM should retry the faulting instruction. Add a comment to explain why the list needs to be checked before zapping, which is the primary motivation for this change. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index d042874b0a3b..be5c2c33b530 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2721,12 +2721,15 @@ bool __kvm_mmu_unprotect_gfn_and_retry(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, goto out; } - r = false; write_lock(&kvm->mmu_lock); - for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa)) { - r = true; + for_each_gfn_valid_sp_with_gptes(kvm, sp, gpa_to_gfn(gpa)) kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list); - } + + /* + * Snapshot the result before zapping, as zapping will remove all list + * entries, i.e. checking the list later would yield a false negative. 
+ */ + r = !list_empty(&invalid_list); kvm_mmu_commit_zap_page(kvm, &invalid_list); write_unlock(&kvm->mmu_lock); From patchwork Sat Aug 31 00:15:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sean Christopherson X-Patchwork-Id: 13785747 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E8F5312B64 for ; Sat, 31 Aug 2024 00:16:26 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063388; cv=none; b=SYNNtJ5MY9anniReysraapWX8gyDeyCJ4PRYFz0dmAt8EQQBC234TuASroQX3pag8HnjNri42xXaZQyAHz9zq/y/h4uqGT5US5c1YMkZLjILd4eDWbs9WfsdkN7C5TkHET3UoITNVIBxdTJPGf8ufbIW8KVqFkz6Ip9qi85tUP8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725063388; c=relaxed/simple; bh=wP2W7KNm4uaZDYWeNWlAdBMglkxhSfv/7EU+XUEuEpI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=q4fX4kwjUd3aKFefiivmao3t2/qYHahk9s7aKT1qwGx2MdbelPpoKtnct96mz4xDRjQgE6lJo8zDDKXNZqCLGWXvJ2LNxIA0XJeDs5d+rCnVuwsufjd4c30yr44UCNFRUX4SegxyCPkqgG54wKP3P6bPJuyk0znoAy2lVeP9Tn8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Ksu3PKTO; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--seanjc.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Ksu3PKTO" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-1fc6db23c74so23663515ad.0 for ; Fri, 30 Aug 2024 17:16:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1725063386; x=1725668186; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:from:to:cc:subject:date:message-id:reply-to; bh=gP7K1Sev7aMPsK1RhVpHEhmgUKxxDPKWFSHlKDQHYWk=; b=Ksu3PKTOOO4mxuAo6jd9hV464GRAdKdPpv/NZVShb8FV+EIBJH7n4cduy6DLOAtgyc Je3KWlcDmVEHo2gI7bj/zw96UV0EDNiXNh+foPKA54kr07ZVkC/fy672Jqp0vLbqNnXh UgFZk4Zvu5IG5kq2AoHGtO/4gUCO0/3/c5IwzB7lUw/taTvqBjVPurLsSYai+P2E3D+k lKA9ubIk7QKIper7UQJdo5dUvaYoTPFUSLZPssWgF8xLESR96XQOSjNUyRawOAEc7AJu NCpVVklH0zp5OPCwf9VqqyybNFfgND5eJIXFgpZm6t/bKQMef8DSHCI6d3R1CizKvoth zt/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1725063386; x=1725668186; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:reply-to:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=gP7K1Sev7aMPsK1RhVpHEhmgUKxxDPKWFSHlKDQHYWk=; b=anccfzYVSPCVxZ48QvciRCiZtWx7fnF33xCmiApi60ONvcGdBgnXFTK+oCPv+Hg5z2 8HRo0J/FjdDTCDvvyLZwctyE7X1KKRVOP+h7wphDi0YlrPyHmDkWjAm7XiRwsQg3zNul 1xwzDq4nwiBio04wfy0sgIOfn/4tgrej5KMgAWz9wtl6+KFJyvuuX2aAQz2il8nhedvL rbIbgHtMOLwcYGuPz+/OEr+jLfQwruI01gMdxDVCRqaJoqRJDWRKwWEUODoUDrBhCCkO T6xLMnbA0EXX5SJWnyMsI+uV+6xUxrZzF+gnU5tQR+ApijajeSa5+aN6qz5O5alDFf28 1uuA== X-Gm-Message-State: 
AOJu0YwFuVYWib6ktsbpdWN5X4U7LMTB2ATJYfRqidGEp0XJjcY7Hvnx po/Zh9RC6GaPBtE9jeNeWpz07HnP47exT8dj6ofOl2aZvcbFQ6pqc+Jzp4f+4rn/wO8qHWHxifI 0Gg== X-Google-Smtp-Source: AGHT+IETKE6MP4mUQ/pQIHpSlTqVQLzv82JDtckTBZhGegQc2MolI+RlmMC71D58MpTAuF2igBH8CH7kJwU= X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37]) (user=seanjc job=sendgmr) by 2002:a17:902:fb0e:b0:1fb:7f2c:5642 with SMTP id d9443c01a7336-2052764073dmr769545ad.4.1725063386022; Fri, 30 Aug 2024 17:16:26 -0700 (PDT) Reply-To: Sean Christopherson Date: Fri, 30 Aug 2024 17:15:37 -0700 In-Reply-To: <20240831001538.336683-1-seanjc@google.com> Precedence: bulk X-Mailing-List: kvm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240831001538.336683-1-seanjc@google.com> X-Mailer: git-send-email 2.46.0.469.g59c65b2a67-goog Message-ID: <20240831001538.336683-23-seanjc@google.com> Subject: [PATCH v2 22/22] KVM: x86/mmu: WARN on MMIO cache hit when emulating write-protected gfn From: Sean Christopherson To: Sean Christopherson , Paolo Bonzini Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yuan Yao , Yuan Yao WARN if KVM gets an MMIO cache hit on a RET_PF_WRITE_PROTECTED fault, as KVM should return RET_PF_WRITE_PROTECTED if and only if there is a memslot, and creating a memslot is supposed to invalidate the MMIO cache by virtue of changing the memslot generation. Keep the code around mainly to provide a convenient location to document why emulated MMIO should be impossible. Suggested-by: Yuan Yao Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index be5c2c33b530..c9cea020aad6 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5990,6 +5990,18 @@ static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, vcpu->arch.last_retry_eip = 0; vcpu->arch.last_retry_addr = 0; + /* + * It should be impossible to reach this point with an MMIO cache hit, + * as RET_PF_WRITE_PROTECTED is returned if and only if there's a valid, + * writable memslot, and creating a memslot should invalidate the MMIO + * cache by way of changing the memslot generation. WARN and disallow + * retry if MMIO is detected, as retrying MMIO emulation is pointless + * and could put the vCPU into an infinite loop because the processor + * will keep faulting on the non-existent MMIO address. + */ + if (WARN_ON_ONCE(mmio_info_in_cache(vcpu, cr2_or_gpa, direct))) + return RET_PF_EMULATE; + /* * Before emulating the instruction, check to see if the access was due * to a read-only violation while the CPU was walking non-nested NPT @@ -6031,17 +6043,15 @@ static int kvm_mmu_write_protect_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, return RET_PF_RETRY; /* - * The gfn is write-protected, but if emulation fails we can still - * optimistically try to just unprotect the page and let the processor + * The gfn is write-protected, but if KVM detects its emulating an + * instruction that is unlikely to be used to modify page tables, or if + * emulation fails, KVM can try to unprotect the gfn and let the CPU * re-execute the instruction that caused the page fault. Do not allow - * retrying MMIO emulation, as it's not only pointless but could also - * cause us to enter an infinite loop because the processor will keep - * faulting on the non-existent MMIO address. 
Retrying an instruction - * from a nested guest is also pointless and dangerous as we are only - * explicitly shadowing L1's page tables, i.e. unprotecting something - * for L1 isn't going to magically fix whatever issue cause L2 to fail. + * retrying an instruction from a nested guest as KVM is only explicitly + * shadowing L1's page tables, i.e. unprotecting something for L1 isn't + * going to magically fix whatever issue caused L2 to fail. */ - if (!mmio_info_in_cache(vcpu, cr2_or_gpa, direct) && !is_guest_mode(vcpu)) + if (!is_guest_mode(vcpu)) *emulation_type |= EMULTYPE_ALLOW_RETRY_PF; return RET_PF_EMULATE;
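
As a recap of how the series leaves the write-protect fault path, here is a minimal userspace sketch of the tail of kvm_mmu_write_protect_fault() as described above. It is a model, not kernel code; the MODEL_*/model_* names are invented stand-ins for RET_PF_EMULATE, EMULTYPE_ALLOW_RETRY_PF, mmio_info_in_cache() and is_guest_mode().

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins for EMULTYPE_ALLOW_RETRY_PF and RET_PF_EMULATE. */
#define MODEL_ALLOW_RETRY_PF    (1 << 0)

enum model_ret {
        MODEL_RET_EMULATE,
};

/*
 * Sketch of the decision at the tail of kvm_mmu_write_protect_fault() once
 * this patch lands: an MMIO cache hit is treated as a "should be impossible"
 * bug and emulated without the retry fallback, and retry-on-failure is only
 * offered when the fault did not come from a nested guest, because
 * unprotecting something for L1 cannot fix whatever made L2 fail.
 */
static enum model_ret model_write_protect_fault(bool mmio_cache_hit,
                                                bool is_guest_mode,
                                                int *emulation_type)
{
        if (mmio_cache_hit) {
                fprintf(stderr, "WARN: MMIO cache hit on write-protected fault\n");
                return MODEL_RET_EMULATE;
        }

        if (!is_guest_mode)
                *emulation_type |= MODEL_ALLOW_RETRY_PF;

        return MODEL_RET_EMULATE;
}

int main(void)
{
        int type = 0;

        model_write_protect_fault(false, false, &type);
        printf("retry allowed for a non-nested fault: %d\n",
               !!(type & MODEL_ALLOW_RETRY_PF));
        return 0;
}

Either way the model returns the emulate result, matching the new comment's point that retrying MMIO emulation is pointless because the CPU would keep faulting on the non-existent address.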