From patchwork Sat Jan 23 00:03:33 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12040969
Reply-To: Sean Christopherson
Date: Fri, 22 Jan 2021 16:03:33 -0800
In-Reply-To: <20210123000334.3123628-1-seanjc@google.com>
Message-Id: <20210123000334.3123628-2-seanjc@google.com>
References: <20210123000334.3123628-1-seanjc@google.com>
Subject: [PATCH 1/2] KVM: x86: Remove obsolete disabling of page faults in kvm_arch_vcpu_put()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

Remove the disabling of page faults across kvm_steal_time_set_preempted()
as KVM now accesses the steal time struct (shared with the guest) via a
cached mapping (see commit b043138246a4, "x86/KVM: Make sure
KVM_VCPU_FLUSH_TLB flag is not missed").  The cache lookup is flagged as
atomic, thus it would be a bug if KVM tried to resolve a new pfn, i.e.
we want the splat that would be reached via might_fault().

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a8969a6dd06..3f4b09d9f25b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4031,15 +4031,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
 
-	/*
-	 * Disable page faults because we're in atomic context here.
-	 * kvm_write_guest_offset_cached() would call might_fault()
-	 * that relies on pagefault_disable() to tell if there's a
-	 * bug. NOTE: the write to guest memory may not go through if
-	 * during postcopy live migration or if there's heavy guest
-	 * paging.
-	 */
-	pagefault_disable();
 	/*
 	 * kvm_memslots() will be called by
 	 * kvm_write_guest_offset_cached() so take the srcu lock.
@@ -4047,7 +4038,6 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
 	kvm_steal_time_set_preempted(vcpu);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
-	pagefault_enable();
 	kvm_x86_ops.vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*

From patchwork Sat Jan 23 00:03:34 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12040971
Reply-To: Sean Christopherson
Date: Fri, 22 Jan 2021 16:03:34 -0800
In-Reply-To: <20210123000334.3123628-1-seanjc@google.com>
Message-Id: <20210123000334.3123628-3-seanjc@google.com>
References: <20210123000334.3123628-1-seanjc@google.com>
Subject: [PATCH 2/2] KVM: x86: Take KVM's SRCU lock only if steal time update is needed
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
X-Mailing-List: kvm@vger.kernel.org

Enter a SRCU critical section for a memslots lookup during steal time
update if and only if a steal time update is actually needed.  Taking
the lock can be avoided if steal time is disabled by the guest, or if
KVM knows it has already flagged the vCPU as being preempted.

Reword the comment to be more precise as to exactly why memslots will
be queried.

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/x86.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3f4b09d9f25b..4efaa858a8bb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4005,6 +4005,7 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 {
 	struct kvm_host_map map;
 	struct kvm_steal_time *st;
+	int idx;
 
 	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
 		return;
@@ -4012,9 +4013,15 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.st.preempted)
 		return;
 
+	/*
+	 * Take the srcu lock as memslots will be accessed to check the gfn
+	 * cache generation against the memslots generation.
+	 */
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
+
 	if (kvm_map_gfn(vcpu, vcpu->arch.st.msr_val >> PAGE_SHIFT, &map,
 			&vcpu->arch.st.cache, true))
-		return;
+		goto out;
 
 	st = map.hva +
 		offset_in_page(vcpu->arch.st.msr_val & KVM_STEAL_VALID_BITS);
@@ -4022,22 +4029,17 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 	st->preempted = vcpu->arch.st.preempted = KVM_VCPU_PREEMPTED;
 
 	kvm_unmap_gfn(vcpu, &map, &vcpu->arch.st.cache, true, true);
+
+out:
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	int idx;
-
 	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
 		vcpu->arch.preempted_in_kernel = !kvm_x86_ops.get_cpl(vcpu);
 
-	/*
-	 * kvm_memslots() will be called by
-	 * kvm_write_guest_offset_cached() so take the srcu lock.
-	 */
-	idx = srcu_read_lock(&vcpu->kvm->srcu);
 	kvm_steal_time_set_preempted(vcpu);
-	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	kvm_x86_ops.vcpu_put(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 	/*