From patchwork Mon Sep 26 17:45:34 2016
From: Peter Feiner
To: kvm@vger.kernel.org
Cc: pbonzini@redhat.com, andreslc@google.com, junaids@google.com,
	Peter Feiner
Subject: [PATCH] KVM: X86: MMU: no mmu_notifier_seq++ in kvm_age_hva
Date: Mon, 26 Sep 2016 10:45:34 -0700

The MMU notifier sequence number keeps GPA->HPA mappings in sync when
GPA->HPA lookups are done outside of the MMU lock (e.g., in
tdp_page_fault). Since kvm_age_hva doesn't change GPA->HPA, it's
unnecessary to increment the sequence number.
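For context, the mechanism the changelog refers to: fault handlers sample
mmu_notifier_seq before doing a lockless GPA->HPA lookup and retry if the
number changed before they retake mmu_lock, so only operations that can
invalidate a translation need to bump it. A minimal single-threaded model
of that check (not KVM code; try_fault and invalidate_midway are invented
names for this sketch):

```c
#include <assert.h>

/* Illustrative stand-in for struct kvm: only the field this patch touches. */
struct kvm {
	unsigned long mmu_notifier_seq;	/* bumped when GPA->HPA may have changed */
};

/*
 * Skeleton of the fault-path protocol: sample the sequence number, do the
 * lockless GPA->HPA lookup, then re-check under mmu_lock before installing
 * the SPTE. Returns 0 on success, -1 if the caller must retry the fault.
 */
int try_fault(struct kvm *kvm, int invalidate_midway)
{
	unsigned long seq = kvm->mmu_notifier_seq;	/* sampled before the lookup */

	/* ... lockless GPA->HPA translation would happen here ... */
	if (invalidate_midway)
		kvm->mmu_notifier_seq++;	/* model a racing invalidation */

	/* spin_lock(&kvm->mmu_lock) would be taken here */
	if (seq != kvm->mmu_notifier_seq)
		return -1;	/* translation may be stale: retry */
	/* ... install the SPTE using the validated translation ... */
	return 0;
}
```

Since kvm_age_hva never changes which HPA a GPA maps to, a fault that races
with it can safely install its translation, which is why the increment can go.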
Signed-off-by: Peter Feiner
---
 arch/x86/kvm/mmu.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3d4cc8cc..dc6d1e8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1660,17 +1660,9 @@ int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
 	 * This has some overhead, but not as much as the cost of swapping
 	 * out actively used pages or breaking up actively used hugepages.
 	 */
-	if (!shadow_accessed_mask) {
-		/*
-		 * We are holding the kvm->mmu_lock, and we are blowing up
-		 * shadow PTEs. MMU notifier consumers need to be kept at bay.
-		 * This is correct as long as we don't decouple the mmu_lock
-		 * protected regions (like invalidate_range_start|end does).
-		 */
-		kvm->mmu_notifier_seq++;
+	if (!shadow_accessed_mask)
 		return kvm_handle_hva_range(kvm, start, end, 0,
 					    kvm_unmap_rmapp);
-	}
 
 	return kvm_handle_hva_range(kvm, start, end, 0, kvm_age_rmapp);
 }