From patchwork Fri Mar 26 02:19:54 2021
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 12165605
Date: Thu, 25 Mar 2021 19:19:54 -0700
In-Reply-To: <20210326021957.1424875-1-seanjc@google.com>
Message-Id: <20210326021957.1424875-16-seanjc@google.com>
References: <20210326021957.1424875-1-seanjc@google.com>
Subject: [PATCH 15/18] KVM: Take mmu_lock when handling MMU notifier iff the hva hits a memslot
From: Sean Christopherson
To: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Paul Mackerras,
 Paolo Bonzini
Cc: James Morse, Julien Thierry, Suzuki K Poulose, Sean Christopherson,
 Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-mips@vger.kernel.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Ben Gardon

Defer acquiring mmu_lock in the MMU notifier paths until a "hit" has been
detected in the memslots, i.e. don't take the lock for notifications that
don't affect the guest.

For small VMs, spurious locking is a minor annoyance.  And for "volatile"
setups where the majority of notifications _are_ relevant, this barely
qualifies as an optimization.

But, for large VMs (hundreds of threads) with static setups, e.g. no page
migration, no swapping, etc..., the vast majority of MMU notifier callbacks
will be unrelated to the guest, e.g. will often be in response to the
userspace VMM adjusting its own virtual address space.  In such large VMs,
acquiring mmu_lock can be painful as it blocks vCPUs from handling page
faults.  In some scenarios it can even be "fatal" in the sense that it
causes unacceptable brownouts, e.g. when rebuilding huge pages after live
migration, a significant percentage of vCPUs will be attempting to handle
page faults.

x86's TDP MMU implementation is especially susceptible to spurious locking
due to it taking mmu_lock for read when handling page faults.  Because
rwlock is fair, a single writer will stall future readers, while the writer
is itself stalled waiting for in-progress readers to complete.  This is
exacerbated by the MMU notifiers often firing multiple times in quick
succession, e.g. moving a page will (always?)
invoke three separate notifiers: .invalidate_range_start(),
.invalidate_range_end(), and .change_pte().  Unnecessarily taking mmu_lock
each time means even a single spurious sequence can be problematic.

Note, this optimizes only the unpaired callbacks.  Optimizing the
.invalidate_range_{start,end}() pairs is more complex and will be done in
a future patch.

Suggested-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
 virt/kvm/kvm_main.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bfa43eea891a..0c2aff8a4aa1 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -458,6 +458,7 @@ struct kvm_hva_range {
 	unsigned long end;
 	pte_t pte;
 	hva_handler_t handler;
+	bool caller_locked;
 	bool flush_on_ret;
 	bool may_block;
 };
@@ -465,14 +466,12 @@ struct kvm_hva_range {
 static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 						  const struct kvm_hva_range *range)
 {
-	struct kvm_memory_slot *slot;
-	struct kvm_memslots *slots;
+	bool ret = false, locked = range->caller_locked;
 	struct kvm_gfn_range gfn_range;
-	bool ret = false;
+	struct kvm_memory_slot *slot;
+	struct kvm_memslots *slots;
 	int i, idx;
 
-	lockdep_assert_held_write(&kvm->mmu_lock);
-
 	idx = srcu_read_lock(&kvm->srcu);
 
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
@@ -503,6 +502,10 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
 			gfn_range.slot = slot;
 
+			if (!locked) {
+				locked = true;
+				KVM_MMU_LOCK(kvm);
+			}
 			ret |= range->handler(kvm, &gfn_range);
 		}
 	}
@@ -510,6 +513,9 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	if (range->flush_on_ret && (ret || kvm->tlbs_dirty))
 		kvm_flush_remote_tlbs(kvm);
 
+	if (locked && !range->caller_locked)
+		KVM_MMU_UNLOCK(kvm);
+
 	srcu_read_unlock(&kvm->srcu, idx);
 
 	/* The notifiers are averse to booleans. :-( */
@@ -528,16 +534,11 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
 		.end		= end,
 		.pte		= pte,
 		.handler	= handler,
+		.caller_locked	= false,
 		.flush_on_ret	= true,
 		.may_block	= false,
 	};
-	int ret;
-
-	KVM_MMU_LOCK(kvm);
-	ret = __kvm_handle_hva_range(kvm, &range);
-	KVM_MMU_UNLOCK(kvm);
-
-	return ret;
+	return __kvm_handle_hva_range(kvm, &range);
 }
 
 static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
@@ -551,16 +552,12 @@ static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn
 		.end		= end,
 		.pte		= __pte(0),
 		.handler	= handler,
+		.caller_locked	= false,
 		.flush_on_ret	= false,
 		.may_block	= false,
 	};
-	int ret;
-	KVM_MMU_LOCK(kvm);
-	ret = __kvm_handle_hva_range(kvm, &range);
-	KVM_MMU_UNLOCK(kvm);
-
-	return ret;
+	return __kvm_handle_hva_range(kvm, &range);
 }
 
 static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
 					struct mm_struct *mm,
@@ -581,6 +578,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 		.end		= range->end,
 		.pte		= __pte(0),
 		.handler	= kvm_unmap_gfn_range,
+		.caller_locked	= true,
 		.flush_on_ret	= true,
 		.may_block	= mmu_notifier_range_blockable(range),
 	};
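
For readers who want to see the locking pattern in isolation, below is a
minimal, self-contained userspace sketch of the "defer the lock until the
first memslot hit" idea described in the changelog.  This is not KVM code:
toy_slot, toy_handler, toy_handle_hva_range and toy_mmu_lock are made-up
names, and a pthread rwlock stands in for kvm->mmu_lock.  Build with:
cc sketch.c -o sketch -lpthread

/*
 * Illustrative sketch (not part of the patch): walk the "memslots" and take
 * the write lock only when the notified range actually overlaps a slot, so
 * notifications that miss every slot never contend with readers (the
 * stand-ins for vCPU page-fault handlers).
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_slot {
	unsigned long hva_start;
	unsigned long hva_end;	/* exclusive */
};

static struct toy_slot slots[] = {
	{ 0x100000, 0x200000 },
	{ 0x400000, 0x480000 },
};

/* Stand-in for kvm->mmu_lock. */
static pthread_rwlock_t toy_mmu_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Invoked for each slot the notification range overlaps, under the lock. */
static bool toy_handler(struct toy_slot *slot, unsigned long start,
			unsigned long end)
{
	printf("zapping [%#lx, %#lx) in slot [%#lx, %#lx)\n",
	       start > slot->hva_start ? start : slot->hva_start,
	       end < slot->hva_end ? end : slot->hva_end,
	       slot->hva_start, slot->hva_end);
	return true;
}

static bool toy_handle_hva_range(unsigned long start, unsigned long end)
{
	bool ret = false, locked = false;
	size_t i;

	for (i = 0; i < sizeof(slots) / sizeof(slots[0]); i++) {
		struct toy_slot *slot = &slots[i];

		/* Skip slots the notification doesn't touch. */
		if (end <= slot->hva_start || start >= slot->hva_end)
			continue;

		/* First hit: take the lock now, and only now. */
		if (!locked) {
			locked = true;
			pthread_rwlock_wrlock(&toy_mmu_lock);
		}
		ret |= toy_handler(slot, start, end);
	}

	/* Release the lock only if this function took it. */
	if (locked)
		pthread_rwlock_unlock(&toy_mmu_lock);

	return ret;	/* true if anything was zapped, i.e. a flush is needed */
}

int main(void)
{
	toy_handle_hva_range(0x300000, 0x380000);	/* miss: lock never taken */
	toy_handle_hva_range(0x150000, 0x450000);	/* hits both slots */
	return 0;
}

The same "locked" bookkeeping is what the patch adds to
__kvm_handle_hva_range(): the lock is taken lazily on the first overlapping
slot and released only if the helper itself took it.  That is also why the
patch adds the caller_locked field for the .invalidate_range_start() path,
where the caller still holds mmu_lock itself; per the changelog, the paired
range_start/end callbacks are left unoptimized for a future patch.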