From patchwork Thu Oct 27 20:03:15 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13022695
Date: Thu, 27 Oct 2022 13:03:15 -0700
In-Reply-To: <20221027200316.2221027-1-dmatlack@google.com>
References: <20221027200316.2221027-1-dmatlack@google.com>
Message-ID: <20221027200316.2221027-2-dmatlack@google.com>
Subject: [PATCH 1/2] KVM: Keep track of the number of memslots with dirty logging enabled
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack
X-Mailing-List: kvm@vger.kernel.org

Add a new field to struct kvm that keeps track of the number of memslots
with dirty logging enabled. This will be used in a future commit to
cheaply check if any memslot is doing dirty logging.
Signed-off-by: David Matlack
---
 include/linux/kvm_host.h | 2 ++
 virt/kvm/kvm_main.c      | 10 ++++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 32f259fa5801..25ed8c1725ff 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -709,6 +709,8 @@ struct kvm {
 	struct kvm_memslots __memslots[KVM_ADDRESS_SPACE_NUM][2];
 	/* The current active memslot set for each address space */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
+	/* The number of memslots with dirty logging enabled. */
+	int nr_memslots_dirty_logging;
 	struct xarray vcpu_array;
 
 	/* Used to wait for completion of MMU notifiers. */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e30f1b4ecfa5..57e4406005cd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1641,6 +1641,9 @@ static void kvm_commit_memory_region(struct kvm *kvm,
 				     const struct kvm_memory_slot *new,
 				     enum kvm_mr_change change)
 {
+	int old_flags = old ? old->flags : 0;
+	int new_flags = new ? new->flags : 0;
+
 	/*
 	 * Update the total number of memslot pages before calling the arch
 	 * hook so that architectures can consume the result directly.
@@ -1650,6 +1653,13 @@ static void kvm_commit_memory_region(struct kvm *kvm,
 	else if (change == KVM_MR_CREATE)
 		kvm->nr_memslot_pages += new->npages;
 
+	if ((old_flags ^ new_flags) & KVM_MEM_LOG_DIRTY_PAGES) {
+		if (new_flags & KVM_MEM_LOG_DIRTY_PAGES)
+			kvm->nr_memslots_dirty_logging++;
+		else
+			kvm->nr_memslots_dirty_logging--;
+	}
+
 	kvm_arch_commit_memory_region(kvm, old, new, change);
 
 	switch (change) {

From patchwork Thu Oct 27 20:03:16 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 13022696
Date: Thu, 27 Oct 2022 13:03:16 -0700
In-Reply-To: <20221027200316.2221027-1-dmatlack@google.com>
References: <20221027200316.2221027-1-dmatlack@google.com>
Message-ID: <20221027200316.2221027-3-dmatlack@google.com>
Subject: [PATCH 2/2] KVM: x86/mmu: Do not recover NX Huge Pages when dirty logging is enabled
From: David Matlack
To: Paolo Bonzini
Cc: Sean Christopherson, kvm@vger.kernel.org, David Matlack
X-Mailing-List: kvm@vger.kernel.org

Do not recover NX Huge Pages if dirty logging is enabled on any memslot.
Zapping a region that is being dirty tracked is a waste of CPU cycles
(both by the recovery worker, and subsequent vCPU faults) since the
memory will just be faulted back in at the same 4KiB granularity.

Use kvm->nr_memslots_dirty_logging as a cheap way to check if NX Huge
Pages are being dirty tracked. This has the additional benefit of
ensuring that the NX recovery worker uses little-to-no CPU during the
precopy phase of a live migration.

Note, kvm->nr_memslots_dirty_logging can result in false positives and
false negatives, e.g. if dirty logging is only enabled on a subset of
memslots or the recovery worker races with a memslot update. However,
there are no correctness issues either way, and eventually NX Huge Pages
will be recovered once dirty logging is disabled on all memslots.

An alternative approach would be to look up the memslot of each NX Huge
Page and check if it is being dirty tracked. However, this would
increase the CPU usage of the recovery worker and MMU lock hold time in
write mode, especially in VMs with a large number of memslots.

Signed-off-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f81539061d6..b499d3757173 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6806,6 +6806,14 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 	bool flush = false;
 	ulong to_zap;
 
+	/*
+	 * Do not attempt to recover NX Huge Pages while dirty logging is
+	 * enabled since any subsequent accesses by a vCPU will just fault the
+	 * memory back in at the same 4KiB granularity.
+	 */
+	if (READ_ONCE(kvm->nr_memslots_dirty_logging))
+		return;
+
 	rcu_idx = srcu_read_lock(&kvm->srcu);
 	write_lock(&kvm->mmu_lock);