From patchwork Tue May 18 17:34:08 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:08 -0700
Message-Id: <20210518173414.450044-2-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 1/7] KVM: x86/mmu: Deduplicate rmap freeing
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

Small code deduplication. No functional change expected.

Reviewed-by: David Hildenbrand
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/x86.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9b6bca616929..11908beae58b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10896,17 +10896,23 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_hv_destroy_vm(kvm);
 }

-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+static void memslot_rmap_free(struct kvm_memory_slot *slot)
 {
 	int i;

 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.rmap[i]);
 		slot->arch.rmap[i] = NULL;
+	}
+}

-		if (i == 0)
-			continue;
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	int i;
+
+	memslot_rmap_free(slot);

+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.lpage_info[i - 1]);
 		slot->arch.lpage_info[i - 1] = NULL;
 	}
@@ -10972,12 +10978,9 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	return 0;

 out_free:
-	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
-		kvfree(slot->arch.rmap[i]);
-		slot->arch.rmap[i] = NULL;

-		if (i == 0)
-			continue;
+	memslot_rmap_free(slot);

+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.lpage_info[i - 1]);
 		slot->arch.lpage_info[i - 1] = NULL;
 	}
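A note on the loop bounds in the patch above: rmaps exist for every page
size, with index 0 covering 4K pages, while lpage_info exists only for
the large-page levels and is stored at index i - 1. The following
stand-alone sketch (hypothetical toy_* names and user-space free(), not
kernel code) mirrors the two resulting loops:

  #include <stdlib.h>

  #define NR_PAGE_SIZES 3			/* 4K, 2M, 1G on x86 */

  struct toy_arch {
  	void *rmap[NR_PAGE_SIZES];		/* one array per page size */
  	void *lpage_info[NR_PAGE_SIZES - 1];	/* large-page levels only */
  };

  static void toy_rmap_free(struct toy_arch *arch)
  {
  	int i;

  	for (i = 0; i < NR_PAGE_SIZES; ++i) {	/* all levels have rmaps */
  		free(arch->rmap[i]);
  		arch->rmap[i] = NULL;
  	}
  }

  static void toy_slot_free(struct toy_arch *arch)
  {
  	int i;

  	toy_rmap_free(arch);

  	for (i = 1; i < NR_PAGE_SIZES; ++i) {	/* 4K has no lpage_info */
  		free(arch->lpage_info[i - 1]);
  		arch->lpage_info[i - 1] = NULL;
  	}
  }

  int main(void)
  {
  	struct toy_arch arch = {0};

  	toy_slot_free(&arch);	/* free(NULL) is a no-op, so this is safe */
  	return 0;
  }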
From patchwork Tue May 18 17:34:09 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:09 -0700
Message-Id: <20210518173414.450044-3-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 2/7] KVM: x86/mmu: Factor out allocating memslot rmap
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

Small refactor to facilitate allocating rmaps for all memslots at once.
No functional change expected.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/x86.c | 37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 11908beae58b..4b3d53c5fc76 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10920,10 +10920,31 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 	kvm_page_track_free_memslot(slot);
 }

+static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
+			      unsigned long npages)
+{
+	const int sz = sizeof(*slot->arch.rmap[0]);
+	int i;
+
+	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+		int level = i + 1;
+		int lpages = gfn_to_index(slot->base_gfn + npages - 1,
+					  slot->base_gfn, level) + 1;
+
+		slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
+		if (!slot->arch.rmap[i]) {
+			memslot_rmap_free(slot);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
 static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 				      unsigned long npages)
 {
-	int i;
+	int i, r;

 	/*
 	 * Clear out the previous array pointers for the KVM_MR_MOVE case.  The
@@ -10932,7 +10953,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));

-	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+	r = memslot_rmap_alloc(slot, npages);
+	if (r)
+		return r;
+
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		struct kvm_lpage_info *linfo;
 		unsigned long ugfn;
 		int lpages;
@@ -10941,14 +10966,6 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 		lpages = gfn_to_index(slot->base_gfn + npages - 1,
 				      slot->base_gfn, level) + 1;

-		slot->arch.rmap[i] =
-			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
-				 GFP_KERNEL_ACCOUNT);
-		if (!slot->arch.rmap[i])
-			goto out_free;
-		if (i == 0)
-			continue;
-
 		linfo = kvcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
 		if (!linfo)
 			goto out_free;
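For a sense of the sizing arithmetic in memslot_rmap_alloc() above,
here is a stand-alone sketch with the effect of gfn_to_index() inlined,
assuming x86's 9 bits of gfn per page-table level; HPAGE_GFN_SHIFT and
lpages_for are local stand-ins, not the kernel's names:

  #include <stdio.h>

  typedef unsigned long long gfn_t;

  #define KVM_NR_PAGE_SIZES	3	/* 4K, 2M, 1G */
  #define HPAGE_GFN_SHIFT(level)	(((level) - 1) * 9)

  /* gfn_to_index(last_gfn, base_gfn, level) + 1, with the shift inlined */
  static unsigned long long lpages_for(gfn_t base_gfn,
  				     unsigned long long npages, int level)
  {
  	gfn_t last_gfn = base_gfn + npages - 1;

  	return (last_gfn >> HPAGE_GFN_SHIFT(level)) -
  	       (base_gfn >> HPAGE_GFN_SHIFT(level)) + 1;
  }

  int main(void)
  {
  	gfn_t base = 0;
  	unsigned long long npages = 1ULL << 20;	/* a 4 GiB slot */
  	int i;

  	/* Prints 1048576 entries at 4K, 2048 at 2M, 4 at 1G. */
  	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i)
  		printf("level %d: %llu rmap entries\n", i + 1,
  		       lpages_for(base, npages, i + 1));
  	return 0;
  }

At 8 bytes per 4K-level entry, that dominant array costs about 8/4096
of slot size, which lines up with the 0.2%-of-guest-memory figure cited
in the last patch of this series.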
From patchwork Tue May 18 17:34:10 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:10 -0700
Message-Id: <20210518173414.450044-4-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 3/7] KVM: mmu: Refactor memslot copy
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

Factor out copying kvm_memslots from allocating the memory for new ones
in preparation for adding a new lock to protect the arch-specific
fields of the memslots.

No functional change intended.

Reviewed-by: David Hildenbrand
Signed-off-by: Ben Gardon
---
 virt/kvm/kvm_main.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6b4feb92dc79..4acd4722d729 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1306,6 +1306,18 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	return old_memslots;
 }

+static size_t kvm_memslots_size(int slots)
+{
+	return sizeof(struct kvm_memslots) +
+	       (sizeof(struct kvm_memory_slot) * slots);
+}
+
+static void kvm_copy_memslots(struct kvm_memslots *to,
+			      struct kvm_memslots *from)
+{
+	memcpy(to, from, kvm_memslots_size(from->used_slots));
+}
+
 /*
  * Note, at a minimum, the current number of used slots must be allocated, even
  * when deleting a memslot, as we need a complete duplicate of the memslots for
@@ -1315,19 +1327,16 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 					     enum kvm_mr_change change)
 {
 	struct kvm_memslots *slots;
-	size_t old_size, new_size;
-
-	old_size = sizeof(struct kvm_memslots) +
-		   (sizeof(struct kvm_memory_slot) * old->used_slots);
+	size_t new_size;

 	if (change == KVM_MR_CREATE)
-		new_size = old_size + sizeof(struct kvm_memory_slot);
+		new_size = kvm_memslots_size(old->used_slots + 1);
 	else
-		new_size = old_size;
+		new_size = kvm_memslots_size(old->used_slots);

 	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
 	if (likely(slots))
-		kvm_copy_memslots(slots, old);
+		kvm_copy_memslots(slots, old);

 	return slots;
 }
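The helpers factored out above size a kvm_memslots header plus its
trailing array of in-use slots. A minimal user-space sketch of the same
flexible-array pattern (stub types and hypothetical toy_* names,
illustration only):

  #include <stdlib.h>
  #include <string.h>

  struct toy_slot { unsigned long base_gfn, npages; };

  struct toy_slots {
  	unsigned long used_slots;
  	struct toy_slot slots[];	/* flexible array member */
  };

  static size_t toy_slots_size(unsigned long n)
  {
  	return sizeof(struct toy_slots) + sizeof(struct toy_slot) * n;
  }

  static void toy_copy_slots(struct toy_slots *to, struct toy_slots *from)
  {
  	/* Copy the header plus only the slots actually in use. */
  	memcpy(to, from, toy_slots_size(from->used_slots));
  }

  /* Allocate room for one extra slot when a slot is being created. */
  static struct toy_slots *toy_dup_slots(struct toy_slots *old, int creating)
  {
  	struct toy_slots *new;

  	new = calloc(1, toy_slots_size(old->used_slots + (creating ? 1 : 0)));
  	if (new)
  		toy_copy_slots(new, old);
  	return new;
  }

  int main(void)
  {
  	struct toy_slots *old = calloc(1, toy_slots_size(0));
  	struct toy_slots *new = old ? toy_dup_slots(old, 1) : NULL;

  	free(old);
  	free(new);
  	return 0;
  }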
From patchwork Tue May 18 17:34:11 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:11 -0700
Message-Id: <20210518173414.450044-5-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 4/7] KVM: mmu: Add slots_arch_lock for memslot arch fields
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

Add a new lock to protect the arch-specific fields of memslots if they
need to be modified in a kvm->srcu read critical section. A future
commit will use this lock to lazily allocate memslot rmaps for x86.
Signed-off-by: Ben Gardon
---
 include/linux/kvm_host.h |  9 +++++++
 virt/kvm/kvm_main.c      | 54 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 57 insertions(+), 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2f34487e21f2..817aa5e8dbd5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -517,6 +517,15 @@ struct kvm {
 #endif /* KVM_HAVE_MMU_RWLOCK */

 	struct mutex slots_lock;
+
+	/*
+	 * Protects the arch-specific fields of struct kvm_memory_slots in
+	 * use by the VM. To be used under the slots_lock (above) or in a
+	 * kvm->srcu critical section where acquiring the slots_lock would
+	 * lead to deadlock with the synchronize_srcu in
+	 * install_new_memslots.
+	 */
+	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4acd4722d729..41dfebde4680 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -908,6 +908,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
 	mutex_init(&kvm->slots_lock);
+	mutex_init(&kvm->slots_arch_lock);
 	INIT_LIST_HEAD(&kvm->devices);

 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
@@ -1280,6 +1281,14 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;

 	rcu_assign_pointer(kvm->memslots[as_id], slots);
+
+	/*
+	 * Acquired in kvm_set_memslot. Must be released before synchronize
+	 * SRCU below in order to avoid deadlock with another thread
+	 * acquiring the slots_arch_lock in an srcu critical section.
+	 */
+	mutex_unlock(&kvm->slots_arch_lock);
+
 	synchronize_srcu_expedited(&kvm->srcu);

 	/*
@@ -1351,9 +1360,27 @@ static int kvm_set_memslot(struct kvm *kvm,
 	struct kvm_memslots *slots;
 	int r;

+	/*
+	 * Released in install_new_memslots.
+	 *
+	 * Must be held from before the current memslots are copied until
+	 * after the new memslots are installed with rcu_assign_pointer,
+	 * then released before the synchronize srcu in install_new_memslots.
+	 *
+	 * When modifying memslots outside of the slots_lock, must be held
+	 * before reading the pointer to the current memslots until after all
+	 * changes to those memslots are complete.
+	 *
+	 * These rules ensure that installing new memslots does not lose
+	 * changes made to the previous memslots.
+	 */
+	mutex_lock(&kvm->slots_arch_lock);
+
 	slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
-	if (!slots)
+	if (!slots) {
+		mutex_unlock(&kvm->slots_arch_lock);
 		return -ENOMEM;
+	}

 	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
 		/*
@@ -1364,10 +1391,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 		slot->flags |= KVM_MEMSLOT_INVALID;

 		/*
-		 * We can re-use the old memslots, the only difference from the
-		 * newly installed memslots is the invalid flag, which will get
-		 * dropped by update_memslots anyway. We'll also revert to the
-		 * old memslots if preparing the new memory region fails.
+		 * We can re-use the memory from the old memslots.
+		 * It will be overwritten with a copy of the new memslots
+		 * after reacquiring the slots_arch_lock below.
 		 */
 		slots = install_new_memslots(kvm, as_id, slots);

@@ -1379,6 +1405,17 @@ static int kvm_set_memslot(struct kvm *kvm,
 		 *	- kvm_is_visible_gfn (mmu_check_root)
 		 */
 		kvm_arch_flush_shadow_memslot(kvm, slot);
+
+		/* Released in install_new_memslots. */
+		mutex_lock(&kvm->slots_arch_lock);
+
+		/*
+		 * The arch-specific fields of the memslots could have changed
+		 * between releasing the slots_arch_lock in
+		 * install_new_memslots and here, so get a fresh copy of the
+		 * slots.
+		 */
+		kvm_copy_memslots(slots, __kvm_memslots(kvm, as_id));
 	}

 	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);
@@ -1394,8 +1431,13 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return 0;

 out_slots:
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		slot = id_to_memslot(slots, old->id);
+		slot->flags &= ~KVM_MEMSLOT_INVALID;
 		slots = install_new_memslots(kvm, as_id, slots);
+	} else {
+		mutex_unlock(&kvm->slots_arch_lock);
+	}
 	kvfree(slots);
 	return r;
 }
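The ordering rules spelled out in the comments above can be condensed
into a sketch like the following (simplified from kvm_set_memslot() and
install_new_memslots(); memslot bookkeeping, error paths, and the
copy-back step are omitted, so treat this as an ordering illustration
rather than compilable code):

  /* Updater: swaps in a new memslots array. */
  mutex_lock(&kvm->slots_arch_lock);
  slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
  /* ... prepare the new memslots ... */
  rcu_assign_pointer(kvm->memslots[as_id], slots);
  mutex_unlock(&kvm->slots_arch_lock);	/* BEFORE the grace period */
  synchronize_srcu_expedited(&kvm->srcu);

  /* Reader: modifies arch-specific memslot fields under SRCU. */
  idx = srcu_read_lock(&kvm->srcu);
  /*
   * Cannot deadlock against the updater's synchronize_srcu(),
   * because the updater never sleeps in the grace period while
   * still holding slots_arch_lock.
   */
  mutex_lock(&kvm->slots_arch_lock);
  /* ... e.g. lazily allocate rmaps for all slots ... */
  mutex_unlock(&kvm->slots_arch_lock);
  srcu_read_unlock(&kvm->srcu, idx);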
From patchwork Tue May 18 17:34:12 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:12 -0700
Message-Id: <20210518173414.450044-6-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 5/7] KVM: x86/mmu: Add a field to control memslot rmap allocation
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

Add a field to control whether new memslots should have rmaps allocated
for them. As of this change, it's not safe to skip allocating rmaps, so
the field is always set to allocate rmaps. Future changes will make it
safe to operate without rmaps, using the TDP MMU. Then further changes
will allow the rmaps to be allocated lazily when needed for nested
operation.

No functional change expected.

Reviewed-by: David Hildenbrand
Signed-off-by: Ben Gardon
---
 arch/x86/include/asm/kvm_host.h |  6 ++++++
 arch/x86/kvm/mmu/mmu.c          |  2 ++
 arch/x86/kvm/x86.c              | 13 ++++++++-----
 3 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55efbacfc244..fc75ed49bfee 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1124,6 +1124,12 @@ struct kvm_arch {
 	 */
 	spinlock_t tdp_mmu_pages_lock;
 #endif /* CONFIG_X86_64 */
+
+	/*
+	 * If set, rmaps have been allocated for all memslots and should be
+	 * allocated for any newly created or modified memslots.
+	 */
+	bool memslots_have_rmaps;
 };

 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0144c40d09c7..f059f2e8c6fe 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5469,6 +5469,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)

 	kvm_mmu_init_tdp_mmu(kvm);

+	kvm->arch.memslots_have_rmaps = true;
+
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4b3d53c5fc76..ae8e3179d483 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10941,7 +10941,8 @@ static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
 	return 0;
 }

-static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
+static int kvm_alloc_memslot_metadata(struct kvm *kvm,
+				      struct kvm_memory_slot *slot,
 				      unsigned long npages)
 {
 	int i, r;
@@ -10953,9 +10954,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));

-	r = memslot_rmap_alloc(slot, npages);
-	if (r)
-		return r;
+	if (kvm->arch.memslots_have_rmaps) {
+		r = memslot_rmap_alloc(slot, npages);
+		if (r)
+			return r;
+	}

 	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		struct kvm_lpage_info *linfo;
@@ -11026,7 +11029,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   enum kvm_mr_change change)
 {
 	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
-		return kvm_alloc_memslot_metadata(memslot,
+		return kvm_alloc_memslot_metadata(kvm, memslot,
 						  mem->memory_size >> PAGE_SHIFT);
 	return 0;
 }
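As of this patch the flag is written exactly once, at VM init, before
the VM can be referenced by other threads, which is why plain stores
and loads suffice here; patch 7 is where acquire/release semantics
become necessary. A toy user-space model of that lifecycle
(hypothetical toy_* names, illustration only):

  #include <assert.h>
  #include <stdbool.h>
  #include <stdlib.h>

  struct toy_vm { bool memslots_have_rmaps; void *rmap; };

  static struct toy_vm *toy_vm_create(void)
  {
  	struct toy_vm *vm = calloc(1, sizeof(*vm));

  	if (vm)	/* set before the VM is visible to anyone else */
  		vm->memslots_have_rmaps = true;	/* always true, for now */
  	return vm;
  }

  static int toy_slot_metadata_alloc(struct toy_vm *vm)
  {
  	if (vm->memslots_have_rmaps) {	/* the new gating check */
  		vm->rmap = calloc(1, 64);
  		if (!vm->rmap)
  			return -1;
  	}
  	return 0;
  }

  int main(void)
  {
  	struct toy_vm *vm = toy_vm_create();

  	assert(vm && toy_slot_metadata_alloc(vm) == 0);
  	free(vm->rmap);
  	free(vm);
  	return 0;
  }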
From patchwork Tue May 18 17:34:13 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:13 -0700
Message-Id: <20210518173414.450044-7-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 6/7] KVM: x86/mmu: Skip rmap operations if rmaps not allocated
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

If only the TDP MMU is being used to manage the memory mappings for a
VM, then many rmap operations can be skipped, as they are guaranteed to
be no-ops. This saves the time that would be spent on those rmap
operations and avoids acquiring the MMU lock in write mode for many of
them. It makes it safe to run the VM without rmaps allocated when only
the TDP MMU is in use, and sets the stage for waiting to allocate the
rmaps until they're needed.
Signed-off-by: Ben Gardon
---
 arch/x86/kvm/mmu.h     |   5 ++
 arch/x86/kvm/mmu/mmu.c | 113 ++++++++++++++++++++++++-----------------
 arch/x86/kvm/x86.c     |   2 +-
 3 files changed, 72 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 88d0ed5225a4..af09c47b1aa2 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -232,4 +232,9 @@ int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
 int kvm_mmu_post_init_vm(struct kvm *kvm);
 void kvm_mmu_pre_destroy_vm(struct kvm *kvm);

+static inline bool kvm_memslots_have_rmaps(struct kvm *kvm)
+{
+	return kvm->arch.memslots_have_rmaps;
+}
+
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f059f2e8c6fe..1e0daabc83ca 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1189,6 +1189,10 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
+
+	if (!kvm_memslots_have_rmaps(kvm))
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1218,6 +1222,10 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
+
+	if (!kvm_memslots_have_rmaps(kvm))
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1260,9 +1268,11 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	int i;
 	bool write_protected = false;

-	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
-		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
+			rmap_head = __gfn_to_rmap(gfn, i, slot);
+			write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+		}
 	}

 	if (is_tdp_mmu_enabled(kvm))
@@ -1433,9 +1443,10 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,

 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;

-	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);

 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@@ -1445,9 +1456,10 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)

 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;

-	flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);

 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
@@ -1500,9 +1512,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)

 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;

-	young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);

 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1512,9 +1525,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)

 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;

-	young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);

 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
@@ -5492,29 +5506,29 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;

-	write_lock(&kvm->mmu_lock);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(memslot, slots) {
-			gfn_t start, end;
-
-			start = max(gfn_start, memslot->base_gfn);
-			end = min(gfn_end, memslot->base_gfn + memslot->npages);
-			if (start >= end)
-				continue;
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+			slots = __kvm_memslots(kvm, i);
+			kvm_for_each_memslot(memslot, slots) {
+				gfn_t start, end;
+
+				start = max(gfn_start, memslot->base_gfn);
+				end = min(gfn_end, memslot->base_gfn + memslot->npages);
+				if (start >= end)
+					continue;

-			flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-							PG_LEVEL_4K,
-							KVM_MAX_HUGEPAGE_LEVEL,
-							start, end - 1, true, flush);
+				flush = slot_handle_level_range(kvm, memslot,
+						kvm_zap_rmapp, PG_LEVEL_4K,
+						KVM_MAX_HUGEPAGE_LEVEL, start,
+						end - 1, true, flush);
+			}
 		}
+		if (flush)
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+		write_unlock(&kvm->mmu_lock);
 	}

-	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
-
-	write_unlock(&kvm->mmu_lock);
-
 	if (is_tdp_mmu_enabled(kvm)) {
 		flush = false;

@@ -5541,12 +5555,15 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
 				      int start_level)
 {
-	bool flush;
+	bool flush = false;

-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
-				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
+					  start_level, KVM_MAX_HUGEPAGE_LEVEL,
+					  false);
+		write_unlock(&kvm->mmu_lock);
+	}

 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5616,16 +5633,15 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	struct kvm_memory_slot *slot = (struct kvm_memory_slot *)memslot;
 	bool flush;

-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-
-	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
+		if (flush)
+			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		write_unlock(&kvm->mmu_lock);
+	}

 	if (is_tdp_mmu_enabled(kvm)) {
-		flush = false;
-
 		read_lock(&kvm->mmu_lock);
 		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
 		if (flush)
@@ -5652,11 +5668,14 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
-	bool flush;
+	bool flush = false;

-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty,
+					 false);
+		write_unlock(&kvm->mmu_lock);
+	}

 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ae8e3179d483..7cbaa92687f7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10954,7 +10954,7 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));

-	if (kvm->arch.memslots_have_rmaps) {
+	if (kvm_memslots_have_rmaps(kvm)) {
 		r = memslot_rmap_alloc(slot, npages);
 		if (r)
 			return r;
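One detail worth noting in the hunks above: each converted function
also changes "bool flush;" (or "bool young;") to an initialized
"bool flush = false;", because the first assignment now sits behind a
conditional. A minimal stand-alone illustration (hypothetical names) of
why the initializer is required:

  #include <stdbool.h>
  #include <stdio.h>

  static bool do_walk(void) { return true; }

  static bool range_handler(bool have_rmaps, bool tdp_enabled)
  {
  	bool flush = false;	/* required: both branches may be skipped */

  	if (have_rmaps)
  		flush = do_walk();

  	if (tdp_enabled)
  		flush |= do_walk();	/* would read garbage otherwise */

  	return flush;
  }

  int main(void)
  {
  	printf("%d\n", range_handler(false, true));
  	return 0;
  }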
From patchwork Tue May 18 17:34:14 2021
From: Ben Gardon <bgardon@google.com>
Date: Tue, 18 May 2021 10:34:14 -0700
Message-Id: <20210518173414.450044-8-bgardon@google.com>
In-Reply-To: <20210518173414.450044-1-bgardon@google.com>
Subject: [PATCH v5 7/7] KVM: x86/mmu: Lazily allocate memslot rmaps
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    David Hildenbrand, Ben Gardon

If the TDP MMU is in use, wait to allocate the rmaps until the shadow
MMU is actually used (i.e. a nested VM is launched). This saves memory
equal to 0.2% of guest memory in cases where the TDP MMU is used and
there are no nested guests involved.

Signed-off-by: Ben Gardon
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.h              |  7 ++++-
 arch/x86/kvm/mmu/mmu.c          | 14 +++++++---
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 +++--
 arch/x86/kvm/mmu/tdp_mmu.h      |  4 +--
 arch/x86/kvm/x86.c              | 46 +++++++++++++++++++++++++++++++++
 6 files changed, 71 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fc75ed49bfee..7b65f82ade1c 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1868,4 +1868,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)

 int kvm_cpu_dirty_log_size(void);

+int alloc_all_memslots_rmaps(struct kvm *kvm);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index af09c47b1aa2..e987c9af82b6 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -234,7 +234,12 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm);

 static inline bool kvm_memslots_have_rmaps(struct kvm *kvm)
 {
-	return kvm->arch.memslots_have_rmaps;
+	/*
+	 * Ensure that threads reading memslots_have_rmaps in various
+	 * lock contexts see the value before trying to dereference
+	 * the memslot rmap pointers.
+	 */
+	return smp_load_acquire(&kvm->arch.memslots_have_rmaps);
 }

 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e0daabc83ca..2ac7bec515a1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3294,6 +3294,10 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		}
 	}

+	r = alloc_all_memslots_rmaps(vcpu->kvm);
+	if (r)
+		return r;
+
 	write_lock(&vcpu->kvm->mmu_lock);
 	r = make_mmu_pages_available(vcpu);
 	if (r < 0)
@@ -5481,9 +5485,13 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;

-	kvm_mmu_init_tdp_mmu(kvm);
-
-	kvm->arch.memslots_have_rmaps = true;
+	if (!kvm_mmu_init_tdp_mmu(kvm))
+		/*
+		 * No smp_load/store wrappers needed here as we are in
+		 * VM init and there cannot be any memslots / other threads
+		 * accessing this struct kvm yet.
+		 */
+		kvm->arch.memslots_have_rmaps = true;

 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 95eeb5ac6a8a..ea00c9502ba1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -14,10 +14,10 @@ static bool __read_mostly tdp_mmu_enabled = false;
 module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);

 /* Initializes the TDP MMU for the VM, if enabled. */
-void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+bool kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
-		return;
+		return false;

 	/* This should not be changed for the lifetime of the VM. */
 	kvm->arch.tdp_mmu_enabled = true;
@@ -25,6 +25,8 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
+
+	return true;
 }

 static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 5fdf63090451..b046ab5137a1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -80,12 +80,12 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 			 int *root_level);

 #ifdef CONFIG_X86_64
-void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+bool kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
 #else
-static inline void kvm_mmu_init_tdp_mmu(struct kvm *kvm) {}
+static inline bool kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return false; }
 static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7cbaa92687f7..28dc8bdd0c8a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10931,6 +10931,8 @@ static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
 		int lpages = gfn_to_index(slot->base_gfn + npages - 1,
 					  slot->base_gfn, level) + 1;

+		WARN_ON(slot->arch.rmap[i]);
+
 		slot->arch.rmap[i] = kvcalloc(lpages, sz, GFP_KERNEL_ACCOUNT);
 		if (!slot->arch.rmap[i]) {
 			memslot_rmap_free(slot);
@@ -10941,6 +10943,50 @@ static int memslot_rmap_alloc(struct kvm_memory_slot *slot,
 	return 0;
 }

+int alloc_all_memslots_rmaps(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int r, i;
+
+	/*
+	 * Check if memslots already have rmaps early before acquiring
+	 * the slots_arch_lock below.
+	 */
+	if (kvm_memslots_have_rmaps(kvm))
+		return 0;
+
+	mutex_lock(&kvm->slots_arch_lock);
+
+	/*
+	 * Read memslots_have_rmaps again, under the slots arch lock,
+	 * before allocating the rmaps
+	 */
+	if (kvm_memslots_have_rmaps(kvm)) {
+		mutex_unlock(&kvm->slots_arch_lock);
+		return 0;
+	}
+
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(slot, slots) {
+			r = memslot_rmap_alloc(slot, slot->npages);
+			if (r) {
+				mutex_unlock(&kvm->slots_arch_lock);
+				return r;
+			}
+		}
+	}
+
+	/*
+	 * Ensure that memslots_have_rmaps becomes true strictly after
+	 * all the rmap pointers are set.
+	 */
+	smp_store_release(&kvm->arch.memslots_have_rmaps, true);
+	mutex_unlock(&kvm->slots_arch_lock);
+	return 0;
+}
+
 static int kvm_alloc_memslot_metadata(struct kvm *kvm,
 				      struct kvm_memory_slot *slot,
 				      unsigned long npages)
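alloc_all_memslots_rmaps() above is classic double-checked locking,
with smp_load_acquire()/smp_store_release() pairing the flag with the
rmap pointers it guards: a reader that observes the flag as true is
guaranteed to also observe the pointers stored before the release. (The
0.2% figure in the commit message is consistent with one 8-byte rmap
head per 4 KiB guest page: 8/4096 is roughly 0.2%.) A stand-alone
user-space sketch of the same pattern using C11 atomics (hypothetical
fake_* names and stub types, not KVM code):

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdlib.h>

  struct fake_vm {
  	pthread_mutex_t arch_lock;
  	atomic_bool have_rmaps;
  	void *rmap;			/* stands in for all slot rmaps */
  };

  static bool fake_have_rmaps(struct fake_vm *vm)
  {
  	/* analogous to smp_load_acquire() in kvm_memslots_have_rmaps() */
  	return atomic_load_explicit(&vm->have_rmaps, memory_order_acquire);
  }

  static int fake_alloc_all_rmaps(struct fake_vm *vm)
  {
  	if (fake_have_rmaps(vm))	/* cheap early check, no lock */
  		return 0;

  	pthread_mutex_lock(&vm->arch_lock);
  	if (fake_have_rmaps(vm)) {	/* re-check under the lock */
  		pthread_mutex_unlock(&vm->arch_lock);
  		return 0;
  	}

  	vm->rmap = calloc(1, 4096);
  	if (!vm->rmap) {
  		pthread_mutex_unlock(&vm->arch_lock);
  		return -1;
  	}

  	/* analogous to smp_store_release(): publish after the pointers */
  	atomic_store_explicit(&vm->have_rmaps, true, memory_order_release);
  	pthread_mutex_unlock(&vm->arch_lock);
  	return 0;
  }

  int main(void)
  {
  	struct fake_vm vm = { .arch_lock = PTHREAD_MUTEX_INITIALIZER };
  	int r = fake_alloc_all_rmaps(&vm);

  	free(vm.rmap);
  	return r;
  }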