From patchwork Thu May 6 18:42:34 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:34 -0700
Message-Id: <20210506184241.618958-2-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 1/8] KVM: x86/mmu: Deduplicate rmap freeing

Small code deduplication. No functional change expected.

Signed-off-by: Ben Gardon
Reviewed-by: David Hildenbrand
---
 arch/x86/kvm/x86.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index cf3b67679cf0..5bcf07465c47 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10818,17 +10818,23 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
 	kvm_hv_destroy_vm(kvm);
 }
 
-void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+static void free_memslot_rmap(struct kvm_memory_slot *slot)
 {
 	int i;
 
 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.rmap[i]);
 		slot->arch.rmap[i] = NULL;
+	}
+}
 
-		if (i == 0)
-			continue;
+void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	int i;
+
+	free_memslot_rmap(slot);
 
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.lpage_info[i - 1]);
 		slot->arch.lpage_info[i - 1] = NULL;
 	}
@@ -10894,12 +10900,9 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	return 0;
 
 out_free:
-	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
-		kvfree(slot->arch.rmap[i]);
-		slot->arch.rmap[i] = NULL;
-
-		if (i == 0)
-			continue;
+	free_memslot_rmap(slot);
 
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		kvfree(slot->arch.lpage_info[i - 1]);
 		slot->arch.lpage_info[i - 1] = NULL;
 	}
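
To make the refactor easier to follow, here are the two resulting functions
assembled from the hunks above (a reconstruction for illustration, not a
verbatim tree excerpt; the page-track teardown at the end of
kvm_arch_free_memslot() is elided):

	static void free_memslot_rmap(struct kvm_memory_slot *slot)
	{
		int i;

		/* rmap arrays exist for every page size, including 4K (i == 0). */
		for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
			kvfree(slot->arch.rmap[i]);
			slot->arch.rmap[i] = NULL;
		}
	}

	void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
	{
		int i;

		free_memslot_rmap(slot);

		/* lpage_info exists only for large-page levels, hence i starts at 1. */
		for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
			kvfree(slot->arch.lpage_info[i - 1]);
			slot->arch.lpage_info[i - 1] = NULL;
		}
	}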
From patchwork Thu May 6 18:42:35 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:35 -0700
Message-Id: <20210506184241.618958-3-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 2/8] KVM: x86/mmu: Factor out allocating memslot rmap

Small refactor to facilitate allocating rmaps for all memslots at once.
No functional change expected.

Signed-off-by: Ben Gardon
---
 arch/x86/kvm/x86.c | 41 ++++++++++++++++++++++++++++++++---------
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5bcf07465c47..fc32a7dbe4c4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10842,10 +10842,37 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 	kvm_page_track_free_memslot(slot);
 }
 
+static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
+			      unsigned long npages)
+{
+	int i;
+
+	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+		int lpages;
+		int level = i + 1;
+
+		lpages = gfn_to_index(slot->base_gfn + npages - 1,
+				      slot->base_gfn, level) + 1;
+
+		slot->arch.rmap[i] =
+			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
+				 GFP_KERNEL_ACCOUNT);
+		if (!slot->arch.rmap[i])
+			goto out_free;
+	}
+
+	return 0;
+
+out_free:
+	free_memslot_rmap(slot);
+	return -ENOMEM;
+}
+
 static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 				      unsigned long npages)
 {
 	int i;
+	int r;
 
 	/*
 	 * Clear out the previous array pointers for the KVM_MR_MOVE case.  The
@@ -10854,7 +10881,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));
 
-	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+	r = alloc_memslot_rmap(slot, npages);
+	if (r)
+		return r;
+
+	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		struct kvm_lpage_info *linfo;
 		unsigned long ugfn;
 		int lpages;
@@ -10863,14 +10894,6 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 		lpages = gfn_to_index(slot->base_gfn + npages - 1,
 				      slot->base_gfn, level) + 1;
 
-		slot->arch.rmap[i] =
-			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
-				 GFP_KERNEL_ACCOUNT);
-		if (!slot->arch.rmap[i])
-			goto out_free;
-		if (i == 0)
-			continue;
-
 		linfo = kvcalloc(lpages, sizeof(*linfo), GFP_KERNEL_ACCOUNT);
 		if (!linfo)
 			goto out_free;
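
As background on the sizing in alloc_memslot_rmap(): each level's rmap array
holds one list head per page of that size spanned by the slot. A minimal
userspace sketch of the lpages computation, with local stand-ins for the
kernel helpers (the 9-bits-per-level shift mirrors mainline's
KVM_HPAGE_GFN_SHIFT; it is an assumption of this sketch, not part of the
patch):

	#include <stdio.h>

	typedef unsigned long long gfn_t;

	#define PG_LEVEL_4K 1
	#define KVM_HPAGE_GFN_SHIFT(level) (((level) - PG_LEVEL_4K) * 9)

	static gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
	{
		return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
		       (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
	}

	int main(void)
	{
		gfn_t base_gfn = 0x100000;      /* hypothetical, aligned slot start */
		unsigned long npages = 1 << 18; /* a 1 GiB slot in 4K pages */
		int i;

		for (i = 0; i < 3; i++) {       /* 4K, 2M, 1G */
			int level = i + 1;
			unsigned long lpages =
				gfn_to_index(base_gfn + npages - 1, base_gfn, level) + 1;
			printf("level %d: %lu rmap heads\n", level, lpages);
		}
		return 0;
	}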
From patchwork Thu May 6 18:42:36 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:36 -0700
Message-Id: <20210506184241.618958-4-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 3/8] KVM: mmu: Refactor memslot copy

Factor out copying kvm_memslots from allocating the memory for new ones
in preparation for adding a new lock to protect the arch-specific
fields of the memslots. No functional change intended.

Signed-off-by: Ben Gardon
Reviewed-by: David Hildenbrand
---
 virt/kvm/kvm_main.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2799c6660cce..c8010f55e368 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1306,6 +1306,18 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	return old_memslots;
 }
 
+static size_t kvm_memslots_size(int slots)
+{
+	return sizeof(struct kvm_memslots) +
+	       (sizeof(struct kvm_memory_slot) * slots);
+}
+
+static void kvm_copy_memslots(struct kvm_memslots *from,
+			      struct kvm_memslots *to)
+{
+	memcpy(to, from, kvm_memslots_size(from->used_slots));
+}
+
 /*
  * Note, at a minimum, the current number of used slots must be allocated, even
  * when deleting a memslot, as we need a complete duplicate of the memslots for
@@ -1315,19 +1327,16 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 					     enum kvm_mr_change change)
 {
 	struct kvm_memslots *slots;
-	size_t old_size, new_size;
-
-	old_size = sizeof(struct kvm_memslots) +
-		   (sizeof(struct kvm_memory_slot) * old->used_slots);
+	size_t new_size;
 
 	if (change == KVM_MR_CREATE)
-		new_size = old_size + sizeof(struct kvm_memory_slot);
+		new_size = kvm_memslots_size(old->used_slots + 1);
 	else
-		new_size = old_size;
+		new_size = kvm_memslots_size(old->used_slots);
 
 	slots = kvzalloc(new_size, GFP_KERNEL_ACCOUNT);
 	if (likely(slots))
-		memcpy(slots, old, old_size);
+		kvm_copy_memslots(old, slots);
 
 	return slots;
 }
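
The new helper only centralizes the usual header-plus-flexible-array sizing.
A toy userspace analog (simplified stand-in structs, not the real KVM types)
of what kvm_dup_memslots() now does on KVM_MR_CREATE:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct slot { unsigned long base_gfn, npages; };
	struct slots { int used_slots; struct slot memslots[]; };

	static size_t slots_size(int n)
	{
		return sizeof(struct slots) + sizeof(struct slot) * n;
	}

	int main(void)
	{
		struct slots *old = calloc(1, slots_size(4));
		struct slots *new;

		old->used_slots = 4;
		/* On KVM_MR_CREATE the duplicate needs room for one more slot. */
		new = calloc(1, slots_size(old->used_slots + 1));
		memcpy(new, old, slots_size(old->used_slots));

		printf("old: %zu bytes, new: %zu bytes\n",
		       slots_size(old->used_slots), slots_size(old->used_slots + 1));
		free(old);
		free(new);
		return 0;
	}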
From patchwork Thu May 6 18:42:37 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:37 -0700
Message-Id: <20210506184241.618958-5-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 4/8] KVM: mmu: Add slots_arch_lock for memslot arch fields

Add a new lock to protect the arch-specific fields of memslots if they
need to be modified in a kvm->srcu read critical section. A future
commit will use this lock to lazily allocate memslot rmaps for x86.
Signed-off-by: Ben Gardon
---
 include/linux/kvm_host.h |  9 +++++++++
 virt/kvm/kvm_main.c      | 31 ++++++++++++++++++++++++++-----
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 8895b95b6a22..2d5e797fbb08 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -472,6 +472,15 @@ struct kvm {
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
 	struct mutex slots_lock;
+
+	/*
+	 * Protects the arch-specific fields of struct kvm_memory_slots in
+	 * use by the VM. To be used under the slots_lock (above) or in a
+	 * kvm->srcu read critical section where acquiring the slots_lock
+	 * would lead to deadlock with the synchronize_srcu in
+	 * install_new_memslots.
+	 */
+	struct mutex slots_arch_lock;
 	struct mm_struct *mm; /* userspace tied to this vm */
 	struct kvm_memslots __rcu *memslots[KVM_ADDRESS_SPACE_NUM];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c8010f55e368..97b03fa2d0c8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -908,6 +908,7 @@ static struct kvm *kvm_create_vm(unsigned long type)
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
 	mutex_init(&kvm->slots_lock);
+	mutex_init(&kvm->slots_arch_lock);
 	INIT_LIST_HEAD(&kvm->devices);
 
 	BUILD_BUG_ON(KVM_MEM_SLOTS_NUM > SHRT_MAX);
@@ -1280,6 +1281,10 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
 	slots->generation = gen | KVM_MEMSLOT_GEN_UPDATE_IN_PROGRESS;
 
 	rcu_assign_pointer(kvm->memslots[as_id], slots);
+
+	/* Acquired in kvm_set_memslot. */
+	mutex_unlock(&kvm->slots_arch_lock);
+
 	synchronize_srcu_expedited(&kvm->srcu);
 
 	/*
@@ -1351,6 +1356,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 	struct kvm_memslots *slots;
 	int r;
 
+	/* Released in install_new_memslots. */
+	mutex_lock(&kvm->slots_arch_lock);
+
 	slots = kvm_dup_memslots(__kvm_memslots(kvm, as_id), change);
 	if (!slots)
 		return -ENOMEM;
@@ -1364,10 +1372,9 @@ static int kvm_set_memslot(struct kvm *kvm,
 		slot->flags |= KVM_MEMSLOT_INVALID;
 
 		/*
-		 * We can re-use the old memslots, the only difference from the
-		 * newly installed memslots is the invalid flag, which will get
-		 * dropped by update_memslots anyway. We'll also revert to the
-		 * old memslots if preparing the new memory region fails.
+		 * We can re-use the memory from the old memslots.
+		 * It will be overwritten with a copy of the new memslots
+		 * after reacquiring the slots_arch_lock below.
 		 */
 		slots = install_new_memslots(kvm, as_id, slots);
 
@@ -1379,6 +1386,17 @@ static int kvm_set_memslot(struct kvm *kvm,
 		 *	- kvm_is_visible_gfn (mmu_check_root)
 		 */
 		kvm_arch_flush_shadow_memslot(kvm, slot);
+
+		/* Released in install_new_memslots. */
+		mutex_lock(&kvm->slots_arch_lock);
+
+		/*
+		 * The arch-specific fields of the memslots could have changed
+		 * between releasing the slots_arch_lock in
+		 * install_new_memslots and here, so get a fresh copy of the
+		 * slots.
+		 */
+		kvm_copy_memslots(__kvm_memslots(kvm, as_id), slots);
 	}
 
 	r = kvm_arch_prepare_memory_region(kvm, new, mem, change);
@@ -1394,8 +1412,11 @@ static int kvm_set_memslot(struct kvm *kvm,
 	return 0;
 
 out_slots:
-	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE)
+	if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) {
+		slot = id_to_memslot(slots, old->id);
+		slot->flags &= ~KVM_MEMSLOT_INVALID;
 		slots = install_new_memslots(kvm, as_id, slots);
+	}
 	kvfree(slots);
 	return r;
 }
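
The unusual property of slots_arch_lock is that it is acquired in
kvm_set_memslot() but released in install_new_memslots(), so that
synchronize_srcu() runs outside the critical section. A toy userspace analog
of that lock handoff (the pthread names are invented for illustration; this
is not kernel code):

	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t arch_lock = PTHREAD_MUTEX_INITIALIZER;
	static int published;

	/* Analog of install_new_memslots(): publish, then drop the caller's lock. */
	static void install_analog(int value)
	{
		published = value;                /* publish under the lock */
		pthread_mutex_unlock(&arch_lock); /* lock was taken by our caller */
		/* the slow synchronize_srcu() analog would run here, lock-free */
	}

	/* Analog of kvm_set_memslot(): take the lock, let the callee release it. */
	static void set_memslot_analog(int value)
	{
		pthread_mutex_lock(&arch_lock);
		/* ... duplicate and update the slots while holding the lock ... */
		install_analog(value);
	}

	int main(void)
	{
		set_memslot_analog(42);
		printf("published = %d\n", published);
		return 0;
	}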
From patchwork Thu May 6 18:42:38 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:38 -0700
Message-Id: <20210506184241.618958-6-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 5/8] KVM: x86/mmu: Add a field to control memslot rmap allocation

Add a field to control whether new memslots should have rmaps allocated
for them. As of this change, it's not safe to skip allocating rmaps, so
the field is always set to allocate rmaps. Future changes will make it
safe to operate without rmaps, using the TDP MMU. Then further changes
will allow the rmaps to be allocated lazily when needed for nested
operation.

No functional change expected.

Signed-off-by: Ben Gardon
Reviewed-by: David Hildenbrand
---
 arch/x86/include/asm/kvm_host.h |  8 ++++++++
 arch/x86/kvm/mmu/mmu.c          |  2 ++
 arch/x86/kvm/x86.c              | 18 +++++++++++++-----
 3 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ad22d4839bcc..00065f9bbc5e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1122,6 +1122,12 @@ struct kvm_arch {
 	 */
 	spinlock_t tdp_mmu_pages_lock;
 #endif /* CONFIG_X86_64 */
+
+	/*
+	 * If set, rmaps have been allocated for all memslots and should be
+	 * allocated for any newly created or modified memslots.
+	 */
+	bool memslots_have_rmaps;
 };
 
 struct kvm_vm_stat {
@@ -1853,4 +1859,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 
 int kvm_cpu_dirty_log_size(void);
 
+inline bool kvm_memslots_have_rmaps(struct kvm *kvm);
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 930ac8a7e7c9..8761b4925755 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5469,6 +5469,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 
 	kvm_mmu_init_tdp_mmu(kvm);
 
+	kvm->arch.memslots_have_rmaps = true;
+
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
 	kvm_page_track_register_notifier(kvm, node);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fc32a7dbe4c4..d7a40ce342cc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10868,7 +10868,13 @@ static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
 	return -ENOMEM;
 }
 
-static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
+bool kvm_memslots_have_rmaps(struct kvm *kvm)
+{
+	return kvm->arch.memslots_have_rmaps;
+}
+
+static int kvm_alloc_memslot_metadata(struct kvm *kvm,
+				      struct kvm_memory_slot *slot,
 				      unsigned long npages)
 {
 	int i;
@@ -10881,9 +10887,11 @@ static int kvm_alloc_memslot_metadata(struct kvm_memory_slot *slot,
 	 */
 	memset(&slot->arch, 0, sizeof(slot->arch));
 
-	r = alloc_memslot_rmap(slot, npages);
-	if (r)
-		return r;
+	if (kvm_memslots_have_rmaps(kvm)) {
+		r = alloc_memslot_rmap(slot, npages);
+		if (r)
+			return r;
+	}
 
 	for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) {
 		struct kvm_lpage_info *linfo;
@@ -10954,7 +10962,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				   enum kvm_mr_change change)
 {
 	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE)
-		return kvm_alloc_memslot_metadata(memslot,
+		return kvm_alloc_memslot_metadata(kvm, memslot,
 						  mem->memory_size >> PAGE_SHIFT);
 	return 0;
 }
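
Stitching the x86.c hunks together, the opening of
kvm_alloc_memslot_metadata() after this patch looks as follows (a
reconstruction with condensed comments; the unchanged lpage_info loop is
elided):

	static int kvm_alloc_memslot_metadata(struct kvm *kvm,
					      struct kvm_memory_slot *slot,
					      unsigned long npages)
	{
		int i;
		int r;

		/* Clear out the previous array pointers for the KVM_MR_MOVE case. */
		memset(&slot->arch, 0, sizeof(slot->arch));

		/* Always true as of this patch; later patches make it conditional. */
		if (kvm_memslots_have_rmaps(kvm)) {
			r = alloc_memslot_rmap(slot, npages);
			if (r)
				return r;
		}

		/* lpage_info allocation for the large-page levels follows, unchanged. */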
From patchwork Thu May 6 18:42:39 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:39 -0700
Message-Id: <20210506184241.618958-7-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 6/8] KVM: x86/mmu: Skip rmap operations if rmaps not allocated

If only the TDP MMU is being used to manage the memory mappings for a
VM, then many rmap operations can be skipped as they are guaranteed to
be no-ops. This saves some time which would otherwise be spent on the
rmap operations. It also avoids acquiring the MMU lock in write mode
for many operations.

This makes it safe to run the VM without rmaps allocated when only
using the TDP MMU, and sets the stage for waiting to allocate the rmaps
until they're needed.
Signed-off-by: Ben Gardon
Reported-by: kernel test robot
---
 arch/x86/kvm/mmu/mmu.c | 128 +++++++++++++++++++++++++----------------
 1 file changed, 77 insertions(+), 51 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8761b4925755..730ea84bf7e7 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1189,6 +1189,10 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, true);
+
+	if (!kvm_memslots_have_rmaps(kvm))
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1218,6 +1222,10 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 	if (is_tdp_mmu_enabled(kvm))
 		kvm_tdp_mmu_clear_dirty_pt_masked(kvm, slot,
 				slot->base_gfn + gfn_offset, mask, false);
+
+	if (!kvm_memslots_have_rmaps(kvm))
+		return;
+
 	while (mask) {
 		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
@@ -1260,9 +1268,12 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 	int i;
 	bool write_protected = false;
 
-	for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
-		rmap_head = __gfn_to_rmap(gfn, i, slot);
-		write_protected |= __rmap_write_protect(kvm, rmap_head, true);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
+			rmap_head = __gfn_to_rmap(gfn, i, slot);
+			write_protected |= __rmap_write_protect(kvm, rmap_head,
+								true);
+		}
 	}
 
 	if (is_tdp_mmu_enabled(kvm))
@@ -1433,9 +1444,10 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
 
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
@@ -1445,9 +1457,10 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool flush;
+	bool flush = false;
 
-	flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		flush = kvm_handle_gfn_range(kvm, range, kvm_set_pte_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		flush |= kvm_tdp_mmu_set_spte_gfn(kvm, range);
@@ -1500,9 +1513,10 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
 
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		young = kvm_handle_gfn_range(kvm, range, kvm_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_age_gfn_range(kvm, range);
@@ -1512,9 +1526,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	bool young;
+	bool young = false;
 
-	young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
+	if (kvm_memslots_have_rmaps(kvm))
+		young = kvm_handle_gfn_range(kvm, range, kvm_test_age_rmapp);
 
 	if (is_tdp_mmu_enabled(kvm))
 		young |= kvm_tdp_mmu_test_age_gfn(kvm, range);
@@ -5440,7 +5455,8 @@ static void kvm_mmu_zap_all_fast(struct kvm *kvm)
 	 */
 	kvm_reload_remote_mmus(kvm);
 
-	kvm_zap_obsolete_pages(kvm);
+	if (kvm_memslots_have_rmaps(kvm))
+		kvm_zap_obsolete_pages(kvm);
 
 	write_unlock(&kvm->mmu_lock);
 
@@ -5492,29
+5508,29 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 	int i;
 	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-		kvm_for_each_memslot(memslot, slots) {
-			gfn_t start, end;
-
-			start = max(gfn_start, memslot->base_gfn);
-			end = min(gfn_end, memslot->base_gfn + memslot->npages);
-			if (start >= end)
-				continue;
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+			slots = __kvm_memslots(kvm, i);
+			kvm_for_each_memslot(memslot, slots) {
+				gfn_t start, end;
+
+				start = max(gfn_start, memslot->base_gfn);
+				end = min(gfn_end, memslot->base_gfn + memslot->npages);
+				if (start >= end)
+					continue;
 
-			flush = slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-							PG_LEVEL_4K,
-							KVM_MAX_HUGEPAGE_LEVEL,
-							start, end - 1, true, flush);
+				flush = slot_handle_level_range(kvm, memslot,
+						kvm_zap_rmapp, PG_LEVEL_4K,
+						KVM_MAX_HUGEPAGE_LEVEL, start,
+						end - 1, true, flush);
+			}
 		}
+		if (flush)
+			kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
+		write_unlock(&kvm->mmu_lock);
 	}
 
-	if (flush)
-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end);
-
-	write_unlock(&kvm->mmu_lock);
-
 	if (is_tdp_mmu_enabled(kvm)) {
 		flush = false;
 
@@ -5541,12 +5557,15 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 				      struct kvm_memory_slot *memslot,
 				      int start_level)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
-				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
+					  start_level, KVM_MAX_HUGEPAGE_LEVEL,
+					  false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5616,16 +5635,15 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
 	struct kvm_memory_slot *slot = (struct kvm_memory_slot *)memslot;
 	bool flush;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-
-	if (flush)
-		kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
+		if (flush)
+			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
-		flush = false;
-
 		read_lock(&kvm->mmu_lock);
 		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
 		if (flush)
@@ -5652,11 +5670,14 @@ void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
 void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   struct kvm_memory_slot *memslot)
 {
-	bool flush;
+	bool flush = false;
 
-	write_lock(&kvm->mmu_lock);
-	flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty, false);
-	write_unlock(&kvm->mmu_lock);
+	if (kvm_memslots_have_rmaps(kvm)) {
+		write_lock(&kvm->mmu_lock);
+		flush = slot_handle_leaf(kvm, memslot, __rmap_clear_dirty,
+					 false);
+		write_unlock(&kvm->mmu_lock);
+	}
 
 	if (is_tdp_mmu_enabled(kvm)) {
 		read_lock(&kvm->mmu_lock);
@@ -5681,6 +5702,14 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 	int ign;
 
 	write_lock(&kvm->mmu_lock);
+	if (is_tdp_mmu_enabled(kvm))
+		kvm_tdp_mmu_zap_all(kvm);
+
+	if (!kvm_memslots_have_rmaps(kvm)) {
+		write_unlock(&kvm->mmu_lock);
+		return;
+	}
+
 restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (WARN_ON(sp->role.invalid))
@@ -5693,9 +5722,6 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
-	if (is_tdp_mmu_enabled(kvm))
-		kvm_tdp_mmu_zap_all(kvm);
-
 	write_unlock(&kvm->mmu_lock);
 }
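
Every function converted above follows the same two-leg shape; a condensed
template of the pattern (handle_via_rmaps/handle_via_tdp_mmu are placeholder
names for this sketch, not real kernel symbols):

	bool kvm_foo_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
	{
		bool ret = false;

		/* Legacy/shadow MMU leg: skipped entirely when no rmaps exist. */
		if (kvm_memslots_have_rmaps(kvm))
			ret = handle_via_rmaps(kvm, range);

		/* TDP MMU leg: runs regardless, under its own (often read) lock. */
		if (is_tdp_mmu_enabled(kvm))
			ret |= handle_via_tdp_mmu(kvm, range);

		return ret;
	}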
From patchwork Thu May 6 18:42:40 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:40 -0700
Message-Id: <20210506184241.618958-8-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 7/8] KVM: x86/mmu: Protect rmaps independently with SRCU

In preparation for lazily allocating the rmaps when the TDP MMU is in
use, protect the rmaps with SRCU. Unfortunately, this requires
propagating a pointer to struct kvm around to several functions.

Suggested-by: Paolo Bonzini
Signed-off-by: Ben Gardon
Reported-by: kernel test robot
Reported-by: kernel test robot
---
 arch/x86/kvm/mmu/mmu.c | 57 +++++++++++++++++++++++++-----------------
 arch/x86/kvm/x86.c     |  6 ++---
 2 files changed, 37 insertions(+), 26 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 730ea84bf7e7..48067c572c02 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -927,13 +927,18 @@ static void pte_list_remove(struct kvm_rmap_head *rmap_head, u64 *sptep)
 	__pte_list_remove(sptep, rmap_head);
 }
 
-static struct kvm_rmap_head *__gfn_to_rmap(gfn_t gfn, int level,
+static struct kvm_rmap_head *__gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
+					   int level,
 					   struct kvm_memory_slot *slot)
 {
+	struct kvm_rmap_head *head;
 	unsigned long idx;
 
 	idx = gfn_to_index(gfn, slot->base_gfn, level);
-	return &slot->arch.rmap[level - PG_LEVEL_4K][idx];
+	head = srcu_dereference_check(slot->arch.rmap[level - PG_LEVEL_4K],
+				      &kvm->srcu,
+				      lockdep_is_held(&kvm->slots_arch_lock));
+	return &head[idx];
 }
 
 static struct kvm_rmap_head *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
@@ -944,7 +949,7 @@ static struct kvm_rmap_head *gfn_to_rmap(struct kvm *kvm, gfn_t gfn,
 
 	slots = kvm_memslots_for_spte_role(kvm, sp->role);
 	slot = __gfn_to_memslot(slots, gfn);
-	return __gfn_to_rmap(gfn, sp->role.level, slot);
+	return __gfn_to_rmap(kvm, gfn, sp->role.level, slot);
 }
 
 static bool rmap_can_add(struct kvm_vcpu *vcpu)
@@ -1194,7 +1199,8 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 		return;
 
 	while (mask) {
-		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
+		rmap_head = __gfn_to_rmap(kvm,
+				slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
 		__rmap_write_protect(kvm, rmap_head, false);
 
@@ -1227,7 +1233,8 @@ static void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
 		return;
 
 	while (mask) {
-		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
+		rmap_head = __gfn_to_rmap(kvm,
+				slot->base_gfn + gfn_offset + __ffs(mask),
 					  PG_LEVEL_4K, slot);
 		__rmap_clear_dirty(kvm, rmap_head, slot);
 
@@ -1270,7 +1277,7 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 
 	if (kvm_memslots_have_rmaps(kvm)) {
 		for (i = PG_LEVEL_4K; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) {
-			rmap_head = __gfn_to_rmap(gfn, i, slot);
+			rmap_head = __gfn_to_rmap(kvm, gfn, i, slot);
 			write_protected |= __rmap_write_protect(kvm, rmap_head,
 								true);
 		}
@@ -1373,17 +1380,19 @@ struct slot_rmap_walk_iterator {
 };
 
 static void
-rmap_walk_init_level(struct slot_rmap_walk_iterator *iterator, int level)
+rmap_walk_init_level(struct kvm *kvm, struct slot_rmap_walk_iterator *iterator,
+		     int level)
 {
 	iterator->level = level;
 	iterator->gfn = iterator->start_gfn;
-	iterator->rmap = __gfn_to_rmap(iterator->gfn, level, iterator->slot);
-	iterator->end_rmap = __gfn_to_rmap(iterator->end_gfn, level,
+	iterator->rmap = __gfn_to_rmap(kvm, iterator->gfn, level,
+				       iterator->slot);
+	iterator->end_rmap = __gfn_to_rmap(kvm, iterator->end_gfn, level,
 					   iterator->slot);
 }
 
 static void
-slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
+slot_rmap_walk_init(struct kvm *kvm, struct slot_rmap_walk_iterator *iterator,
 		    struct kvm_memory_slot *slot, int start_level,
 		    int end_level, gfn_t start_gfn, gfn_t end_gfn)
 {
@@ -1393,7 +1402,7 @@ slot_rmap_walk_init(struct slot_rmap_walk_iterator *iterator,
 	iterator->start_gfn = start_gfn;
 	iterator->end_gfn = end_gfn;
 
-	rmap_walk_init_level(iterator, iterator->start_level);
+	rmap_walk_init_level(kvm, iterator, iterator->start_level);
 }
 
 static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
@@ -1401,7 +1410,8 @@ static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
 	return !!iterator->rmap;
 }
 
-static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
+static void slot_rmap_walk_next(struct kvm *kvm,
+				struct slot_rmap_walk_iterator *iterator)
 {
 	if (++iterator->rmap <= iterator->end_rmap) {
 		iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level));
@@ -1413,15 +1423,15 @@ static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
 		return;
 	}
 
-	rmap_walk_init_level(iterator, iterator->level);
+	rmap_walk_init_level(kvm, iterator, iterator->level);
 }
 
-#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_,	\
-				 _start_gfn, _end_gfn, _iter_)		\
-	for (slot_rmap_walk_init(_iter_, _slot_, _start_level_,	\
-				 _end_level_, _start_gfn, _end_gfn);	\
-	     slot_rmap_walk_okay(_iter_);				\
-	     slot_rmap_walk_next(_iter_))
+#define for_each_slot_rmap_range(_kvm_, _slot_, _start_level_, _end_level_, \
+				 _start_gfn, _end_gfn, _iter_)		     \
+	for (slot_rmap_walk_init(_kvm_, _iter_, _slot_, _start_level_,	     \
+				 _end_level_, _start_gfn, _end_gfn);	     \
+	     slot_rmap_walk_okay(_iter_);				     \
+	     slot_rmap_walk_next(_kvm_, _iter_))
 
 typedef bool (*rmap_handler_t)(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 			       struct kvm_memory_slot *slot, gfn_t gfn,
@@ -1434,8 +1444,9 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm,
 	struct slot_rmap_walk_iterator iterator;
 	bool ret = false;
 
-	for_each_slot_rmap_range(range->slot, PG_LEVEL_4K, KVM_MAX_HUGEPAGE_LEVEL,
-				 range->start, range->end - 1, &iterator)
+	for_each_slot_rmap_range(kvm, range->slot, PG_LEVEL_4K,
+				 KVM_MAX_HUGEPAGE_LEVEL, range->start,
+				 range->end - 1, &iterator)
 		ret |= handler(kvm, iterator.rmap, range->slot, iterator.gfn,
 			       iterator.level, range->pte);
 
@@ -5233,8 +5244,8 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
 {
 	struct slot_rmap_walk_iterator iterator;
 
-	for_each_slot_rmap_range(memslot, start_level, end_level, start_gfn,
-				 end_gfn, &iterator) {
+	for_each_slot_rmap_range(kvm, memslot, start_level, end_level,
+				 start_gfn, end_gfn, &iterator) {
 		if (iterator.rmap)
 			flush |= fn(kvm, iterator.rmap, memslot);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d7a40ce342cc..1098ab73a704 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10854,9 +10854,9 @@ static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
 		lpages = gfn_to_index(slot->base_gfn + npages - 1,
 				      slot->base_gfn, level) + 1;
 
-		slot->arch.rmap[i] =
-			kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
-				 GFP_KERNEL_ACCOUNT);
+		rcu_assign_pointer(slot->arch.rmap[i],
+				   kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
+					    GFP_KERNEL_ACCOUNT));
 		if (!slot->arch.rmap[i])
 			goto out_free;
 	}
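
The invariant this patch establishes, pulled out of the hunks above into one
place (both statements appear verbatim in the diff; surrounding context is
omitted): writers publish each rmap array with rcu_assign_pointer() while
holding slots_arch_lock, and readers must hold either kvm->srcu or that lock.

	/* Writer side, alloc_memslot_rmap(), under kvm->slots_arch_lock: */
	rcu_assign_pointer(slot->arch.rmap[i],
			   kvcalloc(lpages, sizeof(*slot->arch.rmap[i]),
				    GFP_KERNEL_ACCOUNT));

	/* Reader side, __gfn_to_rmap(), valid under kvm->srcu or the lock: */
	head = srcu_dereference_check(slot->arch.rmap[level - PG_LEVEL_4K],
				      &kvm->srcu,
				      lockdep_is_held(&kvm->slots_arch_lock));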
From patchwork Thu May 6 18:42:41 2021
From: Ben Gardon
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Peter Xu, Sean Christopherson, Peter Shier,
    Yulei Zhang, Wanpeng Li, Xiao Guangrong, Kai Huang, Keqian Zhu,
    Ben Gardon
Date: Thu, 6 May 2021 11:42:41 -0700
Message-Id: <20210506184241.618958-9-bgardon@google.com>
In-Reply-To: <20210506184241.618958-1-bgardon@google.com>
Subject: [PATCH v3 8/8] KVM: x86/mmu: Lazily allocate memslot rmaps

If the TDP MMU is in use, wait to allocate the rmaps until the shadow
MMU is actually used (i.e., until a nested VM is launched). This saves
memory equal to 0.2% of guest memory in cases where the TDP MMU is used
and there are no nested guests involved.

Signed-off-by: Ben Gardon
Reported-by: kernel test robot
---
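A condensed outline of the resulting allocation flow (assembled from the
hunks below; an outline for orientation, not literal code): the first
shadow-root allocation, i.e. the first nested use, triggers the rmap
fill-in.

	mmu_alloc_shadow_roots()                  /* first use of the shadow MMU */
	    alloc_all_memslots_rmaps(kvm)
	        if (kvm_memslots_have_rmaps(kvm)) /* smp_load_acquire */
	            return 0;                     /* already allocated */
	        mutex_lock(&kvm->slots_arch_lock);
	        /* for each address space, for each memslot: */
	            alloc_memslot_rmap(slot, slot->npages);
	        smp_store_release(&kvm->arch.memslots_have_rmaps, true);
	        mutex_unlock(&kvm->slots_arch_lock);
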
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu/mmu.c          | 14 ++++++++++---
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 ++++--
 arch/x86/kvm/mmu/tdp_mmu.h      |  4 ++--
 arch/x86/kvm/x86.c              | 37 ++++++++++++++++++++++++++++++++-
 5 files changed, 54 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 00065f9bbc5e..7b8e1532fb55 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1860,5 +1860,6 @@ static inline int kvm_cpu_get_apicid(int mps_cpu)
 int kvm_cpu_dirty_log_size(void);
 
 inline bool kvm_memslots_have_rmaps(struct kvm *kvm);
+int alloc_all_memslots_rmaps(struct kvm *kvm);
 
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 48067c572c02..e3a3b65829c5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3306,6 +3306,10 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 		}
 	}
 
+	r = alloc_all_memslots_rmaps(vcpu->kvm);
+	if (r)
+		return r;
+
 	write_lock(&vcpu->kvm->mmu_lock);
 	r = make_mmu_pages_available(vcpu);
 	if (r < 0)
@@ -5494,9 +5498,13 @@ void kvm_mmu_init_vm(struct kvm *kvm)
 {
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 
-	kvm_mmu_init_tdp_mmu(kvm);
-
-	kvm->arch.memslots_have_rmaps = true;
+	if (!kvm_mmu_init_tdp_mmu(kvm))
+		/*
+		 * No smp_load/store wrappers needed here as we are in
+		 * VM init and there cannot be any memslots / other threads
+		 * accessing this struct kvm yet.
+		 */
+		kvm->arch.memslots_have_rmaps = true;
 
 	node->track_write = kvm_mmu_pte_write;
 	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 83cbdbe5de5a..5342aca2c8e0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -14,10 +14,10 @@ static bool __read_mostly tdp_mmu_enabled = false;
 module_param_named(tdp_mmu, tdp_mmu_enabled, bool, 0644);
 
 /* Initializes the TDP MMU for the VM, if enabled. */
-void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
+bool kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
-		return;
+		return false;
 
 	/* This should not be changed for the lifetime of the VM. */
 	kvm->arch.tdp_mmu_enabled = true;
@@ -25,6 +25,8 @@ void kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_roots);
 	spin_lock_init(&kvm->arch.tdp_mmu_pages_lock);
 	INIT_LIST_HEAD(&kvm->arch.tdp_mmu_pages);
+
+	return true;
 }
 
 static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index 5fdf63090451..b046ab5137a1 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -80,12 +80,12 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
 			 int *root_level);
 
 #ifdef CONFIG_X86_64
-void kvm_mmu_init_tdp_mmu(struct kvm *kvm);
+bool kvm_mmu_init_tdp_mmu(struct kvm *kvm);
 void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm);
 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return kvm->arch.tdp_mmu_enabled; }
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return sp->tdp_mmu_page; }
 #else
-static inline void kvm_mmu_init_tdp_mmu(struct kvm *kvm) {}
+static inline bool kvm_mmu_init_tdp_mmu(struct kvm *kvm) { return false; }
 static inline void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm) {}
 static inline bool is_tdp_mmu_enabled(struct kvm *kvm) { return false; }
 static inline bool is_tdp_mmu_page(struct kvm_mmu_page *sp) { return false; }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1098ab73a704..95e74fb9fc20 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10868,9 +10868,44 @@ static int alloc_memslot_rmap(struct kvm_memory_slot *slot,
 	return -ENOMEM;
 }
 
+int alloc_all_memslots_rmaps(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *slot;
+	int r = 0;
+	int i;
+
+	if (kvm_memslots_have_rmaps(kvm))
+		return 0;
+
+	mutex_lock(&kvm->slots_arch_lock);
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(slot, slots) {
+			r = alloc_memslot_rmap(slot, slot->npages);
+			if (r) {
+				mutex_unlock(&kvm->slots_arch_lock);
+				return r;
+			}
+		}
+	}
+
+	/*
+	 * memslots_have_rmaps is set and read in different lock contexts,
+	 * so protect it with smp_load/store.
+	 */
+	smp_store_release(&kvm->arch.memslots_have_rmaps, true);
+	mutex_unlock(&kvm->slots_arch_lock);
+	return 0;
+}
+
 bool kvm_memslots_have_rmaps(struct kvm *kvm)
 {
-	return kvm->arch.memslots_have_rmaps;
+	/*
+	 * memslots_have_rmaps is set and read in different lock contexts,
+	 * so protect it with smp_load/store.
+	 */
+	return smp_load_acquire(&kvm->arch.memslots_have_rmaps);
 }
 
 static int kvm_alloc_memslot_metadata(struct kvm *kvm,