From patchwork Fri Apr 29 20:11:30 2022
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 12832710
Date: Fri, 29 Apr 2022 20:11:30 +0000
In-Reply-To: <20220429201131.3397875-1-yosryahmed@google.com>
Message-Id: <20220429201131.3397875-4-yosryahmed@google.com>
References: <20220429201131.3397875-1-yosryahmed@google.com>
Subject: [PATCH v4 3/4] KVM: x86/mmu: count KVM mmu usage in secondary pagetable stats.
From: Yosry Ahmed
To: Tejun Heo, Johannes Weiner, Zefan Li, Marc Zyngier, James Morse,
    Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Sean Christopherson,
    Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Andrew Morton,
    Michal Hocko, Roman Gushchin, Shakeel Butt, Oliver Upton
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

Count the pages used by KVM mmu on x86 in secondary pagetable stats.

For the legacy mmu, accounting pagetable stats is combined with KVM's
existing accounting of mmu pages in the newly introduced
kvm_[un]account_mmu_page() helpers.

For tdp mmu, introduce new tdp_[un]account_mmu_page() helpers that
combine accounting pagetable stats with the tdp_mmu_pages counter
accounting. The tdp_mmu_pages counter was introduced in series [1];
this patch is rebased on top of the first two patches in that series.

[1]https://lore.kernel.org/lkml/20220401063636.2414200-1-mizhang@google.com/

Signed-off-by: Yosry Ahmed
---
 arch/x86/kvm/mmu/mmu.c     | 16 ++++++++++++++--
 arch/x86/kvm/mmu/tdp_mmu.c | 16 ++++++++++++++--
 2 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 78d8e1d8fb99..e5b0e826445d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1679,6 +1679,18 @@ static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, long nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
+static void kvm_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	kvm_mod_used_mmu_pages(kvm, +1);
+	kvm_account_pgtable_pages((void *)sp->spt, +1);
+}
+
+static void kvm_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	kvm_mod_used_mmu_pages(kvm, -1);
+	kvm_account_pgtable_pages((void *)sp->spt, -1);
+}
+
 static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
 {
 	MMU_WARN_ON(!is_empty_shadow_page(sp->spt));
@@ -1734,7 +1746,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 	 */
 	sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen;
 	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
-	kvm_mod_used_mmu_pages(vcpu->kvm, +1);
+	kvm_account_mmu_page(vcpu->kvm, sp);
 
 	return sp;
 }
@@ -2363,7 +2375,7 @@ static bool __kvm_mmu_prepare_zap_page(struct kvm *kvm,
 			list_add(&sp->link, invalid_list);
 		else
 			list_move(&sp->link, invalid_list);
-		kvm_mod_used_mmu_pages(kvm, -1);
+		kvm_unaccount_mmu_page(kvm, sp);
 	} else {
 		/*
 		 * Remove the active root from the active page list, the root
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3456277ade18..6295c4da5dee 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -371,6 +371,18 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 	}
 }
 
+static void tdp_account_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	atomic64_inc(&kvm->arch.tdp_mmu_pages);
+	kvm_account_pgtable_pages((void *)sp->spt, +1);
+}
+
+static void tdp_unaccount_mmu_page(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	atomic64_dec(&kvm->arch.tdp_mmu_pages);
+	kvm_account_pgtable_pages((void *)sp->spt, -1);
+}
+
 /**
  * tdp_mmu_unlink_sp() - Remove a shadow page from the list of used pages
  *
@@ -383,7 +395,7 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 static void tdp_mmu_unlink_sp(struct kvm *kvm, struct kvm_mmu_page *sp,
 			      bool shared)
 {
-	atomic64_dec(&kvm->arch.tdp_mmu_pages);
+	tdp_unaccount_mmu_page(kvm, sp);
 
 	if (!sp->lpage_disallowed)
 		return;
@@ -1121,7 +1133,7 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 		tdp_mmu_set_spte(kvm, iter, spte);
 	}
 
-	atomic64_inc(&kvm->arch.tdp_mmu_pages);
+	tdp_account_mmu_page(kvm, sp);
 
 	return 0;
 }
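
--

For reviewers following along: the kvm_account_pgtable_pages() helper that
every hunk above calls is introduced earlier in this series, not in this
patch. A minimal sketch of its shape, assuming it matches the form the
series proposes for include/linux/kvm_host.h (illustrative only; the
authoritative definition is in the earlier patch):

	/*
	 * Sketch: charge (nr > 0) or uncharge (nr < 0) @nr pages starting at
	 * @virt against the per-node/per-memcg NR_SECONDARY_PAGETABLE stat,
	 * which an earlier patch in this series adds alongside NR_PAGETABLE.
	 * Thread-safe because mod_lruvec_page_state() is thread-safe.
	 */
	static inline void kvm_account_pgtable_pages(void *virt, int nr)
	{
		mod_lruvec_page_state(virt_to_page(virt),
				      NR_SECONDARY_PAGETABLE, nr);
	}

With that in place, the +1/-1 calls from kvm_[un]account_mmu_page() and
tdp_[un]account_mmu_page() keep the secondary pagetable stat in lockstep
with KVM's existing per-VM page counters, so no hunk needs to touch the
allocation or free paths directly.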