From patchwork Mon Apr 4 23:41:52 2022
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 12800943
Date: Mon, 4 Apr 2022 23:41:52 +0000
In-Reply-To: <20220404234154.1251388-1-yosryahmed@google.com>
Message-Id: <20220404234154.1251388-4-yosryahmed@google.com>
References: <20220404234154.1251388-1-yosryahmed@google.com>
X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog
Subject: [PATCH v2 3/5] KVM: arm64: mm: count KVM page table pages in pagetable stats
From: Yosry Ahmed
To: Huacai Chen, Aleksandar Markovic, Anup Patel, Atish Patra, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel
Cc: mizhang@google.com, David Matlack, kvm@vger.kernel.org,
	kvm-riscv@lists.infradead.org, linux-mips@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Shakeel Butt, Andrew Morton, Yosry Ahmed

Count the pages used by KVM in arm64 for page tables in pagetable stats.

Account pages allocated for PTEs in pgtable init functions and
kvm_set_table_pte().

Since most page table pages are freed using put_page(), add a helper
function put_pte_page() that checks if this is the last ref for a pte
page before putting it, and decrements the pagetable stats accordingly.

Signed-off-by: Yosry Ahmed
---
 arch/arm64/kernel/image-vars.h |  3 ++
 arch/arm64/kvm/hyp/pgtable.c   | 50 +++++++++++++++++++++-------------
 2 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 884844849c70..4c87a718ccf0 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -139,6 +139,9 @@ KVM_NVHE_ALIAS(__hyp_rodata_end);
 /* pKVM static key */
 KVM_NVHE_ALIAS(kvm_protected_mode_initialized);
 
+/* Called by kvm_account_pgtable_pages() to update pagetable stats */
+KVM_NVHE_ALIAS(__mod_lruvec_page_state);
+
 #endif /* CONFIG_KVM */
 
 #endif /* __ARM64_KERNEL_IMAGE_VARS_H */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 2cb3867eb7c2..53e13c3313e9 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -152,6 +152,7 @@ static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp,
 
 	WARN_ON(kvm_pte_valid(old));
 	smp_store_release(ptep, pte);
+	kvm_account_pgtable_pages((void *)childp, +1);
 }
 
 static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, u32 level)
@@ -326,6 +327,14 @@ int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
 	return ret;
 }
 
+static void put_pte_page(kvm_pte_t *ptep, struct kvm_pgtable_mm_ops *mm_ops)
+{
+	/* If this is the last page ref, decrement pagetable stats first. */
+	if (!mm_ops->page_count || mm_ops->page_count(ptep) == 1)
+		kvm_account_pgtable_pages((void *)ptep, -1);
+	mm_ops->put_page(ptep);
+}
+
 struct hyp_map_data {
 	u64 phys;
 	kvm_pte_t attr;
@@ -488,10 +497,10 @@ static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	dsb(ish);
 	isb();
-	mm_ops->put_page(ptep);
+	put_pte_page(ptep, mm_ops);
 
 	if (childp)
-		mm_ops->put_page(childp);
+		put_pte_page(childp, mm_ops);
 
 	return 0;
 }
@@ -522,6 +531,7 @@ int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
 	pgt->pgd = (kvm_pte_t *)mm_ops->zalloc_page(NULL);
 	if (!pgt->pgd)
 		return -ENOMEM;
+	kvm_account_pgtable_pages((void *)pgt->pgd, +1);
 
 	pgt->ia_bits = va_bits;
 	pgt->start_level = KVM_PGTABLE_MAX_LEVELS - levels;
@@ -541,10 +551,10 @@ static int hyp_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (!kvm_pte_valid(pte))
 		return 0;
 
-	mm_ops->put_page(ptep);
+	put_pte_page(ptep, mm_ops);
 
 	if (kvm_pte_table(pte, level))
-		mm_ops->put_page(kvm_pte_follow(pte, mm_ops));
+		put_pte_page(kvm_pte_follow(pte, mm_ops), mm_ops);
 
 	return 0;
 }
@@ -558,7 +568,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
 	};
 
 	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-	pgt->mm_ops->put_page(pgt->pgd);
+	put_pte_page(pgt->pgd, pgt->mm_ops);
 	pgt->pgd = NULL;
 }
@@ -694,7 +704,7 @@ static void stage2_put_pte(kvm_pte_t *ptep, struct kvm_s2_mmu *mmu, u64 addr,
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, addr, level);
 	}
 
-	mm_ops->put_page(ptep);
+	put_pte_page(ptep, mm_ops);
 }
 
 static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte)
@@ -795,7 +805,7 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (data->anchor) {
 		if (stage2_pte_is_counted(pte))
-			mm_ops->put_page(ptep);
+			put_pte_page(ptep, mm_ops);
 
 		return 0;
 	}
@@ -848,8 +858,8 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
 		childp = kvm_pte_follow(*ptep, mm_ops);
 	}
 
-	mm_ops->put_page(childp);
-	mm_ops->put_page(ptep);
+	put_pte_page(childp, mm_ops);
+	put_pte_page(ptep, mm_ops);
 
 	return ret;
 }
@@ -962,7 +972,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (!kvm_pte_valid(pte)) {
 		if (stage2_pte_is_counted(pte)) {
 			kvm_clear_pte(ptep);
-			mm_ops->put_page(ptep);
+			put_pte_page(ptep, mm_ops);
 		}
 		return 0;
 	}
@@ -988,7 +998,7 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 					       kvm_granule_size(level));
 
 	if (childp)
-		mm_ops->put_page(childp);
+		put_pte_page(childp, mm_ops);
 
 	return 0;
 }
@@ -1177,16 +1187,17 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			      enum kvm_pgtable_stage2_flags flags,
 			      kvm_pgtable_force_pte_cb_t force_pte_cb)
 {
-	size_t pgd_sz;
+	u32 pgd_num;
 	u64 vtcr = mmu->arch->vtcr;
 	u32 ia_bits = VTCR_EL2_IPA(vtcr);
 	u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr);
 	u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
 
-	pgd_sz = kvm_pgd_pages(ia_bits, start_level) * PAGE_SIZE;
-	pgt->pgd = mm_ops->zalloc_pages_exact(pgd_sz);
+	pgd_num = kvm_pgd_pages(ia_bits, start_level);
+	pgt->pgd = mm_ops->zalloc_pages_exact(pgd_num * PAGE_SIZE);
 	if (!pgt->pgd)
 		return -ENOMEM;
+	kvm_account_pgtable_pages((void *)pgt->pgd, +pgd_num);
 
 	pgt->ia_bits = ia_bits;
 	pgt->start_level = start_level;
@@ -1210,17 +1221,17 @@ static int stage2_free_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	if (!stage2_pte_is_counted(pte))
 		return 0;
 
-	mm_ops->put_page(ptep);
+	put_pte_page(ptep, mm_ops);
 
 	if (kvm_pte_table(pte, level))
-		mm_ops->put_page(kvm_pte_follow(pte, mm_ops));
+		put_pte_page(kvm_pte_follow(pte, mm_ops), mm_ops);
 
 	return 0;
 }
 
 void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
 {
-	size_t pgd_sz;
+	u32 pgd_num;
 	struct kvm_pgtable_walker walker = {
 		.cb = stage2_free_walker,
 		.flags = KVM_PGTABLE_WALK_LEAF |
@@ -1229,7 +1240,8 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
 	};
 
 	WARN_ON(kvm_pgtable_walk(pgt, 0, BIT(pgt->ia_bits), &walker));
-	pgd_sz = kvm_pgd_pages(pgt->ia_bits, pgt->start_level) * PAGE_SIZE;
-	pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_sz);
+	pgd_num = kvm_pgd_pages(pgt->ia_bits, pgt->start_level);
+	kvm_account_pgtable_pages((void *)pgt->pgd, -pgd_num);
+	pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_num * PAGE_SIZE);
 	pgt->pgd = NULL;
 }
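
For reviewers: kvm_account_pgtable_pages() is introduced earlier in this
series and is not shown in this diff. Below is a minimal sketch of what
such a helper could look like, assuming it adjusts the NR_PAGETABLE
counter through __mod_lruvec_page_state() for the page backing the
table; that assumption is only inferred from the new nVHE alias in
image-vars.h above, and the real definition lives in the earlier patch.

/*
 * Illustrative sketch only -- not part of this patch. Assumes the helper
 * added earlier in the series charges NR_PAGETABLE for the page backing
 * a KVM page table, matching the __mod_lruvec_page_state() alias above.
 */
#include <linux/mm.h>
#include <linux/memcontrol.h>
#include <linux/mmzone.h>

static inline void account_pgtable_pages_sketch(void *virt, int nr)
{
	/* nr > 0 when a table page is installed, nr < 0 on the last put. */
	__mod_lruvec_page_state(virt_to_page(virt), NR_PAGETABLE, nr);
}

Under that assumption the accounting in this patch stays balanced: +1 in
kvm_set_table_pte() and kvm_pgtable_hyp_init(), +pgd_num in
__kvm_pgtable_stage2_init(), and a matching decrement in put_pte_page()
and kvm_pgtable_stage2_destroy() just before the pages are freed.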