From patchwork Tue Mar 2 14:59:58 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12113757
Date: Tue, 2 Mar 2021 14:59:58 +0000
In-Reply-To: <20210302150002.3685113-1-qperret@google.com>
Message-Id: <20210302150002.3685113-29-qperret@google.com>
References: <20210302150002.3685113-1-qperret@google.com>
Subject: [PATCH v3 28/32] KVM: arm64: Add kvm_pgtable_stage2_idmap_greedy()
From: Quentin Perret <qperret@google.com>
To: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
    james.morse@arm.com, julien.thierry.kdev@gmail.com,
    suzuki.poulose@arm.com
Cc: android-kvm@google.com, linux-kernel@vger.kernel.org,
    kernel-team@android.com, kvmarm@lists.cs.columbia.edu,
    linux-arm-kernel@lists.infradead.org, tabba@google.com,
    mark.rutland@arm.com, dbrazdil@google.com, mate.toth-pal@arm.com,
    seanjc@google.com, qperret@google.com, robh+dt@kernel.org

Add a new map function to the KVM page-table library that allows block
identity-mappings to be created greedily. This will be useful for creating
the host stage 2 page-table lazily, as it will own most of memory and will
always be identity mapped.

The new helper creates the mapping in two steps: it first walks the
page-table to compute the largest possible granule that can be used to
idmap a given address without overriding existing incompatible mappings,
and then creates a mapping accordingly.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h |  37 +++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 119 +++++++++++++++++++++++++++
 2 files changed, 156 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c9f6ed76e0ad..e51dcce69a5e 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -96,6 +96,16 @@ enum kvm_pgtable_prot {
 #define PAGE_HYP_RO		(KVM_PGTABLE_PROT_R)
 #define PAGE_HYP_DEVICE		(PAGE_HYP | KVM_PGTABLE_PROT_DEVICE)
 
+/**
+ * struct kvm_mem_range - Range of Intermediate Physical Addresses
+ * @start:	Start of the range.
+ * @end:	End of the range.
+ */
+struct kvm_mem_range {
+	u64 start;
+	u64 end;
+};
+
 /**
  * enum kvm_pgtable_walk_flags - Flags to control a depth-first page-table walk.
  * @KVM_PGTABLE_WALK_LEAF:		Visit leaf entries, including invalid
@@ -379,4 +389,31 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
 int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
		     struct kvm_pgtable_walker *walker);
 
+/**
+ * kvm_pgtable_stage2_idmap_greedy() - Identity-map an Intermediate Physical
+ *					Address with a leaf entry at the highest
+ *					possible level.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_*_init().
+ * @addr:	Input address to identity-map.
+ * @prot:	Permissions and attributes for the mapping.
+ * @range:	Boundaries of the maximum memory region to map.
+ * @mc:		Cache of pre-allocated memory from which to allocate page-table
+ *		pages.
+ *
+ * This function attempts to install high-level identity-mappings covering @addr
+ * without overriding existing mappings with incompatible permissions or
+ * attributes. An existing table entry may be coalesced into a block mapping
+ * if and only if it covers @addr and all its leaf entries are either invalid
+ * or have permissions and attributes strictly matching @prot. The mapping is
+ * guaranteed to be contained within the boundaries specified by @range at call
+ * time. If only a subset of the memory specified by @range is mapped (because
+ * of e.g. alignment issues or existing incompatible mappings), @range will be
+ * updated accordingly.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_idmap_greedy(struct kvm_pgtable *pgt, u64 addr,
+				    enum kvm_pgtable_prot prot,
+				    struct kvm_mem_range *range,
+				    void *mc);
 #endif	/* __ARM64_KVM_PGTABLE_H__ */
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 8aa01a9e2603..6897d771e2b2 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -987,3 +987,122 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
 	pgt->mm_ops->free_pages_exact(pgt->pgd, pgd_sz);
 	pgt->pgd = NULL;
 }
+
+struct stage2_reduce_range_data {
+	kvm_pte_t attr;
+	u64 target_addr;
+	u32 start_level;
+	struct kvm_mem_range *range;
+};
+
+static int __stage2_reduce_range(struct stage2_reduce_range_data *data, u64 addr)
+{
+	u32 level = data->start_level;
+
+	for (; level < KVM_PGTABLE_MAX_LEVELS; level++) {
+		u64 granule = kvm_granule_size(level);
+		u64 start = ALIGN_DOWN(data->target_addr, granule);
+		u64 end = start + granule;
+
+		/*
+		 * The pinned address is in the current range, try one level
+		 * deeper.
+		 */
+		if (start == ALIGN_DOWN(addr, granule))
+			continue;
+
+		/*
+		 * Make sure the current range is a reduction of the existing
+		 * range before updating it.
+		 */
+		if (data->range->start <= start && end <= data->range->end) {
+			data->start_level = level;
+			data->range->start = start;
+			data->range->end = end;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+#define KVM_PTE_LEAF_S2_COMPAT_MASK	(KVM_PTE_LEAF_ATTR_S2_PERMS | \
+					 KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR | \
+					 KVM_PTE_LEAF_SW_BIT_PROT_NONE)
+
+static int stage2_reduce_range_walker(u64 addr, u64 end, u32 level,
+				      kvm_pte_t *ptep,
+				      enum kvm_pgtable_walk_flags flag,
+				      void * const arg)
+{
+	struct stage2_reduce_range_data *data = arg;
+	kvm_pte_t attr;
+	int ret;
+
+	if (addr < data->range->start || addr >= data->range->end)
+		return 0;
+
+	attr = *ptep & KVM_PTE_LEAF_S2_COMPAT_MASK;
+	if (!attr || attr == data->attr)
+		return 0;
+
+	/*
+	 * An existing mapping with incompatible protection attributes is
+	 * 'pinned', so reduce the range if we hit one.
+	 */
+	ret = __stage2_reduce_range(data, addr);
+	if (ret)
+		return ret;
+
+	return -EAGAIN;
+}
+
+static int stage2_reduce_range(struct kvm_pgtable *pgt, u64 addr,
+			       enum kvm_pgtable_prot prot,
+			       struct kvm_mem_range *range)
+{
+	struct stage2_reduce_range_data data = {
+		.start_level	= pgt->start_level,
+		.range		= range,
+		.target_addr	= addr,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= stage2_reduce_range_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= &data,
+	};
+	int ret;
+
+	data.attr = stage2_get_prot_attr(prot) & KVM_PTE_LEAF_S2_COMPAT_MASK;
+	if (!data.attr)
+		return -EINVAL;
+
+	/* Reduce the kvm_mem_range to a granule size */
+	ret = __stage2_reduce_range(&data, range->end);
+	if (ret)
+		return ret;
+
+	/* Walk the range to check permissions and reduce further if needed */
+	do {
+		ret = kvm_pgtable_walk(pgt, range->start, range->end, &walker);
+	} while (ret == -EAGAIN);
+
+	return ret;
+}
+
+int kvm_pgtable_stage2_idmap_greedy(struct kvm_pgtable *pgt, u64 addr,
+				    enum kvm_pgtable_prot prot,
+				    struct kvm_mem_range *range,
+				    void *mc)
+{
+	u64 size;
+	int ret;
+
+	ret = stage2_reduce_range(pgt, addr,
+				  prot, range);
+	if (ret)
+		return ret;
+
+	size = range->end - range->start;
+	return kvm_pgtable_stage2_map(pgt, range->start, size, range->start,
+				      prot, mc);
+}
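
As a note for readers, the two-step logic above (reduce the range to the largest granule-aligned block that avoids "pinned" incompatible leaves, re-reducing on -EAGAIN until the walk completes cleanly) can be sketched in plain Python. This is a standalone illustration, not kernel code: the granule sizes assume a 4KiB-page stage 2 configuration (1GiB/2MiB/4KiB blocks, mirroring what kvm_granule_size() would return), and the names reduce_range/idmap_range are hypothetical.

```python
# Illustrative block sizes for a 4KiB-granule stage 2 table (assumption):
# levels 1..3 allow 1GiB, 2MiB and 4KiB mappings respectively.
GRANULES = [1 << 30, 1 << 21, 1 << 12]


def align_down(addr, granule):
    """Round addr down to a granule boundary (granule is a power of two)."""
    return addr & ~(granule - 1)


def reduce_range(target_addr, pinned_addr, rng):
    """Shrink rng (a mutable [start, end] pair) to the largest granule-aligned
    block containing target_addr but excluding pinned_addr, like
    __stage2_reduce_range(). Returns False if even the smallest granule
    collides (the kernel's -EINVAL case)."""
    for granule in GRANULES:
        start = align_down(target_addr, granule)
        end = start + granule
        # The pinned address falls in this block: try one level deeper.
        if start == align_down(pinned_addr, granule):
            continue
        # Only accept a block that is a reduction of the current range.
        if rng[0] <= start and end <= rng[1]:
            rng[0], rng[1] = start, end
            return True
    return False


def idmap_range(target_addr, pinned_addrs, rng):
    """Mimic stage2_reduce_range(): an initial reduction to one block, then
    a re-walk each time the range still covers a pinned leaf (-EAGAIN)."""
    # Initial reduction: clamp the range to one granule-aligned block.
    if not reduce_range(target_addr, rng[1], rng):
        return None
    retry = True
    while retry:
        retry = False
        for pinned in pinned_addrs:
            if rng[0] <= pinned < rng[1]:  # walker hit an incompatible leaf
                if not reduce_range(target_addr, pinned, rng):
                    return None
                retry = True               # -EAGAIN: walk again
                break
    return tuple(rng)
```

For example, idmapping 0x40201000 inside [0x40000000, 0x80000000) with an incompatible mapping pinned at 0x40400000 cannot use the 1GiB block (it covers the pinned address), so the range reduces to the 2MiB block [0x40200000, 0x40400000), which the real helper would then pass to kvm_pgtable_stage2_map().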