From patchwork Tue Jan 26 12:44:41 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12046363
From: Keqian Zhu
To: Marc Zyngier, Will Deacon, Catalin Marinas
Subject: [RFC PATCH 4/7] kvm: arm64: Add some HW_DBM related pgtable interfaces
Date: Tue, 26 Jan 2021 20:44:41 +0800
Message-ID: <20210126124444.27136-5-zhukeqian1@huawei.com>
In-Reply-To: <20210126124444.27136-1-zhukeqian1@huawei.com>
References: <20210126124444.27136-1-zhukeqian1@huawei.com>
Cc: Mark Rutland, yubihong@huawei.com, jiangkunkun@huawei.com,
    Suzuki K Poulose, Cornelia Huck, Kirti Wankhede, xiexiangyou@huawei.com,
    zhengchuan@huawei.com, Alex Williamson, James Morse,
    wanghaibin.wang@huawei.com, Robin Murphy

This adds set_dbm, clear_dbm and sync_dirty interfaces at the pgtable
layer.

(1) set_dbm: Set the DBM bit on the last-level PTEs of a specified
    range. TLB invalidation is performed.
(2) clear_dbm: Clear the DBM bit on the last-level PTEs of a specified
    range. TLB invalidation is not performed.
(3) sync_dirty: Scan the last-level PTEs of a specified range and log
    each page as dirty if its PTE is writable.

In addition, the dirty state of a PTE is saved if it is invalidated by
map or unmap.

Signed-off-by: Keqian Zhu
---
A short usage sketch of the three interfaces follows after the diff below.

 arch/arm64/include/asm/kvm_pgtable.h | 45 ++++++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 70 ++++++++++++++++++++++++++++
 2 files changed, 115 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 52ab38db04c7..8984b5227cfc 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -204,6 +204,51 @@ int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
  */
 int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
 
+/**
+ * kvm_pgtable_stage2_clear_dbm() - Clear DBM of guest stage-2 address range
+ *                                  without TLB invalidation (only last level).
+ * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:	Intermediate physical address from which to clear DBM.
+ * @size:	Size of the range.
+ *
+ * The offset of @addr within a page is ignored and @size is rounded-up to
+ * the next page boundary.
+ *
+ * Note that it is the caller's responsibility to invalidate the TLB after
+ * calling this function to ensure that the disabled HW dirty tracking is
+ * visible to the CPUs.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_clear_dbm(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
+/**
+ * kvm_pgtable_stage2_set_dbm() - Set DBM of guest stage-2 address range to
+ *                                enable HW dirty tracking (only last level).
+ * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:	Intermediate physical address from which to set DBM.
+ * @size:	Size of the range.
+ *
+ * The offset of @addr within a page is ignored and @size is rounded-up to
+ * the next page boundary.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_set_dbm(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
+/**
+ * kvm_pgtable_stage2_sync_dirty() - Sync HW dirty state into memslot.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init().
+ * @addr:	Intermediate physical address from which to sync.
+ * @size:	Size of the range.
+ *
+ * The offset of @addr within a page is ignored and @size is rounded-up to
+ * the next page boundary.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_stage2_sync_dirty(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0f8a319f16fe..b6f0d2f3aee4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -43,6 +43,7 @@
 
 #define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
 
+#define KVM_PTE_LEAF_ATTR_HI_S2_DBM	BIT(51)
 #define KVM_PTE_LEAF_ATTR_HI_S2_XN	BIT(54)
 
 struct kvm_pgtable_walk_data {
@@ -485,6 +486,11 @@ static int stage2_map_set_prot_attr(enum kvm_pgtable_prot prot,
 	return 0;
 }
 
+static bool stage2_pte_writable(kvm_pte_t pte)
+{
+	return pte & KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
+}
+
 static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 				       kvm_pte_t *ptep,
 				       struct stage2_map_data *data)
@@ -509,6 +515,11 @@ static bool stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	/* There's an existing valid leaf entry, so perform break-before-make */
 	kvm_set_invalid_pte(ptep);
 	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu, addr, level);
+
+	/* Save the possible hardware dirty info */
+	if ((level == KVM_PGTABLE_MAX_LEVELS - 1) && stage2_pte_writable(*ptep))
+		mark_page_dirty(data->mmu->kvm, addr >> PAGE_SHIFT);
+
 	kvm_set_valid_leaf_pte(ptep, phys, data->attr, level);
 out:
 	data->phys += granule;
@@ -547,6 +558,10 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		if (kvm_pte_valid(pte))
 			put_page(page);
 
+		/*
+		 * HW DBM is not working during page merging, so we don't
+		 * need to save possible hardware dirty info here.
+		 */
 		return 0;
 	}
 
@@ -707,6 +722,10 @@ static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, addr, level);
 	put_page(virt_to_page(ptep));
 
+	/* Save the possible hardware dirty info */
+	if ((level == KVM_PGTABLE_MAX_LEVELS - 1) && stage2_pte_writable(*ptep))
+		mark_page_dirty(mmu->kvm, addr >> PAGE_SHIFT);
+
 	if (need_flush) {
 		stage2_flush_dcache(kvm_pte_follow(pte),
 				    kvm_granule_size(level));
@@ -792,6 +811,30 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 					NULL, NULL);
 }
 
+int kvm_pgtable_stage2_set_dbm(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	int ret;
+	u64 offset;
+
+	ret = stage2_update_leaf_attrs(pgt, addr, size,
+				       KVM_PTE_LEAF_ATTR_HI_S2_DBM, 0, BIT(3),
+				       NULL, NULL);
+	if (!ret)
+		return ret;
+
+	for (offset = 0; offset < size; offset += PAGE_SIZE)
+		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, pgt->mmu, addr, 3);
+
+	return 0;
+}
+
+int kvm_pgtable_stage2_clear_dbm(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return stage2_update_leaf_attrs(pgt, addr, size,
+					0, KVM_PTE_LEAF_ATTR_HI_S2_DBM, BIT(3),
+					NULL, NULL);
+}
+
 kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
 {
 	kvm_pte_t pte = 0;
@@ -844,6 +887,33 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	return ret;
 }
 
+static int stage2_sync_dirty_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+				    enum kvm_pgtable_walk_flags flag,
+				    void * const arg)
+{
+	kvm_pte_t pte = *ptep;
+	struct kvm *kvm = arg;
+
+	if (!kvm_pte_valid(pte))
+		return 0;
+
+	if ((level == KVM_PGTABLE_MAX_LEVELS - 1) && stage2_pte_writable(pte))
+		mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+
+	return 0;
+}
+
+int kvm_pgtable_stage2_sync_dirty(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb = stage2_sync_dirty_walker,
+		.flags = KVM_PGTABLE_WALK_LEAF,
+		.arg = pgt->mmu->kvm,
+	};
+
+	return kvm_pgtable_walk(pgt, addr, size, &walker);
+}
+
 static int stage2_flush_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 			       enum kvm_pgtable_walk_flags flag,
 			       void * const arg)
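
For illustration only (not part of the patch): a minimal sketch of how a
hypothetical dirty-logging caller on the KVM side might drive the three new
interfaces. start_hw_dirty_log()/stop_hw_dirty_log() and their parameters are
invented for this example; only the kvm_pgtable_stage2_set_dbm(),
kvm_pgtable_stage2_sync_dirty() and kvm_pgtable_stage2_clear_dbm() calls come
from the patch itself.

/*
 * Illustrative sketch. The two helpers below are hypothetical; the
 * kvm_pgtable_stage2_*() calls are the interfaces added by this patch.
 */
#include <linux/kvm_host.h>
#include <asm/kvm_pgtable.h>

/* Begin HW-DBM based dirty tracking for an IPA range of a memslot. */
static int start_hw_dirty_log(struct kvm_pgtable *pgt, u64 ipa, u64 size)
{
	/* Set DBM on the last-level PTEs; this helper performs its own TLBI. */
	return kvm_pgtable_stage2_set_dbm(pgt, ipa, size);
}

/* Stop tracking: harvest what hardware dirtied, then clear DBM again. */
static int stop_hw_dirty_log(struct kvm_pgtable *pgt, u64 ipa, u64 size)
{
	int ret;

	/* Propagate hardware-dirtied (writable) last-level PTEs to the dirty log. */
	ret = kvm_pgtable_stage2_sync_dirty(pgt, ipa, size);
	if (ret)
		return ret;

	/*
	 * clear_dbm does not invalidate TLBs; per its kerneldoc the caller
	 * must do that afterwards, e.g. with a VMID-wide invalidation.
	 */
	return kvm_pgtable_stage2_clear_dbm(pgt, ipa, size);
}

The asymmetry mirrors the kerneldoc above: set_dbm takes care of TLB
invalidation itself, while clear_dbm leaves it to the caller so that a whole
range can be covered by a single, coarser invalidation.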