From patchwork Fri Aug 25 09:35:24 2023
From: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
X-Patchwork-Id: 13365366
Subject: [RFC PATCH v2 4/8] KVM: arm64: Set DBM for previously writeable pages
Date: Fri, 25 Aug 2023 10:35:24 +0100
Message-ID: <20230825093528.1637-5-shameerali.kolothum.thodi@huawei.com>
In-Reply-To: <20230825093528.1637-1-shameerali.kolothum.thodi@huawei.com>
References: <20230825093528.1637-1-shameerali.kolothum.thodi@huawei.com>

We only set DBM if the page is writeable (S2AP[1] == 1).
But once migration starts, the CLEAR_LOG path will write protect the pages
(S2AP[1] = 0), and there isn't an easy way to differentiate the writeable
pages that get write protected from read-only pages, as we only have the
S2AP[1] bit to check.

Introduce a ctx->flags hint, KVM_PGTABLE_WALK_WC_HINT, to identify the
write-protect page table walk done for dirty page tracking, and use one of
the "Reserved for software use" bits in the page descriptor to mark a page
as "writeable-clean".

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/include/asm/kvm_pgtable.h |  5 +++++
 arch/arm64/kvm/hyp/pgtable.c         | 25 ++++++++++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index a12add002b89..67bcbc5984f9 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -190,6 +190,8 @@ enum kvm_pgtable_prot {
 #define KVM_PGTABLE_PROT_RW	(KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W)
 #define KVM_PGTABLE_PROT_RWX	(KVM_PGTABLE_PROT_RW | KVM_PGTABLE_PROT_X)
 
+#define KVM_PGTABLE_PROT_WC	KVM_PGTABLE_PROT_SW0 /*write-clean*/
+
 #define PKVM_HOST_MEM_PROT	KVM_PGTABLE_PROT_RWX
 #define PKVM_HOST_MMIO_PROT	KVM_PGTABLE_PROT_RW
 
@@ -221,6 +223,8 @@ typedef bool (*kvm_pgtable_force_pte_cb_t)(u64 addr, u64 end,
  *					operations required.
  * @KVM_PGTABLE_WALK_HW_DBM:		Indicates that the attribute update is
  *					HW DBM related.
+ * @KVM_PGTABLE_WALK_WC_HINT:		Update the page as writeable-clean(software attribute)
+ *					if we are write protecting a writeable page.
  */
 enum kvm_pgtable_walk_flags {
 	KVM_PGTABLE_WALK_LEAF			= BIT(0),
@@ -231,6 +235,7 @@ enum kvm_pgtable_walk_flags {
 	KVM_PGTABLE_WALK_SKIP_BBM_TLBI		= BIT(5),
 	KVM_PGTABLE_WALK_SKIP_CMO		= BIT(6),
 	KVM_PGTABLE_WALK_HW_DBM			= BIT(7),
+	KVM_PGTABLE_WALK_WC_HINT		= BIT(8),
 };
 
 struct kvm_pgtable_visit_ctx {
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index d7a46a00a7f6..4552bfb1f274 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -69,6 +69,11 @@ struct kvm_pgtable_walk_data {
 	const u64 end;
 };
 
+static bool kvm_pgtable_walk_wc_hint(const struct kvm_pgtable_visit_ctx *ctx)
+{
+	return ctx->flags & KVM_PGTABLE_WALK_WC_HINT;
+}
+
 static bool kvm_pgtable_walk_hw_dbm(const struct kvm_pgtable_visit_ctx *ctx)
 {
 	return ctx->flags & KVM_PGTABLE_WALK_HW_DBM;
@@ -771,13 +776,24 @@ static bool stage2_pte_writeable(kvm_pte_t pte)
 	return pte & KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W;
 }
 
+static bool stage2_pte_is_write_clean(kvm_pte_t pte)
+{
+	return kvm_pte_valid(pte) && (pte & KVM_PGTABLE_PROT_WC);
+}
+
+static bool stage2_pte_can_be_write_clean(const struct kvm_pgtable_visit_ctx *ctx,
+					  kvm_pte_t new)
+{
+	return (stage2_pte_writeable(ctx->old) && !stage2_pte_writeable(new));
+}
+
 static void kvm_update_hw_dbm(const struct kvm_pgtable_visit_ctx *ctx,
 			      kvm_pte_t new)
 {
 	kvm_pte_t old_pte, pte = ctx->old;
 
-	/* Only set DBM if page is writeable */
-	if ((new & KVM_PTE_LEAF_ATTR_HI_S2_DBM) && !stage2_pte_writeable(pte))
+	/* Only set DBM if page is writeable-clean */
+	if ((new & KVM_PTE_LEAF_ATTR_HI_S2_DBM) && !stage2_pte_is_write_clean(pte))
 		return;
 
 	/* Clear DBM walk is not shared, update */
@@ -805,6 +821,9 @@ static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_
 	}
 
 	if (!kvm_pgtable_walk_shared(ctx)) {
+		if (kvm_pgtable_walk_wc_hint(ctx) &&
+		    stage2_pte_can_be_write_clean(ctx, new))
+			new |= KVM_PGTABLE_PROT_WC;
 		WRITE_ONCE(*ctx->ptep, new);
 		return true;
 	}
@@ -1306,7 +1325,7 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
 	return stage2_update_leaf_attrs(pgt, addr, size, 0,
 					KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W,
-					NULL, NULL, 0);
+					NULL, NULL, KVM_PGTABLE_WALK_WC_HINT);
 }
 
 kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
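
For reviewers: below is a minimal, self-contained userspace model of the
bookkeeping this patch introduces. It is illustrative only, not the kernel
code; the bit positions are assumptions taken from the Arm stage-2
descriptor layout (S2AP[1] is bit 7, DBM is bit 51, and bit 55 is the first
"Reserved for software use" bit, which KVM names KVM_PGTABLE_PROT_SW0).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t kvm_pte_t;

#define S2AP_W (1ULL << 7)  /* stage-2 write permission, S2AP[1] */
#define S2_DBM (1ULL << 51) /* hardware Dirty Bit Modifier */
#define SW0_WC (1ULL << 55) /* software bit marking "writeable-clean" */

/* Write-protect a PTE for dirty logging: clear S2AP[1] and, if the page
 * was writeable, record that fact in the software bit (the WC hint). */
static kvm_pte_t wrprotect_with_wc_hint(kvm_pte_t pte)
{
	bool was_writeable = pte & S2AP_W;

	pte &= ~S2AP_W;
	if (was_writeable)
		pte |= SW0_WC;
	return pte;
}

/* Set DBM only on pages known to have been writeable (writeable-clean);
 * a genuinely read-only page must never become hardware-dirtyable. */
static kvm_pte_t maybe_set_dbm(kvm_pte_t pte)
{
	if (pte & SW0_WC)
		pte |= S2_DBM;
	return pte;
}

int main(void)
{
	kvm_pte_t was_writeable = S2AP_W;	/* writeable guest page */
	kvm_pte_t read_only = 0;		/* genuinely read-only page */

	was_writeable = maybe_set_dbm(wrprotect_with_wc_hint(was_writeable));
	read_only = maybe_set_dbm(wrprotect_with_wc_hint(read_only));

	printf("previously writeable: DBM=%d\n", !!(was_writeable & S2_DBM)); /* 1 */
	printf("read-only:            DBM=%d\n", !!(read_only & S2_DBM));     /* 0 */
	return 0;
}

After the write protect, both pages have S2AP[1] == 0 and would be
indistinguishable; the software bit is what preserves the distinction the
DBM walk relies on.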