From patchwork Tue Jan 26 12:44:42 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12046357
From: Keqian Zhu
To: Marc Zyngier, Will Deacon, Catalin Marinas
Subject: [RFC PATCH 5/7] kvm: arm64: Add some HW_DBM related mmu interfaces
Date: Tue, 26 Jan 2021 20:44:42 +0800
Message-ID: <20210126124444.27136-6-zhukeqian1@huawei.com>
In-Reply-To: <20210126124444.27136-1-zhukeqian1@huawei.com>
References: <20210126124444.27136-1-zhukeqian1@huawei.com>
MIME-Version: 1.0
Cc: Mark Rutland, yubihong@huawei.com, jiangkunkun@huawei.com,
	Suzuki K Poulose, Cornelia Huck, Kirti Wankhede,
	xiexiangyou@huawei.com, zhengchuan@huawei.com, Alex Williamson,
	James Morse, wanghaibin.wang@huawei.com, Robin Murphy

This adds set_dbm, clear_dbm and sync_dirty interfaces at the mmu layer.
They are thin wrappers around the corresponding pgtable-layer interfaces,
adding reschedule semantics: the range is walked with
stage2_apply_range_resched(), so long walks can yield the CPU.

Signed-off-by: Keqian Zhu
---
 arch/arm64/include/asm/kvm_mmu.h |  7 +++++++
 arch/arm64/kvm/mmu.c             | 30 ++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index e52d82aeadca..51dd31ba1b94 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -183,6 +183,13 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
 			     void **haddr);
 void free_hyp_pgds(void);
 
+void kvm_stage2_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			  gfn_t gfn_offset, unsigned long npages);
+void kvm_stage2_set_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			gfn_t gfn_offset, unsigned long npages);
+void kvm_stage2_sync_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
+			   gfn_t gfn_offset, unsigned long npages);
+
 void stage2_unmap_vm(struct kvm *kvm);
 int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu);
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu);

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 7d2257cc5438..18717fd12731 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -609,6 +609,36 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
 }
 
+void kvm_stage2_clear_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			  gfn_t gfn_offset, unsigned long npages)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t addr = base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + npages) << PAGE_SHIFT;
+
+	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_clear_dbm);
+}
+
+void kvm_stage2_set_dbm(struct kvm *kvm, struct kvm_memory_slot *slot,
+			gfn_t gfn_offset, unsigned long npages)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t addr = base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + npages) << PAGE_SHIFT;
+
+	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_set_dbm);
+}
+
+void kvm_stage2_sync_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
+			   gfn_t gfn_offset, unsigned long npages)
+{
+	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
+	phys_addr_t addr = base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (base_gfn + npages) << PAGE_SHIFT;
+
+	stage2_apply_range_resched(kvm, addr, end, kvm_pgtable_stage2_sync_dirty);
+}
+
 static void clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size)
 {
 	__clean_dcache_guest_page(pfn, size);