From patchwork Mon Nov 30 12:18:46 2020
X-Patchwork-Submitter: Yanan Wang
X-Patchwork-Id: 11940323
From: Yanan Wang
To: Marc Zyngier, Catalin Marinas, Will Deacon, James Morse,
	Julien Thierry, Suzuki K Poulose, Gavin Shan, Quentin Perret
Subject: [RFC PATCH 2/3] KVM: arm64: Fix handling of merging tables into a block entry
Date: Mon, 30 Nov 2020 20:18:46 +0800
Message-ID: <20201130121847.91808-3-wangyanan55@huawei.com>
X-Mailer: git-send-email 2.8.4.windows.1
In-Reply-To: <20201130121847.91808-1-wangyanan55@huawei.com>
References: <20201130121847.91808-1-wangyanan55@huawei.com>
Cc: lushenming@huawei.com, jiangkunkun@huawei.com, Yanan Wang,
	yezengruan@huawei.com, wangjingyi11@huawei.com, yuzenghui@huawei.com,
	wanghaibin.wang@huawei.com, zhukeqian1@huawei.com

In the dirty logging case (logging_active == True), we need to collapse
a block entry into a table if necessary. After dirty logging is
canceled, when merging tables back into a block entry, we should not
only free the non-huge page tables but also unmap the non-huge mapping
for the block. Without the unmap, inconsistent TLB entries for the
pages in the block will be created.

We could also use the unmap_stage2_range API to unmap the non-huge
mapping, but this could potentially free the upper-level page-table
page, which will be useful later.

Signed-off-by: Yanan Wang
---
 arch/arm64/kvm/hyp/pgtable.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 696b6aa83faf..fec8dc9f2baa 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -500,6 +500,9 @@ static int stage2_map_walk_table_pre(u64 addr, u64 end, u32 level,
 	return 0;
 }
 
+static void stage2_flush_dcache(void *addr, u64 size);
+static bool stage2_pte_cacheable(kvm_pte_t pte);
+
 static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 				struct stage2_map_data *data)
 {
@@ -507,9 +510,17 @@ static int stage2_map_walk_leaf(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 	struct page *page = virt_to_page(ptep);
 
 	if (data->anchor) {
-		if (kvm_pte_valid(pte))
+		if (kvm_pte_valid(pte)) {
+			kvm_set_invalid_pte(ptep);
+			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, data->mmu,
+				     addr, level);
 			put_page(page);
 
+			if (stage2_pte_cacheable(pte))
+				stage2_flush_dcache(kvm_pte_follow(pte),
+						    kvm_granule_size(level));
+		}
+
 		return 0;
 	}
 
@@ -574,7 +585,7 @@ static int stage2_map_walk_table_post(u64 addr, u64 end, u32 level,
  * The behaviour of the LEAF callback then depends on whether or not the
  * anchor has been set. If not, then we're not using a block mapping higher
  * up the table and we perform the mapping at the existing leaves instead.
- * If, on the other hand, the anchor _is_ set, then we drop references to
+ * If, on the other hand, the anchor _is_ set, then we unmap the mapping of
  * all valid leaves so that the pages beneath the anchor can be freed.
  *
  * Finally, the TABLE_POST callback does nothing if the anchor has not
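
A note for readers following the walker logic: below is a minimal,
standalone userspace sketch of the break-before-make sequence the patch
adds to the LEAF callback when the anchor is set. Every name in it
(unmap_stale_leaf, tlb_flush_ipa, dcache_flush, the PTE_VALID/PTE_CACHE
bit positions, and the demo values in main) is a hypothetical stand-in,
not the kvm_pgtable API; the real code uses kvm_set_invalid_pte(),
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, ...), stage2_pte_cacheable() and
stage2_flush_dcache(), as shown in the diff above.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;

#define PTE_VALID	(1ULL << 0)	/* assumed valid bit, illustration only */
#define PTE_CACHE	(1ULL << 1)	/* assumed "cacheable" attribute bit */

/* Stub maintenance ops: just trace what the real hyp calls would do. */
static void tlb_flush_ipa(uint64_t ipa, uint32_t level)
{
	printf("TLBI  IPA=%#llx level=%u\n",
	       (unsigned long long)ipa, (unsigned)level);
}

static void dcache_flush(uint64_t pa, uint64_t size)
{
	printf("DC CIVAC PA=%#llx size=%#llx\n",
	       (unsigned long long)pa, (unsigned long long)size);
}

/* 4K granule: level 3 maps 4KiB, level 2 maps 2MiB, level 1 maps 1GiB. */
static uint64_t granule_size(uint32_t level)
{
	return 1ULL << (12 + 9 * (3 - level));
}

/*
 * Break-before-make when collapsing a table back into a block entry:
 * 1. invalidate the stale leaf so no new walks can use it,
 * 2. invalidate any TLB entry formed from it,
 * 3. clean+invalidate the D-cache for cacheable mappings,
 * before the caller installs the block entry at the anchor level.
 */
static void unmap_stale_leaf(pte_t *ptep, uint64_t ipa, uint32_t level)
{
	pte_t pte = *ptep;

	if (!(pte & PTE_VALID))
		return;

	*ptep = 0;			/* kvm_set_invalid_pte() in the patch */
	tlb_flush_ipa(ipa, level);	/* __kvm_tlb_flush_vmid_ipa */

	if (pte & PTE_CACHE)		/* stage2_pte_cacheable() */
		dcache_flush(pte & ~0xFFFULL, granule_size(level));
}

int main(void)
{
	pte_t leaf = 0x8A2D000ULL | PTE_VALID | PTE_CACHE;

	/* Pretend we found one stale level-3 leaf under the anchor. */
	unmap_stale_leaf(&leaf, 0x40200000ULL, 3);
	return 0;
}

The sketch leaves out the put_page() refcount drop that the real
callback also performs; the point is only the required ordering of
entry invalidation, then TLBI, then cache maintenance.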