From patchwork Wed Dec 16 12:28:44 2020
X-Patchwork-Submitter: Yanan Wang
X-Patchwork-Id: 11977459
From: Yanan Wang
To: Marc Zyngier, Catalin Marinas, Will Deacon, James Morse, "Julien
Thierry", Suzuki K Poulose, Gavin Shan, Quentin Perret
Cc: yuzenghui@huawei.com, wanghaibin.wang@huawei.com, Yanan Wang,
 zhukeqian1@huawei.com, yezengruan@huawei.com
Subject: [PATCH v2 3/3] KVM: arm64: Mark the page dirty only if the fault is handled successfully
Date: Wed, 16 Dec 2020 20:28:44 +0800
Message-ID: <20201216122844.25092-4-wangyanan55@huawei.com>
In-Reply-To: <20201216122844.25092-1-wangyanan55@huawei.com>
References: <20201216122844.25092-1-wangyanan55@huawei.com>

We currently mark the page dirty and set the dirty bitmap before calling the
fault handlers in user_mem_abort(), so we may end up with spurious dirty
pages if the update of permissions or mapping fails. Instead, mark the page
dirty only if the fault is handled successfully.

Also let the guest directly enter again, rather than returning to userspace,
if we were trying to recreate the same mapping or only to change access
permissions with break-before-make (BBM), which is not permitted in the
mapping path.
Signed-off-by: Yanan Wang
---
 arch/arm64/kvm/mmu.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 75814a02d189..72e516a10914 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -879,11 +879,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (vma_pagesize == PAGE_SIZE && !force_pte)
 		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
							   &pfn, &fault_ipa);
-	if (writable) {
+	if (writable)
 		prot |= KVM_PGTABLE_PROT_W;
-		kvm_set_pfn_dirty(pfn);
-		mark_page_dirty(kvm, gfn);
-	}
 
 	if (fault_status != FSC_PERM && !device)
 		clean_dcache_guest_page(pfn, vma_pagesize);
@@ -911,6 +908,19 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 					     memcache);
 	}
 
+	/* Mark the page dirty only if the fault is handled successfully */
+	if (writable && !ret) {
+		kvm_set_pfn_dirty(pfn);
+		mark_page_dirty(kvm, gfn);
+	}
+
+	/* Let the guest directly enter again if we were trying to recreate the
+	 * same mapping or only change access permissions with BBM, which is not
+	 * permitted in the mapping path.
+	 */
+	if (ret == -EAGAIN)
+		ret = 0;
+
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
 	kvm_set_pfn_accessed(pfn);
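Outside the diff context, the control flow this patch establishes at the tail of user_mem_abort() boils down to a small pattern: publish the dirty state only on success, and fold a benign -EAGAIN into success so the vCPU simply re-enters the guest. The sketch below is an illustrative standalone model, not kernel code — fault_epilogue(), the map result parameter, and the page_dirty flag are hypothetical stand-ins for the real stage-2 fault path:

```c
#include <errno.h>
#include <stdbool.h>

/* Illustrative model of the patched epilogue: the dirty state is
 * recorded only when the stage-2 map/permission update succeeded,
 * and -EAGAIN (e.g. another vCPU raced us and installed the same
 * mapping first) is treated as success so the guest re-enters
 * directly instead of the fault being reported to userspace. */

static bool page_dirty;

static int fault_epilogue(bool writable, int ret)
{
	/* Mark the page dirty only if the fault is handled successfully */
	if (writable && !ret)
		page_dirty = true;

	/* Let the guest directly enter again on a benign -EAGAIN */
	if (ret == -EAGAIN)
		ret = 0;

	return ret;
}
```

A hard failure such as -ENOMEM still propagates unchanged, and crucially leaves the dirty state untouched — which is exactly the spurious-dirty-page case the reordering in this patch avoids.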