From patchwork Thu Jul  2 13:55:51 2020
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 11638985
From: Keqian Zhu
Subject: [PATCH v2 3/8] KVM: arm64: Modify stage2 young mechanism to support
 hw DBM
Date: Thu, 2 Jul 2020 21:55:51 +0800
Message-ID: <20200702135556.36896-4-zhukeqian1@huawei.com>
X-Mailer: git-send-email 2.8.4.windows.1
In-Reply-To: <20200702135556.36896-1-zhukeqian1@huawei.com>
References: <20200702135556.36896-1-zhukeqian1@huawei.com>
Cc: Suzuki K Poulose, Catalin Marinas, Keqian Zhu, Sean Christopherson,
 Steven Price, liangpeng10@huawei.com, Alexios Zavras, zhengxiang9@huawei.com,
 Mark Brown, James Morse, Marc Zyngier, wanghaibin.wang@huawei.com,
 Thomas Gleixner, Will Deacon, Andrew Morton, Julien Thierry

Marking page table entries young (setting the AF bit) should be done
atomically, to avoid clobbering the dirty status that hardware (DBM) may
set concurrently.

Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/kvm_mmu.h | 31 ++++++++++++++++++++++---------
 arch/arm64/kvm/mmu.c             | 15 ++++++++-------
 2 files changed, 30 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 4c12b7ad8ae8..a1b6131d980c 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -219,6 +219,18 @@ static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 	} while (pteval != old_pteval);
 }
 
+static inline void kvm_set_s2pte_young(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval |= PTE_AF;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
 static inline bool kvm_s2pte_readonly(pte_t *ptep)
 {
 	return (READ_ONCE(pte_val(*ptep)) & PTE_S2_RDWR) == PTE_S2_RDONLY;
@@ -234,6 +246,11 @@ static inline void kvm_set_s2pmd_readonly(pmd_t *pmdp)
 	kvm_set_s2pte_readonly((pte_t *)pmdp);
 }
 
+static inline void kvm_set_s2pmd_young(pmd_t *pmdp)
+{
+	kvm_set_s2pte_young((pte_t *)pmdp);
+}
+
 static inline bool kvm_s2pmd_readonly(pmd_t *pmdp)
 {
 	return kvm_s2pte_readonly((pte_t *)pmdp);
@@ -249,6 +266,11 @@ static inline void kvm_set_s2pud_readonly(pud_t *pudp)
 	kvm_set_s2pte_readonly((pte_t *)pudp);
 }
 
+static inline void kvm_set_s2pud_young(pud_t *pudp)
+{
+	kvm_set_s2pte_young((pte_t *)pudp);
+}
+
 static inline bool kvm_s2pud_readonly(pud_t *pudp)
 {
 	return kvm_s2pte_readonly((pte_t *)pudp);
@@ -259,15 +281,6 @@ static inline bool kvm_s2pud_exec(pud_t *pudp)
 	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
 }
 
-static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
-{
-	return pud_mkyoung(pud);
-}
-
-static inline bool kvm_s2pud_young(pud_t pud)
-{
-	return pud_young(pud);
-}
 
 static inline bool arm_mmu_hw_dbm_supported(void)
 {
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index b3cb8b6da4c2..ab8a6ceecbd8 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2008,8 +2008,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
  * Resolve the access fault by making the page young again.
  * Note that because the faulting entry is guaranteed not to be
  * cached in the TLB, we don't need to invalidate anything.
- * Only the HW Access Flag updates are supported for Stage 2 (no DBM),
- * so there is no need for atomic (pte|pmd)_mkyoung operations.
+ *
+ * Note: Both DBM and HW AF updates are supported for Stage2, so
+ * young operations should be atomic.
  */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
@@ -2027,15 +2028,15 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 		goto out;
 
 	if (pud) {		/* HugeTLB */
-		*pud = kvm_s2pud_mkyoung(*pud);
+		kvm_set_s2pud_young(pud);
 		pfn = kvm_pud_pfn(*pud);
 		pfn_valid = true;
 	} else if (pmd) {	/* THP, HugeTLB */
-		*pmd = pmd_mkyoung(*pmd);
+		kvm_set_s2pmd_young(pmd);
 		pfn = pmd_pfn(*pmd);
 		pfn_valid = true;
-	} else {
-		*pte = pte_mkyoung(*pte);	/* Just a page... */
+	} else {		/* Just a page... */
+		kvm_set_s2pte_young(pte);
 		pfn = pte_pfn(*pte);
 		pfn_valid = true;
 	}
@@ -2280,7 +2281,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return 0;
 
 	if (pud)
-		return kvm_s2pud_young(*pud);
+		return pud_young(*pud);
 	else if (pmd)
 		return pmd_young(*pmd);
 	else
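
---

Note for readers following along outside the kernel tree: below is a minimal
user-space sketch of the lost-update race the commit message describes. It is
not part of the patch; fake_pte, FAKE_AF and FAKE_DIRTY are made-up stand-ins
for the real arm64 stage-2 descriptor bits, and C11 atomics stand in for the
kernel's READ_ONCE()/cmpxchg_relaxed(). The point is only to show why a plain
read-modify-write of the descriptor can clobber a bit that hardware DBM sets
concurrently, while a compare-exchange loop cannot.

/*
 * Illustrative only: simplified bit positions, not the arm64 layout.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_AF    (1UL << 10)	/* stand-in for the access flag */
#define FAKE_DIRTY (1UL << 51)	/* stand-in for the hw-managed dirty state */

static _Atomic uint64_t fake_pte;

/* Racy variant: mirrors the old "*pte = pte_mkyoung(*pte)" pattern. */
static void set_young_racy(void)
{
	uint64_t v = atomic_load(&fake_pte);	/* read the descriptor   */
	/* ...hardware DBM may set FAKE_DIRTY right here...             */
	atomic_store(&fake_pte, v | FAKE_AF);	/* write back stale copy */
}

/* Atomic variant: mirrors kvm_set_s2pte_young()'s cmpxchg loop. */
static void set_young_atomic(void)
{
	uint64_t old = atomic_load(&fake_pte);

	/* Retry until no concurrent update slipped in between. */
	while (!atomic_compare_exchange_weak(&fake_pte, &old, old | FAKE_AF))
		;
}

int main(void)
{
	atomic_store(&fake_pte, 0);
	set_young_atomic();
	/* Simulate hardware DBM dirtying the page concurrently. */
	atomic_fetch_or(&fake_pte, FAKE_DIRTY);
	printf("pte = %#lx (AF and DIRTY both preserved)\n",
	       (unsigned long)atomic_load(&fake_pte));
	return 0;
}

With set_young_racy(), a FAKE_DIRTY update landing between the load and the
store is overwritten and lost; the compare-exchange in set_young_atomic()
detects the intervening change and retries, which is exactly the behaviour
the patch's cmpxchg_relaxed() loop gives the stage-2 young helpers.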