From patchwork Mon May 25 11:24:00 2020
From: Keqian Zhu
Subject: [RFC PATCH 1/7] KVM: arm64: Add some basic functions for hw DBM
Date: Mon, 25 May 2020 19:24:00 +0800
Message-ID: <20200525112406.28224-2-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>
Prepare some basic functions that will be used by the following patches
to support hardware DBM (the Dirty Bit Modifier mechanism).

Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/kvm_mmu.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 30b0e8d6b895..8df078f0ee67 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -285,6 +285,30 @@ static inline bool kvm_s2pud_young(pud_t pud)
 	return pud_young(pud);
 }
 
+#ifdef CONFIG_ARM64_HW_AFDBM
+static inline bool kvm_hw_dbm_enabled(void)
+{
+	return !!(read_sysreg(vtcr_el2) & VTCR_EL2_HD);
+}
+
+static inline void kvm_set_s2pte_dbm(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval |= PTE_DBM;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
+static inline bool kvm_s2pte_dbm(pte_t *ptep)
+{
+	return !!(READ_ONCE(pte_val(*ptep)) & PTE_DBM);
+}
+#endif /* CONFIG_ARM64_HW_AFDBM */
+
 #define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
 
 #ifdef __PAGETABLE_PMD_FOLDED
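For readers unfamiliar with the update idiom above: the DBM bit must be
set with an atomic read-modify-write, because hardware may update other
bits of the same descriptor concurrently. A minimal user-space analogue
of the retry loop, using C11 atomics (illustrative only, not kernel
code; PTE_DBM is descriptor bit [51]):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_DBM (1ULL << 51)    /* Dirty Bit Modifier, bit [51] */

/* Same pattern as kvm_set_s2pte_dbm(): retry until the OR lands
 * without losing a concurrent update of other bits. */
static void set_dbm(_Atomic uint64_t *pte)
{
        uint64_t old = atomic_load_explicit(pte, memory_order_relaxed);

        while (!atomic_compare_exchange_weak_explicit(
                        pte, &old, old | PTE_DBM,
                        memory_order_relaxed, memory_order_relaxed))
                ;       /* a failed CAS reloads 'old'; just retry */
}

int main(void)
{
        _Atomic uint64_t pte = 0x00400000000007ffULL;   /* arbitrary */

        set_dbm(&pte);
        printf("DBM set: %d\n", !!(pte & PTE_DBM));
        return 0;
}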
From patchwork Mon May 25 11:24:01 2020
From: Keqian Zhu
Subject: [RFC PATCH 2/7] KVM: arm64: Set DBM bit of PTEs if hw DBM enabled
Date: Mon, 25 May 2020 19:24:01 +0800
Message-ID: <20200525112406.28224-3-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>

In user_mem_abort(), for the normal case (mem_type is PAGE_S2), set the
DBM bit of PTEs if hw DBM is enabled. We also check and set the DBM bit
while write protecting PTEs, so the scheme keeps working even if some
cases are missed here.
Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/pgtable-prot.h |  1 +
 virt/kvm/arm/mmu.c                    | 14 +++++++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 1305e28225fc..f9910ba2afd8 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -79,6 +79,7 @@ extern bool arm64_use_ng_mappings;
 	})
 
 #define PAGE_S2			__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PAGE_S2_XN)
+#define PAGE_S2_DBM		__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PAGE_S2_XN | PTE_DBM)
 #define PAGE_S2_DEVICE		__pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_S2_XN)
 
 #define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e3b9ee268823..dc97988eb2e0 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1426,6 +1426,10 @@ static void stage2_wp_ptes(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
 	pte = pte_offset_kernel(pmd, addr);
 	do {
 		if (!pte_none(*pte)) {
+#ifdef CONFIG_ARM64_HW_AFDBM
+			if (kvm_hw_dbm_enabled() && !kvm_s2pte_dbm(pte))
+				kvm_set_s2pte_dbm(pte);
+#endif
 			if (!kvm_s2pte_readonly(pte))
 				kvm_set_s2pte_readonly(pte);
 		}
@@ -1827,7 +1831,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
-		pte_t new_pte = kvm_pfn_pte(pfn, mem_type);
+		pte_t new_pte;
+
+#ifdef CONFIG_ARM64_HW_AFDBM
+		if (kvm_hw_dbm_enabled() &&
+		    pgprot_val(mem_type) == pgprot_val(PAGE_S2)) {
+			mem_type = PAGE_S2_DBM;
+		}
+#endif
+		new_pte = kvm_pfn_pte(pfn, mem_type);
 
 		if (writable) {
 			new_pte = kvm_s2pte_mkwrite(new_pte);
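For context on why this works: with VTCR_EL2.HD set, a write to a page
whose descriptor has DBM=1 and a read-only S2AP does not fault; the CPU
silently upgrades the permission to read/write, and that writable state
is what later patches in the series read back as "dirty". A hedged
sketch of the resulting predicate (bit positions taken from the Armv8
stage-2 descriptor format):

#include <stdint.h>

#define PTE_DBM		(1ULL << 51)	/* Dirty Bit Modifier */
#define PTE_S2_WRITE	(1ULL << 7)	/* S2AP[1]: write permission */

/* A DBM page that has become writable was written by the guest. */
static inline int s2pte_hw_dirty(uint64_t pteval)
{
        return (pteval & PTE_DBM) && (pteval & PTE_S2_WRITE);
}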
From patchwork Mon May 25 11:24:02 2020
From: Keqian Zhu
Subject: [RFC PATCH 3/7] KVM: arm64: Traverse page table entries when sync dirty log
Date: Mon, 25 May 2020 19:24:02 +0800
Message-ID: <20200525112406.28224-4-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>

With hardware management of dirty state, the dirty state is stored in
the page table entries themselves, so we have to traverse the page
table entries when syncing the dirty log.
Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 virt/kvm/arm/arm.c                |   6 +-
 virt/kvm/arm/mmu.c                | 127 ++++++++++++++++++++++++++++++
 3 files changed, 133 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 32c8a675e5a4..916617d3fed6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -480,6 +480,7 @@ u64 __kvm_call_hyp(void *hypfn, ...);
 
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
+int kvm_mmu_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot);
 
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 48d0ec44ad77..975311fa3a27 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1191,7 +1191,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
 {
-
+#ifdef CONFIG_ARM64_HW_AFDBM
+	if (kvm_hw_dbm_enabled()) {
+		kvm_mmu_sync_dirty_log(kvm, memslot);
+	}
+#endif
 }
 
 void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm,
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index dc97988eb2e0..ff8df9702e04 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2266,6 +2266,133 @@ int kvm_mmu_init(void)
 	return err;
 }
 
+#ifdef CONFIG_ARM64_HW_AFDBM
+/**
+ * stage2_sync_dirty_log_ptes() - synchronize dirty log from PMD range
+ * @kvm:	The KVM pointer
+ * @pmd:	pointer to pmd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_ptes(struct kvm *kvm, pmd_t *pmd,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		if (!pte_none(*pte) && !kvm_s2pte_readonly(pte)) {
+			mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+		}
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_pmds() - synchronize dirty log from PUD range
+ * @kvm:	The KVM pointer
+ * @pud:	pointer to pud entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_pmds(struct kvm *kvm, pud_t *pud,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	pmd_t *pmd;
+	phys_addr_t next;
+
+	pmd = stage2_pmd_offset(kvm, pud, addr);
+	do {
+		next = stage2_pmd_addr_end(kvm, addr, end);
+		if (!pmd_none(*pmd) && !pmd_thp_or_huge(*pmd)) {
+			stage2_sync_dirty_log_ptes(kvm, pmd, addr, next);
+		}
+	} while (pmd++, addr = next, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_puds() - synchronize dirty log from PGD range
+ * @kvm:	The KVM pointer
+ * @pgd:	pointer to pgd entry
+ * @addr:	range start address
+ * @end:	range end address
+ */
+static void stage2_sync_dirty_log_puds(struct kvm *kvm, pgd_t *pgd,
+				       phys_addr_t addr, phys_addr_t end)
+{
+	pud_t *pud;
+	phys_addr_t next;
+
+	pud = stage2_pud_offset(kvm, pgd, addr);
+	do {
+		next = stage2_pud_addr_end(kvm, addr, end);
+		if (!stage2_pud_none(kvm, *pud) && !stage2_pud_huge(kvm, *pud)) {
+			stage2_sync_dirty_log_pmds(kvm, pud, addr, next);
+		}
+	} while (pud++, addr = next, addr != end);
+}
+
+/**
+ * stage2_sync_dirty_log_range() - synchronize dirty log from stage2 memory
+ * region range
+ * @kvm:	The KVM pointer
+ * @addr:	Start address of range
+ * @end:	End address of range
+ */
+static void stage2_sync_dirty_log_range(struct kvm *kvm, phys_addr_t addr,
+					phys_addr_t end)
+{
+	pgd_t *pgd;
+	phys_addr_t next;
+
+	pgd = kvm->arch.pgd + stage2_pgd_index(kvm, addr);
+	do {
+		/*
+		 * Release kvm_mmu_lock periodically if the memory region is
+		 * large. Otherwise, we may see kernel panics with
+		 * CONFIG_DETECT_HUNG_TASK, CONFIG_LOCKUP_DETECTOR,
+		 * CONFIG_LOCKDEP. Additionally, holding the lock too long
+		 * will also starve other vCPUs. We have to also make sure
+		 * that the page tables are not freed while we released
+		 * the lock.
+		 */
+		cond_resched_lock(&kvm->mmu_lock);
+		if (!READ_ONCE(kvm->arch.pgd))
+			break;
+		next = stage2_pgd_addr_end(kvm, addr, end);
+		if (stage2_pgd_present(kvm, *pgd))
+			stage2_sync_dirty_log_puds(kvm, pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+/**
+ * kvm_mmu_sync_dirty_log() - synchronize dirty log from stage2 entries for
+ * memory slot
+ * @kvm:	The KVM pointer
+ * @memslot:	The memory slot to synchronize dirty log
+ *
+ * Called to synchronize the dirty log (as marked by hw) when the memory
+ * region's KVM_GET_DIRTY_LOG operation is called. When this function
+ * returns, all dirty log information is collected into the memslot
+ * dirty_bitmap (hardware may keep modifying the page tables while this
+ * routine runs, so the result is only complete if the guest is stopped;
+ * that is acceptable because no dirty log is missed in the end).
+ * Afterwards dirty_bitmap can be copied to userspace.
+ *
+ * Acquires kvm_mmu_lock. Called with kvm->slots_lock mutex acquired,
+ * serializing operations for VM memory regions.
+ */
+int kvm_mmu_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
+{
+	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
+	spin_lock(&kvm->mmu_lock);
+	stage2_sync_dirty_log_range(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
+
+	return 0;
+}
+#endif /* CONFIG_ARM64_HW_AFDBM */
+
 void kvm_arch_commit_memory_region(struct kvm *kvm,
 				   const struct kvm_userspace_memory_region *mem,
 				   struct kvm_memory_slot *old,
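A hedged user-space model of what the walk above computes: scan the
stage-2 leaf entries of a slot and transfer the hardware dirty state
(writable under DBM) into a bitmap. The names here are invented for
illustration only:

#include <stdint.h>

#define PTE_S2_WRITE	(1ULL << 7)	/* writable == hw-dirty under DBM */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

/* Model: 'ptes' is a flat array of leaf descriptors for 'npages' pages. */
static void sync_dirty_log_model(const uint64_t *ptes, unsigned long npages,
                                 unsigned long *dirty_bitmap)
{
        for (unsigned long i = 0; i < npages; i++) {
                if (ptes[i] && (ptes[i] & PTE_S2_WRITE))
                        dirty_bitmap[i / BITS_PER_LONG] |=
                                1UL << (i % BITS_PER_LONG);
        }
}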
From patchwork Mon May 25 11:24:03 2020
From: Keqian Zhu
Subject: [RFC PATCH 4/7] KVM: arm64: Write protect page table stepwise by mask bit
Date: Mon, 25 May 2020 19:24:03 +0800
Message-ID: <20200525112406.28224-5-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>

During dirty log clear, page table entries are write protected
according to a mask. In the past we write protected all entries in the
single range from ffs(mask) to fls(mask). That range may contain zero
bits, but those pages were not reported dirty and are therefore already
read-only, and since we hold the kvm mmu lock they cannot become
writable behind our back, so we never write protect entries that we do
not want to.

We are about to add support for hardware management of dirty state to
arm64, and then holding the kvm mmu lock will not be enough: hardware
can mark an entry dirty (writable) at any time without taking a fault,
so blanket write protection of the range could silently discard that
dirty state. Write protect the entries stepwise, one mask bit at a time.
Signed-off-by: Keqian Zhu
---
 virt/kvm/arm/mmu.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index ff8df9702e04..779859b85d6d 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1568,10 +1568,16 @@ static void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 		gfn_t gfn_offset, unsigned long mask)
 {
 	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
-	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
-	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+	phys_addr_t start, end;
+	u32 i;
 
-	stage2_wp_range(kvm, start, end);
+	for (i = __ffs(mask); i <= __fls(mask); i++) {
+		if (test_bit(i, &mask)) {
+			start = (base_gfn + i) << PAGE_SHIFT;
+			end = (base_gfn + i + 1) << PAGE_SHIFT;
+			stage2_wp_range(kvm, start, end);
+		}
+	}
 }
 
 /*
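An equivalent, slightly tighter way to visit only the set bits of the
mask, shown as a hedged sketch; write_protect_gfn() is a hypothetical
helper standing in for the single-page stage2_wp_range() call above:

extern void write_protect_gfn(unsigned long gfn);	/* hypothetical */

static void wp_masked_sketch(unsigned long base_gfn, unsigned long mask)
{
        while (mask) {
                unsigned long i = __builtin_ctzl(mask); /* lowest set bit */

                write_protect_gfn(base_gfn + i);
                mask &= mask - 1;                       /* clear that bit */
        }
}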
From patchwork Mon May 25 11:24:04 2020
From: Keqian Zhu
Subject: [RFC PATCH 5/7] kvm: arm64: Modify stage2 young mechanism to support hw DBM
Date: Mon, 25 May 2020 19:24:04 +0800
Message-ID: <20200525112406.28224-6-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>

Making page table entries young (setting the AF bit) should be atomic,
to avoid overwriting dirty info that is concurrently set by hardware.
Signed-off-by: Keqian Zhu
---
 arch/arm64/include/asm/kvm_mmu.h | 32 ++++++++++++++++++++++----------
 virt/kvm/arm/mmu.c               | 10 +++++-----
 2 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 8df078f0ee67..a4620d87e456 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -235,6 +235,18 @@ static inline void kvm_set_s2pte_readonly(pte_t *ptep)
 	} while (pteval != old_pteval);
 }
 
+static inline void kvm_set_s2pte_young(pte_t *ptep)
+{
+	pteval_t old_pteval, pteval;
+
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval |= PTE_AF;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, pteval);
+	} while (pteval != old_pteval);
+}
+
 static inline bool kvm_s2pte_readonly(pte_t *ptep)
 {
 	return (READ_ONCE(pte_val(*ptep)) & PTE_S2_RDWR) == PTE_S2_RDONLY;
@@ -250,6 +262,11 @@ static inline void kvm_set_s2pmd_readonly(pmd_t *pmdp)
 	kvm_set_s2pte_readonly((pte_t *)pmdp);
 }
 
+static inline void kvm_set_s2pmd_young(pmd_t *pmdp)
+{
+	kvm_set_s2pte_young((pte_t *)pmdp);
+}
+
 static inline bool kvm_s2pmd_readonly(pmd_t *pmdp)
 {
 	return kvm_s2pte_readonly((pte_t *)pmdp);
@@ -265,6 +282,11 @@ static inline void kvm_set_s2pud_readonly(pud_t *pudp)
 	kvm_set_s2pte_readonly((pte_t *)pudp);
 }
 
+static inline void kvm_set_s2pud_young(pud_t *pudp)
+{
+	kvm_set_s2pte_young((pte_t *)pudp);
+}
+
 static inline bool kvm_s2pud_readonly(pud_t *pudp)
 {
 	return kvm_s2pte_readonly((pte_t *)pudp);
@@ -275,16 +297,6 @@ static inline bool kvm_s2pud_exec(pud_t *pudp)
 	return !(READ_ONCE(pud_val(*pudp)) & PUD_S2_XN);
 }
 
-static inline pud_t kvm_s2pud_mkyoung(pud_t pud)
-{
-	return pud_mkyoung(pud);
-}
-
-static inline bool kvm_s2pud_young(pud_t pud)
-{
-	return pud_young(pud);
-}
-
 #ifdef CONFIG_ARM64_HW_AFDBM
 static inline bool kvm_hw_dbm_enabled(void)
 {
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 779859b85d6d..e1d9e4b98cb6 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1888,15 +1888,15 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 		goto out;
 
 	if (pud) {		/* HugeTLB */
-		*pud = kvm_s2pud_mkyoung(*pud);
+		kvm_set_s2pud_young(pud);
 		pfn = kvm_pud_pfn(*pud);
 		pfn_valid = true;
 	} else if (pmd) {	/* THP, HugeTLB */
-		*pmd = pmd_mkyoung(*pmd);
+		kvm_set_s2pmd_young(pmd);
 		pfn = pmd_pfn(*pmd);
 		pfn_valid = true;
-	} else {
-		*pte = pte_mkyoung(*pte);	/* Just a page... */
+	} else {		/* Just a page... */
+		kvm_set_s2pte_young(pte);
 		pfn = pte_pfn(*pte);
 		pfn_valid = true;
 	}
@@ -2141,7 +2141,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, u64 size, void *
 		return 0;
 
 	if (pud)
-		return kvm_s2pud_young(*pud);
+		return pud_young(*pud);
 	else if (pmd)
 		return pmd_young(*pmd);
 	else
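A side note on the pattern: on an ordinary atomic word the same
guarantee could be had with a single fetch-or, as in this hedged
user-space sketch; the kernel uses the cmpxchg loop instead because
pte_val() is not an atomic type. AF is bit [10] of the descriptor:

#include <stdatomic.h>
#include <stdint.h>

#define PTE_AF (1ULL << 10)     /* Access Flag */

/* An atomic OR cannot lose a concurrent hardware update of other
 * bits, unlike the plain read-modify-write this patch replaces. */
static inline void set_af(_Atomic uint64_t *pte)
{
        atomic_fetch_or_explicit(pte, PTE_AF, memory_order_relaxed);
}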
From patchwork Mon May 25 11:24:05 2020
From: Keqian Zhu
Subject: [RFC PATCH 6/7] kvm: arm64: Save stage2 PTE dirty info if it is covered
Date: Mon, 25 May 2020 19:24:05 +0800
Message-ID: <20200525112406.28224-7-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>

kvm_set_pte() is called to replace a target PTE with a desired one. We
still always replace it, but if hw DBM is enabled and the dirty info
would be covered by the replacement, we should let the caller know; the
caller can then decide whether to save the dirty info.

kvm_set_pmd() and kvm_set_pud() are not modified, because we only use
DBM in PTEs for now.
Signed-off-by: Keqian Zhu
---
 virt/kvm/arm/mmu.c | 39 +++++++++++++++++++++++++++++++++++----
 1 file changed, 35 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e1d9e4b98cb6..43d89c6333f0 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -185,10 +185,34 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr
 	put_page(virt_to_page(pmd));
 }
 
-static inline void kvm_set_pte(pte_t *ptep, pte_t new_pte)
+/*
+ * @ret: true if dirty info is covered.
+ */
+static inline bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
 {
+#ifdef CONFIG_ARM64_HW_AFDBM
+	pteval_t old_pteval, new_pteval, pteval;
+
+	if (!kvm_hw_dbm_enabled() || pte_none(*ptep) ||
+	    !kvm_s2pte_readonly(&new_pte)) {
+		WRITE_ONCE(*ptep, new_pte);
+		dsb(ishst);
+		return false;
+	}
+
+	new_pteval = pte_val(new_pte);
+	pteval = READ_ONCE(pte_val(*ptep));
+	do {
+		old_pteval = pteval;
+		pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, new_pteval);
+	} while (pteval != old_pteval);
+
+	return !kvm_s2pte_readonly((pte_t *)&pteval);
+#else
 	WRITE_ONCE(*ptep, new_pte);
 	dsb(ishst);
+	return false;
+#endif
 }
 
 static inline void kvm_set_pmd(pmd_t *pmdp, pmd_t new_pmd)
@@ -249,7 +273,10 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
 		if (!pte_none(*pte)) {
 			pte_t old_pte = *pte;
 
-			kvm_set_pte(pte, __pte(0));
+			if (kvm_set_pte(pte, __pte(0))) {
+				mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+			}
+
 			kvm_tlb_flush_vmid_ipa(kvm, addr);
 
 			/* No need to invalidate the cache for device mappings */
@@ -1291,13 +1318,17 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 		if (pte_val(old_pte) == pte_val(*new_pte))
 			return 0;
 
-		kvm_set_pte(pte, __pte(0));
+		if (kvm_set_pte(pte, __pte(0))) {
+			mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+		}
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
 	} else {
 		get_page(virt_to_page(pte));
 	}
 
-	kvm_set_pte(pte, *new_pte);
+	if (kvm_set_pte(pte, *new_pte)) {
+		mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+	}
+
 	return 0;
 }
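A hedged user-space model of the new kvm_set_pte() contract: swap in
the new value atomically and report whether the replaced value carried
hardware dirty state that the caller must fold into the dirty bitmap.
An atomic exchange expresses the same effect as the patch's cmpxchg
loop; replace_pte_model is an invented name:

#include <stdatomic.h>
#include <stdint.h>

#define PTE_DBM		(1ULL << 51)
#define PTE_S2_WRITE	(1ULL << 7)

/* Returns nonzero if the old entry was hw-dirty, i.e. the caller
 * should mark the page dirty before the information is lost. */
static int replace_pte_model(_Atomic uint64_t *ptep, uint64_t new_pte)
{
        uint64_t old = atomic_exchange_explicit(ptep, new_pte,
                                                memory_order_relaxed);

        return (old & PTE_DBM) && (old & PTE_S2_WRITE);
}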
From patchwork Mon May 25 11:24:06 2020
From: Keqian Zhu
Subject: [RFC PATCH 7/7] KVM: arm64: Enable stage2 hardware DBM
Date: Mon, 25 May 2020 19:24:06 +0800
Message-ID: <20200525112406.28224-8-zhukeqian1@huawei.com>
In-Reply-To: <20200525112406.28224-1-zhukeqian1@huawei.com>
We are now ready to support hardware management of dirty state; enable
it if the hardware supports it.
Signed-off-by: Keqian Zhu
Signed-off-by: Peng Liang
---
 arch/arm64/include/asm/sysreg.h | 2 ++
 arch/arm64/kvm/reset.c          | 9 ++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index ebc622432831..371ea6d65c16 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -721,6 +721,8 @@
 #define ID_AA64MMFR1_VMIDBITS_8		0
 #define ID_AA64MMFR1_VMIDBITS_16	2
 
+#define ID_AA64MMFR1_HADBS_DBS		2
+
 /* id_aa64mmfr2 */
 #define ID_AA64MMFR2_E0PD_SHIFT		60
 #define ID_AA64MMFR2_FWB_SHIFT		40
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 30b7ea680f66..cb727e1fb581 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -392,7 +392,7 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
 	u64 vtcr = VTCR_EL2_FLAGS;
 	u32 parange, phys_shift;
-	u8 lvls;
+	u8 lvls, hadbs;
 
 	if (type & ~KVM_VM_TYPE_ARM_IPA_SIZE_MASK)
 		return -EINVAL;
@@ -428,6 +428,13 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 	 */
 	vtcr |= VTCR_EL2_HA;
 
+	hadbs = (read_sysreg(id_aa64mmfr1_el1) >>
+		 ID_AA64MMFR1_HADBS_SHIFT) & 0xf;
+#ifdef CONFIG_ARM64_HW_AFDBM
+	if (hadbs == ID_AA64MMFR1_HADBS_DBS)
+		vtcr |= VTCR_EL2_HD;
+#endif
+
 	/* Set the vmid bits */
 	vtcr |= (kvm_get_vmid_bits() == 16) ?
 		VTCR_EL2_VS_16BIT :
 		VTCR_EL2_VS_8BIT;
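For reference, ID_AA64MMFR1_EL1.HADBS occupies bits [3:0] of the
register: 0 means no hardware A/D updates, 1 means hardware Access flag
only, and 2 means Access flag plus dirty state; only the last value
permits setting VTCR_EL2.HD (which is only honoured when HA is also
set, as done just above). A hedged user-space decoder of the field,
with the register value supplied by the caller since it is only
readable at EL1:

#include <stdint.h>
#include <stdio.h>

/* Decode ID_AA64MMFR1_EL1.HADBS, bits [3:0]. */
static const char *decode_hadbs(uint64_t mmfr1)
{
        switch (mmfr1 & 0xf) {
        case 0:  return "no hardware A/D updates";
        case 1:  return "hardware Access flag only";
        case 2:  return "hardware Access flag and dirty state";
        default: return "reserved";
        }
}

int main(void)
{
        printf("%s\n", decode_hadbs(2));        /* sample value */
        return 0;
}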