From patchwork Tue Jan 26 12:44:39 2021
X-Patchwork-Submitter: zhukeqian
X-Patchwork-Id: 12046351
From: Keqian Zhu <zhukeqian1@huawei.com>
To: Marc Zyngier, Will Deacon, Catalin Marinas
Cc: Mark Rutland, yubihong@huawei.com, jiangkunkun@huawei.com,
 Suzuki K Poulose, Cornelia Huck, Kirti Wankhede, xiexiangyou@huawei.com,
 zhengchuan@huawei.com, Alex Williamson, James Morse,
 wanghaibin.wang@huawei.com, Robin Murphy
Subject: [RFC PATCH 2/7] kvm: arm64: Use atomic operation when update PTE
Date: Tue, 26 Jan 2021 20:44:39 +0800
Message-ID: <20210126124444.27136-3-zhukeqian1@huawei.com>
In-Reply-To: <20210126124444.27136-1-zhukeqian1@huawei.com>
References: <20210126124444.27136-1-zhukeqian1@huawei.com>
List-Id: linux-arm-kernel@lists.infradead.org

We are about to add HW_DBM support for the stage2 dirty log, so a
software update of a PTE may race with the MMU trying to set the access
flag or dirty state. Use atomic operations to avoid reverting bits set
by the MMU.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 arch/arm64/kvm/hyp/pgtable.c | 41 ++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index bdf8e55ed308..4915ba35f93b 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -153,10 +153,34 @@ static kvm_pte_t *kvm_pte_follow(kvm_pte_t pte)
 	return __va(kvm_pte_to_phys(pte));
 }
 
+/*
+ * We may race with the MMU trying to set the access flag or dirty state;
+ * use atomic operations to avoid reverting these bits.
+ *
+ * Return original PTE.
+ */
+static kvm_pte_t kvm_update_pte(kvm_pte_t *ptep, kvm_pte_t bit_set,
+				kvm_pte_t bit_clr)
+{
+	kvm_pte_t old_pte, pte = *ptep;
+
+	do {
+		old_pte = pte;
+		pte &= ~bit_clr;
+		pte |= bit_set;
+
+		if (old_pte == pte)
+			break;
+
+		pte = cmpxchg_relaxed(ptep, old_pte, pte);
+	} while (pte != old_pte);
+
+	return old_pte;
+}
+
 static void kvm_set_invalid_pte(kvm_pte_t *ptep)
 {
-	kvm_pte_t pte = *ptep;
-	WRITE_ONCE(*ptep, pte & ~KVM_PTE_VALID);
+	kvm_update_pte(ptep, 0, KVM_PTE_VALID);
 }
 
 static void kvm_set_table_pte(kvm_pte_t *ptep, kvm_pte_t *childp)
@@ -723,18 +747,7 @@ static int stage2_attr_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
 		return 0;
 
 	data->level = level;
-	data->pte = pte;
-	pte &= ~data->attr_clr;
-	pte |= data->attr_set;
-
-	/*
-	 * We may race with the CPU trying to set the access flag here,
-	 * but worst-case the access flag update gets lost and will be
-	 * set on the next access instead.
-	 */
-	if (data->pte != pte)
-		WRITE_ONCE(*ptep, pte);
-
+	data->pte = kvm_update_pte(ptep, data->attr_set, data->attr_clr);
 	return 0;
 }