From patchwork Mon Jul 19 10:47:24 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12385355
Date: Mon, 19 Jul 2021 11:47:24 +0100
In-Reply-To: <20210719104735.3681732-1-qperret@google.com>
Message-Id: <20210719104735.3681732-4-qperret@google.com>
References: <20210719104735.3681732-1-qperret@google.com>
Subject: [PATCH 03/14] KVM: arm64: Continue stage-2 map when re-creating mappings
From: Quentin Perret
To: maz@kernel.org, james.morse@arm.com, alexandru.elisei@arm.com,
    suzuki.poulose@arm.com, catalin.marinas@arm.com, will@kernel.org
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, ardb@kernel.org, qwandor@google.com,
    tabba@google.com, dbrazdil@google.com, kernel-team@android.com,
    Quentin Perret, Yanan Wang

The stage-2 map walkers currently return -EAGAIN when re-creating
identical mappings or only changing access permissions. This makes it
possible to optimize the mapping of pages for concurrent (v)CPUs
faulting on the same page. While this works as expected when touching
one page-table leaf at a time, it can lead to difficult situations when
mapping larger ranges.
Indeed, a large map operation can fail in the middle if an existing
mapping is found in the range, even one with compatible attributes,
leaving only part of the range mapped. To avoid having to deal with
such failures in the caller, don't interrupt the map operation when
hitting existing PTEs, but make sure to still return -EAGAIN so that
user_mem_abort() can mark the page dirty when needed.

Cc: Yanan Wang
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h |  2 +-
 arch/arm64/kvm/hyp/pgtable.c         | 21 +++++++++++++++++----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index d6649352c8b3..af62203d2f7a 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -258,7 +258,7 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  * If device attributes are not explicitly requested in @prot, then the
  * mapping will be normal, cacheable.
  *
- * Note that the update of a valid leaf PTE in this function will be aborted,
+ * Note that the update of a valid leaf PTE in this function will be skipped,
  * if it's trying to recreate the exact same mapping or only change the access
  * permissions. Instead, the vCPU will exit one more time from guest if still
  * needed and then go through the path of relaxing permissions.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 978f341d02ca..bb73c5331b7c 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -475,6 +475,8 @@ struct stage2_map_data {
 	void				*memcache;

 	struct kvm_pgtable_mm_ops	*mm_ops;
+
+	int				ret;
 };

 u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
@@ -612,8 +614,10 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
		 * the vCPU will exit one more time from guest if still needed
		 * and then go through the path of relaxing permissions.
		 */
-		if (!stage2_pte_needs_update(old, new))
-			return -EAGAIN;
+		if (!stage2_pte_needs_update(old, new)) {
+			data->ret = -EAGAIN;
+			goto out;
+		}

		stage2_put_pte(ptep, data->mmu, addr, level, mm_ops);
	}
@@ -629,6 +633,7 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
	smp_store_release(ptep, new);
	if (stage2_pte_is_counted(new))
		mm_ops->get_page(ptep);
+out:
	if (kvm_phys_is_valid(phys))
		data->phys += granule;
	return 0;
@@ -771,6 +776,7 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
		.mmu		= pgt->mmu,
		.memcache	= mc,
		.mm_ops		= pgt->mm_ops,
+		.ret		= 0,
	};
	struct kvm_pgtable_walker walker = {
		.cb		= stage2_map_walker,
@@ -789,7 +795,10 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,

	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
	dsb(ishst);
-	return ret;
+	if (ret)
+		return ret;
+
+	return map_data.ret;
 }

 int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -802,6 +811,7 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
		.memcache	= mc,
		.mm_ops		= pgt->mm_ops,
		.owner_id	= owner_id,
+		.ret		= 0,
	};
	struct kvm_pgtable_walker walker = {
		.cb		= stage2_map_walker,
@@ -815,7 +825,10 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
		return -EINVAL;

	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
-	return ret;
+	if (ret)
+		return ret;
+
+	return map_data.ret;
 }

 static int stage2_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
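The patch follows a general pattern: treat a benign condition found mid-walk
as non-fatal, latch it in the per-walk data, keep mapping the rest of the
range, and only surface the latched status once the whole walk has succeeded.
A rough, self-contained sketch of that pattern (all names here are invented
for illustration; this is not the kernel's page-table API):

```c
#include <assert.h>
#include <errno.h>

#define NPAGES 8

/* Per-walk state, analogous in spirit to the "ret" field this patch
 * adds to stage2_map_data. */
struct map_data {
	int ret;          /* latched benign status: 0 or -EAGAIN */
	int pte[NPAGES];  /* toy page table: 1 = valid, 0 = empty */
};

/* Visit one page: an existing identical mapping latches -EAGAIN but
 * does not abort the walk; only a hard error would be returned here. */
static int map_one(struct map_data *d, int idx)
{
	if (d->pte[idx]) {
		d->ret = -EAGAIN;  /* remember it, keep walking */
		return 0;
	}
	d->pte[idx] = 1;
	return 0;
}

/* Map [start, end): hard errors interrupt the walk immediately; the
 * latched status is reported only after the whole range was processed. */
static int map_range(struct map_data *d, int start, int end)
{
	for (int i = start; i < end; i++) {
		int ret = map_one(d, i);
		if (ret)
			return ret;
	}
	return d->ret;
}
```

With this shape, a caller can still observe -EAGAIN (and, as in the kernel
case, let user_mem_abort() mark the page dirty) while the whole range is
nonetheless guaranteed to be mapped when the walk itself succeeds.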