From patchwork Wed Oct 13 15:58:25 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12556267
Date: Wed, 13 Oct 2021 16:58:25 +0100
In-Reply-To: <20211013155831.943476-1-qperret@google.com>
Message-Id: <20211013155831.943476-11-qperret@google.com>
References: <20211013155831.943476-1-qperret@google.com>
Subject: [PATCH 10/16] KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
    Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

Implement kvm_pgtable_hyp_unmap() which can be used to remove hypervisor
stage-1 mappings at EL2.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 21 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 63 ++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 027783829584..9d076f36401d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -251,6 +251,27 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 			enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_pgtable_hyp_unmap() - Remove a mapping from a hypervisor stage-1 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_hyp_init().
+ * @addr:	Virtual address from which to remove the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The offset of @addr within a page is ignored and @size is rounded-up to
+ * the next page boundary, so the unmap operates on whole pages covering
+ * the requested range.
+ *
+ * TLB invalidation is performed for each page-table entry cleared during the
+ * unmapping operation and the reference count for the page-table page
+ * containing the cleared entry is decremented, with unreferenced pages being
+ * freed. The unmapping operation will stop early if it encounters either an
+ * invalid page-table entry or a valid block mapping which maps beyond the range
+ * being unmapped.
+ *
+ * Return: Number of bytes unmapped, which may be 0.
+ */
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_get_vtcr() - Helper to construct VTCR_EL2
  * @mmfr0:	Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 768a58835153..6ad4cb2d6947 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -463,6 +463,69 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }
 
+struct hyp_unmap_data {
+	u64				unmapped;
+	struct kvm_pgtable_mm_ops	*mm_ops;
+};
+
+static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	kvm_pte_t pte = *ptep, *childp = NULL;
+	u64 granule = kvm_granule_size(level);
+	struct hyp_unmap_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+
+	if (!kvm_pte_valid(pte))
+		return -EINVAL;
+
+	if (kvm_pte_table(pte, level)) {
+		childp = kvm_pte_follow(pte, mm_ops);
+
+		if (mm_ops->page_count(childp) != 1)
+			return 0;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vae2is, __TLBI_VADDR(addr, 0), level);
+	} else {
+		if (end - addr < granule)
+			return -EINVAL;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+		data->unmapped += granule;
+	}
+
+	dsb(ish);
+	isb();
+	mm_ops->put_page(ptep);
+
+	if (childp)
+		mm_ops->put_page(childp);
+
+	return 0;
+}
+
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct hyp_unmap_data unmap_data = {
+		.mm_ops	= pgt->mm_ops,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= hyp_unmap_walker,
+		.arg	= &unmap_data,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+	};
+
+	if (!pgt->mm_ops->page_count)
+		return 0;
+
+	kvm_pgtable_walk(pgt, addr, size, &walker);
+	return unmap_data.unmapped;
+}
+
 int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
 			 struct kvm_pgtable_mm_ops *mm_ops)
 {
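For context, the sketch below shows one way an EL2 caller might drive the new
API. It is illustrative only: the hyp_unmap() wrapper is hypothetical and not
part of this patch, and it assumes the nVHE hypervisor's pkvm_pgtable page-table
and pkvm_pgd_lock from <nvhe/mm.h>, with page-aligned @addr and @size.

#include <linux/errno.h>

#include <asm/kvm_pgtable.h>

#include <nvhe/mm.h>
#include <nvhe/spinlock.h>

/*
 * Hypothetical EL2 helper (not part of this patch): tear down a
 * hypervisor stage-1 mapping, assuming page-aligned @addr and @size.
 */
static int hyp_unmap(u64 addr, u64 size)
{
	u64 unmapped;

	hyp_spin_lock(&pkvm_pgd_lock);
	unmapped = kvm_pgtable_hyp_unmap(&pkvm_pgtable, addr, size);
	hyp_spin_unlock(&pkvm_pgd_lock);

	/*
	 * The walker stops early on an invalid entry or on a block
	 * mapping that extends beyond the range, so a short byte count
	 * means the range was only partially unmapped.
	 */
	return unmapped == size ? 0 : -EINVAL;
}

Because kvm_pgtable_hyp_unmap() returns the number of bytes actually unmapped
(possibly 0), comparing the return value against the requested size is the
natural way for a caller to detect a partial unmap.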