From patchwork Wed Dec 15 16:12:22 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12696302
Date: Wed, 15 Dec 2021 16:12:22 +0000
In-Reply-To: <20211215161232.1480836-1-qperret@google.com>
Message-Id: <20211215161232.1480836-6-qperret@google.com>
References: <20211215161232.1480836-1-qperret@google.com>
Subject: [PATCH v4 05/14] KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
	Catalin Marinas, Will Deacon
Cc: qperret@google.com, qwandor@google.com,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org, kernel-team@android.com

From: Will Deacon

Implement kvm_pgtable_hyp_unmap() which can be used to remove hypervisor
stage-1 mappings at EL2.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 21 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 63 ++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 027783829584..9d076f36401d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -251,6 +251,27 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 			enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_pgtable_hyp_unmap() - Remove a mapping from a hypervisor stage-1 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_hyp_init().
+ * @addr:	Virtual address from which to remove the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The offset of @addr within a page is ignored, @size is rounded-up to
+ * the next page boundary and @phys is rounded-down to the previous page
+ * boundary.
+ *
+ * TLB invalidation is performed for each page-table entry cleared during the
+ * unmapping operation and the reference count for the page-table page
+ * containing the cleared entry is decremented, with unreferenced pages being
+ * freed. The unmapping operation will stop early if it encounters either an
+ * invalid page-table entry or a valid block mapping which maps beyond the range
+ * being unmapped.
+ *
+ * Return: Number of bytes unmapped, which may be 0.
+ */
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_get_vtcr() - Helper to construct VTCR_EL2
  * @mmfr0:	Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index e50e9158fc56..adc73f8cd24f 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -451,6 +451,69 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }
 
+struct hyp_unmap_data {
+	u64				unmapped;
+	struct kvm_pgtable_mm_ops	*mm_ops;
+};
+
+static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	kvm_pte_t pte = *ptep, *childp = NULL;
+	u64 granule = kvm_granule_size(level);
+	struct hyp_unmap_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+
+	if (!kvm_pte_valid(pte))
+		return -EINVAL;
+
+	if (kvm_pte_table(pte, level)) {
+		childp = kvm_pte_follow(pte, mm_ops);
+
+		if (mm_ops->page_count(childp) != 1)
+			return 0;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vae2is, __TLBI_VADDR(addr, 0), level);
+	} else {
+		if (end - addr < granule)
+			return -EINVAL;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+		data->unmapped += granule;
+	}
+
+	dsb(ish);
+	isb();
+	mm_ops->put_page(ptep);
+
+	if (childp)
+		mm_ops->put_page(childp);
+
+	return 0;
+}
+
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct hyp_unmap_data unmap_data = {
+		.mm_ops	= pgt->mm_ops,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= hyp_unmap_walker,
+		.arg	= &unmap_data,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+	};
+
+	if (!pgt->mm_ops->page_count)
+		return 0;
+
+	kvm_pgtable_walk(pgt, addr, size, &walker);
+	return unmap_data.unmapped;
+}
+
 int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
 			 struct kvm_pgtable_mm_ops *mm_ops)
 {
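
For context only (not part of the patch): a minimal sketch of how an EL2 caller
might drive the new API. "hyp_pgtable" and "hyp_lock" are hypothetical names,
standing in for the hypervisor's stage-1 page-table instance and the lock
serialising updates to it; they are not introduced by this series.

/*
 * Illustrative sketch only -- not part of this patch. Assumes a
 * hypothetical "hyp_pgtable" instance and "hyp_lock" guarding
 * hypervisor stage-1 page-table updates.
 */
static int hyp_unmap_range(u64 va, u64 size)
{
	u64 unmapped;

	hyp_spin_lock(&hyp_lock);
	/*
	 * kvm_pgtable_hyp_unmap() returns the number of bytes it actually
	 * unmapped; it stops early on an invalid entry or on a block
	 * mapping extending beyond the requested range.
	 */
	unmapped = kvm_pgtable_hyp_unmap(&hyp_pgtable, va, size);
	hyp_spin_unlock(&hyp_lock);

	return unmapped == size ? 0 : -EINVAL;
}

Treating a short unmap as -EINVAL is just one possible policy; a caller could
equally propagate the byte count and let its caller decide how to react.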