From patchwork Tue Oct 19 12:12:55 2021
X-Patchwork-Submitter: Quentin Perret
X-Patchwork-Id: 12569733
Date: Tue, 19 Oct 2021 13:12:55 +0100
In-Reply-To: <20211019121304.2732332-1-qperret@google.com>
Message-Id: <20211019121304.2732332-7-qperret@google.com>
Mime-Version: 1.0
References: <20211019121304.2732332-1-qperret@google.com>
X-Mailer: git-send-email 2.33.0.1079.g6e70778dc9-goog
Subject: [PATCH v2 06/15] KVM: arm64: Implement kvm_pgtable_hyp_unmap() at EL2
From: Quentin Perret
To: Marc Zyngier, James Morse, Alexandru Elisei, Suzuki K Poulose,
 Catalin Marinas, Will Deacon, Fuad Tabba, David Brazdil, Andrew Walbran
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org, kernel-team@android.com, qperret@google.com

From: Will Deacon

Implement kvm_pgtable_hyp_unmap() which can be used to remove hypervisor
stage-1 mappings at EL2.

Signed-off-by: Will Deacon
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 21 ++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 63 ++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 027783829584..9d076f36401d 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -251,6 +251,27 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt);
 int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 			enum kvm_pgtable_prot prot);
 
+/**
+ * kvm_pgtable_hyp_unmap() - Remove a mapping from a hypervisor stage-1 page-table.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_hyp_init().
+ * @addr:	Virtual address from which to remove the mapping.
+ * @size:	Size of the mapping.
+ *
+ * The offset of @addr within a page is ignored, @size is rounded-up to
+ * the next page boundary and @phys is rounded-down to the previous page
+ * boundary.
+ *
+ * TLB invalidation is performed for each page-table entry cleared during the
+ * unmapping operation and the reference count for the page-table page
+ * containing the cleared entry is decremented, with unreferenced pages being
+ * freed. The unmapping operation will stop early if it encounters either an
+ * invalid page-table entry or a valid block mapping which maps beyond the range
+ * being unmapped.
+ *
+ * Return: Number of bytes unmapped, which may be 0.
+ */
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+
 /**
  * kvm_get_vtcr() - Helper to construct VTCR_EL2
  * @mmfr0:	Sanitized value of SYS_ID_AA64MMFR0_EL1 register.
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 768a58835153..6ad4cb2d6947 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -463,6 +463,69 @@ int kvm_pgtable_hyp_map(struct kvm_pgtable *pgt, u64 addr, u64 size, u64 phys,
 	return ret;
 }
 
+struct hyp_unmap_data {
+	u64				unmapped;
+	struct kvm_pgtable_mm_ops	*mm_ops;
+};
+
+static int hyp_unmap_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+			    enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	kvm_pte_t pte = *ptep, *childp = NULL;
+	u64 granule = kvm_granule_size(level);
+	struct hyp_unmap_data *data = arg;
+	struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
+
+	if (!kvm_pte_valid(pte))
+		return -EINVAL;
+
+	if (kvm_pte_table(pte, level)) {
+		childp = kvm_pte_follow(pte, mm_ops);
+
+		if (mm_ops->page_count(childp) != 1)
+			return 0;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vae2is, __TLBI_VADDR(addr, 0), level);
+	} else {
+		if (end - addr < granule)
+			return -EINVAL;
+
+		kvm_clear_pte(ptep);
+		dsb(ishst);
+		__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
+		data->unmapped += granule;
+	}
+
+	dsb(ish);
+	isb();
+	mm_ops->put_page(ptep);
+
+	if (childp)
+		mm_ops->put_page(childp);
+
+	return 0;
+}
+
+u64 kvm_pgtable_hyp_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct hyp_unmap_data unmap_data = {
+		.mm_ops	= pgt->mm_ops,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb	= hyp_unmap_walker,
+		.arg	= &unmap_data,
+		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
+	};
+
+	if (!pgt->mm_ops->page_count)
+		return 0;
+
+	kvm_pgtable_walk(pgt, addr, size, &walker);
+	return unmap_data.unmapped;
+}
+
 int kvm_pgtable_hyp_init(struct kvm_pgtable *pgt, u32 va_bits,
 			 struct kvm_pgtable_mm_ops *mm_ops)
 {
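
[Editor's illustration, not part of the patch: a minimal sketch of how EL2 code
 might pair the new helper with the existing kvm_pgtable_hyp_map(). The wrapper
 function, its arguments and the chosen protection bits are illustrative
 assumptions and do not come from this series.]

#include <linux/errno.h>
#include <asm/kvm_pgtable.h>

/*
 * Illustrative only: map one page into the hyp stage-1 table and tear it
 * down again. Assumes @pgt was initialised with kvm_pgtable_hyp_init()
 * and that @va/@pa are a page-aligned virtual/physical address pair.
 */
static int hyp_map_unmap_example(struct kvm_pgtable *pgt, u64 va, u64 pa)
{
	u64 unmapped;
	int ret;

	ret = kvm_pgtable_hyp_map(pgt, va, PAGE_SIZE, pa,
				  KVM_PGTABLE_PROT_R | KVM_PGTABLE_PROT_W);
	if (ret)
		return ret;

	/*
	 * kvm_pgtable_hyp_unmap() returns the number of bytes it actually
	 * unmapped, which may be less than @size (e.g. if the walk hit an
	 * invalid entry), so callers should check for a partial unmap.
	 */
	unmapped = kvm_pgtable_hyp_unmap(pgt, va, PAGE_SIZE);

	return unmapped == PAGE_SIZE ? 0 : -EINVAL;
}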