From patchwork Fri Feb 28 10:25:17 2025
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13996208
Date: Fri, 28 Feb 2025 10:25:17 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-2-vdonnefort@google.com>
Subject: [PATCH 1/9] KVM: arm64: Handle huge mappings for np-guest CMOs
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. But they also rely on the fixmap, which can only
map a single PAGE_SIZE page at a time.

With the upcoming stage-2 huge mappings for pKVM np-guests, those
callbacks will be given a size > PAGE_SIZE. Loop the CMOs on a
PAGE_SIZE basis until the whole range is done.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 19c3c631708c..a796e257c41f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,24 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)

From patchwork Fri Feb 28 10:25:18 2025
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13996209
Date: Fri, 28 Feb 2025 10:25:18 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-3-vdonnefort@google.com>
Subject: [PATCH 2/9] KVM: arm64: Add a range to __pkvm_host_share_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_share_guest hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (512 on a
system with 4K pages).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 978f38c386ee..1abbab5e2ff8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
-int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2c37680d954c..e71601746935 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -249,7 +249,8 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(u64, pfn, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4);
 	struct pkvm_hyp_vcpu *hyp_vcpu;
 	int ret = -EINVAL;
 
@@ -264,7 +265,7 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 	if (ret)
 		goto out;
 
-	ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
+	ret = __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a796e257c41f..2e49bd6e4ae8 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -60,6 +60,9 @@ static void hyp_unlock_component(void)
 	hyp_spin_unlock(&pkvm_pgd_lock);
 }
 
+#define for_each_hyp_page(start, size, page)				\
+	for (page = hyp_phys_to_page(start); page < hyp_phys_to_page((start) + (size)); page++)
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
@@ -503,10 +506,25 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = addr + size;
+	struct hyp_page *page;
+
+	for_each_hyp_page(addr, size, page)
+		page->host_state = state;
+}
+
+static void __host_update_share_guest_count(u64 phys, u64 size, bool inc)
+{
+	struct hyp_page *page;
 
-	for (; addr < end; addr += PAGE_SIZE)
-		hyp_phys_to_page(addr)->host_state = state;
+	for_each_hyp_page(phys, size, page) {
+		if (inc) {
+			WARN_ON(page->host_share_guest_count++ == U32_MAX);
+		} else {
+			WARN_ON(!page->host_share_guest_count--);
+			if (!page->host_share_guest_count)
+				page->host_state = PKVM_PAGE_OWNED;
+		}
+	}
 }
 
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
@@ -621,16 +639,16 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state)
 {
-	u64 end = addr + size;
+	struct hyp_page *page;
 	int ret;
 
-	ret = check_range_allowed_memory(addr, end);
+	ret = check_range_allowed_memory(addr, addr + size);
 	if (ret)
 		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	for (; addr < end; addr += PAGE_SIZE) {
-		if (hyp_phys_to_page(addr)->host_state != state)
+	for_each_hyp_page(addr, size, page) {
+		if (page->host_state != state)
 			return -EPERM;
 	}
 
@@ -680,10 +698,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
 	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
 }
 
-static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
+static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr,
 					  u64 size, enum pkvm_page_state state)
 {
-	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
 	struct check_walk_data d = {
 		.desired	= state,
 		.get_page_state	= guest_get_page_state,
@@ -890,49 +907,75 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 	return ret;
 }
 
-int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, u64 *size)
+{
+	if (nr_pages == 1) {
+		*size = PAGE_SIZE;
+		return 0;
+	}
+
+	/* We solely support PMD_SIZE huge-pages */
+	if (nr_pages != (1 << (PMD_SHIFT - PAGE_SHIFT)))
+		return -EINVAL;
+
+	if (!IS_ALIGNED(phys | ipa, PMD_SIZE))
+		return -EINVAL;
+
+	*size = PMD_SIZE;
+	return 0;
+}
+
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
 			    enum kvm_pgtable_prot prot)
 {
 	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
 	u64 phys = hyp_pfn_to_phys(pfn);
 	u64 ipa = hyp_pfn_to_phys(gfn);
 	struct hyp_page *page;
+	u64 size;
 	int ret;
 
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
 
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
 	if (ret)
 		return ret;
 
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE);
+	ret = __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE);
 	if (ret)
 		goto unlock;
 
 	page = hyp_phys_to_page(phys);
+	ret = __host_check_page_state_range(phys, size, page->host_state);
+	if (ret)
+		goto unlock;
+
 	switch (page->host_state) {
 	case PKVM_PAGE_OWNED:
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
+		WARN_ON(__host_set_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED));
 		break;
 	case PKVM_PAGE_SHARED_OWNED:
-		if (page->host_share_guest_count)
-			break;
-		/* Only host to np-guest multi-sharing is tolerated */
-		WARN_ON(1);
-		fallthrough;
+		for_each_hyp_page(phys, size, page) {
+			/* Only host to np-guest multi-sharing is tolerated */
+			if (WARN_ON(!page->host_share_guest_count)) {
+				ret = -EPERM;
+				goto unlock;
+			}
+		}
+		break;
 	default:
 		ret = -EPERM;
 		goto unlock;
 	}
 
-	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys,
 				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
 				       &vcpu->vcpu.arch.pkvm_memcache, 0));
-	page->host_share_guest_count++;
+	__host_update_share_guest_count(phys, size, true);
 
 unlock:
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 930b677eb9b0..00fd9a524bf7 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -361,7 +361,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
 	if (ret) {
 		/* Is the gfn already mapped due to a racing vCPU? */
 		if (ret == -EPERM)

From patchwork Fri Feb 28 10:25:20 2025
X-Patchwork-Submitter: Vincent Donnefort
X-Patchwork-Id: 13996212
Date: Fri, 28 Feb 2025 10:25:20 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-5-vdonnefort@google.com>
Subject: [PATCH 3/9] KVM: arm64: Add a range to __pkvm_host_unshare_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_unshare_guest hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (512 on a
system with 4K pages).
Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h index 1abbab5e2ff8..343569e4bdeb 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); -int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); +int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c index e71601746935..7f22d104c1f1 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -274,6 +274,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); struct pkvm_hyp_vm *hyp_vm; int ret = -EINVAL; @@ -284,7 +285,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt) if (!hyp_vm) goto out; - ret = __pkvm_host_unshare_guest(gfn, hyp_vm); + ret = __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm); put_pkvm_hyp_vm(hyp_vm); out: cpu_reg(host_ctxt, 1) = ret; diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c index 2e49bd6e4ae8..ad45f5eaa1fd 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -984,13 +984,12 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 
nr_pages, struct pkvm_hyp_vcpu return ret; } -static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa) +static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa, u64 size) { - enum pkvm_page_state state; struct hyp_page *page; kvm_pte_t pte; - u64 phys; s8 level; + u64 phys; int ret; ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level); @@ -998,51 +997,52 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip return ret; if (!kvm_pte_valid(pte)) return -ENOENT; - if (level != KVM_PGTABLE_LAST_LEVEL) + if (kvm_granule_size(level) != size) return -E2BIG; - state = guest_get_page_state(pte, ipa); - if (state != PKVM_PAGE_SHARED_BORROWED) - return -EPERM; + ret = __guest_check_page_state_range(vm, ipa, size, PKVM_PAGE_SHARED_BORROWED); + if (ret) + return ret; phys = kvm_pte_to_phys(pte); - ret = check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret = check_range_allowed_memory(phys, phys + size); if (WARN_ON(ret)) return ret; - page = hyp_phys_to_page(phys); - if (page->host_state != PKVM_PAGE_SHARED_OWNED) - return -EPERM; - if (WARN_ON(!page->host_share_guest_count)) - return -EINVAL; + for_each_hyp_page(phys, size, page) { + if (page->host_state != PKVM_PAGE_SHARED_OWNED) + return -EPERM; + if (WARN_ON(!page->host_share_guest_count)) + return -EINVAL; + } *__phys = phys; return 0; } -int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm) +int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm) { u64 ipa = hyp_pfn_to_phys(gfn); - struct hyp_page *page; - u64 phys; + u64 size, phys; int ret; + ret = __guest_check_transition_size(0, ipa, nr_pages, &size); + if (ret) + return ret; + host_lock_component(); guest_lock_component(vm); - ret = __check_host_shared_guest(vm, &phys, ipa); + ret = __check_host_shared_guest(vm, &phys, ipa, size); if (ret) goto unlock; - ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE); + ret = 
kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size); if (ret) goto unlock; - page = hyp_phys_to_page(phys); - page->host_share_guest_count--; - if (!page->host_share_guest_count) - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED)); + __host_update_share_guest_count(phys, size, false); unlock: guest_unlock_component(vm); @@ -1062,7 +1062,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa) host_lock_component(); guest_lock_component(vm); - ret = __check_host_shared_guest(vm, &phys, ipa); + ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE); guest_unlock_component(vm); host_unlock_component(); diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 00fd9a524bf7..b65fcf245fc9 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -385,7 +385,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) lockdep_assert_held_write(&kvm->mmu_lock); for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) { - ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn); + ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1); if (WARN_ON(ret)) break; rb_erase(&mapping->node, &pgt->pkvm_mappings); From patchwork Fri Feb 28 10:25:22 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vincent Donnefort X-Patchwork-Id: 13996214 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2CE4EC19776 for ; Fri, 28 Feb 2025 10:47:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender:List-Subscribe:List-Help 
:List-Post:List-Archive:List-Unsubscribe:List-Id:Content-Type:Cc:To:From: Subject:Message-ID:References:Mime-Version:In-Reply-To:Date:Reply-To: Content-Transfer-Encoding:Content-ID:Content-Description:Resent-Date: Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Owner; bh=l5JzOHIWm+Qn+D56CTs6MBaFzEo9/026iUTyI0pXPjg=; b=EBYTppOnWrtb7iUVf/Keem6eGl uzFHrsZ3wzJQvHWGpLakUrZ8XbkMqZUT/5ixjdNw/qdEpk1+iHW7DUMHlR8rQ5UZuNtNzgKUrR6L+ iTcqq93u33t49uguGme2/Cnxaln497Vh8CIDxbN+kUFBGdez462rEbGC7HcY1P4bELKiHcN6dnVHd lXIGu2jsq3Aty+IEhl0wsm2aX9fVYptxtU1Ff7a9nSY0+biWpU0MFU2vxORYIxr/cNPLFNZq4ikAT qkDDEuN3rHhZGMtfh8NBCss1ve4wxIWurnGpE0pp4zGMKB39hwgICZksagyRNozJUMy4bNxrCkqzh zvLqXBWg==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.98 #2 (Red Hat Linux)) id 1tnxtS-0000000Ae0f-0dhj; Fri, 28 Feb 2025 10:46:58 +0000 Received: from mail-wr1-x44a.google.com ([2a00:1450:4864:20::44a]) by bombadil.infradead.org with esmtps (Exim 4.98 #2 (Red Hat Linux)) id 1tnxZA-0000000AaUv-0fMN for linux-arm-kernel@lists.infradead.org; Fri, 28 Feb 2025 10:26:01 +0000 Received: by mail-wr1-x44a.google.com with SMTP id ffacd0b85a97d-390e8df7ab6so512590f8f.3 for ; Fri, 28 Feb 2025 02:25:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1740738358; x=1741343158; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=l5JzOHIWm+Qn+D56CTs6MBaFzEo9/026iUTyI0pXPjg=; b=WTWyiiiRGux9+DtBuz0+kGnoUHbiLPWJN3vD+3Nw+WkwSjkn11b2ZRG6mJH4HVfNAn uA/rseM2JVdiYIrl8WX8Q8MoWTryTm3snR5tnWr3hxG3wOcPigC3Wyt4ujuJOXTnqCbM r6Sb9OwuGgUd2aoOOR4MQzClfdDtx+7bfj87irSejsmzT6Zcbgx8mI1b+Z0eTuIgm9+P T91gjPLoZOV/DZY50+MsJidnVLz2r1dVm1DzdSmOEC0dDI1PFioxPa+vgIeFRKhcVfeB TiPUY/bAl4t4OP70Qr5/Ve4rFxIZ5pvpRhJFR3DCYR9cgEPHewC5wyXDjxUsJQsK2sVs Vh9w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; 
Date: Fri, 28 Feb 2025 10:25:22 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-7-vdonnefort@google.com>
Subject: [PATCH 4/9] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort
In preparation for supporting stage-2 huge mappings for np-guests, add a nr_pages argument to the __pkvm_host_wrprotect_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is, 512 on a 4K-pages system).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 343569e4bdeb..ad6131033114 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 7f22d104c1f1..e13771a67827 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -314,6 +314,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
@@ -324,7 +325,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	if (!hyp_vm)
 		goto out;
-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index ad45f5eaa1fd..c273b9c46e11 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1051,7 +1051,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
 	u64 phys;
 	int ret;
@@ -1062,7 +1062,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);
-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1082,7 +1082,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
 	guest_unlock_component(vm);
@@ -1090,17 +1090,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	return ret;
 }
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
 	guest_unlock_component(vm);
 	return ret;
@@ -1114,7 +1118,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
@@ -1130,7 +1134,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b65fcf245fc9..3ea92bb79e8c 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -404,7 +404,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 	}

From patchwork Fri Feb 28 10:25:24 2025
Date: Fri, 28 Feb 2025 10:25:24 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-9-vdonnefort@google.com>
Subject: [PATCH 5/9] KVM: arm64: Add a range to __pkvm_host_test_clear_young_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a nr_pages argument to the __pkvm_host_test_clear_young_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is, 512 on a 4K-pages system).
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index ad6131033114..0c88c92fc3a2 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e13771a67827..a6353aacc36c 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -335,7 +335,8 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(bool, mkold, host_ctxt, 4);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
@@ -346,7 +347,7 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	if (!hyp_vm)
 		goto out;
-	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	ret = __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index c273b9c46e11..25944d3f8203 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1110,17 +1110,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
-	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold);
 	guest_unlock_component(vm);
 	return ret;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 3ea92bb79e8c..2eb1cc30124e 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -434,7 +434,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   mkold);
+					   1, mkold);
 	return young;
 }

From patchwork Fri Feb 28 10:25:26 2025
Date: Fri, 28 Feb 2025 10:25:26 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-11-vdonnefort@google.com>
Subject: [PATCH 6/9] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, let's convert pgt.pkvm_mappings to an interval tree. No functional change intended.

Suggested-by: Vincent Donnefort
Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..1b43bcd2a679 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
 */
 struct kvm_pgtable {
 	union {
-		struct rb_root pkvm_mappings;
+		struct rb_root_cached pkvm_mappings;
 		struct {
 			u32 ia_bits;
 			s8 start_level;
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index eb65f12e81d9..f0d52efb858e 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -166,6 +166,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 __subtree_last; /* Internal member for interval tree */
 };
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 2eb1cc30124e..da637c565ac9 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -5,6 +5,7 @@
 */
 #include
+#include
 #include
 #include
 #include
@@ -270,80 +271,63 @@ static int __init finalize_pkvm(void)
 }
 device_initcall_sync(finalize_pkvm);
-static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 {
-	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
-	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
-
-	if (a->gfn < b->gfn)
-		return -1;
-	if (a->gfn > b->gfn)
-		return 1;
-	return 0;
+	return m->gfn * PAGE_SIZE;
 }
-static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	struct rb_node *node = root->rb_node, *prev = NULL;
-	struct pkvm_mapping *mapping;
-
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		if (mapping->gfn == gfn)
-			return node;
-		prev = node;
-		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
-	}
-
-	return prev;
+	return (m->gfn + 1) * PAGE_SIZE - 1;
 }
-/*
- * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
- * of __map inline.
- */
+INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
+		     __pkvm_mapping_start, __pkvm_mapping_end, static,
+		     pkvm_mapping);
+
 #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)		\
-	for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings, \
-							     ((__start) >> PAGE_SHIFT)); \
+	for (struct pkvm_mapping *__tmp = pkvm_mapping_iter_first(&(__pgt)->pkvm_mappings, \
+								  __start, __end - 1); \
 	     __tmp && ({							\
-		__map = rb_entry(__tmp, struct pkvm_mapping, node);		\
-		__tmp = rb_next(__tmp);						\
+		__map = __tmp;							\
+		__tmp = pkvm_mapping_iter_next(__map, __start, __end - 1);	\
 		true;								\
 	     });								\
-	     )									\
-		if (__map->gfn < ((__start) >> PAGE_SHIFT))			\
-			continue;						\
-		else if (__map->gfn >= ((__end) >> PAGE_SHIFT))			\
-			break;							\
-		else
+	     )
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			     struct kvm_pgtable_mm_ops *mm_ops)
 {
-	pgt->pkvm_mappings = RB_ROOT;
+	pgt->pkvm_mappings = RB_ROOT_CACHED;
 	pgt->mmu = mmu;
 	return 0;
 }
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 end)
 {
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
 	pkvm_handle_t handle = kvm->arch.pkvm.handle;
 	struct pkvm_mapping *mapping;
-	struct rb_node *node;
+	int ret;
 	if (!handle)
-		return;
+		return 0;
-	node = rb_first(&pgt->pkvm_mappings);
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
-		node = rb_next(node);
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		if (WARN_ON(ret))
+			return ret;
+		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
 		kfree(mapping);
 	}
+
+	return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
 }
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -371,28 +355,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
-	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 	return ret;
 }
 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
-	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
-	pkvm_handle_t handle = kvm->arch.pkvm.handle;
-	struct pkvm_mapping *mapping;
-	int ret = 0;
+	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock);
-	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
-		if (WARN_ON(ret))
-			break;
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-		kfree(mapping);
-	}
-
-	return ret;
+	return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }
 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)

From patchwork Fri Feb 28 10:25:27 2025
Date: Fri, 28 Feb 2025 10:25:27 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-12-vdonnefort@google.com>
Subject: [PATCH 7/9] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, add a nr_pages member to pkvm_mappings to allow EL1 to track the size of the stage-2 mapping.

Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index f0d52efb858e..0e944a754b96 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -166,6 +166,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last; /* Internal member for interval tree */
 };
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index da637c565ac9..9c9833f27fe3 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -278,7 +278,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -315,7 +315,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 		return 0;
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -345,16 +346,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}

+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);

 	return ret;
@@ -376,7 +393,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)

 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -391,7 +409,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)

 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);

 	return 0;
 }
@@ -406,7 +425,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);

 	return young;
 }

From patchwork Fri Feb 28 10:25:29 2025
Date: Fri, 28 Feb 2025 10:25:29 +0000
Message-ID: <20250228102530.1229089-14-vdonnefort@google.com>
Subject: [PATCH 8/9] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

Now that np-guest hypercalls with a range are supported, we can let the
hypervisor install block mappings whenever the stage-1 allows it, that
is when backed by either hugetlbfs or THPs. The size of those block
mappings is limited to PMD_SIZE.
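As a rough illustration of the size constraint this patch adds to pkvm_pgtable_stage2_map(), the sketch below models the check in plain userspace C. The MODEL_* constants and the function name are assumptions for the example (a 4KiB-page arm64 configuration with 2MiB PMD blocks), not kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: 4KiB base pages, 2MiB PMD blocks (arm64 4K granule). */
#define MODEL_PAGE_SIZE	(4096UL)
#define MODEL_PMD_SIZE	(512UL * MODEL_PAGE_SIZE)
#define MODEL_EINVAL	22

/*
 * Model of the size check: only PAGE_SIZE or PMD_SIZE mappings are
 * accepted, and the share-guest hypercall takes a page count, so the
 * caller passes size / PAGE_SIZE (1 or 512 here).
 */
static long model_stage2_map_nr_pages(size_t size)
{
	if (size != MODEL_PAGE_SIZE && size != MODEL_PMD_SIZE)
		return -MODEL_EINVAL;

	return (long)(size / MODEL_PAGE_SIZE);
}
```

Any other size (for example two pages) is rejected, which mirrors why user_mem_abort() must collapse the fault to either a single page or a full PMD block before calling into the map path.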
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 25944d3f8203..271893eff021 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -167,7 +167,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)

 static bool guest_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
 {
-	return true;
+	return false;
 }

 static void *guest_s2_zalloc_pages_exact(size_t size)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f55b0c7b11d..3143f3b52c93 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1525,7 +1525,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	 * logging_active is guaranteed to never be true for VM_PFNMAP
	 * memslots.
	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
		force_pte = true;
		vma_shift = PAGE_SHIFT;
	} else {
@@ -1535,7 +1535,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
	case PUD_SHIFT:
-		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+		if (is_protected_kvm_enabled() ||
+		    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
			break;
		fallthrough;
 #endif

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 9c9833f27fe3..b40bcdb1814d 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -342,7 +342,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
	u64 pfn = phys >> PAGE_SHIFT;
	int ret;

-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
		return -EINVAL;

	lockdep_assert_held_write(&kvm->mmu_lock);

From patchwork Fri Feb 28 10:25:30 2025
Message-ID: <20250228102530.1229089-15-vdonnefort@google.com>
Subject: [PATCH 9/9] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

With the introduction of stage-2 huge mappings in the pKVM hypervisor,
guest page CMOs must now cover PMD_SIZE ranges. The fixmap only supports
PAGE_SIZE, and iterating over a huge page is time consuming (mostly due
to the TLBI on hyp_fixmap_unmap), which is a problem for EL2 latency.

Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map/hyp_fixblock_unmap)
to improve guest page CMOs when stage-2 huge mappings are installed.

On a Pixel 6, the iterative solution resulted in a latency of ~700us,
while the PMD_SIZE fixmap reduces it to ~100us.

Because of the horrendous private range allocation that would be
necessary, this is disabled on systems with 64KiB pages.
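The latency argument above comes down to the number of fixmap unmap operations, each of which costs a TLBI plus barriers. The sketch below is a hypothetical userspace model (the MODEL_* constants and function name are assumptions, 4KiB pages and 2MiB PMDs) of how the CMO loop's chunk size determines the TLBI count, mirroring the shape of clean_dcache_guest_page() in this patch:

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SIZE	(4096UL)
#define MODEL_PMD_SIZE	(512UL * MODEL_PAGE_SIZE)

/*
 * Count unmap operations (one TLBI each) needed to cover a range:
 * without the fixblock every PAGE_SIZE chunk pays a TLBI; with it a
 * PMD_SIZE range is mapped and unmapped once.
 */
static unsigned long model_cmo_tlbis(size_t size, int have_fixblock)
{
	unsigned long tlbis = 0;

	while (size) {
		size_t chunk = (have_fixblock && size == MODEL_PMD_SIZE) ?
			       MODEL_PMD_SIZE : MODEL_PAGE_SIZE;

		tlbis++;	/* one hyp_fix{map,block}_unmap() per chunk */
		size -= chunk;
	}

	return tlbis;
}
```

For a 2MiB range this is 512 TLBIs versus 1, which is consistent with the ~700us vs ~100us numbers quoted in the commit message (the remaining ~100us being the cache maintenance itself).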
Suggested-by: Quentin Perret
Signed-off-by: Vincent Donnefort
Signed-off-by: Quentin Perret

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;

 #define KVM_PHYS_INVALID		(-1ULL)

+#define KVM_PTE_TYPE			BIT(1)
+#define KVM_PTE_TYPE_BLOCK		0
+#define KVM_PTE_TYPE_PAGE		1
+#define KVM_PTE_TYPE_TABLE		1
+
 #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)

 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..b0c72bc2d5ba 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@ extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;

-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys);
+void hyp_fixblock_unmap(void);

 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 271893eff021..d27ce31370aa 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -220,25 +220,64 @@ static void guest_s2_put_page(void *addr)
 	hyp_put_page(&current_vm->pool, addr);
 }

+static void *__fixmap_guest_page(void *va, size_t *size)
+{
+	if (IS_ALIGNED(*size, PMD_SIZE)) {
+		void *addr = hyp_fixblock_map(__hyp_pa(va));
+
+		if (addr)
+			return addr;
+
+		*size = PAGE_SIZE;
+	}
+
+	if (IS_ALIGNED(*size, PAGE_SIZE))
+		return hyp_fixmap_map(__hyp_pa(va));
+
+	WARN_ON(1);
+
+	return NULL;
+}
+
+static void __fixunmap_guest_page(size_t size)
+{
+	switch (size) {
+	case PAGE_SIZE:
+		hyp_fixmap_unmap();
+		break;
+	case PMD_SIZE:
+		hyp_fixblock_unmap();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
 static void clean_dcache_guest_page(void *va, size_t size)
 {
 	while (size) {
-		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					  PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__clean_dcache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }

 static void invalidate_icache_guest_page(void *va, size_t size)
 {
 	while (size) {
-		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					       PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__invalidate_icache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index f41c7440b34b..e3b1bece8504 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -229,9 +229,8 @@ int hyp_map_vectors(void)
 	return 0;
 }

-void *hyp_fixmap_map(phys_addr_t phys)
+static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
 {
-	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
 	kvm_pte_t pte, *ptep = slot->ptep;

 	pte = *ptep;
@@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
 	return (void *)slot->addr;
 }

+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
+}
+
 static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 {
 	kvm_pte_t *ptep = slot->ptep;
 	u64 addr = slot->addr;
+	u32 level;
+
+	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
+		level = KVM_PGTABLE_LAST_LEVEL;
+	else
+		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */

 	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);

@@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
	 */
 	dsb(ishst);
-	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
+	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
 	dsb(ish);
 	isb();
 }
@@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
 static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
				   enum kvm_pgtable_walk_flags visit)
 {
-	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
+	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;

-	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
+	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
 		return -EINVAL;

 	slot->addr = ctx->addr;
@@ -296,13 +306,73 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
 	struct kvm_pgtable_walker walker = {
 		.cb	= __create_fixmap_slot_cb,
 		.flags	= KVM_PGTABLE_WALK_LEAF,
-		.arg	= (void *)cpu,
+		.arg	= (void *)per_cpu_ptr(&fixmap_slots, cpu),
 	};

 	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
 }

-int hyp_create_pcpu_fixmap(void)
+#ifndef CONFIG_ARM64_64K_PAGES
+static struct hyp_fixmap_slot hyp_fixblock_slot;
+static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
+
+void *hyp_fixblock_map(phys_addr_t phys)
+{
+	hyp_spin_lock(&hyp_fixblock_lock);
+	return fixmap_map_slot(&hyp_fixblock_slot, phys);
+}
+
+void hyp_fixblock_unmap(void)
+{
+	fixmap_clear_slot(&hyp_fixblock_slot);
+	hyp_spin_unlock(&hyp_fixblock_lock);
+}
+
+static int create_fixblock(void)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= __create_fixmap_slot_cb,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= (void *)&hyp_fixblock_slot,
+	};
+	unsigned long addr;
+	phys_addr_t phys;
+	int ret, i;
+
+	/* Find a RAM phys address, PMD aligned */
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		phys = ALIGN(hyp_memory[i].base, PMD_SIZE);
+		if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size))
+			break;
+	}
+
+	if (i >= hyp_memblock_nr)
+		return -EINVAL;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+	addr = ALIGN(__io_map_base, PMD_SIZE);
+	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
+
+unlock:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return ret;
+}
+#else
+void hyp_fixblock_unmap(void) { WARN_ON(1); }
+void *hyp_fixblock_map(phys_addr_t phys) { return NULL; }
+static int create_fixblock(void) { return 0; }
+#endif
+
+int hyp_create_fixmap(void)
 {
 	unsigned long addr, i;
 	int ret;
@@ -322,7 +392,7 @@ int hyp_create_pcpu_fixmap(void)
 		return ret;
 	}

-	return 0;
+	return create_fixblock();
 }

 int hyp_create_idmap(u32 hyp_va_bits)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index d62bcb5634a2..fb69cf5e6ea8 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -295,7 +295,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;

-	ret = hyp_create_pcpu_fixmap();
+	ret = hyp_create_fixmap();
 	if (ret)
 		goto out;

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..c351b4abd5db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -11,12 +11,6 @@
 #include
 #include

-
-#define KVM_PTE_TYPE			BIT(1)
-#define KVM_PTE_TYPE_BLOCK		0
-#define KVM_PTE_TYPE_PAGE		1
-#define KVM_PTE_TYPE_TABLE		1
-
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable_walker		*walker;
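The fixmap_clear_slot() change above picks the TLBI level from bit 1 of the descriptor (KVM_PTE_TYPE): for a valid leaf, 0 means a block entry and 1 a page entry. A hypothetical userspace model of that selection (the MODEL_* names are assumptions; MODEL_LAST_LEVEL is 3, as for a 4KiB granule with levels 0-3):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t model_pte_t;

/* Bit 1 of a valid leaf descriptor: 0 = block, 1 = page. */
#define MODEL_PTE_TYPE_BIT	(1ULL << 1)
#define MODEL_LAST_LEVEL	3

/*
 * Model of the level selection in fixmap_clear_slot(): a page mapping
 * is invalidated at the last level, a block mapping one level up
 * (the PMD level guaranteed by create_fixblock()).
 */
static int model_tlbi_level(model_pte_t pte)
{
	return (pte & MODEL_PTE_TYPE_BIT) ? MODEL_LAST_LEVEL
					  : MODEL_LAST_LEVEL - 1;
}
```

Passing the exact level to __tlbi_level() lets the CPU skip walking levels that cannot hold the entry, which matters here because hyp_fixblock_unmap() is on the CMO fast path.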