From patchwork Wed Feb 1 12:53:02 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124384
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com, oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com, dbrazdil@google.com, ryan.roberts@arm.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 18/45] KVM: arm64: iommu: Add per-cpu page queue
Date: Wed, 1 Feb 2023 12:53:02 +0000
Message-Id: <20230201125328.2186498-19-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

The hyp driver will need to allocate pages when handling some hypercalls, in order to populate page, stream and domain tables. Add a per-cpu page queue that will contain host pages to be donated and reclaimed. When the driver needs a new page, it sets the needs_page bit and returns to the host with an error. The host pushes a page and retries the hypercall. The queue is per-cpu to ensure that IOMMU map()/unmap() requests from different CPUs don't step on each other.
It is populated on demand rather than upfront to avoid wasting memory, as these allocations should be relatively rare.

Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/Makefile        |  2 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h |  4 ++
 include/kvm/iommu.h                     | 15 +++++++
 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c   | 52 +++++++++++++++++++++++++
 4 files changed, 73 insertions(+)
 create mode 100644 include/kvm/iommu.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/iommu.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 530347cdebe3..f7dfc88c9f5b 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -28,6 +28,8 @@ hyp-obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
 hyp-obj-$(CONFIG_DEBUG_LIST) += list_debug.o
 hyp-obj-y += $(lib-objs)
 
+hyp-obj-$(CONFIG_KVM_IOMMU) += iommu/iommu.o
+
 ##
 ## Build rules for compiling nVHE hyp code
 ## Output of this folder is `kvm_nvhe.o`, a partially linked object
diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 26a95717b613..4959c30977b8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -3,6 +3,10 @@
 #define __ARM64_KVM_NVHE_IOMMU_H__
 
 #if IS_ENABLED(CONFIG_KVM_IOMMU)
+int kvm_iommu_init(void);
+void *kvm_iommu_donate_page(void);
+void kvm_iommu_reclaim_page(void *p);
+
 /* Hypercall handlers */
 int kvm_iommu_alloc_domain(pkvm_handle_t iommu_id, pkvm_handle_t domain_id,
 			   unsigned long pgd_hva);
diff --git a/include/kvm/iommu.h b/include/kvm/iommu.h
new file mode 100644
index 000000000000..12b06a5df889
--- /dev/null
+++ b/include/kvm/iommu.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __KVM_IOMMU_H
+#define __KVM_IOMMU_H
+
+#include
+
+struct kvm_hyp_iommu_memcache {
+	struct kvm_hyp_memcache pages;
+	bool needs_page;
+} ____cacheline_aligned_in_smp;
+
+extern struct kvm_hyp_iommu_memcache *kvm_nvhe_sym(kvm_hyp_iommu_memcaches);
+#define kvm_hyp_iommu_memcaches kvm_nvhe_sym(kvm_hyp_iommu_memcaches)
+
+#endif /* __KVM_IOMMU_H */
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
new file mode 100644
index 000000000000..1a9184fbbd27
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/iommu.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * IOMMU operations for pKVM
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+struct kvm_hyp_iommu_memcache __ro_after_init *kvm_hyp_iommu_memcaches;
+
+void *kvm_iommu_donate_page(void)
+{
+	void *p;
+	int cpu = hyp_smp_processor_id();
+	struct kvm_hyp_memcache tmp = kvm_hyp_iommu_memcaches[cpu].pages;
+
+	if (!tmp.nr_pages) {
+		kvm_hyp_iommu_memcaches[cpu].needs_page = true;
+		return NULL;
+	}
+
+	p = pkvm_admit_host_page(&tmp);
+	if (!p)
+		return NULL;
+
+	kvm_hyp_iommu_memcaches[cpu].pages = tmp;
+	memset(p, 0, PAGE_SIZE);
+	return p;
+}
+
+void kvm_iommu_reclaim_page(void *p)
+{
+	int cpu = hyp_smp_processor_id();
+
+	pkvm_teardown_donated_memory(&kvm_hyp_iommu_memcaches[cpu].pages, p,
+				     PAGE_SIZE);
+}
+
+int kvm_iommu_init(void)
+{
+	enum kvm_pgtable_prot prot;
+
+	/* The memcache is shared with the host */
+	prot = pkvm_mkstate(PAGE_HYP, PKVM_PAGE_SHARED_OWNED);
+	return pkvm_create_mappings(kvm_hyp_iommu_memcaches,
+				    kvm_hyp_iommu_memcaches + NR_CPUS, prot);
+}
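For illustration, the needs_page/retry protocol described in the commit message can be sketched in plain C. This is a minimal, self-contained simulation, not code from the series: the hypercall and the page donation are stubbed out, and names such as iommu_map_hvc(), push_page() and host_iommu_map() are hypothetical. It only shows the shape of the loop: the hypercall fails with -ENOMEM and sets needs_page, the host donates one page to the per-cpu queue, and the hypercall is retried until it succeeds.

```c
/*
 * Hypothetical sketch of the host <-> hyp retry protocol. Everything here
 * is simulated; in the real series the hypercall traps into the nVHE
 * hypervisor and the donated pages are real host pages.
 */
#include <errno.h>
#include <stdbool.h>

/* Host-side view of one per-cpu memcache (mirrors kvm_hyp_iommu_memcache) */
struct memcache {
	int nr_pages;		/* pages currently queued for the hyp driver */
	bool needs_page;	/* set by hyp when it runs out of pages */
};

/* Simulated hypercall: fails until the queue holds enough pages */
static int iommu_map_hvc(struct memcache *mc)
{
	if (mc->nr_pages < 2) {		/* pretend the map needs two tables */
		mc->needs_page = true;
		return -ENOMEM;
	}
	mc->nr_pages -= 2;		/* hyp consumes the donated pages */
	return 0;
}

/* Host donates one page to the per-cpu queue */
static void push_page(struct memcache *mc)
{
	mc->nr_pages++;
	mc->needs_page = false;
}

/*
 * Retry loop: whenever the hyp driver returns an error with needs_page
 * set, donate one more page and reissue the hypercall.
 */
static int host_iommu_map(struct memcache *mc)
{
	int ret;

	while ((ret = iommu_map_hvc(mc)) == -ENOMEM && mc->needs_page)
		push_page(mc);

	return ret;
}
```

Starting from an empty memcache, host_iommu_map() donates two pages (one per failed attempt) before the simulated hypercall succeeds, which is why populating the queue on demand rather than upfront costs only a few extra hypercall round-trips in the rare case where pages are actually needed.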