From patchwork Wed Feb 1 12:53:10 2023
X-Patchwork-Submitter: Jean-Philippe Brucker
X-Patchwork-Id: 13124391
From: Jean-Philippe Brucker
To: maz@kernel.org, catalin.marinas@arm.com, will@kernel.org, joro@8bytes.org
Cc: robin.murphy@arm.com, james.morse@arm.com, suzuki.poulose@arm.com,
 oliver.upton@linux.dev, yuzenghui@huawei.com, smostafa@google.com,
 dbrazdil@google.com, ryan.roberts@arm.com,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 iommu@lists.linux.dev, Jean-Philippe Brucker
Subject: [RFC PATCH 26/45] KVM: arm64: smmu-v3: Support io-pgtable
Date: Wed, 1 Feb 2023 12:53:10 +0000
Message-Id: <20230201125328.2186498-27-jean-philippe@linaro.org>
In-Reply-To: <20230201125328.2186498-1-jean-philippe@linaro.org>
References: <20230201125328.2186498-1-jean-philippe@linaro.org>

Implement the hypervisor version of the io-pgtable allocation functions,
mirroring drivers/iommu/io-pgtable-arm.c. Page allocation uses the IOMMU
memcache filled by the host, except for the PGD, which may be larger than
a page.
Signed-off-by: Jean-Philippe Brucker
---
 arch/arm64/kvm/hyp/nvhe/Makefile              |  2 +
 arch/arm64/kvm/hyp/include/nvhe/iommu.h       |  7 ++
 include/linux/io-pgtable-arm.h                |  6 ++
 .../arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c | 97 +++++++++++++++++++
 4 files changed, 112 insertions(+)
 create mode 100644 arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c

diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 349c874762c8..8359909bd796 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -30,6 +30,8 @@ hyp-obj-y += $(lib-objs)
 
 hyp-obj-$(CONFIG_KVM_IOMMU) += iommu/iommu.o
 hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/arm-smmu-v3.o
+hyp-obj-$(CONFIG_ARM_SMMU_V3_PKVM) += iommu/io-pgtable-arm.o \
+	../../../../../drivers/iommu/io-pgtable-arm-common.o
 
 ##
 ## Build rules for compiling nVHE hyp code
diff --git a/arch/arm64/kvm/hyp/include/nvhe/iommu.h b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
index 0ba59d20bef3..c7744cca6e13 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/iommu.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/iommu.h
@@ -6,7 +6,14 @@
 #include
 
 #if IS_ENABLED(CONFIG_ARM_SMMU_V3_PKVM)
+#include
+
 int kvm_arm_smmu_v3_register(void);
+
+int kvm_arm_io_pgtable_init(struct io_pgtable_cfg *cfg,
+			    struct arm_lpae_io_pgtable *data);
+int kvm_arm_io_pgtable_alloc(struct io_pgtable *iop, unsigned long pgd_hva);
+int kvm_arm_io_pgtable_free(struct io_pgtable *iop);
 #else /* CONFIG_ARM_SMMU_V3_PKVM */
 static inline int kvm_arm_smmu_v3_register(void)
 {
diff --git a/include/linux/io-pgtable-arm.h b/include/linux/io-pgtable-arm.h
index 2b3e69386d08..b89b8ec57721 100644
--- a/include/linux/io-pgtable-arm.h
+++ b/include/linux/io-pgtable-arm.h
@@ -161,8 +161,14 @@ static inline bool iopte_leaf(arm_lpae_iopte pte, int lvl,
 	return iopte_type(pte) == ARM_LPAE_PTE_TYPE_BLOCK;
 }
 
+#ifdef __KVM_NVHE_HYPERVISOR__
+#include
+#define __arm_lpae_virt_to_phys	hyp_virt_to_phys
+#define __arm_lpae_phys_to_virt	hyp_phys_to_virt
+#else
 #define __arm_lpae_virt_to_phys	__pa
 #define __arm_lpae_phys_to_virt	__va
+#endif
 
 /* Generic functions */
 void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
diff --git a/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c b/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c
new file mode 100644
index 000000000000..a46490acb45c
--- /dev/null
+++ b/arch/arm64/kvm/hyp/nvhe/iommu/io-pgtable-arm.c
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022 Arm Ltd.
+ */
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+
+bool __ro_after_init selftest_running;
+
+void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp, struct io_pgtable_cfg *cfg)
+{
+	void *addr = kvm_iommu_donate_page();
+
+	BUG_ON(size != PAGE_SIZE);
+
+	if (addr && !cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(addr, size);
+
+	return addr;
+}
+
+void __arm_lpae_free_pages(void *addr, size_t size, struct io_pgtable_cfg *cfg)
+{
+	BUG_ON(size != PAGE_SIZE);
+
+	if (!cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(addr, size);
+
+	kvm_iommu_reclaim_page(addr);
+}
+
+void __arm_lpae_sync_pte(arm_lpae_iopte *ptep, int num_entries,
+			 struct io_pgtable_cfg *cfg)
+{
+	if (!cfg->coherent_walk)
+		kvm_flush_dcache_to_poc(ptep, sizeof(*ptep) * num_entries);
+}
+
+int kvm_arm_io_pgtable_init(struct io_pgtable_cfg *cfg,
+			    struct arm_lpae_io_pgtable *data)
+{
+	int ret = arm_lpae_init_pgtable_s2(cfg, data);
+
+	if (ret)
+		return ret;
+
+	data->iop.cfg = *cfg;
+	return 0;
+}
+
+int kvm_arm_io_pgtable_alloc(struct io_pgtable *iopt, unsigned long pgd_hva)
+{
+	size_t pgd_size, alignment;
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iopt->ops);
+
+	pgd_size = ARM_LPAE_PGD_SIZE(data);
+	/*
+	 * If it has eight or more entries, the table must be aligned on
+	 * its size. Otherwise 64 bytes.
+	 */
+	alignment = max(pgd_size, 8 * sizeof(arm_lpae_iopte));
+	if (!IS_ALIGNED(pgd_hva, alignment))
+		return -EINVAL;
+
+	iopt->pgd = pkvm_map_donated_memory(pgd_hva, pgd_size);
+	if (!iopt->pgd)
+		return -ENOMEM;
+
+	if (!data->iop.cfg.coherent_walk)
+		kvm_flush_dcache_to_poc(iopt->pgd, pgd_size);
+
+	/* Ensure the empty pgd is visible before any actual TTBR write */
+	wmb();
+
+	return 0;
+}
+
+int kvm_arm_io_pgtable_free(struct io_pgtable *iopt)
+{
+	struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(iopt->ops);
+	size_t pgd_size = ARM_LPAE_PGD_SIZE(data);
+
+	if (!data->iop.cfg.coherent_walk)
+		kvm_flush_dcache_to_poc(iopt->pgd, pgd_size);
+
+	/* Free all tables but the pgd */
+	__arm_lpae_free_pgtable(data, data->start_level, iopt->pgd, true);
+	pkvm_unmap_donated_memory(iopt->pgd, pgd_size);
+	return 0;
+}