From patchwork Wed Oct 27 10:44:20 2021
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 12587003
From: Eric Auger
To: eric.auger.pro@gmail.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, joro@8bytes.org,
    will@kernel.org, robin.murphy@arm.com, jean-philippe@linaro.org,
    zhukeqian1@huawei.com
Cc: alex.williamson@redhat.com, jacob.jun.pan@linux.intel.com,
    yi.l.liu@intel.com, kevin.tian@intel.com, ashok.raj@intel.com,
    maz@kernel.org, peter.maydell@linaro.org, vivek.gautam@arm.com,
    shameerali.kolothum.thodi@huawei.com, wangxingang5@huawei.com,
    jiangkunkun@huawei.com, yuzenghui@huawei.com, nicoleotsuka@gmail.com,
    chenxiang66@hisilicon.com, sumitg@nvidia.com, nicolinc@nvidia.com,
    vdumpa@nvidia.com, zhangfei.gao@linaro.org, zhangfei.gao@gmail.com,
    lushenming@huawei.com, vsethi@nvidia.com
Subject: [RFC v16 1/9] iommu: Introduce attach/detach_pasid_table API
Date: Wed, 27 Oct 2021 12:44:20 +0200
Message-Id: <20211027104428.1059740-2-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

In the virtualization use case, when a guest is assigned a PCI host
device protected by a virtual IOMMU on the guest, the physical IOMMU
must be programmed to be consistent with the guest mappings. If the
physical IOMMU supports two translation stages, it makes sense to
program the guest mappings onto the first stage/level (ARM/Intel
terminology) while the host owns stage/level 2. In that case guest
configuration settings must be trapped and passed to the physical
IOMMU driver.

This patch adds a new API to the IOMMU subsystem that allows setting
and unsetting the PASID table information. A generic
iommu_pasid_table_config struct is introduced in a new iommu.h uapi
header. This is going to be used by the VFIO user API.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Liu, Yi L
Signed-off-by: Ashok Raj
Signed-off-by: Jacob Pan
Signed-off-by: Eric Auger

---

v13 -> v14:
- export iommu_attach_pasid_table
- add dummy iommu_uapi_attach_pasid_table
- swap base_ptr and format in iommu_pasid_table_config

v12 -> v13:
- Fix config check

v11 -> v12:
- add argsz, name the union
---
 drivers/iommu/iommu.c      | 69 ++++++++++++++++++++++++++++++++++++++
 include/linux/iommu.h      | 27 +++++++++++++++
 include/uapi/linux/iommu.h | 54 +++++++++++++++++++++++++++++
 3 files changed, 150 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 3303d707bab4..6033c263c6e6 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2236,6 +2236,75 @@ int iommu_uapi_sva_unbind_gpasid(struct iommu_domain *domain, struct device *dev
 }
 EXPORT_SYMBOL_GPL(iommu_uapi_sva_unbind_gpasid);
 
+int iommu_attach_pasid_table(struct iommu_domain *domain,
+			     struct iommu_pasid_table_config *cfg)
+{
+	if (unlikely(!domain->ops->attach_pasid_table))
+		return -ENODEV;
+
+	return domain->ops->attach_pasid_table(domain, cfg);
+}
+EXPORT_SYMBOL_GPL(iommu_attach_pasid_table);
+
+int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
+				  void __user *uinfo)
+{
+	struct iommu_pasid_table_config pasid_table_data = { 0 };
+	u32 minsz;
+
+	if (unlikely(!domain->ops->attach_pasid_table))
+		return -ENODEV;
+
+	/*
+	 * No new spaces can be added before the variable sized union, the
+	 * minimum size is the offset to the union.
+	 */
+	minsz = offsetof(struct iommu_pasid_table_config, vendor_data);
+
+	/* Copy minsz from user to get flags and argsz */
+	if (copy_from_user(&pasid_table_data, uinfo, minsz))
+		return -EFAULT;
+
+	/* Fields before the variable size union are mandatory */
+	if (pasid_table_data.argsz < minsz)
+		return -EINVAL;
+
+	/* The SMMUv3 format requires vendor data beyond minsz */
+	if (pasid_table_data.version != PASID_TABLE_CFG_VERSION_1)
+		return -EINVAL;
+	if (pasid_table_data.format == IOMMU_PASID_FORMAT_SMMUV3 &&
+	    pasid_table_data.argsz <
+		offsetofend(struct iommu_pasid_table_config, vendor_data.smmuv3))
+		return -EINVAL;
+
+	/*
+	 * User might be using a newer UAPI header which has a larger data
+	 * size, we shall support the existing flags within the current
+	 * size. Copy the remaining user data _after_ minsz but not more
+	 * than the current kernel supported size.
+	 */
+	if (copy_from_user((void *)&pasid_table_data + minsz, uinfo + minsz,
+			   min_t(u32, pasid_table_data.argsz, sizeof(pasid_table_data)) - minsz))
+		return -EFAULT;
+
+	/* Now the argsz is validated, check the content */
+	if (pasid_table_data.config < IOMMU_PASID_CONFIG_TRANSLATE ||
+	    pasid_table_data.config > IOMMU_PASID_CONFIG_ABORT)
+		return -EINVAL;
+
+	return domain->ops->attach_pasid_table(domain, &pasid_table_data);
+}
+EXPORT_SYMBOL_GPL(iommu_uapi_attach_pasid_table);
+
+void iommu_detach_pasid_table(struct iommu_domain *domain)
+{
+	if (unlikely(!domain->ops->detach_pasid_table))
+		return;
+
+	domain->ops->detach_pasid_table(domain);
+}
+EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
+
 static void __iommu_detach_device(struct iommu_domain *domain,
 				  struct device *dev)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index d2f3435e7d17..e34a1b1c805b 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -232,6 +232,8 @@ struct iommu_iotlb_gather {
  * @cache_invalidate: invalidate translation caches
  * @sva_bind_gpasid: bind guest pasid and mm
  * @sva_unbind_gpasid: unbind guest pasid and mm
+ * @attach_pasid_table: attach a pasid table
+ * @detach_pasid_table: detach the pasid table
  * @def_domain_type: device default domain type, return value:
  *		- IOMMU_DOMAIN_IDENTITY: must use an identity domain
  *		- IOMMU_DOMAIN_DMA: must use a dma domain
@@ -297,6 +299,9 @@ struct iommu_ops {
 			      void *drvdata);
 	void (*sva_unbind)(struct iommu_sva *handle);
 	u32 (*sva_get_pasid)(struct iommu_sva *handle);
+	int (*attach_pasid_table)(struct iommu_domain *domain,
+				  struct iommu_pasid_table_config *cfg);
+	void (*detach_pasid_table)(struct iommu_domain *domain);
 
 	int (*page_response)(struct device *dev,
 			     struct iommu_fault_event *evt,
@@ -430,6 +435,11 @@ extern int iommu_uapi_sva_unbind_gpasid(struct iommu_domain *domain,
 					struct device *dev, void __user *udata);
 extern int iommu_sva_unbind_gpasid(struct iommu_domain *domain,
 				   struct device *dev, ioasid_t pasid);
+extern int iommu_attach_pasid_table(struct iommu_domain *domain,
+				    struct iommu_pasid_table_config *cfg);
+extern int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
+					 void __user *udata);
+extern void iommu_detach_pasid_table(struct iommu_domain *domain);
 extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
@@ -1035,6 +1045,23 @@ iommu_aux_get_pasid(struct iommu_domain *domain, struct device *dev)
 	return -ENODEV;
 }
 
+static inline
+int iommu_attach_pasid_table(struct iommu_domain *domain,
+			     struct iommu_pasid_table_config *cfg)
+{
+	return -ENODEV;
+}
+
+static inline
+int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
+				  void __user *uinfo)
+{
+	return -ENODEV;
+}
+
+static inline
+void iommu_detach_pasid_table(struct iommu_domain *domain) {}
+
 static inline struct iommu_sva *
 iommu_sva_bind_device(struct device *dev, struct mm_struct *mm, void *drvdata)
 {
diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
index 59178fc229ca..8c079a78dfec 100644
--- a/include/uapi/linux/iommu.h
+++ b/include/uapi/linux/iommu.h
@@ -339,4 +339,58 @@ struct iommu_gpasid_bind_data {
 	} vendor;
 };
 
+/**
+ * struct iommu_pasid_smmuv3 - ARM SMMUv3 Stream Table Entry stage 1 related
+ *     information
+ * @version: API version of this structure
+ * @s1fmt: STE s1fmt (format of the CD table: single CD, linear table
+ *         or 2-level table)
+ * @s1dss: STE s1dss (specifies the behavior when @pasid_bits != 0
+ *         and no PASID is passed along with the incoming transaction)
+ * @padding: reserved for future use (should be zero)
+ *
+ * The PASID table is referred to as the Context Descriptor (CD) table on ARM
+ * SMMUv3. Please refer to the ARM SMMU 3.x spec (ARM IHI 0070A) for full
+ * details.
+ */
+struct iommu_pasid_smmuv3 {
+#define PASID_TABLE_SMMUV3_CFG_VERSION_1 1
+	__u32	version;
+	__u8	s1fmt;
+	__u8	s1dss;
+	__u8	padding[2];
+};
+
+/**
+ * struct iommu_pasid_table_config - PASID table data used to bind guest PASID
+ *     table to the host IOMMU
+ * @argsz: User filled size of this data
+ * @version: API version to prepare for future extensions
+ * @base_ptr: guest physical address of the PASID table
+ * @format: format of the PASID table
+ * @pasid_bits: number of PASID bits used in the PASID table
+ * @config: indicates whether the guest translation stage must
+ *          be translated, bypassed or aborted.
+ * @padding: reserved for future use (should be zero)
+ * @vendor_data.smmuv3: table information when @format is
+ *                      %IOMMU_PASID_FORMAT_SMMUV3
+ */
+struct iommu_pasid_table_config {
+	__u32	argsz;
+#define PASID_TABLE_CFG_VERSION_1 1
+	__u32	version;
+	__u64	base_ptr;
+#define IOMMU_PASID_FORMAT_SMMUV3	1
+	__u32	format;
+	__u8	pasid_bits;
+#define IOMMU_PASID_CONFIG_TRANSLATE	1
+#define IOMMU_PASID_CONFIG_BYPASS	2
+#define IOMMU_PASID_CONFIG_ABORT	3
+	__u8	config;
+	__u8	padding[2];
+	union {
+		struct iommu_pasid_smmuv3 smmuv3;
+	} vendor_data;
+};
+
 #endif /* _UAPI_IOMMU_H */
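To illustrate the new UAPI, here is a minimal userspace-side sketch of how
a VMM could fill iommu_pasid_table_config before handing it to the VFIO
user API mentioned in the commit message. Only the struct layout and the
constants come from the patch above; the transport (the eventual VFIO
ioctl) is an assumption, not part of this series.

/* Sketch only: how userspace might build the config for an SMMUv3 guest */
#include <string.h>
#include <linux/iommu.h>

static void build_pasid_table_cfg(struct iommu_pasid_table_config *cfg,
				  __u64 cd_table_gpa)
{
	memset(cfg, 0, sizeof(*cfg));
	cfg->argsz = sizeof(*cfg);	/* size-checked against minsz by the kernel */
	cfg->version = PASID_TABLE_CFG_VERSION_1;
	cfg->base_ptr = cd_table_gpa;	/* guest PA of the CD table */
	cfg->format = IOMMU_PASID_FORMAT_SMMUV3;
	cfg->pasid_bits = 0;		/* single CD only, see patch 5/9 */
	cfg->config = IOMMU_PASID_CONFIG_TRANSLATE;
	cfg->vendor_data.smmuv3.version = PASID_TABLE_SMMUV3_CFG_VERSION_1;
}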
From patchwork Wed Oct 27 10:44:21 2021
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 12587005
From: Eric Auger
Subject: [RFC v16 2/9] iommu: Introduce iommu_get_nesting
Date: Wed, 27 Oct 2021 12:44:21 +0200
Message-Id: <20211027104428.1059740-3-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>

Add iommu_get_nesting(), which allows retrieving whether a domain uses
nested stages.

Signed-off-by: Eric Auger
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  8 ++++++++
 drivers/iommu/arm/arm-smmu/arm-smmu.c       |  8 ++++++++
 drivers/iommu/intel/iommu.c                 | 14 ++++++++++++++
 drivers/iommu/iommu.c                       | 10 ++++++++++
 include/linux/iommu.h                       |  8 ++++++++
 5 files changed, 48 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index a388e318f86e..61477853a536 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2731,6 +2731,13 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain)
 	return ret;
 }
 
+static bool arm_smmu_get_nesting(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	return smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED;
+}
+
 static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
 {
 	return iommu_fwspec_add_ids(dev, args->args, 1);
@@ -2845,6 +2852,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.release_device = arm_smmu_release_device,
 	.device_group = arm_smmu_device_group,
 	.enable_nesting = arm_smmu_enable_nesting,
+	.get_nesting = arm_smmu_get_nesting,
 	.of_xlate = arm_smmu_of_xlate,
 	.get_resv_regions = arm_smmu_get_resv_regions,
 	.put_resv_regions = generic_iommu_put_resv_regions,
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 4bc75c4ce402..167cf1d51279 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -1522,6 +1522,13 @@ static int arm_smmu_enable_nesting(struct iommu_domain *domain)
 	return ret;
 }
 
+static bool arm_smmu_get_nesting(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	return smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED;
+}
+
 static int arm_smmu_set_pgtable_quirks(struct iommu_domain *domain,
 				       unsigned long quirks)
 {
@@ -1595,6 +1602,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.probe_finalize = arm_smmu_probe_finalize,
 	.device_group = arm_smmu_device_group,
 	.enable_nesting = arm_smmu_enable_nesting,
+	.get_nesting = arm_smmu_get_nesting,
 	.set_pgtable_quirks = arm_smmu_set_pgtable_quirks,
 	.of_xlate = arm_smmu_of_xlate,
 	.get_resv_regions = arm_smmu_get_resv_regions,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index d75f59ae28e6..e42767bd47f9 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5524,6 +5524,19 @@ intel_iommu_enable_nesting(struct iommu_domain *domain)
 	return ret;
 }
 
+static bool intel_iommu_get_nesting(struct iommu_domain *domain)
+{
+	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
+	unsigned long flags;
+	bool nesting;
+
+	spin_lock_irqsave(&device_domain_lock, flags);
+	nesting = dmar_domain->flags & DOMAIN_FLAG_NESTING_MODE &&
+		  !(dmar_domain->flags & DOMAIN_FLAG_USE_FIRST_LEVEL);
+	spin_unlock_irqrestore(&device_domain_lock, flags);
+	return nesting;
+}
+
 /*
  * Check that the device does not live on an external facing PCI port that is
  * marked as untrusted. Such devices should not be able to apply quirks and
@@ -5561,6 +5574,7 @@ const struct iommu_ops intel_iommu_ops = {
 	.domain_alloc = intel_iommu_domain_alloc,
 	.domain_free = intel_iommu_domain_free,
 	.enable_nesting = intel_iommu_enable_nesting,
+	.get_nesting = intel_iommu_get_nesting,
 	.attach_dev = intel_iommu_attach_device,
 	.detach_dev = intel_iommu_detach_device,
 	.aux_attach_dev = intel_iommu_aux_attach_device,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 6033c263c6e6..3e639c4e8015 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2843,6 +2843,16 @@ int iommu_enable_nesting(struct iommu_domain *domain)
 }
 EXPORT_SYMBOL_GPL(iommu_enable_nesting);
 
+bool iommu_get_nesting(struct iommu_domain *domain)
+{
+	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
+		return false;
+	if (!domain->ops->get_nesting)
+		return false;
+	return domain->ops->get_nesting(domain);
+}
+EXPORT_SYMBOL_GPL(iommu_get_nesting);
+
 int iommu_set_pgtable_quirks(struct iommu_domain *domain,
 			     unsigned long quirk)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index e34a1b1c805b..846e19151f40 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -213,6 +213,7 @@ struct iommu_iotlb_gather {
  *            group and attached to the groups domain
  * @device_group: find iommu group for a particular device
  * @enable_nesting: Enable nesting
+ * @get_nesting: get whether the domain uses nested stages
  * @set_pgtable_quirks: Set io page table quirks (IO_PGTABLE_QUIRK_*)
  * @get_resv_regions: Request list of reserved regions for a device
  * @put_resv_regions: Free list of reserved regions for a device
@@ -271,6 +272,7 @@ struct iommu_ops {
 	void (*probe_finalize)(struct device *dev);
 	struct iommu_group *(*device_group)(struct device *dev);
 	int (*enable_nesting)(struct iommu_domain *domain);
+	bool (*get_nesting)(struct iommu_domain *domain);
 	int (*set_pgtable_quirks)(struct iommu_domain *domain,
 				  unsigned long quirks);
 
@@ -690,6 +692,7 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev,
 					void *drvdata);
 void iommu_sva_unbind_device(struct iommu_sva *handle);
 u32 iommu_sva_get_pasid(struct iommu_sva *handle);
+bool iommu_get_nesting(struct iommu_domain *domain);
 
 #else /* CONFIG_IOMMU_API */
 
@@ -1108,6 +1111,11 @@ static inline struct iommu_fwspec *dev_iommu_fwspec_get(struct device *dev)
 {
 	return NULL;
 }
+
+static inline bool iommu_get_nesting(struct iommu_domain *domain)
+{
+	return false;
+}
 #endif /* CONFIG_IOMMU_API */
 
 /**
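A possible consumer, sketched under the assumption that a caller (for
instance the VFIO layer this series targets) already holds an UNMANAGED
domain; only iommu_get_nesting() itself comes from this patch:

static int check_nested_domain(struct iommu_domain *domain)
{
	/* false for managed domains, or if the driver lacks get_nesting */
	if (!iommu_get_nesting(domain))
		return -EINVAL;

	/*
	 * Nested mode confirmed: attaching a guest PASID table and
	 * forwarding guest TLB invalidations become meaningful here.
	 */
	return 0;
}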
From patchwork Wed Oct 27 10:44:22 2021
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 12587007
From: Eric Auger
Subject: [RFC v16 3/9] iommu/smmuv3: Allow s1 and s2 configs to coexist
Date: Wed, 27 Oct 2021 12:44:22 +0200
Message-Id: <20211027104428.1059740-4-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>

In true nested mode, both s1_cfg and s2_cfg will coexist. Let's remove
the union and add a "set" field in each config structure telling
whether the config is set and needs to be applied when writing the STE.
In legacy nested mode, only the second stage is used. In true nested
mode, both stages are used and the S1 config is "set" when the guest
passes its PASID table.

No functional change intended.

Signed-off-by: Eric Auger

---

v13 -> v14:
- slight reword of the commit message

v12 -> v13:
- does not dynamically allocate s1_cfg and s2_cfg anymore. Add the
  set field
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 43 +++++++++++++--------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  8 ++--
 2 files changed, 31 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 61477853a536..b8384a834552 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1254,8 +1254,8 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	u64 val = le64_to_cpu(dst[0]);
 	bool ste_live = false;
 	struct arm_smmu_device *smmu = NULL;
-	struct arm_smmu_s1_cfg *s1_cfg = NULL;
-	struct arm_smmu_s2_cfg *s2_cfg = NULL;
+	struct arm_smmu_s1_cfg *s1_cfg;
+	struct arm_smmu_s2_cfg *s2_cfg;
 	struct arm_smmu_domain *smmu_domain = NULL;
 	struct arm_smmu_cmdq_ent prefetch_cmd = {
 		.opcode = CMDQ_OP_PREFETCH_CFG,
@@ -1270,13 +1270,24 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	}
 
 	if (smmu_domain) {
+		s1_cfg = &smmu_domain->s1_cfg;
+		s2_cfg = &smmu_domain->s2_cfg;
+
 		switch (smmu_domain->stage) {
 		case ARM_SMMU_DOMAIN_S1:
-			s1_cfg = &smmu_domain->s1_cfg;
+			s1_cfg->set = true;
+			s2_cfg->set = false;
 			break;
 		case ARM_SMMU_DOMAIN_S2:
+			s1_cfg->set = false;
+			s2_cfg->set = true;
+			break;
 		case ARM_SMMU_DOMAIN_NESTED:
-			s2_cfg = &smmu_domain->s2_cfg;
+			/*
+			 * Actual usage of stage 1 depends on nested mode:
+			 * legacy (2nd stage only) or true nested mode
+			 */
+			s2_cfg->set = true;
 			break;
 		default:
 			break;
@@ -1303,7 +1314,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	val = STRTAB_STE_0_V;
 
 	/* Bypass/fault */
-	if (!smmu_domain || !(s1_cfg || s2_cfg)) {
+	if (!smmu_domain || !(s1_cfg->set || s2_cfg->set)) {
 		if (!smmu_domain && disable_bypass)
 			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
 		else
@@ -1322,7 +1333,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		return;
 	}
 
-	if (s1_cfg) {
+	if (s1_cfg->set) {
 		u64 strw = smmu->features & ARM_SMMU_FEAT_E2H ?
 			STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1;
 
@@ -1344,7 +1355,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 			FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
 	}
 
-	if (s2_cfg) {
+	if (s2_cfg->set) {
 		BUG_ON(ste_live);
 		dst[2] = cpu_to_le64(
 			 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
@@ -2036,23 +2047,23 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s1_cfg *s1_cfg = &smmu_domain->s1_cfg;
+	struct arm_smmu_s2_cfg *s2_cfg = &smmu_domain->s2_cfg;
 
 	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
 	/* Free the CD and ASID, if we allocated them */
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-
+	if (s1_cfg->set) {
 		/* Prevent SVA from touching the CD while we're freeing it */
 		mutex_lock(&arm_smmu_asid_lock);
-		if (cfg->cdcfg.cdtab)
+		if (s1_cfg->cdcfg.cdtab)
 			arm_smmu_free_cd_tables(smmu_domain);
-		arm_smmu_free_asid(&cfg->cd);
+		arm_smmu_free_asid(&s1_cfg->cd);
 		mutex_unlock(&arm_smmu_asid_lock);
-	} else {
-		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-		if (cfg->vmid)
-			arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+	}
+	if (s2_cfg->set) {
+		if (s2_cfg->vmid)
+			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
 	}
 
 	kfree(smmu_domain);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 4cb136f07914..db1a84d24e30 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -597,12 +597,14 @@ struct arm_smmu_s1_cfg {
 	struct arm_smmu_ctx_desc cd;
 	u8 s1fmt;
 	u8 s1cdmax;
+	bool set;
 };
 
 struct arm_smmu_s2_cfg {
 	u16 vmid;
 	u64 vttbr;
 	u64 vtcr;
+	bool set;
 };
 
 struct arm_smmu_strtab_cfg {
@@ -716,10 +718,8 @@ struct arm_smmu_domain {
 	atomic_t nr_ats_masters;
 
 	enum arm_smmu_domain_stage stage;
-	union {
-		struct arm_smmu_s1_cfg s1_cfg;
-		struct arm_smmu_s2_cfg s2_cfg;
-	};
+	struct arm_smmu_s1_cfg s1_cfg;
+	struct arm_smmu_s2_cfg s2_cfg;
 
 	struct iommu_domain domain;
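The new convention can be restated as a small helper (illustration only,
not code from the patch): which of the two configs end up "set" for each
domain stage, with the nested stage 1 flipping only once the guest
attaches its PASID table (patch 5/9).

static void stage_to_cfg_set(enum arm_smmu_domain_stage stage,
			     bool guest_s1_attached,
			     bool *s1_set, bool *s2_set)
{
	switch (stage) {
	case ARM_SMMU_DOMAIN_S1:	/* stage 1 only */
		*s1_set = true;
		*s2_set = false;
		break;
	case ARM_SMMU_DOMAIN_S2:	/* stage 2 only */
		*s1_set = false;
		*s2_set = true;
		break;
	case ARM_SMMU_DOMAIN_NESTED:	/* stage 2 now, stage 1 later */
		*s1_set = guest_s1_attached;
		*s2_set = true;
		break;
	default:			/* bypass */
		*s1_set = false;
		*s2_set = false;
	}
}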
From patchwork Wed Oct 27 10:44:23 2021
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 12587009
From: Eric Auger
Subject: [RFC v16 4/9] iommu/smmuv3: Get prepared for nested stage support
Date: Wed, 27 Oct 2021 12:44:23 +0200
Message-Id: <20211027104428.1059740-5-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>

When nested stage translation is set up, both s1_cfg and s2_cfg are
set. We introduce a new smmu_domain abort field that is set when the
guest passes its stage 1 configuration. If no guest stage 1 config has
been attached, the stage 1 config is ignored when writing the STE.

arm_smmu_write_strtab_ent() is modified to write both stage fields in
the STE and to deal with the abort field. In nested mode, only stage 2
is "finalized" as the host does not own/configure the stage 1 context
descriptor; the guest does.

Signed-off-by: Eric Auger

---

v13 -> v14:
- removed BUG_ON(ste_live && !nested) as this should never happen
- restored the old comment as there is always an abort in between
  S2 -> S1 + S2 and S1 + S2 -> S2
- remove sparse warning

v10 -> v11:
- Fix an issue reported by Shameer when switching from with vSMMU
  to without vSMMU. Although the spec does not seem to mention it,
  it seems necessary to reset the 2 high 64b when switching from
  S1+S2 cfg to S1 only. Especially dst[3] needs to be reset (S2TTB).
  On some implementations, if the S2TTB is not reset, this causes
  a C_BAD_STE error.
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 55 ++++++++++++++++++---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  2 +
 2 files changed, 49 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index b8384a834552..5e0917e1226b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1252,7 +1252,8 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	 * 3. Update Config, sync
 	 */
 	u64 val = le64_to_cpu(dst[0]);
-	bool ste_live = false;
+	bool s1_live = false, s2_live = false, ste_live;
+	bool abort, translate = false;
 	struct arm_smmu_device *smmu = NULL;
 	struct arm_smmu_s1_cfg *s1_cfg;
 	struct arm_smmu_s2_cfg *s2_cfg;
@@ -1292,6 +1293,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		default:
 			break;
 		}
+		translate = s1_cfg->set || s2_cfg->set;
 	}
 
 	if (val & STRTAB_STE_0_V) {
@@ -1299,23 +1301,36 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		case STRTAB_STE_0_CFG_BYPASS:
 			break;
 		case STRTAB_STE_0_CFG_S1_TRANS:
+			s1_live = true;
+			break;
 		case STRTAB_STE_0_CFG_S2_TRANS:
-			ste_live = true;
+			s2_live = true;
+			break;
+		case STRTAB_STE_0_CFG_NESTED:
+			s1_live = true;
+			s2_live = true;
 			break;
 		case STRTAB_STE_0_CFG_ABORT:
-			BUG_ON(!disable_bypass);
 			break;
 		default:
 			BUG(); /* STE corruption */
 		}
 	}
 
+	ste_live = s1_live || s2_live;
+
 	/* Nuke the existing STE_0 value, as we're going to rewrite it */
 	val = STRTAB_STE_0_V;
 
 	/* Bypass/fault */
-	if (!smmu_domain || !(s1_cfg->set || s2_cfg->set)) {
-		if (!smmu_domain && disable_bypass)
+
+	if (!smmu_domain)
+		abort = disable_bypass;
+	else
+		abort = smmu_domain->abort;
+
+	if (abort || !translate) {
+		if (abort)
 			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
 		else
 			val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
@@ -1333,11 +1348,17 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 		return;
 	}
 
+	if (ste_live) {
+		/* First invalidate the live STE */
+		dst[0] = cpu_to_le64(STRTAB_STE_0_CFG_ABORT);
+		arm_smmu_sync_ste_for_sid(smmu, sid);
+	}
+
 	if (s1_cfg->set) {
 		u64 strw = smmu->features & ARM_SMMU_FEAT_E2H ?
 			STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1;
 
-		BUG_ON(ste_live);
+		BUG_ON(s1_live);
 		dst[1] = cpu_to_le64(
 			 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
 			 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
@@ -1356,7 +1377,14 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	}
 
 	if (s2_cfg->set) {
-		BUG_ON(ste_live);
+		u64 vttbr = s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK;
+
+		if (s2_live) {
+			u64 s2ttb = le64_to_cpu(dst[3]) & STRTAB_STE_3_S2TTB_MASK;
+
+			BUG_ON(s2ttb != vttbr);
+		}
+
 		dst[2] = cpu_to_le64(
 			 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
 			 FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) |
@@ -1366,9 +1394,12 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 			 STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
 			 STRTAB_STE_2_S2R);
 
-		dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK);
+		dst[3] = cpu_to_le64(vttbr);
 
 		val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
+	} else {
+		dst[2] = 0;
+		dst[3] = 0;
 	}
 
 	if (master->ats_enabled)
@@ -2173,6 +2204,14 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain,
 		return 0;
 	}
 
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED &&
+	    (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1) ||
+	     !(smmu->features & ARM_SMMU_FEAT_TRANS_S2))) {
+		dev_info(smmu_domain->smmu->dev,
+			 "does not implement two stages\n");
+		return -EINVAL;
+	}
+
 	/* Restrict the stage to what we can actually support */
 	if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1))
 		smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index db1a84d24e30..05959df01618 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -207,6 +207,7 @@
 #define STRTAB_STE_0_CFG_BYPASS		4
 #define STRTAB_STE_0_CFG_S1_TRANS	5
 #define STRTAB_STE_0_CFG_S2_TRANS	6
+#define STRTAB_STE_0_CFG_NESTED		7
 
 #define STRTAB_STE_0_S1FMT		GENMASK_ULL(5, 4)
 #define STRTAB_STE_0_S1FMT_LINEAR	0
@@ -720,6 +721,7 @@ struct arm_smmu_domain {
 	enum arm_smmu_domain_stage stage;
 	struct arm_smmu_s1_cfg s1_cfg;
 	struct arm_smmu_s2_cfg s2_cfg;
+	bool abort;
 
 	struct iommu_domain domain;
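The decision tree that arm_smmu_write_strtab_ent() now implements can be
summarized by the helper below. This is a restatement for illustration
only; the patch itself open-codes the logic, and STRTAB_STE_0_CFG_ABORT
comes from the existing driver header.

static u32 ste_top_level_cfg(struct arm_smmu_domain *d, bool disable_bypass)
{
	/* no domain: honour the module-level disable_bypass policy */
	bool abort = d ? d->abort : disable_bypass;
	bool translate = d && (d->s1_cfg.set || d->s2_cfg.set);

	if (abort)
		return STRTAB_STE_0_CFG_ABORT;
	if (!translate)
		return STRTAB_STE_0_CFG_BYPASS;
	if (d->s1_cfg.set && d->s2_cfg.set)
		return STRTAB_STE_0_CFG_NESTED;	/* true nested: both stages */
	return d->s1_cfg.set ? STRTAB_STE_0_CFG_S1_TRANS
			     : STRTAB_STE_0_CFG_S2_TRANS;
}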
From patchwork Wed Oct 27 10:44:24 2021
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 12587011
From: Eric Auger
Subject: [RFC v16 5/9] iommu/smmuv3: Implement attach/detach_pasid_table
Date: Wed, 27 Oct 2021 12:44:24 +0200
Message-Id: <20211027104428.1059740-6-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>

On attach_pasid_table() we program the STE stage 1 related information
set by the guest into the actual physical STEs. At a minimum we need to
program the context descriptor GPA and compute whether stage 1 is
translated, bypassed or aborted.

On detach, the stage 1 config is unset and the abort flag is unset.

Signed-off-by: Eric Auger

---

v14 -> v15:
- add a comment before arm_smmu_get_cd_ptr to warn the developer this
  function must not be used in case of nested (Keqian)

v13 -> v14:
- on PASID table detach, reset the abort flag (Keqian)

v7 -> v8:
- remove smmu->features check, now done on domain finalize

v6 -> v7:
- check versions and comment the fact we don't need to take into
  account s1dss and s1fmt

v3 -> v4:
- adapt to changes in iommu_pasid_table_config
- different programming convention at s1_cfg/s2_cfg/ste.abort

v2 -> v3:
- callback now is named set_pasid_table and struct fields are laid
  out differently.

v1 -> v2:
- invalidate the STE before changing them
- hold init_mutex
- handle new fields
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 93 +++++++++++++++++++++
 1 file changed, 93 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 5e0917e1226b..bb2681581283 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1004,6 +1004,10 @@ static void arm_smmu_write_cd_l1_desc(__le64 *dst,
 	WRITE_ONCE(*dst, cpu_to_le64(val));
 }
 
+/*
+ * Must not be used in case of nested mode where the CD table is owned
+ * by the guest
+ */
 static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
 				   u32 ssid)
 {
@@ -2809,6 +2813,93 @@ static void arm_smmu_get_resv_regions(struct device *dev,
 	iommu_dma_get_resv_regions(dev, head);
 }
 
+static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
+				       struct iommu_pasid_table_config *cfg)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_master *master;
+	struct arm_smmu_device *smmu;
+	unsigned long flags;
+	int ret = -EINVAL;
+
+	if (cfg->format != IOMMU_PASID_FORMAT_SMMUV3)
+		return -EINVAL;
+
+	if (cfg->version != PASID_TABLE_CFG_VERSION_1 ||
+	    cfg->vendor_data.smmuv3.version != PASID_TABLE_SMMUV3_CFG_VERSION_1)
+		return -EINVAL;
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	smmu = smmu_domain->smmu;
+
+	if (!smmu)
+		goto out;
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+		goto out;
+
+	switch (cfg->config) {
+	case IOMMU_PASID_CONFIG_ABORT:
+		smmu_domain->s1_cfg.set = false;
+		smmu_domain->abort = true;
+		break;
+	case IOMMU_PASID_CONFIG_BYPASS:
+		smmu_domain->s1_cfg.set = false;
+		smmu_domain->abort = false;
+		break;
+	case IOMMU_PASID_CONFIG_TRANSLATE:
+		/* we do not support S1 <-> S1 transitions */
+		if (smmu_domain->s1_cfg.set)
+			goto out;
+
+		/*
+		 * we currently support a single CD so s1fmt and s1dss
+		 * fields are also ignored
+		 */
+		if (cfg->pasid_bits)
+			goto out;
+
+		smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
+		smmu_domain->s1_cfg.set = true;
+		smmu_domain->abort = false;
+		break;
+	default:
+		goto out;
+	}
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head)
+		arm_smmu_install_ste_for_dev(master);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+	ret = 0;
+out:
+	mutex_unlock(&smmu_domain->init_mutex);
+	return ret;
+}
+
+static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_master *master;
+	unsigned long flags;
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+		goto unlock;
+
+	smmu_domain->s1_cfg.set = false;
+	smmu_domain->abort = false;
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head)
+		arm_smmu_install_ste_for_dev(master);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+unlock:
+	mutex_unlock(&smmu_domain->init_mutex);
+}
+
 static bool arm_smmu_dev_has_feature(struct device *dev,
 				     enum iommu_dev_features feat)
 {
@@ -2906,6 +2997,8 @@ static struct iommu_ops arm_smmu_ops = {
 	.of_xlate = arm_smmu_of_xlate,
 	.get_resv_regions = arm_smmu_get_resv_regions,
 	.put_resv_regions = generic_iommu_put_resv_regions,
+	.attach_pasid_table = arm_smmu_attach_pasid_table,
+	.detach_pasid_table = arm_smmu_detach_pasid_table,
 	.dev_has_feat = arm_smmu_dev_has_feature,
 	.dev_feat_enabled = arm_smmu_dev_feature_enabled,
 	.dev_enable_feat = arm_smmu_dev_enable_feature,
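Putting patches 1/9 and 5/9 together, an in-kernel caller (such as the
VFIO layer this series targets) could attach and later detach a guest CD
table roughly as follows. This is a sketch under the assumption that the
domain was already finalized in ARM_SMMU_DOMAIN_NESTED stage; all struct
fields and constants come from the patches above.

static int attach_guest_cd_table(struct iommu_domain *domain, u64 cd_table_gpa)
{
	struct iommu_pasid_table_config cfg = {
		.argsz = sizeof(cfg),
		.version = PASID_TABLE_CFG_VERSION_1,
		.base_ptr = cd_table_gpa,	/* ends up in the STE S1ContextPtr */
		.format = IOMMU_PASID_FORMAT_SMMUV3,
		.config = IOMMU_PASID_CONFIG_TRANSLATE,
		.vendor_data.smmuv3.version = PASID_TABLE_SMMUV3_CFG_VERSION_1,
	};

	return iommu_attach_pasid_table(domain, &cfg);
}

static void teardown_guest_cd_table(struct iommu_domain *domain)
{
	/* resets s1_cfg.set and abort, then rewrites the STEs */
	iommu_detach_pasid_table(domain);
}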
From patchwork Wed Oct 27 10:44:25 2021
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 12587013
From: Eric Auger
Subject: [RFC v16 6/9] iommu/smmuv3: Allow stage 1 invalidation with
 unmanaged ASIDs
Date: Wed, 27 Oct 2021 12:44:25 +0200
Message-Id: <20211027104428.1059740-7-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>

With nested stage support, we will soon need to invalidate S1 contexts
and ranges tagged with an unmanaged ASID, the latter being managed by
the guest. So let's introduce two helpers that allow invalidation with
externally managed ASIDs.

Signed-off-by: Eric Auger

---

v15 -> v16:
- Use arm_smmu_cmdq_issue_cmd_with_sync()

v14 -> v15:
- Always send CMDQ_OP_TLBI_NH_VA and do not test
  smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H as the guest does
  not run in hyp mode atm (Zenghui).

v13 -> v14
- Actually send the NH_ASID command (reported by Xingang Wang)
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 41 ++++++++++++++++-----
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bb2681581283..d5e722105624 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1871,9 +1871,9 @@ int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, int ssid,
 }
 
 /* IO_PGTABLE API */
-static void arm_smmu_tlb_inv_context(void *cookie)
+static void __arm_smmu_tlb_inv_context(struct arm_smmu_domain *smmu_domain,
+				       int ext_asid)
 {
-	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd;
 
@@ -1884,7 +1884,12 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 * insertion to guarantee those are observed before the TLBI. Do be
 	 * careful, 007.
 	 */
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+	if (ext_asid >= 0) { /* guest stage 1 invalidation */
+		cmd.opcode = CMDQ_OP_TLBI_NH_ASID;
+		cmd.tlbi.asid = ext_asid;
+		cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+		arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg.cd.asid);
 	} else {
 		cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
@@ -1894,6 +1899,13 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+	struct arm_smmu_domain *smmu_domain = cookie;
+
+	__arm_smmu_tlb_inv_context(smmu_domain, -1);
+}
+
 static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
 				     unsigned long iova, size_t size,
 				     size_t granule,
@@ -1955,9 +1967,10 @@ static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
 	arm_smmu_cmdq_batch_submit(smmu, &cmds);
 }
 
-static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
-					  size_t granule, bool leaf,
-					  struct arm_smmu_domain *smmu_domain)
+static void
+arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
+			      size_t granule, bool leaf, int ext_asid,
+			      struct arm_smmu_domain *smmu_domain)
 {
 	struct arm_smmu_cmdq_ent cmd = {
 		.tlbi = {
@@ -1965,7 +1978,16 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
 		},
 	};
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+	if (ext_asid >= 0) {  /* guest stage 1 invalidation */
+		/*
+		 * At the moment the guest only uses NS-EL1, to be
+		 * revisited when nested virt gets supported with E2H
+		 * exposed.
+		 */
+		cmd.opcode = CMDQ_OP_TLBI_NH_VA;
+		cmd.tlbi.asid = ext_asid;
+		cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		cmd.opcode = smmu_domain->smmu->features & ARM_SMMU_FEAT_E2H ?
 			     CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
 		cmd.tlbi.asid = smmu_domain->s1_cfg.cd.asid;
@@ -1973,6 +1995,7 @@ static void arm_smmu_tlb_inv_range_domain(unsigned long iova, size_t size,
 		cmd.opcode = CMDQ_OP_TLBI_S2_IPA;
 		cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
 	}
+
 	__arm_smmu_tlb_inv_range(&cmd, iova, size, granule, smmu_domain);
 
 	/*
@@ -2011,7 +2034,7 @@ static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 static void arm_smmu_tlb_inv_walk(unsigned long iova, size_t size,
 				  size_t granule, void *cookie)
 {
-	arm_smmu_tlb_inv_range_domain(iova, size, granule, false, cookie);
+	arm_smmu_tlb_inv_range_domain(iova, size, granule, false, -1, cookie);
 }
 
 static const struct iommu_flush_ops arm_smmu_flush_ops = {
@@ -2548,7 +2571,7 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain,
 
 	arm_smmu_tlb_inv_range_domain(gather->start,
 				      gather->end - gather->start + 1,
-				      gather->pgsize, true, smmu_domain);
+				      gather->pgsize, true, -1, smmu_domain);
 }
 
 static phys_addr_t
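How the new ext_asid parameter is meant to be used (illustration only;
the real caller arrives with cache_invalidate in the next patch):
ext_asid >= 0 selects the guest-ASID-tagged commands scoped by the
domain's stage 2 VMID, while -1 preserves today's host-side behaviour.

/* All stage 1 entries of a guest-managed ASID within the domain's VMID */
static void inv_guest_asid(struct arm_smmu_domain *smmu_domain, u32 guest_asid)
{
	__arm_smmu_tlb_inv_context(smmu_domain, guest_asid);
}

/* One guest-mapped range, e.g. in response to a trapped guest TLBI */
static void inv_guest_range(struct arm_smmu_domain *smmu_domain, u32 guest_asid,
			    unsigned long iova, size_t size, size_t granule)
{
	arm_smmu_tlb_inv_range_domain(iova, size, granule, true /* leaf */,
				      guest_asid, smmu_domain);
}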
jacob.jun.pan@linux.intel.com, yi.l.liu@intel.com, kevin.tian@intel.com, ashok.raj@intel.com, maz@kernel.org, peter.maydell@linaro.org, vivek.gautam@arm.com, shameerali.kolothum.thodi@huawei.com, wangxingang5@huawei.com, jiangkunkun@huawei.com, yuzenghui@huawei.com, nicoleotsuka@gmail.com, chenxiang66@hisilicon.com, sumitg@nvidia.com, nicolinc@nvidia.com, vdumpa@nvidia.com, zhangfei.gao@linaro.org, zhangfei.gao@gmail.com, lushenming@huawei.com, vsethi@nvidia.com Subject: [RFC v16 7/9] iommu/smmuv3: Implement cache_invalidate Date: Wed, 27 Oct 2021 12:44:26 +0200 Message-Id: <20211027104428.1059740-8-eric.auger@redhat.com> In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com> References: <20211027104428.1059740-1-eric.auger@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org Implement domain-selective, pasid selective and page-selective IOTLB invalidations. Signed-off-by: Eric Auger --- v15 -> v16: - make sure the range is set (RIL guest) and check the granule size is supported by the physical IOMMU - use cmd_with_sync v14 -> v15: - remove the redundant arm_smmu_cmdq_issue_sync(smmu) in IOMMU_INV_GRANU_ADDR case (Zenghui) - if RIL is not supported by the host, make sure the granule_size that is passed by the userspace is supported or fix it (Chenxiang) v13 -> v14: - Add domain invalidation - do global inval when asid is not provided with addr granularity v7 -> v8: - ASID based invalidation using iommu_inv_pasid_info - check ARCHID/PASID flags in addr based invalidation - use __arm_smmu_tlb_inv_context and __arm_smmu_tlb_inv_range_nosync v6 -> v7 - check the uapi version v3 -> v4: - adapt to changes in the uapi - add support for leaf parameter - do not use arm_smmu_tlb_inv_range_nosync or arm_smmu_tlb_inv_context anymore v2 -> v3: - replace __arm_smmu_tlb_sync by arm_smmu_cmdq_issue_sync v1 -> v2: - properly pass the asid --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 82 +++++++++++++++++++++ 1 file changed, 82 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index d5e722105624..e84a7c3e8730 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2923,6 +2923,87 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain) mutex_unlock(&smmu_domain->init_mutex); } +static int +arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev, + struct iommu_cache_invalidate_info *inv_info) +{ + struct arm_smmu_cmdq_ent cmd = {.opcode = CMDQ_OP_TLBI_NSNH_ALL}; + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu = smmu_domain->smmu; + + if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED) + return -EINVAL; + + if (!smmu) + return -EINVAL; + + if (inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1) + return -EINVAL; + + if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID || + inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) { + return -ENOENT; + } + + if (!(inv_info->cache & IOMMU_CACHE_INV_TYPE_IOTLB)) + return -EINVAL; + + /* IOTLB invalidation */ + + switch (inv_info->granularity) { + case IOMMU_INV_GRANU_PASID: + { + struct iommu_inv_pasid_info *info = + &inv_info->granu.pasid_info; + + if (info->flags & IOMMU_INV_ADDR_FLAGS_PASID) + return -ENOENT; + if (!(info->flags & IOMMU_INV_PASID_FLAGS_ARCHID)) + return -EINVAL; + + __arm_smmu_tlb_inv_context(smmu_domain, info->archid); + return 0; + } + case 
+	case IOMMU_INV_GRANU_ADDR:
+	{
+		struct iommu_inv_addr_info *info = &inv_info->granu.addr_info;
+		uint64_t granule_size = info->granule_size;
+		uint64_t size = info->nb_granules * info->granule_size;
+		bool leaf = info->flags & IOMMU_INV_ADDR_FLAGS_LEAF;
+		int tg;
+
+		if (info->flags & IOMMU_INV_ADDR_FLAGS_PASID)
+			return -ENOENT;
+
+		if (!(info->flags & IOMMU_INV_ADDR_FLAGS_ARCHID))
+			break;
+
+		if (!granule_size)
+			return -EINVAL;
+
+		/* granule_size must be a single, supported page size */
+		tg = __ffs(granule_size);
+		if (granule_size & ~(1ULL << tg) ||
+		    !(granule_size & smmu->pgsize_bitmap))
+			return -EINVAL;
+
+		/* range invalidation must be used */
+		if (!size)
+			return -EINVAL;
+
+		arm_smmu_tlb_inv_range_domain(info->addr, size,
+					      granule_size, leaf,
+					      info->archid, smmu_domain);
+		return 0;
+	}
+	case IOMMU_INV_GRANU_DOMAIN:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Global S1 invalidation */
+	cmd.tlbi.vmid = smmu_domain->s2_cfg.vmid;
+	arm_smmu_cmdq_issue_cmd_with_sync(smmu, &cmd);
+	return 0;
+}
+
 static bool arm_smmu_dev_has_feature(struct device *dev,
 				     enum iommu_dev_features feat)
 {
@@ -3022,6 +3103,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.put_resv_regions	= generic_iommu_put_resv_regions,
 	.attach_pasid_table	= arm_smmu_attach_pasid_table,
 	.detach_pasid_table	= arm_smmu_detach_pasid_table,
+	.cache_invalidate	= arm_smmu_cache_invalidate,
 	.dev_has_feat		= arm_smmu_dev_has_feature,
 	.dev_feat_enabled	= arm_smmu_dev_feature_enabled,
 	.dev_enable_feat	= arm_smmu_dev_enable_feature,
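[Editor's note] To make the contract above concrete, here is a hedged
sketch of how a caller (for instance the VFIO layer forwarding a guest
"TLBI ASID" command, or a test harness) might describe an ASID-selective
invalidation. The struct and flag names come from the uapi header
introduced earlier in this series; the surrounding function is
illustrative only:

	#include <linux/string.h>
	#include <linux/iommu.h>	/* pulls in the uapi struct */

	/*
	 * Illustrative only: build the descriptor that
	 * arm_smmu_cache_invalidate() above turns into
	 * __arm_smmu_tlb_inv_context(smmu_domain, archid).
	 */
	static void fill_asid_inv(struct iommu_cache_invalidate_info *inv_info,
				  __u32 guest_asid)
	{
		memset(inv_info, 0, sizeof(*inv_info));
		inv_info->argsz = sizeof(*inv_info);
		inv_info->version = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1;
		inv_info->cache = IOMMU_CACHE_INV_TYPE_IOTLB;
		inv_info->granularity = IOMMU_INV_GRANU_PASID;
		inv_info->granu.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;
		inv_info->granu.pasid_info.archid = guest_asid;
	}

A page-selective request would instead set IOMMU_INV_GRANU_ADDR and fill
granu.addr_info (addr, granule_size, nb_granules, archid), which the
driver maps onto arm_smmu_tlb_inv_range_domain() as shown above.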
From patchwork Wed Oct 27 10:44:27 2021
From: Eric Auger <eric.auger@redhat.com>
Subject: [RFC v16 8/9] iommu/smmuv3: report additional recoverable faults
Date: Wed, 27 Oct 2021 12:44:27 +0200
Message-Id: <20211027104428.1059740-9-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>
List-ID: kvm@vger.kernel.org

Up to now we have only reported translation faults. Now that the guest
can induce some configuration faults, let's report them too. Add
propagation for BAD_SUBSTREAMID, CD_FETCH, BAD_CD and WALK_EABT events.
We also fix the transcoding of some existing translation faults.

Signed-off-by: Eric Auger

---

v14 -> v15:
- adapt to the removal of IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID in
  [PATCH v13 10/10] iommu/arm-smmu-v3: Add stall support for platform
  devices
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 40 +++++++++++++++++++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  4 +++
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index e84a7c3e8730..ddfc069c10ae 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1488,6 +1488,7 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
 	u32 perm = 0;
 	struct arm_smmu_master *master;
 	bool ssid_valid = evt[0] & EVTQ_0_SSV;
+	u8 type = FIELD_GET(EVTQ_0_ID, evt[0]);
 	u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]);
 	struct iommu_fault_event fault_evt = { };
 	struct iommu_fault *flt = &fault_evt.fault;
@@ -1540,8 +1541,6 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
 	} else {
 		flt->type = IOMMU_FAULT_DMA_UNRECOV;
 		flt->event = (struct iommu_fault_unrecoverable) {
-			.reason = reason,
-			.flags = IOMMU_FAULT_UNRECOV_ADDR_VALID,
 			.perm = perm,
 			.addr = FIELD_GET(EVTQ_2_ADDR, evt[2]),
 		};
@@ -1550,6 +1549,43 @@ static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt)
 			flt->event.flags |= IOMMU_FAULT_UNRECOV_PASID_VALID;
 			flt->event.pasid = FIELD_GET(EVTQ_0_SSID, evt[0]);
 		}
+
+		switch (type) {
+		case EVT_ID_TRANSLATION_FAULT:
+			flt->event.reason = IOMMU_FAULT_REASON_PTE_FETCH;
+			flt->event.flags |= IOMMU_FAULT_UNRECOV_ADDR_VALID;
+			break;
+		case EVT_ID_ADDR_SIZE_FAULT:
+			flt->event.reason = IOMMU_FAULT_REASON_OOR_ADDRESS;
+			flt->event.flags |= IOMMU_FAULT_UNRECOV_ADDR_VALID;
+			break;
+		case EVT_ID_ACCESS_FAULT:
+			flt->event.reason = IOMMU_FAULT_REASON_ACCESS;
+			flt->event.flags |= IOMMU_FAULT_UNRECOV_ADDR_VALID;
+			break;
+		case EVT_ID_PERMISSION_FAULT:
+			flt->event.reason = IOMMU_FAULT_REASON_PERMISSION;
+			flt->event.flags |= IOMMU_FAULT_UNRECOV_ADDR_VALID;
+			break;
+		case EVT_ID_BAD_SUBSTREAMID:
+			flt->event.reason = IOMMU_FAULT_REASON_PASID_INVALID;
+			break;
+		case EVT_ID_CD_FETCH:
+			flt->event.reason = IOMMU_FAULT_REASON_PASID_FETCH;
+			flt->event.flags |= IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID;
+			break;
+		case EVT_ID_BAD_CD:
+			flt->event.reason = IOMMU_FAULT_REASON_BAD_PASID_ENTRY;
+			break;
+		case EVT_ID_WALK_EABT:
+			flt->event.reason = IOMMU_FAULT_REASON_WALK_EABT;
+			flt->event.flags |= IOMMU_FAULT_UNRECOV_ADDR_VALID |
+					    IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID;
+			break;
+		default:
+			/* TODO: report other unrecoverable faults. */
+			return -EFAULT;
+		}
 	}
 
 	mutex_lock(&smmu->streams_mutex);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index 05959df01618..b914570ee5ba 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -379,6 +379,10 @@
 
 #define EVTQ_0_ID			GENMASK_ULL(7, 0)
 
+#define EVT_ID_BAD_SUBSTREAMID		0x08
+#define EVT_ID_CD_FETCH			0x09
+#define EVT_ID_BAD_CD			0x0a
+#define EVT_ID_WALK_EABT		0x0b
 #define EVT_ID_TRANSLATION_FAULT	0x10
 #define EVT_ID_ADDR_SIZE_FAULT		0x11
 #define EVT_ID_ACCESS_FAULT		0x12
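[Editor's note] The event-to-reason mapping above only matters to whoever
consumes the reported faults. A hedged sketch of such a consumer, using
the existing iommu_register_device_fault_handler() API; the handler name
and the policy inside it are illustrative, not part of this series (in
the nested-SMMU use case this role is played by VFIO, which forwards the
fault to userspace so the vIOMMU can inject an event into the guest):

	#include <linux/iommu.h>

	/* Illustrative consumer of the faults propagated above. */
	static int my_fault_handler(struct iommu_fault *fault, void *data)
	{
		if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
			return -EOPNOTSUPP;

		switch (fault->event.reason) {
		case IOMMU_FAULT_REASON_PASID_FETCH:
		case IOMMU_FAULT_REASON_BAD_PASID_ENTRY:
			/* Guest-induced configuration fault: CD fetch/content. */
			break;
		case IOMMU_FAULT_REASON_WALK_EABT:
			/* External abort on a guest-owned stage-1 walk. */
			break;
		default:
			break;
		}

		/* fault->event.addr is only valid if ADDR_VALID is set. */
		if (fault->event.flags & IOMMU_FAULT_UNRECOV_ADDR_VALID)
			pr_debug("faulting address: %llx\n", fault->event.addr);

		return 0;
	}

	/* Registration (error handling elided):
	 * iommu_register_device_fault_handler(dev, my_fault_handler, dev);
	 */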
From patchwork Wed Oct 27 10:44:28 2021
From: Eric Auger <eric.auger@redhat.com>
Subject: [RFC v16 9/9] iommu/smmuv3: Disallow nested mode in presence of HW MSI regions
Date: Wed, 27 Oct 2021 12:44:28 +0200
Message-Id: <20211027104428.1059740-10-eric.auger@redhat.com>
In-Reply-To: <20211027104428.1059740-1-eric.auger@redhat.com>
List-ID: kvm@vger.kernel.org

Nested mode is currently not compatible with HW MSI reserved regions:
MSI transactions targeting those doorbells bypass the SMMU. The guest
would need to bypass those ranges as well, but it has no information
about them. Check that nested mode is not attempted in such a
configuration.

Signed-off-by: Eric Auger

---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 23 +++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index ddfc069c10ae..12e7d7920f27 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2488,6 +2488,23 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
 	arm_smmu_install_ste_for_dev(master);
 }
 
+static bool arm_smmu_has_hw_msi_resv_region(struct device *dev)
+{
+	struct iommu_resv_region *region;
+	bool has_msi_resv_region = false;
+	LIST_HEAD(resv_regions);
+
+	iommu_get_resv_regions(dev, &resv_regions);
+	list_for_each_entry(region, &resv_regions, list) {
+		if (region->type == IOMMU_RESV_MSI) {
+			has_msi_resv_region = true;
+			break;
+		}
+	}
+	iommu_put_resv_regions(dev, &resv_regions);
+	return has_msi_resv_region;
+}
+
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
 	int ret = 0;
@@ -2545,6 +2562,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		ret = -EINVAL;
 		goto out_unlock;
 	}
+	/* Nested mode is not compatible with MSI HW reserved regions */
+	if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED &&
+	    arm_smmu_has_hw_msi_resv_region(dev)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 
 	master->domain = smmu_domain;