From patchwork Mon Nov 16 10:43:08 2020
X-Patchwork-Submitter: Eric Auger
X-Patchwork-Id: 11908189
From: Eric Auger <eric.auger@redhat.com>
To: eric.auger.pro@gmail.com, eric.auger@redhat.com,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, will@kernel.org,
    joro@8bytes.org, maz@kernel.org, robin.murphy@arm.com
Cc: jean-philippe@linaro.org, zhangfei.gao@linaro.org, zhangfei.gao@gmail.com,
    vivek.gautam@arm.com, shameerali.kolothum.thodi@huawei.com,
    alex.williamson@redhat.com, jacob.jun.pan@linux.intel.com,
    yi.l.liu@intel.com, tn@semihalf.com, nicoleotsuka@gmail.com
Subject: [PATCH v12 07/15] iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs
Date: Mon, 16 Nov 2020 11:43:08 +0100
Message-Id: <20201116104316.31816-8-eric.auger@redhat.com>
In-Reply-To: <20201116104316.31816-1-eric.auger@redhat.com>
References: <20201116104316.31816-1-eric.auger@redhat.com>
With nested stage support, we will soon need to invalidate S1 contexts and
ranges tagged with an unmanaged ASID, the latter being owned by the guest.
So let's introduce two helpers that allow invalidation with an externally
managed ASID.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 35 +++++++++++++++++----
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 08ab0dd81049..73f7a56101dd 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1679,9 +1679,9 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
 }
 
 /* IO_PGTABLE API */
-static void arm_smmu_tlb_inv_context(void *cookie)
+static void __arm_smmu_tlb_inv_context(struct arm_smmu_domain *smmu_domain,
+				       int ext_asid)
 {
-	struct arm_smmu_domain *smmu_domain = cookie;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_cmdq_ent cmd;
 
@@ -1692,7 +1692,11 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 * insertion to guarantee those are observed before the TLBI. Do be
 	 * careful, 007.
 	 */
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+	if (ext_asid >= 0) { /* guest stage 1 invalidation */
+		cmd.opcode = CMDQ_OP_TLBI_NH_ASID;
+		cmd.tlbi.asid = ext_asid;
+		cmd.tlbi.vmid = smmu_domain->s2_cfg->vmid;
+	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg->cd.asid);
 	} else {
 		cmd.opcode = CMDQ_OP_TLBI_S12_VMALL;
@@ -1703,9 +1707,17 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
-static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+	struct arm_smmu_domain *smmu_domain = cookie;
+
+	__arm_smmu_tlb_inv_context(smmu_domain, -1);
+}
+
+static void __arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 				   size_t granule, bool leaf,
-				   struct arm_smmu_domain *smmu_domain)
+				   struct arm_smmu_domain *smmu_domain,
+				   int ext_asid)
 {
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
@@ -1720,7 +1732,11 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 	if (!size)
 		return;
 
-	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+	if (ext_asid >= 0) { /* guest stage 1 invalidation */
+		cmd.opcode = CMDQ_OP_TLBI_NH_VA;
+		cmd.tlbi.asid = ext_asid;
+		cmd.tlbi.vmid = smmu_domain->s2_cfg->vmid;
+	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		cmd.opcode = CMDQ_OP_TLBI_NH_VA;
 		cmd.tlbi.asid = smmu_domain->s1_cfg->cd.asid;
 	} else {
@@ -1780,6 +1796,13 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 	arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
 }
 
+static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
+				   size_t granule, bool leaf,
+				   struct arm_smmu_domain *smmu_domain)
+{
+	__arm_smmu_tlb_inv_range(iova, size, granule, leaf, smmu_domain, -1);
+}
+
 static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 					 unsigned long iova, size_t granule,
 					 void *cookie)
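
[Editorial note] As a minimal sketch of how the two new helpers could be
consumed, here is a hypothetical nested-stage invalidation path that forwards
a guest-owned ASID. The caller name and its parameters below are assumptions
for illustration only; the only symbols taken from this patch are
__arm_smmu_tlb_inv_context() and __arm_smmu_tlb_inv_range(), which are
presumably wired up to the actual invalidation path by a later patch in the
series.

/*
 * Illustrative only: everything except the two helpers introduced by this
 * patch is a hypothetical name, not part of the series.
 */
static void example_guest_s1_invalidate(struct arm_smmu_domain *smmu_domain,
					int guest_asid, unsigned long iova,
					size_t size, size_t granule, bool leaf)
{
	if (!size)
		/* No range given: drop all S1 TLB entries tagged with that ASID. */
		__arm_smmu_tlb_inv_context(smmu_domain, guest_asid);
	else
		/* Range-based invalidation, still tagged with the guest ASID. */
		__arm_smmu_tlb_inv_range(iova, size, granule, leaf,
					 smmu_domain, guest_asid);
}

Passing a negative ext_asid, as the unchanged arm_smmu_tlb_inv_context() and
arm_smmu_tlb_inv_range() wrappers do with -1, keeps the existing host-managed
behaviour.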