From patchwork Mon Feb 22 15:53:37 2021
X-Patchwork-Submitter: Shameerali Kolothum Thodi
X-Patchwork-Id: 12098975
From: Shameer Kolothum
To: , ,
Subject: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM
Date: Mon, 22 Feb 2021 15:53:37 +0000
Message-ID: <20210222155338.26132-5-shameerali.kolothum.thodi@huawei.com>
X-Mailer: git-send-email 2.12.0.windows.1
In-Reply-To: <20210222155338.26132-1-shameerali.kolothum.thodi@huawei.com>
References: <20210222155338.26132-1-shameerali.kolothum.thodi@huawei.com>
Cc: jean-philippe@linaro.org, maz@kernel.org, linuxarm@openeuler.org,
    eric.auger@redhat.com, alex.williamson@redhat.com, prime.zeng@hisilicon.com,
    jonathan.cameron@huawei.com, zhangfei.gao@linaro.org

If the SMMU supports BTM and the device belongs to a NESTED domain with a
shared PASID table, we need to use the VMID allocated by KVM for the
stage-2 configuration. Hence, request a pinned VMID from KVM.

Signed-off-by: Shameer Kolothum
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++++++++++-
 1 file changed, 47 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 26bf7da1bcd0..04f83f7c8319 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include
@@ -2195,6 +2196,33 @@ static void arm_smmu_bitmap_free(unsigned long *map, int idx)
 	clear_bit(idx, map);
 }
 
+static int arm_smmu_pinned_vmid_get(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_master *master;
+
+	master = list_first_entry_or_null(&smmu_domain->devices,
+					  struct arm_smmu_master, domain_head);
+	if (!master)
+		return -EINVAL;
+
+	return kvm_pinned_vmid_get(master->dev);
+}
+
+static int arm_smmu_pinned_vmid_put(struct arm_smmu_domain *smmu_domain)
+{
+	struct arm_smmu_master *master;
+
+	master = list_first_entry_or_null(&smmu_domain->devices,
+					  struct arm_smmu_master, domain_head);
+	if (!master)
+		return -EINVAL;
+
+	if (smmu_domain->s2_cfg.vmid)
+		return kvm_pinned_vmid_put(master->dev);
+
+	return 0;
+}
+
 static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -2215,8 +2243,11 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 		mutex_unlock(&arm_smmu_asid_lock);
 	}
 	if (s2_cfg->set) {
-		if (s2_cfg->vmid)
-			arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
+		if (s2_cfg->vmid) {
+			if (!(smmu->features & ARM_SMMU_FEAT_BTM) &&
+			    smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+				arm_smmu_bitmap_free(smmu->vmid_map, s2_cfg->vmid);
+		}
 	}
 
 	kfree(smmu_domain);
@@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
 	    !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
 		goto out;
 
+	if (smmu->features & ARM_SMMU_FEAT_BTM) {
+		ret = arm_smmu_pinned_vmid_get(smmu_domain);
+		if (ret < 0)
+			goto out;
+
+		if (smmu_domain->s2_cfg.vmid)
+			arm_smmu_bitmap_free(smmu->vmid_map, smmu_domain->s2_cfg.vmid);
+
+		smmu_domain->s2_cfg.vmid = (u16)ret;
+	}
+
 	smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
 	smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
 	smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
@@ -3221,6 +3263,7 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
 static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
 	struct arm_smmu_master *master;
 	unsigned long flags;
 
@@ -3237,6 +3280,8 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
 		arm_smmu_install_ste_for_dev(master);
 	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
 
+	if (smmu->features & ARM_SMMU_FEAT_BTM)
+		arm_smmu_pinned_vmid_put(smmu_domain);
 unlock:
 	mutex_unlock(&smmu_domain->init_mutex);
 }
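The patch moves BTM/NESTED domains off the driver's private VMID bitmap and
onto KVM's pinned VMIDs; on the non-BTM path the driver still hands out
stage-2 VMIDs from that bitmap. A minimal userspace sketch of the
bitmap-allocator pattern follows — the function names, the span, and the
manual bit scan are illustrative stand-ins, since the kernel implements this
with find_first_zero_bit()/set_bit()/clear_bit():

```c
#include <limits.h>

#define WORD_BITS (sizeof(unsigned long) * CHAR_BIT)
#define VMID_SPAN 256UL /* illustrative; SMMUv3 VMIDs are 8 or 16 bits wide */

/* Claim the first clear bit in the map and return its index, or -1 when
 * every ID in the span is taken (the kernel returns -ENOSPC). */
static int vmid_bitmap_alloc(unsigned long *map, unsigned long span)
{
	for (unsigned long i = 0; i < span; i++) {
		unsigned long *w = &map[i / WORD_BITS];
		unsigned long mask = 1UL << (i % WORD_BITS);

		if (!(*w & mask)) {
			*w |= mask;	/* mark the ID as in use */
			return (int)i;
		}
	}
	return -1;
}

/* Release a previously allocated ID so it can be handed out again,
 * mirroring arm_smmu_bitmap_free(). */
static void vmid_bitmap_free(unsigned long *map, int idx)
{
	map[idx / WORD_BITS] &= ~(1UL << (idx % WORD_BITS));
}
```

The point of the hunk in arm_smmu_domain_free() is that this free must be
skipped for BTM/NESTED domains: there the VMID belongs to KVM's pinned-VMID
allocator, not to this bitmap, so returning it here would corrupt the map.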