From patchwork Mon Jun 22 17:28:39 2020
X-Patchwork-Submitter: John Garry
X-Patchwork-Id: 11618539
From: John Garry <john.garry@huawei.com>
To: ,
Subject: [PATCH 3/4] iommu/arm-smmu-v3: Always issue a CMD_SYNC per batch
Date: Tue, 23 Jun 2020 01:28:39 +0800
Message-ID: <1592846920-45338-4-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1592846920-45338-1-git-send-email-john.garry@huawei.com>
References: <1592846920-45338-1-git-send-email-john.garry@huawei.com>
Cc: trivial@kernel.org, maz@kernel.org, joro@8bytes.org, John Garry,
 linuxarm@huawei.com, linux-kernel@vger.kernel.org,
 iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org

To ensure that a CPU does not send more than the permitted number of
commands to the cmdq, ensure that each batch includes a CMD_SYNC. When
issuing a CMD_SYNC, we always wait for its batch of commands to be
consumed; as such, we guarantee that no CPU will issue more than its
permitted number at once.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/arm-smmu-v3.c | 86 +++++++++++++++++--------------
 1 file changed, 39 insertions(+), 47 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4e9677b066f1..45a39ccaf455 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1373,11 +1373,15 @@ static void arm_smmu_cmdq_write_entries(struct arm_smmu_cmdq *cmdq, u64 *cmds,
  * - Command insertion is totally ordered, so if two CPUs each race to
  *   insert their own list of commands then all of the commands from one
  *   CPU will appear before any of the commands from the other CPU.
+ *
+ * - A CMD_SYNC is always inserted, ensuring that any CPU does not issue
+ *   more than the permitted number of commands at once.
  */
 static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
-				       u64 *cmds, int n, bool sync)
+				       u64 *cmds, int n)
 {
 	u64 cmd_sync[CMDQ_ENT_DWORDS];
+	const int sync = 1;
 	u32 prod;
 	unsigned long flags;
 	bool owner;
@@ -1419,19 +1423,17 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 	 * Dependency ordering from the cmpxchg() loop above.
 	 */
 	arm_smmu_cmdq_write_entries(cmdq, cmds, llq.prod, n);
-	if (sync) {
-		prod = queue_inc_prod_n(&llq, n);
-		arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
-		queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
-
-		/*
-		 * In order to determine completion of our CMD_SYNC, we must
-		 * ensure that the queue can't wrap twice without us noticing.
-		 * We achieve that by taking the cmdq lock as shared before
-		 * marking our slot as valid.
-		 */
-		arm_smmu_cmdq_shared_lock(cmdq);
-	}
+	prod = queue_inc_prod_n(&llq, n);
+	arm_smmu_cmdq_build_sync_cmd(cmd_sync, smmu, prod);
+	queue_write(Q_ENT(&cmdq->q, prod), cmd_sync, CMDQ_ENT_DWORDS);
+
+	/*
+	 * In order to determine completion of our CMD_SYNC, we must ensure
+	 * that the queue can't wrap twice without us noticing. We achieve that
+	 * by taking the cmdq lock as shared before marking our slot as valid.
+	 */
+	arm_smmu_cmdq_shared_lock(cmdq);
 
 	/* 3. Mark our slots as valid, ensuring commands are visible first */
 	dma_wmb();
@@ -1468,26 +1470,21 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 		atomic_set_release(&cmdq->owner_prod, prod);
 	}
 
-	/* 5. If we are inserting a CMD_SYNC, we must wait for it to complete */
-	if (sync) {
-		llq.prod = queue_inc_prod_n(&llq, n);
-		ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
-		if (ret) {
-			dev_err_ratelimited(smmu->dev,
-					    "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
-					    llq.prod,
-					    readl_relaxed(cmdq->q.prod_reg),
-					    readl_relaxed(cmdq->q.cons_reg));
-		}
+	/* 5. Since we always insert a CMD_SYNC, we must wait for it to complete */
+	llq.prod = queue_inc_prod_n(&llq, n);
+	ret = arm_smmu_cmdq_poll_until_sync(smmu, &llq);
+	if (ret)
+		dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout at 0x%08x [hwprod 0x%08x, hwcons 0x%08x]\n",
+				    llq.prod, readl_relaxed(cmdq->q.prod_reg),
+				    readl_relaxed(cmdq->q.cons_reg));
 
-		/*
-		 * Try to unlock the cmdq lock. This will fail if we're the last
-		 * reader, in which case we can safely update cmdq->q.llq.cons
-		 */
-		if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
-			WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
-			arm_smmu_cmdq_shared_unlock(cmdq);
-		}
+	/*
+	 * Try to unlock the cmdq lock. This will fail if we're the last reader,
+	 * in which case we can safely update cmdq->q.llq.cons
+	 */
+	if (!arm_smmu_cmdq_shared_tryunlock(cmdq)) {
+		WRITE_ONCE(cmdq->q.llq.cons, llq.cons);
+		arm_smmu_cmdq_shared_unlock(cmdq);
 	}
 
 	local_irq_restore(flags);
@@ -1505,12 +1502,7 @@ static int arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 		return -EINVAL;
 	}
 
-	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, false);
-}
-
-static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
-{
-	return arm_smmu_cmdq_issue_cmdlist(smmu, NULL, 0, true);
+	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1);
 }
 
 static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
@@ -1521,7 +1513,7 @@ static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
 	struct arm_smmu_ll_queue *llq = &q->q.llq;
 
 	if (cmds->num == llq->max_cmd_per_batch) {
-		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
+		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num);
 		cmds->num = 0;
 	}
 	arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
@@ -1531,7 +1523,7 @@ static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
 static int arm_smmu_cmdq_batch_submit(struct arm_smmu_device *smmu,
 				      struct arm_smmu_cmdq_batch *cmds)
 {
-	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, true);
+	return arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num);
 }
 
 /* Context descriptor manipulation functions */
@@ -1803,7 +1795,6 @@ static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
 	};
 
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-	arm_smmu_cmdq_issue_sync(smmu);
 }
 
 static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
@@ -2197,17 +2188,21 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
 
 static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
 {
-	int i;
+	int i, ret = 0;
 	struct arm_smmu_cmdq_ent cmd;
 
 	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
 
 	for (i = 0; i < master->num_sids; i++) {
+		int rc;
+
 		cmd.atc.sid = master->sids[i];
-		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
+		rc = arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
+		if (rc)
+			ret = rc;
 	}
 
-	return arm_smmu_cmdq_issue_sync(master->smmu);
+	return ret;
 }
 
 static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
@@ -2280,7 +2275,6 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 * careful, 007.
 	 */
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-	arm_smmu_cmdq_issue_sync(smmu);
 	arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
@@ -3667,7 +3661,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 	/* Invalidate any cached configuration */
 	cmd.opcode = CMDQ_OP_CFGI_ALL;
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-	arm_smmu_cmdq_issue_sync(smmu);
 
 	/* Invalidate any stale TLB entries */
 	if (smmu->features & ARM_SMMU_FEAT_HYP) {
@@ -3677,7 +3670,6 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
 
 	cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
 	arm_smmu_cmdq_issue_cmd(smmu, &cmd);
-	arm_smmu_cmdq_issue_sync(smmu);
 
 	/* Event queue */
 	writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
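Not part of the patch: a minimal, self-contained C sketch of the batching rule the change enforces. The names below (struct batch, batch_add, queue_submit, MAX_CMD_PER_BATCH) are invented for illustration and only mirror the roles of arm_smmu_cmdq_batch_add() and arm_smmu_cmdq_issue_cmdlist(); the point is that every submission ends with a sync which the submitter waits on, so a CPU never has more than one batch of commands in flight.

#include <stdio.h>

#define MAX_CMD_PER_BATCH 4	/* stand-in for llq->max_cmd_per_batch */

struct batch {
	unsigned long long cmds[MAX_CMD_PER_BATCH];
	int num;
};

/*
 * Pretend to place the accumulated commands plus a trailing sync on the
 * queue and wait for that sync to be consumed. Returns 0 on success.
 */
static int queue_submit(const struct batch *b)
{
	printf("submit %d cmd(s) + 1 sync, wait for consumption\n", b->num);
	return 0;
}

/* Add one command; flush the batch first if it is already full. */
static int batch_add(struct batch *b, unsigned long long cmd)
{
	int ret = 0;

	if (b->num == MAX_CMD_PER_BATCH) {
		ret = queue_submit(b);
		b->num = 0;
	}
	b->cmds[b->num++] = cmd;
	return ret;
}

int main(void)
{
	struct batch b = { .num = 0 };
	unsigned long long i;

	for (i = 0; i < 10; i++)
		batch_add(&b, i);

	/* Flush the partial batch; this submission too ends with a sync. */
	if (b.num)
		queue_submit(&b);

	return 0;
}

Because arm_smmu_cmdq_issue_cmdlist() now always appends the CMD_SYNC itself, callers such as arm_smmu_cmdq_batch_submit() no longer need a separate arm_smmu_cmdq_issue_sync() step, which is why that helper is removed above.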