From patchwork Fri Aug 21 13:54:21 2020
X-Patchwork-Submitter: John Garry <john.garry@huawei.com>
X-Patchwork-Id: 11729533
From: John Garry <john.garry@huawei.com>
To: ,
Subject: [PATCH v2 1/2] iommu/arm-smmu-v3: Calculate max commands per batch
Date: Fri, 21 Aug 2020 21:54:21 +0800
Message-ID: <1598018062-175608-2-git-send-email-john.garry@huawei.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1598018062-175608-1-git-send-email-john.garry@huawei.com>
References: <1598018062-175608-1-git-send-email-john.garry@huawei.com>
Cc: maz@kernel.org, joro@8bytes.org, John Garry <john.garry@huawei.com>,
 linux-kernel@vger.kernel.org, linuxarm@huawei.com,
 iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org

Calculate the batch size limit such that the CPUs in the system cannot
collectively issue enough commands to fill the command queue.

Signed-off-by: John Garry <john.garry@huawei.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 46 +++++++++++++++++++--
 1 file changed, 42 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 7196207be7ea..a705fa3e18ea 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -532,6 +532,7 @@ struct arm_smmu_ll_queue {
 		u8			__pad[SMP_CACHE_BYTES];
 	} ____cacheline_aligned_in_smp;
 	u32				max_n_shift;
+	u32				max_cmd_per_batch;
 };
 
 struct arm_smmu_queue {
@@ -1515,7 +1516,10 @@ static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
 				    struct arm_smmu_cmdq_batch *cmds,
 				    struct arm_smmu_cmdq_ent *cmd)
 {
-	if (cmds->num == CMDQ_BATCH_ENTRIES) {
+	struct arm_smmu_cmdq *q = &smmu->cmdq;
+	struct arm_smmu_ll_queue *llq = &q->q.llq;
+
+	if (cmds->num == llq->max_cmd_per_batch) {
 		arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
 		cmds->num = 0;
 	}
@@ -3177,6 +3181,41 @@ static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
 	return 0;
 }
 
+static int arm_smmu_init_cmd_queue(struct arm_smmu_device *smmu,
+				   struct arm_smmu_queue *q,
+				   unsigned long prod_off,
+				   unsigned long cons_off,
+				   size_t dwords)
+{
+	u32 cpus = num_possible_cpus(), entries_for_prod;
+	int ret;
+
+	ret = arm_smmu_init_one_queue(smmu, q, prod_off, cons_off, dwords,
+				      "cmdq");
+	if (ret)
+		return ret;
+
+	entries_for_prod = 1 << q->llq.max_n_shift;
+
+	/*
+	 * We need at least 2 commands in a batch (1 x CMD_SYNC and 1 x
+	 * whatever else).
+	 */
+	if (entries_for_prod < 2 * cpus) {
+		dev_err(smmu->dev, "command queue size too small, suggest reduce #CPUs\n");
+		return -ENXIO;
+	}
+
+	/*
+	 * When finding max_cmd_per_batch, deduct 1 entry per batch to take
+	 * account of a CMD_SYNC being issued also.
+	 */
+	q->llq.max_cmd_per_batch = min((entries_for_prod / cpus) - 1,
+				       (u32)CMDQ_BATCH_ENTRIES);
+
+	return 0;
+}
+
 static void arm_smmu_cmdq_free_bitmap(void *data)
 {
 	unsigned long *bitmap = data;
@@ -3210,9 +3249,8 @@ static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
 	int ret;
 
 	/* cmdq */
-	ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
-				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
-				      "cmdq");
+	ret = arm_smmu_init_cmd_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
+				      ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS);
 	if (ret)
 		return ret;
 
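
As a cross-check of the sizing arithmetic above, here is a minimal
standalone C sketch of the same calculation. This is illustrative only,
not kernel code: CMDQ_BATCH_ENTRIES is assumed to be 64 (its value when
BITS_PER_LONG is 64, as on arm64), and the queue shift and CPU count in
main() are made-up example inputs.

#include <stdio.h>

#define CMDQ_BATCH_ENTRIES 64	/* assumed: BITS_PER_LONG on arm64 */

static unsigned int max_cmd_per_batch(unsigned int max_n_shift,
				      unsigned int cpus)
{
	unsigned int entries_for_prod = 1u << max_n_shift;

	/* Every CPU needs room for at least one command plus a CMD_SYNC. */
	if (entries_for_prod < 2 * cpus)
		return 0;	/* corresponds to the -ENXIO case above */

	/* Deduct one entry per batch for the trailing CMD_SYNC. */
	unsigned int limit = entries_for_prod / cpus - 1;

	return limit < CMDQ_BATCH_ENTRIES ? limit : CMDQ_BATCH_ENTRIES;
}

int main(void)
{
	/* e.g. a 4096-entry command queue (max_n_shift = 12), 96 CPUs */
	printf("%u\n", max_cmd_per_batch(12, 96));	/* prints 41 */
	return 0;
}

With those example numbers, each of the 96 CPUs owns 4096 / 96 = 42
queue entries, one of which is reserved for the CMD_SYNC, capping a
batch at 41 commands; with fewer CPUs the CMDQ_BATCH_ENTRIES ceiling
dominates instead.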