From patchwork Thu Jan 31 13:52:11 2019
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 10790639
From: Zhen Lei <thunder.leizhen@huawei.com>
To: John Garry, Robin Murphy, Will Deacon, Joerg Roedel,
	linux-arm-kernel, iommu, linux-kernel
Subject: [PATCH RFC 1/1] iommu: set the default iommu-dma mode as non-strict
Date: Thu, 31 Jan 2019 21:52:11 +0800
Message-ID: <20190131135211.6732-1-thunder.leizhen@huawei.com>
Cc: Yunsheng Lin, Zhen Lei

Currently, many peripherals are much faster than they used to be. For
example, the top speed of an older network card was 10 Gb/s, whereas now it
is more than 25 Gb/s. But with IOMMU page-table mapping enabled, it is hard
to reach the top speed in strict mode, because of the frequent map and
unmap operations. In order to keep up with the times, I think it is better
to make non-strict the default.

Below is our iperf performance data with a 25 Gb/s network card:
	strict mode:     18-20 Gb/s
	non-strict mode: 23.5  Gb/s

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 4 ++--
 drivers/iommu/iommu.c                           | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

-- 
1.8.3

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index b799bcf..667221f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1779,13 +1779,13 @@
 	iommu.strict=	[ARM64] Configure TLB invalidation behaviour
 			Format: { "0" | "1" }
-			0 - Lazy mode.
+			0 - Lazy mode (default).
 			  Request that DMA unmap operations use deferred
 			  invalidation of hardware TLBs, for increased
 			  throughput at the cost of reduced device isolation.
 			  Will fall back to strict mode if not supported by
 			  the relevant IOMMU driver.
-			1 - Strict mode (default).
+			1 - Strict mode.
 			  DMA unmap operations invalidate IOMMU hardware TLBs
 			  synchronously.

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 3ed4db3..10e0b49 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -43,7 +43,7 @@
 #else
 static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;
 #endif
-static bool iommu_dma_strict __read_mostly = true;
+static bool iommu_dma_strict __read_mostly;
 
 struct iommu_callback_data {
 	const struct iommu_ops *ops;
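
For reference, a minimal sketch of how iommu_dma_strict is consumed
elsewhere in drivers/iommu/iommu.c around this kernel version. The function
and attribute names below are taken from the upstream tree of that era and
the exact context may differ slightly from the tree this patch is based on;
treat it as an illustration, not part of the patch:

	/*
	 * Boot-time parsing of the iommu.strict= parameter; kstrtobool()
	 * overwrites the compile-time default changed by this patch.
	 */
	static int __init iommu_dma_setup(char *str)
	{
		return kstrtobool(str, &iommu_dma_strict);
	}
	early_param("iommu.strict", iommu_dma_setup);

	/*
	 * When a group's default DMA domain is allocated, a false
	 * iommu_dma_strict asks the IOMMU driver to use a flush queue,
	 * i.e. deferred (lazy) TLB invalidation on unmap.
	 */
	if (dom && !iommu_dma_strict) {
		int attr = 1;

		iommu_domain_set_attr(dom, DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE,
				      &attr);
	}

With this patch applied, iommu_dma_strict is zero-initialized (false), so
the flush-queue path above becomes the default unless iommu.strict=1 is
passed on the kernel command line.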