Message ID | 20200628111251.19108-3-song.bao.hua@hisilicon.com (mailing list archive)
---|---
State | New, archived
From: Barry Song <song.bao.hua@hisilicon.com>
To: hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com, will@kernel.org, ganapatrao.kulkarni@cavium.com, catalin.marinas@arm.com
Cc: Barry Song <song.bao.hua@hisilicon.com>, Steve Capper <steve.capper@arm.com>, linuxarm@huawei.com, linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org, Andrew Morton <akpm@linux-foundation.org>, Mike Rapoport <rppt@linux.ibm.com>, Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Subject: [PATCH v3 2/2] arm64: mm: reserve per-numa CMA to localize coherent dma buffers
Date: Sun, 28 Jun 2020 23:12:51 +1200
Message-ID: <20200628111251.19108-3-song.bao.hua@hisilicon.com>
In-Reply-To: <20200628111251.19108-1-song.bao.hua@hisilicon.com>
References: <20200628111251.19108-1-song.bao.hua@hisilicon.com>
Series | make dma_alloc_coherent NUMA-aware by per-NUMA CMA
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1e93cfc7c47a..a01eeb829372 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -429,6 +429,8 @@ void __init bootmem_init(void)
 	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
 #endif
 
+	dma_pernuma_cma_reserve();
+
 	/*
 	 * Sparsemem tries to allocate bootmem in memory_present(), so must be
 	 * done after the fixed reservations.
Right now, the SMMU uses dma_alloc_coherent() to allocate memory for its command queues and page tables. Typically, on an ARM64 server, there is a single default CMA located on node 0, which can be far away from node 2, node 3, etc. With this patch, the SMMU will get memory from its local NUMA node for command queues and page tables, which shrinks dma_unmap latency considerably. Meanwhile, when iommu.passthrough is on, device drivers that call dma_alloc_coherent() will also get local memory and avoid cross-node traffic.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Will Deacon <will@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
-v3:
 * move dma_pernuma_cma_reserve() after hugetlb_cma_reserve() to
   reuse the comment before hugetlb_cma_reserve(), with respect to
   Robin's comment

 arch/arm64/mm/init.c | 2 ++
 1 file changed, 2 insertions(+)