Message ID | 20200603024231.61748-4-song.bao.hua@hisilicon.com (mailing list archive)
---|---
State | New, archived
From: Barry Song <song.bao.hua@hisilicon.com>
To: hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com, catalin.marinas@arm.com
Subject: [PATCH 3/3] arm64: mm: reserve per-numa CMA after numa_init
Date: Wed, 3 Jun 2020 14:42:31 +1200
Message-ID: <20200603024231.61748-4-song.bao.hua@hisilicon.com>
In-Reply-To: <20200603024231.61748-1-song.bao.hua@hisilicon.com>
References: <20200603024231.61748-1-song.bao.hua@hisilicon.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>, john.garry@huawei.com, linux-kernel@vger.kernel.org, linuxarm@huawei.com, iommu@lists.linux-foundation.org, prime.zeng@hisilicon.com, Jonathan.Cameron@huawei.com, Will Deacon <will@kernel.org>, linux-arm-kernel@lists.infradead.org
Series | support per-numa CMA for ARM server
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8f0e70ebb49d..204a534982b2 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -474,6 +474,8 @@ void __init bootmem_init(void)
 	arm64_numa_init();
 
+	dma_pernuma_cma_reserve();
+
 #ifdef CONFIG_ARM64_4K_PAGES
 	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
 #endif
Right now, the SMMU uses dma_alloc_coherent() to get memory for its queues and tables. Typically, on an ARM64 server, there is a single default CMA located on node 0, which can be far away from node 2, node 3, etc. With this patch, the SMMU will get memory from the local NUMA node for its command queues and page tables, which means dma_unmap latency will shrink considerably. Meanwhile, when iommu.passthrough is on, device drivers that call dma_alloc_coherent() will also get local memory and avoid traffic between NUMA nodes.

Cc: Will Deacon <will@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
---
 arch/arm64/mm/init.c | 2 ++
 1 file changed, 2 insertions(+)