From patchwork Fri Jan 8 23:05:28 2016
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 7990861
From: Douglas Anderson
To: Russell King, Mauro Carvalho Chehab, Robin Murphy, Tomasz Figa,
    Marek Szyprowski
Subject: [PATCH v5 1/5] ARM: dma-mapping: Optimize allocation
Date: Fri, 8 Jan 2016 15:05:28 -0800
Message-Id: <1452294332-23415-2-git-send-email-dianders@chromium.org>
X-Mailer: git-send-email 2.6.0.rc2.230.g3dd15c0
In-Reply-To: <1452294332-23415-1-git-send-email-dianders@chromium.org>
References: <1452294332-23415-1-git-send-email-dianders@chromium.org>
Cc: laurent.pinchart+renesas@ideasonboard.com, Pawel Osciak,
    mike.looijmans@topic.nl, lorenx4@gmail.com, Dmitry Torokhov,
    will.deacon@arm.com, Douglas Anderson, linux-kernel@vger.kernel.org,
    carlo@caione.org, dan.j.williams@intel.com, akpm@linux-foundation.org,
    linux-arm-kernel@lists.infradead.org

__iommu_alloc_buffer() is expected to be called to allocate pretty sizeable
buffers.  In simple video tests I saw it trying to allocate 4,194,304 bytes.
The function tries to allocate large chunks in order to optimize IOMMU TLB
usage.

The current function is very, very slow.

One problem is the way it keeps trying and trying to allocate big chunks.
Imagine a very fragmented memory that has 4M free but no contiguous pages at
all.  Further imagine allocating 4M (1024 pages).  We'll do the following
memory allocations:
- For page 1:
  - Try to allocate order 10 (no retry)
  - Try to allocate order 9 (no retry)
  - ...
  - Try to allocate order 0 (with retry, but not needed)
- For page 2:
  - Try to allocate order 9 (no retry)
  - Try to allocate order 8 (no retry)
  - ...
  - Try to allocate order 0 (with retry, but not needed)
- ...
- ...

The total number of alloc() calls for this case is:
  sum(int(math.log(i, 2)) + 1 for i in range(1, 1025)) => 9228
(A quick sketch reproducing this figure follows the changelog below.)

The above is obviously the worst case, but given how slow alloc can be we
really want to avoid even somewhat bad cases.  I timed the old code with a
device under memory pressure and it wasn't hard to see it take more than
120 seconds to allocate 4 megs of memory!  (NOTE: testing was done on kernel
3.14, so mainline may behave differently.)

A second problem is that allocating big chunks under memory pressure when we
don't really need them is just not a great idea.  We can make do pretty well
with smaller chunks, so it's probably wise to leave the bigger chunks for
other users once memory pressure is on.

Let's adjust the allocation like this:
1. If a big chunk fails, stop trying so hard and bump down to lower order
   allocations.
2. Don't try useless orders.  The whole point of big chunks is to optimize
   the TLB, and it can really only make use of 2M, 1M, 64K and 4K sizes.

We'll still tend to eat up a bunch of big chunks, but that might be the right
answer for some users.  A future patch could possibly add a new DMA_ATTR that
would let the caller decide that TLB optimization isn't important and that we
should use smaller chunks.  Presumably this would be a sane strategy for some
callers.

Signed-off-by: Douglas Anderson
Acked-by: Marek Szyprowski
Reviewed-by: Robin Murphy
Reviewed-by: Tomasz Figa
---
Changes in v5: None
Changes in v4:
- Added Marek's ack

Changes in v3: None
Changes in v2:
- No longer allocates just 1 page at a time, but gives up on higher orders
  quickly.
- Only tries the important higher-order allocations that might help us.
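For reference, here is a small standalone Python sketch (not part of the
patch) that reproduces the 9228 worst-case figure quoted above and estimates
the equivalent worst case for the new order-array strategy.  The 1027 number
is my own extrapolation from the algorithm as described, not something stated
in the commit message:

    PAGES = 1024                    # a 4 MiB buffer with 4 KiB pages
    ORDERS = [9, 8, 4, 0]           # iommu_order_array from the patch

    # Old behaviour: for every page still missing, walk down from
    # __fls(count) to order 0, one alloc_pages() call per order.
    # i.bit_length() equals int(math.log(i, 2)) + 1 for positive i,
    # without the floating-point wobble.
    old_calls = sum(i.bit_length() for i in range(1, PAGES + 1))

    # New behaviour: an order that fails once is never tried again, so a
    # fully fragmented system costs one failed call per skipped order
    # plus a single order-0 call per page.
    new_calls = (len(ORDERS) - 1) + PAGES

    print(old_calls)   # 9228
    print(new_calls)   # 1027

Under these assumptions the model shows roughly a 9x reduction in allocator
calls for a fully fragmented 4M allocation.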
 arch/arm/mm/dma-mapping.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0eca3812527e..bc9cebfa0891 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1122,6 +1122,9 @@ static inline void __free_iova(struct dma_iommu_mapping *mapping,
 	spin_unlock_irqrestore(&mapping->lock, flags);
 }
 
+/* We'll try 2M, 1M, 64K, and finally 4K; array must end with 0! */
+static const int iommu_order_array[] = { 9, 8, 4, 0 };
+
 static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 					  gfp_t gfp, struct dma_attrs *attrs)
 {
@@ -1129,6 +1132,7 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	int count = size >> PAGE_SHIFT;
 	int array_size = count * sizeof(struct page *);
 	int i = 0;
+	int order_idx = 0;
 
 	if (array_size <= PAGE_SIZE)
 		pages = kzalloc(array_size, GFP_KERNEL);
@@ -1162,22 +1166,24 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	while (count) {
 		int j, order;
 
-		for (order = __fls(count); order > 0; --order) {
-			/*
-			 * We do not want OOM killer to be invoked as long
-			 * as we can fall back to single pages, so we force
-			 * __GFP_NORETRY for orders higher than zero.
-			 */
-			pages[i] = alloc_pages(gfp | __GFP_NORETRY, order);
-			if (pages[i])
-				break;
+		order = iommu_order_array[order_idx];
+
+		/* Drop down when we get small */
+		if (__fls(count) < order) {
+			order_idx++;
+			continue;
 		}
 
-		if (!pages[i]) {
-			/*
-			 * Fall back to single page allocation.
-			 * Might invoke OOM killer as last resort.
-			 */
+		if (order) {
+			/* See if it's easy to allocate a high-order chunk */
+			pages[i] = alloc_pages(gfp | __GFP_NORETRY, order);
+
+			/* Go down a notch at first sign of pressure */
+			if (!pages[i]) {
+				order_idx++;
+				continue;
+			}
+		} else {
 			pages[i] = alloc_pages(gfp, 0);
 			if (!pages[i])
 				goto error;
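To make the new order-walking loop easier to follow, here is a rough
standalone Python model of it (my own sketch, not part of the patch;
"avail_order" stands in for whatever the page allocator can still satisfy
and is not a real kernel parameter):

    IOMMU_ORDERS = [9, 8, 4, 0]              # mirrors iommu_order_array

    def alloc_chunks(count, avail_order):
        """Model of the reworked __iommu_alloc_buffer() inner loop."""
        idx = 0
        chunks = []
        while count:
            order = IOMMU_ORDERS[idx]
            if count.bit_length() - 1 < order:   # __fls(count) < order
                idx += 1                         # drop down when we get small
                continue
            if order and order > avail_order:    # high-order attempt "fails"
                idx += 1                         # go down a notch, never retry
                continue
            chunks.append(1 << order)            # successful allocation
            count -= 1 << order
        return chunks

    print(alloc_chunks(1024, avail_order=9))       # [512, 512]
    print(alloc_chunks(1024, avail_order=8))       # [256, 256, 256, 256]
    print(len(alloc_chunks(1024, avail_order=0)))  # 1024 single pages

The key property the model illustrates is that each order is attempted at
most until its first failure, after which the loop permanently moves down to
the next useful order instead of re-trying it for every remaining page.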