From patchwork Wed Jan 6 19:36:43 2016
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 7970411
From: Douglas Anderson <dianders@chromium.org>
To: Russell King
Subject: [PATCH v3 1/3] ARM: dma-mapping: Optimize allocation
Date: Wed, 6 Jan 2016 11:36:43 -0800
Message-Id: <1452109005-19517-2-git-send-email-dianders@chromium.org>
In-Reply-To: <1452109005-19517-1-git-send-email-dianders@chromium.org>
References: <1452109005-19517-1-git-send-email-dianders@chromium.org>
Cc: laurent.pinchart+renesas@ideasonboard.com, Pawel Osciak,
 mike.looijmans@topic.nl, linux-kernel@vger.kernel.org, Dmitry Torokhov,
 will.deacon@arm.com, Douglas Anderson, Tomasz Figa,
 penguin-kernel@i-love.sakura.ne.jp, carlo@caione.org,
 akpm@linux-foundation.org, Robin Murphy, dan.j.williams@intel.com,
 linux-arm-kernel@lists.infradead.org, Marek Szyprowski

The __iommu_alloc_buffer() function is expected to be called to allocate
pretty sizeable buffers.  In simple video tests I saw it trying to
allocate 4,194,304 bytes.  The function tries to allocate large chunks
in order to optimize IOMMU TLB usage.

The current function is very, very slow.

One problem is the way it keeps trying and trying to allocate big
chunks.  Imagine a very fragmented memory that has 4M free but no
contiguous pages at all.  Further imagine allocating 4M (1024 pages).
We'll do the following memory allocations:

- For page 1:
  - Try to allocate order 10 (no retry)
  - Try to allocate order 9 (no retry)
  - ...
  - Try to allocate order 0 (with retry, but not needed)
- For page 2:
  - Try to allocate order 9 (no retry)
  - Try to allocate order 8 (no retry)
  - ...
  - Try to allocate order 0 (with retry, but not needed)
- ...
- ...

The total number of calls to alloc() for this case is:

  sum(int(math.log(i, 2)) + 1 for i in range(1, 1025))
  => 9228

(A standalone sketch after the change log below reproduces this count.)

The above is obviously the worst case, but given how slow alloc can be
we really want to avoid even somewhat bad cases.  I timed the old code
with a device under memory pressure and it wasn't hard to see it take
more than 120 seconds to allocate 4 megs of memory.  (NOTE: testing was
done on kernel 3.14, so mainline may behave differently.)

A second problem is that allocating big chunks under memory pressure
when we don't strictly need them is just not a great idea: we can make
do pretty well with smaller chunks, so it's wise to leave the bigger
chunks for other users once memory pressure is on.

Let's adjust the allocation like this:

1. If a big chunk fails, stop trying so hard and bump down to lower
   order allocations.
2. Don't try useless orders.  The whole point of big chunks is to
   optimize the TLB, and it can really only make use of the 2M, 1M,
   64K and 4K sizes.

We'll still tend to eat up a bunch of big chunks, but that might be the
right answer for some users.  A future patch could add a new DMA_ATTR
that would let the caller decide that TLB optimization isn't important
and that we should use smaller chunks.  Presumably this would be a sane
strategy for some callers.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---
Changes in v3: None
Changes in v2:
- No longer just 1 page at a time, but gives up higher order quickly.
- Only tries important higher order allocations that might help us.
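As a cross-check on the 9228 figure, here is a minimal standalone C
sketch (ordinary userspace code, not part of the patch) that models the
old fallback loop under the fully fragmented assumption above: every
attempt above order 0 fails, so each remaining-count value costs
__fls(count) failed attempts plus one successful order-0 allocation.
The fls_floor_log2() helper is a stand-in for the kernel's __fls().

#include <stdio.h>

/* floor(log2(n)) for n >= 1; stands in for the kernel's __fls() */
static int fls_floor_log2(unsigned int n)
{
	int order = 0;

	while (n >>= 1)
		order++;
	return order;
}

int main(void)
{
	long calls = 0;
	int count;

	/*
	 * Fully fragmented worst case: for each of the 1024 pages the
	 * old loop walks from order __fls(count) down to order 1 (all
	 * failing) and then succeeds with a single order-0 page.
	 */
	for (count = 1024; count > 0; count--)
		calls += fls_floor_log2(count) + 1;

	printf("alloc() calls: %ld\n", calls);	/* prints 9228 */
	return 0;
}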
 arch/arm/mm/dma-mapping.c | 34 ++++++++++++++++++++--------------
 1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0eca3812527e..bc9cebfa0891 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1122,6 +1122,9 @@ static inline void __free_iova(struct dma_iommu_mapping *mapping,
 	spin_unlock_irqrestore(&mapping->lock, flags);
 }
 
+/* We'll try 2M, 1M, 64K, and finally 4K; array must end with 0! */
+static const int iommu_order_array[] = { 9, 8, 4, 0 };
+
 static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 					  gfp_t gfp, struct dma_attrs *attrs)
 {
@@ -1129,6 +1132,7 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	int count = size >> PAGE_SHIFT;
 	int array_size = count * sizeof(struct page *);
 	int i = 0;
+	int order_idx = 0;
 
 	if (array_size <= PAGE_SIZE)
 		pages = kzalloc(array_size, GFP_KERNEL);
@@ -1162,22 +1166,24 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 	while (count) {
 		int j, order;
 
-		for (order = __fls(count); order > 0; --order) {
-			/*
-			 * We do not want OOM killer to be invoked as long
-			 * as we can fall back to single pages, so we force
-			 * __GFP_NORETRY for orders higher than zero.
-			 */
-			pages[i] = alloc_pages(gfp | __GFP_NORETRY, order);
-			if (pages[i])
-				break;
+		order = iommu_order_array[order_idx];
+
+		/* Drop down when we get small */
+		if (__fls(count) < order) {
+			order_idx++;
+			continue;
 		}
 
-		if (!pages[i]) {
-			/*
-			 * Fall back to single page allocation.
-			 * Might invoke OOM killer as last resort.
-			 */
+		if (order) {
+			/* See if it's easy to allocate a high-order chunk */
+			pages[i] = alloc_pages(gfp | __GFP_NORETRY, order);
+
+			/* Go down a notch at first sign of pressure */
+			if (!pages[i]) {
+				order_idx++;
+				continue;
+			}
+		} else {
 			pages[i] = alloc_pages(gfp, 0);
 			if (!pages[i])
 				goto error;
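For comparison, the same kind of userspace model (again a sketch, not
kernel code) of the new iommu_order_array walk.  With 4K pages, orders
9, 8 and 4 are the 2M, 1M and 64K TLB-friendly sizes named above.
Under the same assumption that every high-order attempt fails, the
worst case drops from 9228 attempts to 1027: three failed high-order
attempts plus 1024 order-0 successes.

#include <stdio.h>

static const int iommu_order_array[] = { 9, 8, 4, 0 };

/* floor(log2(n)) for n >= 1; stands in for the kernel's __fls() */
static int fls_floor_log2(unsigned int n)
{
	int order = 0;

	while (n >>= 1)
		order++;
	return order;
}

int main(void)
{
	int count = 1024;	/* 4M buffer with 4K pages */
	int order_idx = 0;
	long calls = 0;

	while (count) {
		int order = iommu_order_array[order_idx];

		/* Mirror the patch: drop down when the remainder is small */
		if (fls_floor_log2(count) < order) {
			order_idx++;
			continue;
		}

		calls++;		/* one alloc_pages() attempt */
		if (order) {		/* assume high-order attempts fail */
			order_idx++;
			continue;
		}
		count--;		/* order-0 attempt gets one page */
	}

	printf("alloc() calls: %ld\n", calls);	/* prints 1027 */
	return 0;
}

Note that order_idx only ever moves forward, which is the heart of the
patch: once an order has failed (or the remaining count has dropped
below it), it is never attempted again for this buffer.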