From patchwork Fri Jan 8 00:36:45 2016
X-Patchwork-Submitter: Doug Anderson
X-Patchwork-Id: 7981261
From: Douglas Anderson
To: Russell King
Subject: [PATCH v4 3/3] ARM: dma-mapping: Use DMA_ATTR_NOHUGEPAGE hint to optimize allocation
Date: Thu, 7 Jan 2016 16:36:45 -0800
Message-Id: <1452213405-22942-4-git-send-email-dianders@chromium.org>
In-Reply-To: <1452213405-22942-1-git-send-email-dianders@chromium.org>
References: <1452213405-22942-1-git-send-email-dianders@chromium.org>
Cc: laurent.pinchart+renesas@ideasonboard.com, Pawel Osciak, mike.looijmans@topic.nl, linux-kernel@vger.kernel.org, Dmitry Torokhov, will.deacon@arm.com, Douglas Anderson, Tomasz Figa, carlo@caione.org, akpm@linux-foundation.org, Robin Murphy, dan.j.williams@intel.com, linux-arm-kernel@lists.infradead.org, Marek Szyprowski

If we know that TLB efficiency will not be an issue when memory is
accessed, then it's not terribly important to allocate big chunks of
memory.  The whole point of allocating big chunks is to make TLB usage
efficient.  As Marek Szyprowski indicated:

    Please note that mapping memory with larger pages significantly
    improves performance, especially when IOMMU has a little TLB
    cache.  This can be easily observed when multimedia devices do
    processing of RGB data with 90/270 degree rotation.

Image rotation is distinctly an operation that needs to bounce around
through memory, so it makes sense that TLB efficiency is important
there.  Video decoding, on the other hand, is a fairly sequential
operation: during decoding we don't expect to be jumping all over
memory.  Decoding video is also computationally heavy, so the cost of
the extra TLB misses is comparatively small.  Presumably most HW video
acceleration users of dma-mapping will not care about huge pages and
will set DMA_ATTR_NOHUGEPAGE.

Allocating big chunks of memory is quite expensive, especially if we're
doing it repeatedly and memory is full.  In one (out of tree) usage
model it is common for arm_iommu_alloc_attrs() to be called 16 times in
a row, each call trying to allocate 4 MB of memory.  This happens
whenever the system encounters a new video, which could easily occur
while the memory system is stressed.  In fact, on certain social media
websites that auto-play video and have infinite scrolling, it's quite
common to see not just one of these 16x4MB allocation bursts but two or
three right after another.  Asking the system to do even a small amount
of extra work to hand us big chunks in this case is just not a good use
of time.

Allocating big chunks of memory is also expensive indirectly.  Even if
we ask the system not to do ANY extra work to allocate _our_ memory,
we're still potentially eating up all the big chunks in the system.
Presumably there are other users in the system that aren't as flexible
and that actually need these big chunks; by consuming them all we
create extra work for the rest of the system.  We may also start making
other memory allocations fail.  While the system may be robust to such
failures (as is the case with dwc2 USB trying to allocate buffers for
Ethernet data and with WiFi trying to allocate buffers for WiFi data),
each failure is yet another big performance hit.

Signed-off-by: Douglas Anderson
Acked-by: Marek Szyprowski
---
Changes in v4:
- Renamed DMA_ATTR_SEQUENTIAL to DMA_ATTR_NOHUGEPAGE
- Added Marek's ack

Changes in v3:
- The "Use DMA_ATTR_SEQUENTIAL hint" patch is new for v3.

Changes in v2: None
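For reference, a caller that wants the hint would do something like the
sketch below.  The function and names here are made up purely for
illustration (this is not code from the series); it uses the struct
dma_attrs API as it exists in current kernels:

	#include <linux/dma-attrs.h>
	#include <linux/dma-mapping.h>

	/* Hypothetical helper: allocate a buffer we'll only scan sequentially. */
	static void *alloc_decode_buf(struct device *dev, size_t size,
				      dma_addr_t *dma)
	{
		DEFINE_DMA_ATTRS(attrs);

		/*
		 * Access is sequential and decode is heavy anyway, so tell
		 * the allocator not to sweat finding huge chunks for us.
		 */
		dma_set_attr(DMA_ATTR_NOHUGEPAGE, &attrs);

		return dma_alloc_attrs(dev, size, dma, GFP_KERNEL, &attrs);
	}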
 arch/arm/mm/dma-mapping.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index bc9cebfa0891..96d71bcb4c3a 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1158,6 +1158,10 @@ static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
 		return pages;
 	}
 
+	/* Go straight to 4K chunks if caller says it's OK. */
+	if (dma_get_attr(DMA_ATTR_NOHUGEPAGE, attrs))
+		order_idx = ARRAY_SIZE(iommu_order_array) - 1;
+
 	/*
 	 * IOMMU can map any pages, so himem can also be used here
 	 */
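For readers without patch 2/3 of this series in front of them, here is
a rough paraphrase (not the verbatim kernel code) of how order_idx
feeds the allocation loop; iommu_order_array and the fallback logic
come from that patch, so treat this as a sketch:

	/* Chunk sizes to try, biggest first: 2MB, 1MB, 64kB, 4K pages. */
	static const int iommu_order_array[] = { 9, 8, 4, 0 };

	while (count) {
		int order = iommu_order_array[order_idx];

		/* Not enough pages left for a chunk this big?  Step down. */
		if (__fls(count) < order) {
			order_idx++;
			continue;
		}

		/* ...attempt an order-sized allocation, stepping down on failure... */
	}

With the hunk above applied, DMA_ATTR_NOHUGEPAGE jumps order_idx
straight to the last entry, so the loop only ever attempts order-0 (4K)
allocations and never spends time hunting for 2MB/1MB/64kB chunks.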