From patchwork Mon Jul 15 09:34:34 2013
X-Patchwork-Submitter: Ricardo Ribalda Delgado
X-Patchwork-Id: 2827341
From: Ricardo Ribalda Delgado
To: Pawel Osciak, Marek Szyprowski, Kyungmin Park,
	Mauro Carvalho Chehab, linux-media@vger.kernel.org,
	linux-kernel@vger.kernel.org (open list)
Cc: Ricardo Ribalda Delgado
Subject: [PATCH] videobuf2-dma-sg: Minimize the number of dma segments
Date: Mon, 15 Jul 2013 11:34:34 +0200
Message-Id: <1373880874-9270-1-git-send-email-ricardo.ribalda@gmail.com>
X-Mailer: git-send-email 1.7.10.4
X-Mailing-List: linux-media@vger.kernel.org

Most DMA engines have limitations regarding the number of DMA segments
(sg-buffers) that they can handle. Video buffers can easily spread
across hundreds of pages.

In the previous approach, the pages were allocated individually; this
could lead to the creation of hundreds of DMA segments (sg-buffers),
more than some DMA engines can handle.

This patch tries to minimize the number of DMA segments by using
alloc_pages_exact.
In the worst case it will behave as before, but most of the time it
will reduce the number of DMA segments.

Signed-off-by: Ricardo Ribalda Delgado
---
 drivers/media/v4l2-core/videobuf2-dma-sg.c | 49 +++++++++++++++++++++-------
 1 file changed, 38 insertions(+), 11 deletions(-)

diff --git a/drivers/media/v4l2-core/videobuf2-dma-sg.c b/drivers/media/v4l2-core/videobuf2-dma-sg.c
index 16ae3dc..67a94ab 100644
--- a/drivers/media/v4l2-core/videobuf2-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf2-dma-sg.c
@@ -42,10 +42,44 @@ struct vb2_dma_sg_buf {
 
 static void vb2_dma_sg_put(void *buf_priv);
 
+static int vb2_dma_sg_alloc_compacted(struct vb2_dma_sg_buf *buf,
+		gfp_t gfp_flags)
+{
+	unsigned int last_page = 0;
+	void *vaddr = NULL;
+	unsigned int req_pages;
+
+	while (last_page < buf->sg_desc.num_pages) {
+		req_pages = buf->sg_desc.num_pages - last_page;
+		while (req_pages >= 1) {
+			vaddr = alloc_pages_exact(req_pages * PAGE_SIZE,
+				GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | gfp_flags);
+			if (vaddr)
+				break;
+			req_pages >>= 1;
+		}
+		if (!vaddr) {
+			while (last_page--)
+				__free_page(buf->pages[last_page]);
+			return -ENOMEM;
+		}
+		while (req_pages) {
+			buf->pages[last_page] = virt_to_page(vaddr);
+			sg_set_page(&buf->sg_desc.sglist[last_page],
+				buf->pages[last_page], PAGE_SIZE, 0);
+			vaddr += PAGE_SIZE;
+			last_page++;
+			req_pages--;
+		}
+	}
+
+	return 0;
+}
+
 static void *vb2_dma_sg_alloc(void *alloc_ctx, unsigned long size, gfp_t gfp_flags)
 {
 	struct vb2_dma_sg_buf *buf;
-	int i;
+	int ret;
 
 	buf = kzalloc(sizeof *buf, GFP_KERNEL);
 	if (!buf)
@@ -69,14 +103,9 @@ static void *vb2_dma_sg_alloc(void *alloc_ctx, unsigned long size, gfp_t gfp_fla
 	if (!buf->pages)
 		goto fail_pages_array_alloc;
 
-	for (i = 0; i < buf->sg_desc.num_pages; ++i) {
-		buf->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO |
-					   __GFP_NOWARN | gfp_flags);
-		if (NULL == buf->pages[i])
-			goto fail_pages_alloc;
-		sg_set_page(&buf->sg_desc.sglist[i],
-			    buf->pages[i], PAGE_SIZE, 0);
-	}
+	ret = vb2_dma_sg_alloc_compacted(buf, gfp_flags);
+	if (ret)
+		goto fail_pages_alloc;
 
 	buf->handler.refcount = &buf->refcount;
 	buf->handler.put = vb2_dma_sg_put;
@@ -89,8 +118,6 @@ static void *vb2_dma_sg_alloc(void *alloc_ctx, unsigned long size, gfp_t gfp_fla
 	return buf;
 
 fail_pages_alloc:
-	while (--i >= 0)
-		__free_page(buf->pages[i]);
 	kfree(buf->pages);
 
 fail_pages_array_alloc:
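
For illustration, the allocation strategy above can be sketched in plain
userspace C. This is a minimal simulation, not the kernel code:
alloc_exact_sim(), alloc_compacted_sim(), FRAG_LIMIT and the 4 KiB
PAGE_SIZE are made-up stand-ins for alloc_pages_exact() and the new
helper, chosen only to show how the halving retry packs pages into few
segments.

/*
 * Userspace sketch of the halving strategy in vb2_dma_sg_alloc_compacted().
 * alloc_exact_sim() stands in for alloc_pages_exact(): it refuses any
 * request larger than FRAG_LIMIT pages, mimicking a fragmented system.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE  4096UL  /* assumed page size for this sketch */
#define FRAG_LIMIT 16UL    /* largest contiguous run available, in pages */

static void *alloc_exact_sim(size_t nr_pages)
{
	if (nr_pages > FRAG_LIMIT)
		return NULL;                    /* "fragmented": too big */
	return calloc(nr_pages, PAGE_SIZE);     /* zeroed, like __GFP_ZERO */
}

/* Cover num_pages pages with as few contiguous chunks as possible. */
static int alloc_compacted_sim(size_t num_pages)
{
	size_t done = 0, segments = 0;

	while (done < num_pages) {
		size_t req = num_pages - done;
		void *chunk = NULL;

		/* Ask for everything left, halving the request on failure. */
		while (req >= 1) {
			chunk = alloc_exact_sim(req);
			if (chunk)
				break;
			req >>= 1;
		}
		if (!chunk)
			return -1;      /* truly out of memory */

		done += req;
		segments++;
		free(chunk);            /* a real caller would keep it */
	}

	printf("%zu pages covered by %zu segments (one per page would be %zu)\n",
	       num_pages, segments, num_pages);
	return 0;
}

int main(void)
{
	/* ~4 MB video buffer, e.g. a 1080p YUYV frame: 1013 pages. */
	return alloc_compacted_sim(1013);
}

With FRAG_LIMIT set to 1 every request halves all the way down to a
single page and the sketch reproduces the old one-segment-per-page
behaviour; with larger contiguous runs available, the segment count
drops to a small fraction of the page count.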