From patchwork Mon Aug 11 15:17:14 2014
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 4707771
Date: Mon, 11 Aug 2014 11:17:14 -0400
From: Jerome Glisse
To: Thomas Hellstrom
Subject: Re: CONFIG_DMA_CMA causes ttm performance problems/hangs.
Message-ID: <20140811151712.GA3541@gmail.com>
References: <53E50C1B.9080507@gmail.com> <53E5B41B.3030009@vmware.com>
 <60bd3db2-4919-40c4-a4ff-1b7b043cadfc@email.android.com>
 <53E628FE.10808@vmware.com> <53E6E2CE.8070005@gmail.com>
 <53E75192.3070003@vmware.com> <53E7B39D.2060900@gmail.com>
 <53E896C9.5010501@vmware.com>
In-Reply-To: <53E896C9.5010501@vmware.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Cc: Konrad Rzeszutek Wilk, kamal@canonical.com, LKML,
 "dri-devel@lists.freedesktop.org", Dave Airlie, ben@decadent.org.uk,
 m.szyprowski@samsung.com

On Mon, Aug 11, 2014 at 12:11:21PM +0200, Thomas Hellstrom wrote:
> On 08/10/2014 08:02 PM, Mario Kleiner wrote:
> > On 08/10/2014 01:03 PM, Thomas Hellstrom wrote:
> >> On 08/10/2014 05:11 AM, Mario Kleiner wrote:
> >>> Resent this time without HTML formatting, which lkml doesn't like.
> >>> Sorry.
> >>>
> >>> On 08/09/2014 03:58 PM, Thomas Hellstrom wrote:
> >>>> On 08/09/2014 03:33 PM, Konrad Rzeszutek Wilk wrote:
> >>>>> On August 9, 2014 1:39:39 AM EDT, Thomas Hellstrom wrote:
> >>>>>> Hi.
> >>>>>>
> >>>>> Hey Thomas!
> >>>>>
> >>>>>> IIRC I don't think the TTM DMA pool allocates coherent pages more
> >>>>>> than one page at a time, and _if that's true_ it's pretty
> >>>>>> unnecessary for the dma subsystem to route those allocations to
> >>>>>> CMA. Maybe Konrad could shed some light on this?
> >>>>> It should allocate in batches and keep them in the TTM DMA pool for
> >>>>> some time to be reused.
> >>>>>
> >>>>> The pages that it gets are in 4 KB granularity, though.
> >>>> Then I feel inclined to say this is a DMA subsystem bug. Single-page
> >>>> allocations shouldn't get routed to CMA.
> >>>>
> >>>> /Thomas
> >>> Yes, it seems you're both right. I read through the code a bit more,
> >>> and indeed the TTM DMA pool allocates only one page during each
> >>> dma_alloc_coherent() call, so it doesn't need CMA memory. The current
> >>> allocators don't check for single-page CMA allocations and therefore
> >>> try to get them from the CMA area anyway, instead of skipping to the
> >>> much cheaper fallback.
> >>>
> >>> So the callers of dma_alloc_from_contiguous() could use that little
> >>> optimization of skipping it if only one page is requested. For
> >>> dma_generic_alloc_coherent and intel_alloc_coherent this seems easy
> >>> to do.
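Roughly, the skip being described would look like this. This is a hedged
sketch only: the helper name is made up rather than the real
dma_generic_alloc_coherent() or intel_alloc_coherent() body, and it
assumes the 3.1x-era dma_alloc_from_contiguous(dev, count, align)
signature, which takes a page count and an alignment order.

#include <linux/dma-contiguous.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch of the caller-side skip: only ask CMA for multi-page,
 * physically contiguous requests.  A single page is contiguous by
 * definition, so the plain page allocator is the cheaper path. */
static struct page *alloc_coherent_page_range(struct device *dev,
					      size_t size, gfp_t gfp)
{
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	struct page *page = NULL;

	if (count > 1)
		page = dma_alloc_from_contiguous(dev, count,
						 get_order(size));
	if (!page)	/* no CMA page, or a single-page request */
		page = alloc_pages(gfp, get_order(size));

	return page;
}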
> >>> Looking at the arm arch variants, e.g.,
> >>>
> >>> http://lxr.free-electrons.com/source/arch/arm/mm/dma-mapping.c#L1194
> >>>
> >>> and
> >>>
> >>> http://lxr.free-electrons.com/source/arch/arm64/mm/dma-mapping.c#L44
> >>>
> >>> I'm not sure it is that easily done, as there aren't any fallbacks
> >>> for such a case and the code looks to me as if that's at least
> >>> somewhat intentional.
> >>>
> >>> As far as TTM goes, one quick one-line fix to prevent it from using
> >>> the CMA, at least on SWIOTLB, NOMMU and Intel IOMMU (when using the
> >>> above methods), would be to clear the __GFP_WAIT flag from the
> >>> passed gfp_t flags. That would trigger the well-working fallback.
> >>> So, is __GFP_WAIT needed for those single-page allocations that go
> >>> through __ttm_dma_alloc_page?
> >>>
> >>> It would be nice to have such a simple, non-intrusive one-line patch
> >>> that we could still get into 3.17 and then backport to older stable
> >>> kernels to avoid the same desktop hangs there if CMA is enabled. It
> >>> would also be nice for actual users of CMA to not use up lots of CMA
> >>> space for GPUs which don't need it. I think DMA_CMA was introduced
> >>> around 3.12.
> >>>
> >> I don't think that's a good idea. Omitting __GFP_WAIT would cause
> >> unnecessary memory allocation errors on systems under stress.
> >> I think this should be filed as a DMA subsystem kernel bug/regression
> >> and an appropriate solution should be worked out together with the
> >> DMA subsystem maintainers and then backported.
> >
> > Ok, so it is needed. I'll file a bug report.
> >
> >>> The other problem is that TTM probably does not reuse pages from the
> >>> DMA pool. If I trace the __ttm_dma_alloc_page and __ttm_dma_free_page
> >>> calls for those single-page allocs/frees, then over a 20-second
> >>> interval of tracing, switching tabs in Firefox, scrolling things
> >>> around, etc., I find about as many allocs as frees, e.g., 1607 allocs
> >>> vs. 1648 frees.
> >> This is because historically the pools have been designed to keep
> >> only pages with nonstandard caching attributes, since changing page
> >> caching attributes has been very slow while the kernel page
> >> allocators have been reasonably fast.
> >>
> >> /Thomas
> >
> > Ok. A bit more ftracing showed that my hang case goes through the
> > "if (is_cached)" paths, so the pool doesn't recycle anything and I see
> > it bouncing up and down by 4 pages all the time.
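(Going back to the __GFP_WAIT idea above: the vetoed one-liner would
have amounted to something like the following. This is a hedged
illustration with a made-up helper name and signature, not the actual
__ttm_dma_alloc_page(); __GFP_WAIT is the 3.1x-era name of the
direct-reclaim flag.)

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Illustration only: stripping __GFP_WAIT keeps dma_alloc_coherent()
 * from blocking in the CMA path, so the cheap fallback allocator runs
 * instead.  Thomas's objection stands: without __GFP_WAIT the
 * allocation can fail under memory pressure instead of reclaiming. */
static void *alloc_one_page_nowait(struct device *dev,
				   dma_addr_t *dma_handle, gfp_t gfp)
{
	gfp &= ~__GFP_WAIT;	/* the proposed one-line change */
	return dma_alloc_coherent(dev, PAGE_SIZE, dma_handle, gfp);
}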
> >
> > But for the non-cached case, which I don't hit with my problem, could
> > one of you look at line 954...
> >
> > http://lxr.free-electrons.com/source/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c#L954
> >
> > ...and tell me why that unconditional "npages = count;" assignment
> > makes sense? It seems to essentially disable all recycling for the
> > dma pool whenever the pool isn't filled up to/beyond its maximum with
> > free pages. When the pool is filled up, lots of stuff is recycled, but
> > when it is already somewhat below capacity, it gets "punished" by not
> > getting refilled? I'd just like to understand the logic behind that
> > line.
> >
> > thanks,
> > -mario
>
> I'll happily forward that question to Konrad, who wrote the code (or it
> may even stem from the ordinary page pool code, which IIRC has Dave
> Airlie / Jerome Glisse as authors).

This is effectively bogus code; I now wonder how it came to stay alive.
The attached patch will fix that.

>
> /Thomas
>

From f65e796fea5f79e4834f4609147ea06c123d6396 Mon Sep 17 00:00:00 2001
From: Jérôme Glisse
Date: Mon, 11 Aug 2014 11:10:31 -0400
Subject: [PATCH] drm/ttm: fix object deallocation to properly fill in the page pool.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The current code never allowed the page pool to actually fill up. This
fixes it and also allows the pool to grow over its limit until it grows
beyond the batch size for allocation and deallocation.

Signed-off-by: Jérôme Glisse
Reviewed-by: Mario Kleiner
Tested-by: Michel Dänzer
---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index fb8259f..73744cd 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -951,14 +951,9 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		npages = count;
-		if (pool->npages_free > _manager->options.max_size) {
+		if (pool->npages_free >= (_manager->options.max_size +
+					  NUM_PAGES_TO_ALLOC))
 			npages = pool->npages_free - _manager->options.max_size;
-			/* free at least NUM_PAGES_TO_ALLOC number of pages
-			 * to reduce calls to set_memory_wb */
-			if (npages < NUM_PAGES_TO_ALLOC)
-				npages = NUM_PAGES_TO_ALLOC;
-		}
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
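To make the change concrete, here is a standalone model of the trim
decision before and after the patch. Names mirror the patch; the
constant values are illustrative assumptions, not necessarily the
driver's actual ones.

/* Model of the patched trim decision in ttm_dma_unpopulate():
 * returns how many free pages should be handed back to the system. */
#define NUM_PAGES_TO_ALLOC	512	/* assumed batch size */

static unsigned long pages_to_trim(unsigned long npages_free,
				   unsigned long max_size)
{
	/* Old logic: npages = count unconditionally, so while the pool
	 * sat below max_size every just-freed page went straight back
	 * to the system and the pool could never fill.
	 *
	 * New logic: let the pool overshoot its cap by one batch, then
	 * trim back to the cap, keeping set_memory_wb() calls batched. */
	if (npages_free >= max_size + NUM_PAGES_TO_ALLOC)
		return npages_free - max_size;
	return 0;	/* keep everything for reuse */
}

With max_size = 1024 and the assumed batch of 512, for example, nothing
is trimmed until 1536 pages sit on the free list; then 512 pages are
released at once, bringing the pool back to its cap.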