From patchwork Wed Aug 13 03:52:06 2014
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 4715911
From: Jérôme Glisse
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 3/3] drm/ttm: under memory pressure minimize the size of memory pool
Date: Tue, 12 Aug 2014 23:52:06 -0400
Message-Id: <1407901926-24516-4-git-send-email-j.glisse@gmail.com>
In-Reply-To: <1407901926-24516-1-git-send-email-j.glisse@gmail.com>
References: <1407901926-24516-1-git-send-email-j.glisse@gmail.com>
Cc: Michel Dänzer, Jérôme Glisse, Konrad Rzeszutek Wilk, Thomas Hellstrom

Under memory pressure we want to minimize the pool size so that memory
we just shrank is not added back to the pool right away. This divides
the maximum pool size for each device by two each time the pool has to
shrink. The limit is raised again once the next allocation happens more
than one second after the last shrink. The one second delay is
obviously an arbitrary choice.

Signed-off-by: Jérôme Glisse
Cc: Mario Kleiner
Cc: Michel Dänzer
Cc: Thomas Hellstrom
Cc: Konrad Rzeszutek Wilk
---
A standalone userspace sketch of this halve-on-shrink, double-after-delay
policy follows the diff.

 drivers/gpu/drm/ttm/ttm_page_alloc.c     | 35 +++++++++++++++++++++++++-------
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c | 27 ++++++++++++++++++++++--
 2 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 09874d6..ab41adf 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -68,6 +68,8 @@
  * @list: Pool of free uc/wc pages for fast reuse.
  * @gfp_flags: Flags to pass for alloc_page.
  * @npages: Number of pages in pool.
+ * @cur_max_size: Current maximum size for the pool.
+ * @shrink_timeout: Timeout for pool maximum size restriction.
  */
 struct ttm_page_pool {
 	spinlock_t		lock;
@@ -76,6 +78,8 @@ struct ttm_page_pool {
 	gfp_t			gfp_flags;
 	unsigned		npages;
 	char			*name;
+	unsigned		cur_max_size;
+	unsigned long		shrink_timeout;
 	unsigned long		nfrees;
 	unsigned long		nrefills;
 };
@@ -289,6 +293,16 @@ static void ttm_pool_update_free_locked(struct ttm_page_pool *pool,
 	pool->nfrees += freed_pages;
 }
 
+static inline void ttm_pool_update_max_size(struct ttm_page_pool *pool)
+{
+	if (time_before(jiffies, pool->shrink_timeout))
+		return;
+	/* In case we reached zero, bounce back to 512 pages. */
+	pool->cur_max_size = max(pool->cur_max_size << 1, 512u);
+	pool->cur_max_size = min(pool->cur_max_size,
+				 _manager->options.max_size);
+}
+
 /**
  * Free pages from pool.
  *
@@ -407,6 +421,9 @@ ttm_pool_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		if (shrink_pages == 0)
 			break;
 		pool = &_manager->pools[(i + pool_offset)%NUM_POOLS];
+		/* No matter what, make sure the pool does not grow in the next second. */
+		pool->cur_max_size = pool->cur_max_size >> 1;
+		pool->shrink_timeout = jiffies + HZ;
 		shrink_pages = ttm_page_pool_free(pool, nr_free,
 						  sc->gfp_mask);
 		freed += nr_free - shrink_pages;
@@ -701,13 +718,12 @@ static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
 	}
 	/* Check that we don't go over the pool limit */
 	npages = 0;
-	if (pool->npages > _manager->options.max_size) {
-		npages = pool->npages - _manager->options.max_size;
-		/* free at least NUM_PAGES_TO_ALLOC number of pages
-		 * to reduce calls to set_memory_wb */
-		if (npages < NUM_PAGES_TO_ALLOC)
-			npages = NUM_PAGES_TO_ALLOC;
-	}
+	/*
+	 * Free at least NUM_PAGES_TO_ALLOC pages to reduce calls to
+	 * set_memory_wb.
+	 */
+	if (pool->npages > (pool->cur_max_size + NUM_PAGES_TO_ALLOC))
+		npages = pool->npages - pool->cur_max_size;
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
 	if (npages)
 		ttm_page_pool_free(pool, npages, GFP_KERNEL);
@@ -751,6 +767,9 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 		return 0;
 	}
 
+	/* Update pool size in case the shrinker limited it. */
+	ttm_pool_update_max_size(pool);
+
 	/* combine zero flag to pool flags */
 	gfp_flags |= pool->gfp_flags;
 
@@ -803,6 +822,8 @@ static void ttm_page_pool_init_locked(struct ttm_page_pool *pool, gfp_t flags,
 	pool->npages = pool->nfrees = 0;
 	pool->gfp_flags = flags;
 	pool->name = name;
+	pool->cur_max_size = _manager->options.max_size;
+	pool->shrink_timeout = jiffies;
 }
 
 int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index a076ff3..80b10aa 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -93,6 +93,8 @@ enum pool_type {
  * @size: Size used during DMA allocation.
  * @npages_free: Count of available pages for re-use.
  * @npages_in_use: Count of pages that are in use.
+ * @cur_max_size: Current maximum size for the pool.
+ * @shrink_timeout: Timeout for pool maximum size restriction.
  * @nfrees: Stats when pool is shrinking.
  * @nrefills: Stats when the pool is grown.
  * @gfp_flags: Flags to pass for alloc_page.
@@ -110,6 +112,8 @@ struct dma_pool {
 	unsigned size;
 	unsigned npages_free;
 	unsigned npages_in_use;
+	unsigned cur_max_size;
+	unsigned long shrink_timeout;
 	unsigned long nfrees; /* Stats when shrunk. */
 	unsigned long nrefills; /* Stats when grown. */
 	gfp_t gfp_flags;
@@ -331,6 +335,17 @@ static void __ttm_dma_free_page(struct dma_pool *pool, struct dma_page *d_page)
 	kfree(d_page);
 	d_page = NULL;
 }
+
+static inline void ttm_dma_pool_update_max_size(struct dma_pool *pool)
+{
+	if (time_before(jiffies, pool->shrink_timeout))
+		return;
+	/* In case we reached zero, bounce back to 512 pages. */
+	pool->cur_max_size = max(pool->cur_max_size << 1, 512u);
+	pool->cur_max_size = min(pool->cur_max_size,
+				 _manager->options.max_size);
+}
+
 static struct dma_page *__ttm_dma_alloc_page(struct dma_pool *pool)
 {
 	struct dma_page *d_page;
@@ -606,6 +621,8 @@ static struct dma_pool *ttm_dma_pool_init(struct device *dev, gfp_t flags,
 	pool->size = PAGE_SIZE;
 	pool->type = type;
 	pool->nrefills = 0;
+	pool->cur_max_size = _manager->options.max_size;
+	pool->shrink_timeout = jiffies;
 	p = pool->name;
 	for (i = 0; i < 5; i++) {
 		if (type & t[i]) {
@@ -892,6 +909,9 @@ int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 		}
 	}
 
+	/* Update pool size in case the shrinker limited it. */
+	ttm_dma_pool_update_max_size(pool);
+
 	INIT_LIST_HEAD(&ttm_dma->pages_list);
 	for (i = 0; i < ttm->num_pages; ++i) {
 		ret = ttm_dma_pool_get_pages(pool, ttm_dma, i);
@@ -953,9 +973,9 @@ void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev)
 	} else {
 		pool->npages_free += count;
 		list_splice(&ttm_dma->pages_list, &pool->free_list);
-		if (pool->npages_free >= (_manager->options.max_size +
+		if (pool->npages_free >= (pool->cur_max_size +
 					  NUM_PAGES_TO_ALLOC))
-			npages = pool->npages_free - _manager->options.max_size;
+			npages = pool->npages_free - pool->cur_max_size;
 	}
 	spin_unlock_irqrestore(&pool->lock, irq_flags);
 
@@ -1024,6 +1044,9 @@ ttm_dma_pool_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		/* Do it in round-robin fashion. */
 		if (++idx < pool_offset)
 			continue;
+		/* No matter what, make sure the pool does not grow in the next second. */
+		p->pool->cur_max_size = p->pool->cur_max_size >> 1;
+		p->pool->shrink_timeout = jiffies + HZ;
 		nr_free = shrink_pages;
 		shrink_pages = ttm_dma_page_pool_free(p->pool, nr_free,
 						      sc->gfp_mask);
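
For readers following the thread, here is a minimal userspace sketch of
the policy the patch implements: halve the pool's cap when the shrinker
runs, and double it back, floored at 512 pages and capped at the
configured maximum, once an allocation happens more than one second
after the last shrink. The names (struct pool, pool_note_shrink,
pool_update_max_size) and the plain time(NULL) clock are illustrative
stand-ins for the kernel's jiffies-based code, not the TTM API.

#include <stdio.h>
#include <time.h>

/* Illustrative stand-in for struct ttm_page_pool (not the kernel struct). */
struct pool {
	unsigned cur_max_size;  /* current cap on pooled pages */
	unsigned max_size;      /* configured cap (_manager->options.max_size) */
	time_t shrink_deadline; /* no growth until this time has passed */
};

/* Shrinker side: halve the cap and forbid growth for one second. */
static void pool_note_shrink(struct pool *p, time_t now)
{
	p->cur_max_size >>= 1;
	p->shrink_deadline = now + 1;
}

/* Allocation side: once the delay has expired, double the cap again,
 * bouncing back to 512 pages if it decayed to zero and never exceeding
 * the configured maximum. */
static void pool_update_max_size(struct pool *p, time_t now)
{
	unsigned grown;

	if (now < p->shrink_deadline)
		return;
	grown = p->cur_max_size << 1;
	if (grown < 512)
		grown = 512;
	if (grown > p->max_size)
		grown = p->max_size;
	p->cur_max_size = grown;
}

int main(void)
{
	struct pool p = { .cur_max_size = 4096, .max_size = 4096 };
	time_t now = time(NULL);

	pool_note_shrink(&p, now);         /* pressure: cap 4096 -> 2048 */
	pool_update_max_size(&p, now);     /* too soon: cap stays 2048 */
	printf("cap during pressure: %u\n", p.cur_max_size);

	pool_update_max_size(&p, now + 2); /* >1s later: cap 2048 -> 4096 */
	printf("cap after recovery: %u\n", p.cur_max_size);
	return 0;
}

Note the asymmetry: back-to-back shrinker invocations decay the cap
geometrically toward zero, while recovery only doubles the cap once per
allocation after a quiet second, so a pool regains its full size only
under sustained allocation activity.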