From patchwork Tue Oct 10 08:53:45 2017
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 9995339
From: Christian König
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 4/4] drm/ttm: add transparent huge page support for wc or uc allocations v2
Date: Tue, 10 Oct 2017 10:53:45 +0200
Message-Id: <1507625625-8665-4-git-send-email-deathsimple@vodafone.de>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1507625625-8665-1-git-send-email-deathsimple@vodafone.de>
References: <1507625625-8665-1-git-send-email-deathsimple@vodafone.de>

From: Christian König

Add a new huge page pool and try to allocate from it when it makes sense.

v2: avoid compound pages for now

Signed-off-by: Christian König
Acked-by: Alex Deucher
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 136 ++++++++++++++++++++++++++++-------
 1 file changed, 109 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 3974732..b6f16e7ff 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -95,7 +95,7 @@ struct ttm_pool_opts {
 	unsigned	small;
 };
 
-#define NUM_POOLS 4
+#define NUM_POOLS 6
 
 /**
  * struct ttm_pool_manager - Holds memory pools for fst allocation
@@ -122,6 +122,8 @@ struct ttm_pool_manager {
 			struct ttm_page_pool	uc_pool;
 			struct ttm_page_pool	wc_pool_dma32;
 			struct ttm_page_pool	uc_pool_dma32;
+			struct ttm_page_pool	wc_pool_huge;
+			struct ttm_page_pool	uc_pool_huge;
 		} ;
 	};
 };
@@ -256,8 +258,8 @@ static int set_pages_array_uc(struct page **pages, int addrinarray)
 /**
  * Select the right pool or requested caching state and ttm flags.
  */
-static struct ttm_page_pool *ttm_get_pool(int flags,
-					  enum ttm_caching_state cstate)
+static struct ttm_page_pool *ttm_get_pool(int flags, bool huge,
+					  enum ttm_caching_state cstate)
 {
 	int pool_index;
 
@@ -269,9 +271,15 @@ static struct ttm_page_pool *ttm_get_pool(int flags,
 	else
 		pool_index = 0x1;
 
-	if (flags & TTM_PAGE_FLAG_DMA32)
+	if (flags & TTM_PAGE_FLAG_DMA32) {
+		if (huge)
+			return NULL;
 		pool_index |= 0x2;
 
+	} else if (huge) {
+		pool_index |= 0x4;
+	}
+
 	return &_manager->pools[pool_index];
 }
 
@@ -494,12 +502,14 @@ static void ttm_handle_caching_state_failure(struct list_head *pages,
  * pages returned in pages array.
  */
 static int ttm_alloc_new_pages(struct list_head *pages, gfp_t gfp_flags,
-		int ttm_flags, enum ttm_caching_state cstate, unsigned count)
+			       int ttm_flags, enum ttm_caching_state cstate,
+			       unsigned count, unsigned order)
 {
 	struct page **caching_array;
 	struct page *p;
 	int r = 0;
-	unsigned i, cpages;
+	unsigned i, j, cpages;
+	unsigned npages = 1 << order;
 	unsigned max_cpages = min(count,
 			(unsigned)(PAGE_SIZE/sizeof(struct page *)));
 
@@ -512,7 +522,7 @@ static int ttm_alloc_new_pages(struct list_head *pages, gfp_t gfp_flags,
 	}
 
 	for (i = 0, cpages = 0; i < count; ++i) {
-		p = alloc_page(gfp_flags);
+		p = alloc_pages(gfp_flags, order);
 
 		if (!p) {
 			pr_err("Unable to get page %u\n", i);
@@ -531,14 +541,18 @@ static int ttm_alloc_new_pages(struct list_head *pages, gfp_t gfp_flags,
 			goto out;
 		}
 
+		list_add(&p->lru, pages);
+
 #ifdef CONFIG_HIGHMEM
 		/* gfp flags of highmem page should never be dma32 so we
 		 * we should be fine in such case
 		 */
-		if (!PageHighMem(p))
+		if (PageHighMem(p))
+			continue;
+
 #endif
-		{
-			caching_array[cpages++] = p;
+
+		for (j = 0; j < npages; ++j) {
+			caching_array[cpages++] = p++;
 
 			if (cpages == max_cpages) {
 				r = ttm_set_pages_caching(caching_array,
@@ -552,8 +566,6 @@ static int ttm_alloc_new_pages(struct list_head *pages, gfp_t gfp_flags,
 				cpages = 0;
 			}
 		}
-
-		list_add(&p->lru, pages);
 	}
 
 	if (cpages) {
@@ -573,9 +585,9 @@ static int ttm_alloc_new_pages(struct list_head *pages, gfp_t gfp_flags,
  * Fill the given pool if there aren't enough pages and the requested number of
  * pages is small.
  */
-static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
-		int ttm_flags, enum ttm_caching_state cstate, unsigned count,
-		unsigned long *irq_flags)
+static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool, int ttm_flags,
+				      enum ttm_caching_state cstate,
+				      unsigned count, unsigned long *irq_flags)
 {
 	struct page *p;
 	int r;
@@ -605,7 +617,7 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
 
 		INIT_LIST_HEAD(&new_pages);
 		r = ttm_alloc_new_pages(&new_pages, pool->gfp_flags, ttm_flags,
-				cstate, alloc_size);
+				cstate, alloc_size, 0);
 		spin_lock_irqsave(&pool->lock, *irq_flags);
 
 		if (!r) {
@@ -635,7 +647,7 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
 static int ttm_page_pool_get_pages(struct ttm_page_pool *pool,
 				   struct list_head *pages,
 				   int ttm_flags, enum ttm_caching_state cstate,
-				   unsigned count)
+				   unsigned count, unsigned order)
 {
 	unsigned long irq_flags;
 	struct list_head *p;
@@ -643,7 +655,9 @@ static int ttm_page_pool_get_pages(struct ttm_page_pool *pool,
 	int r = 0;
 
 	spin_lock_irqsave(&pool->lock, irq_flags);
-	ttm_page_pool_fill_locked(pool, ttm_flags, cstate, count, &irq_flags);
+	if (!order)
+		ttm_page_pool_fill_locked(pool, ttm_flags, cstate, count,
+					  &irq_flags);
 
 	if (count >= pool->npages) {
 		/* take all pages from the pool */
@@ -698,7 +712,7 @@ static int ttm_page_pool_get_pages(struct ttm_page_pool *pool,
 		 * multiple requests in parallel.
 		 **/
 		r = ttm_alloc_new_pages(pages, gfp_flags, ttm_flags, cstate,
-					count);
+					count, order);
 	}
 
 	return r;
@@ -708,8 +722,9 @@ static int ttm_page_pool_get_pages(struct ttm_page_pool *pool,
 static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
 			  enum ttm_caching_state cstate)
 {
+	struct ttm_page_pool *pool = ttm_get_pool(flags, false, cstate);
+	struct ttm_page_pool *huge = ttm_get_pool(flags, true, cstate);
 	unsigned long irq_flags;
-	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	unsigned i;
 
 	if (pool == NULL) {
@@ -737,8 +752,48 @@ static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
 		return;
 	}
 
+	i = 0;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (huge) {
+		unsigned max_size, n2free;
+
+		spin_lock_irqsave(&huge->lock, irq_flags);
+		while (i < npages) {
+			struct page *p = pages[i];
+			unsigned j;
+
+			if (!p)
+				break;
+
+			for (j = 0; j < HPAGE_PMD_NR; ++j)
+				if (p++ != pages[i + j])
+					break;
+
+			if (j != HPAGE_PMD_NR)
+				break;
+
+			list_add_tail(&pages[i]->lru, &huge->list);
+
+			for (j = 0; j < HPAGE_PMD_NR; ++j)
+				pages[i++] = NULL;
+			huge->npages++;
+		}
+
+		/* Check that we don't go over the pool limit */
+		max_size = _manager->options.max_size;
+		max_size /= HPAGE_PMD_NR;
+		if (huge->npages > max_size)
+			n2free = huge->npages - max_size;
+		else
+			n2free = 0;
+		spin_unlock_irqrestore(&huge->lock, irq_flags);
+		if (n2free)
+			ttm_page_pool_free(huge, n2free, false);
+	}
+#endif
+
 	spin_lock_irqsave(&pool->lock, irq_flags);
-	for (i = 0; i < npages; i++) {
+	while (i < npages) {
 		if (pages[i]) {
 			if (page_count(pages[i]) != 1)
 				pr_err("Erroneous page count. Leaking pages.\n");
@@ -746,6 +801,7 @@ static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
 			pages[i] = NULL;
 			pool->npages++;
 		}
+		++i;
 	}
 	/* Check that we don't go over the pool limit */
 	npages = 0;
@@ -768,7 +824,8 @@ static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
 static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 			 enum ttm_caching_state cstate)
 {
-	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
+	struct ttm_page_pool *pool = ttm_get_pool(flags, false, cstate);
+	struct ttm_page_pool *huge = ttm_get_pool(flags, true, cstate);
 	struct list_head plist;
 	struct page *p = NULL;
 	unsigned count;
@@ -821,11 +878,28 @@ static int ttm_get_pages(struct page **pages, unsigned npages, int flags,
 		return 0;
 	}
 
-	/* First we take pages from the pool */
+	count = 0;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (huge && npages >= HPAGE_PMD_NR) {
+		INIT_LIST_HEAD(&plist);
+		ttm_page_pool_get_pages(huge, &plist, flags, cstate,
+					npages / HPAGE_PMD_NR,
+					HPAGE_PMD_ORDER);
+
+		list_for_each_entry(p, &plist, lru) {
+			unsigned j;
+
+			for (j = 0; j < HPAGE_PMD_NR; ++j)
+				pages[count++] = &p[j];
+		}
+	}
+#endif
+
 	INIT_LIST_HEAD(&plist);
-	r = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+	r = ttm_page_pool_get_pages(pool, &plist, flags, cstate,
+				    npages - count, 0);
 
-	count = 0;
 	list_for_each_entry(p, &plist, lru)
 		pages[count++] = p;
 
@@ -872,6 +946,14 @@ int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages)
 	ttm_page_pool_init_locked(&_manager->uc_pool_dma32,
 				  GFP_USER | GFP_DMA32, "uc dma");
 
+	ttm_page_pool_init_locked(&_manager->wc_pool_huge,
+				  GFP_TRANSHUGE & ~(__GFP_MOVABLE | __GFP_COMP),
+				  "wc huge");
+
+	ttm_page_pool_init_locked(&_manager->uc_pool_huge,
+				  GFP_TRANSHUGE & ~(__GFP_MOVABLE | __GFP_COMP)
+				  , "uc huge");
+
 	_manager->options.max_size = max_pages;
 	_manager->options.small = SMALL_ALLOCATION;
 	_manager->options.alloc_size = NUM_PAGES_TO_ALLOC;
@@ -1041,12 +1123,12 @@ int ttm_page_alloc_debugfs(struct seq_file *m, void *data)
 		seq_printf(m, "No pool allocator running.\n");
 		return 0;
 	}
-	seq_printf(m, "%6s %12s %13s %8s\n",
+	seq_printf(m, "%7s %12s %13s %8s\n",
 			h[0], h[1], h[2], h[3]);
 	for (i = 0; i < NUM_POOLS; ++i) {
 		p = &_manager->pools[i];
-		seq_printf(m, "%6s %12ld %13ld %8d\n",
+		seq_printf(m, "%7s %12ld %13ld %8d\n",
 				p->name, p->nrefills,
 				p->nfrees, p->npages);
 	}
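
As an aside for readers without the full file in front of them: after this change the six pools are indexed with a small bitmask in ttm_get_pool(), bit 0 picking uncached vs. write-combined, bit 1 the DMA32 variants and bit 2 the two new huge page pools, with huge DMA32 requests returning NULL so the caller falls back to the regular path. The snippet below is a minimal standalone userspace sketch of that selection scheme; the demo_* names and the cached early-out are my own illustration, not code from this patch.

/* Standalone illustration of the pool index selection; demo_* names are
 * made up for this sketch and do not exist in the kernel. */
#include <stdbool.h>
#include <stdio.h>

enum demo_caching { DEMO_CACHED, DEMO_WC, DEMO_UC };

#define DEMO_FLAG_DMA32	0x1	/* stands in for TTM_PAGE_FLAG_DMA32 */
#define DEMO_NUM_POOLS	6

static const char *demo_pool_names[DEMO_NUM_POOLS] = {
	"wc", "uc", "wc dma", "uc dma", "wc huge", "uc huge"
};

/* Mirrors the index selection in ttm_get_pool(); -1 stands in for NULL. */
static int demo_get_pool_index(int flags, bool huge, enum demo_caching cstate)
{
	int pool_index;

	if (cstate == DEMO_CACHED)
		return -1;	/* cached allocations bypass the pools */

	if (cstate == DEMO_WC)
		pool_index = 0x0;
	else
		pool_index = 0x1;

	if (flags & DEMO_FLAG_DMA32) {
		if (huge)
			return -1;	/* no huge pools for DMA32 requests */
		pool_index |= 0x2;
	} else if (huge) {
		pool_index |= 0x4;	/* the two pools added by this patch */
	}

	return pool_index;
}

int main(void)
{
	int idx = demo_get_pool_index(0, true, DEMO_UC);

	if (idx >= 0)
		printf("uc huge -> pools[%d] (%s)\n", idx, demo_pool_names[idx]);
	return 0;
}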