From patchwork Tue Mar 7 14:46:16 2023
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 13163823
From: Thomas Hellström
To: dri-devel@lists.freedesktop.org
Cc: Thomas Hellström, intel-gfx@lists.freedesktop.org, Huang Rui,
 Matthew Auld, Dave Airlie, Christian König
Subject: [PATCH v2 2/7] drm/ttm/pool: Fix ttm_pool_alloc error path
Date: Tue, 7 Mar 2023 15:46:16 +0100
Message-Id: <20230307144621.10748-3-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20230307144621.10748-1-thomas.hellstrom@linux.intel.com>
References: <20230307144621.10748-1-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.39.2

When hitting an error, the error path forgot to unmap dma mappings and
could call set_pages_wb() on already uncached pages. Fix this by
introducing a common __ttm_pool_free() function that does the right
thing.
v2:
- Simplify __ttm_pool_free() (Christian König)

Fixes: d099fc8f540a ("drm/ttm: new TT backend allocation pool v3")
Cc: Christian König
Cc: Dave Airlie
Cc: Christian Koenig
Cc: Huang Rui
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/ttm/ttm_pool.c | 68 +++++++++++++++++++---------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index aa116a7bbae3..0b6e20613d19 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -367,6 +367,30 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 	return 0;
 }
 
+static void __ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt,
+			    enum ttm_caching caching,
+			    pgoff_t start_page, pgoff_t end_page)
+{
+	struct page **pages = tt->pages;
+	unsigned int order;
+	pgoff_t i, nr;
+
+	for (i = start_page; i < end_page; i += nr, pages += nr) {
+		struct ttm_pool_type *pt = NULL;
+
+		order = ttm_pool_page_order(pool, *pages);
+		nr = (1UL << order);
+		if (tt->dma_address)
+			ttm_pool_unmap(pool, tt->dma_address[i], nr);
+
+		pt = ttm_pool_select_type(pool, caching, order);
+		if (pt)
+			ttm_pool_type_give(pt, *pages);
+		else
+			ttm_pool_free_page(pool, caching, order, *pages);
+	}
+}
+
 /**
  * ttm_pool_alloc - Fill a ttm_tt object
  *
@@ -382,12 +406,14 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 		   struct ttm_operation_ctx *ctx)
 {
-	unsigned long num_pages = tt->num_pages;
+	pgoff_t num_pages = tt->num_pages;
 	dma_addr_t *dma_addr = tt->dma_address;
 	struct page **caching = tt->pages;
 	struct page **pages = tt->pages;
+	enum ttm_caching page_caching;
 	gfp_t gfp_flags = GFP_USER;
-	unsigned int i, order;
+	pgoff_t caching_divide;
+	unsigned int order;
 	struct page *p;
 	int r;
 
@@ -410,6 +436,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	     order = min_t(unsigned int, order, __fls(num_pages))) {
 		struct ttm_pool_type *pt;
 
+		page_caching = tt->caching;
 		pt = ttm_pool_select_type(pool, tt->caching, order);
 		p = pt ? ttm_pool_type_take(pt) : NULL;
 		if (p) {
@@ -418,6 +445,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 			if (r)
 				goto error_free_page;
 
+			caching = pages;
 			do {
 				r = ttm_pool_page_allocated(pool, order, p,
 							    &dma_addr,
@@ -426,14 +454,15 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 				if (r)
 					goto error_free_page;
 
+				caching = pages;
 				if (num_pages < (1 << order))
 					break;
 
 				p = ttm_pool_type_take(pt);
 			} while (p);
-			caching = pages;
 		}
 
+		page_caching = ttm_cached;
 		while (num_pages >= (1 << order) &&
 		       (p = ttm_pool_alloc_page(pool, gfp_flags, order))) {
 
@@ -442,6 +471,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 							   tt->caching);
 				if (r)
 					goto error_free_page;
+				caching = pages;
 			}
 			r = ttm_pool_page_allocated(pool, order, p, &dma_addr,
 						    &num_pages, &pages);
@@ -468,15 +498,13 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	return 0;
 
 error_free_page:
-	ttm_pool_free_page(pool, tt->caching, order, p);
+	ttm_pool_free_page(pool, page_caching, order, p);
 
 error_free_all:
 	num_pages = tt->num_pages - num_pages;
-	for (i = 0; i < num_pages; ) {
-		order = ttm_pool_page_order(pool, tt->pages[i]);
-		ttm_pool_free_page(pool, tt->caching, order, tt->pages[i]);
-		i += 1 << order;
-	}
+	caching_divide = caching - tt->pages;
+	__ttm_pool_free(pool, tt, tt->caching, 0, caching_divide);
+	__ttm_pool_free(pool, tt, ttm_cached, caching_divide, num_pages);
 
 	return r;
 }
@@ -492,27 +520,7 @@ EXPORT_SYMBOL(ttm_pool_alloc);
  */
 void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 {
-	unsigned int i;
-
-	for (i = 0; i < tt->num_pages; ) {
-		struct page *p = tt->pages[i];
-		unsigned int order, num_pages;
-		struct ttm_pool_type *pt;
-
-		order = ttm_pool_page_order(pool, p);
-		num_pages = 1ULL << order;
-		if (tt->dma_address)
-			ttm_pool_unmap(pool, tt->dma_address[i], num_pages);
-
-		pt = ttm_pool_select_type(pool, tt->caching, order);
-		if (pt)
-			ttm_pool_type_give(pt, tt->pages[i]);
-		else
-			ttm_pool_free_page(pool, tt->caching, order,
-					   tt->pages[i]);
-
-		i += num_pages;
-	}
+	__ttm_pool_free(pool, tt, tt->caching, 0, tt->num_pages);
 
 	while (atomic_long_read(&allocated_pages) > page_pool_size)
 		ttm_pool_shrink();
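
For reference, a small self-contained sketch of the bookkeeping the new error
path performs (plain user-space C, not part of the patch; the numbers and the
free_range() helper are made up for illustration). Pages before the caching
divide already had the requested caching applied and are released with
tt->caching, while pages after it are still write-back cached and are released
as ttm_cached, so set_pages_wb() is never called on an already uncached page:

/* Illustrative model only -- not part of the patch. */
#include <stdio.h>

static void free_range(const char *caching, unsigned long start,
		       unsigned long end)
{
	if (start < end)
		printf("free pages [%lu, %lu) as %s\n", start, end, caching);
}

int main(void)
{
	unsigned long freed_pages = 10;   /* pages allocated before the error */
	unsigned long caching_divide = 6; /* pages already switched to the
					   * requested caching */

	/* Mirrors the two __ttm_pool_free() calls under error_free_all: */
	free_range("tt->caching (e.g. write-combined)", 0, caching_divide);
	free_range("ttm_cached (still write-back)", caching_divide, freed_pages);
	return 0;
}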