From patchwork Wed Feb 15 16:13:51 2023
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 13141844
From: Thomas Hellström
To: dri-devel@lists.freedesktop.org
Cc: Thomas Hellström, Christian König, Dave Airlie, Madhav Chauhan,
 Huang Rui, Andrew Morton, "Matthew Wilcox (Oracle)", Miaohe Lin,
 David Hildenbrand, Johannes Weiner, Peter Xu, NeilBrown,
 Daniel Vetter, Dave Hansen, Matthew Auld,
 linux-graphics-maintainer@vmware.com, linux-mm@kvack.org,
 intel-gfx@lists.freedesktop.org
Subject: [RFC PATCH 02/16] drm/ttm/pool: Fix ttm_pool_alloc error path
Date: Wed, 15 Feb 2023 17:13:51 +0100
Message-Id: <20230215161405.187368-3-thomas.hellstrom@linux.intel.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
References: <20230215161405.187368-1-thomas.hellstrom@linux.intel.com>
MIME-Version: 1.0

When hitting an error, the error path forgot to unmap DMA mappings and
could call set_pages_wb() on already uncached pages. Fix this by
introducing a common __ttm_pool_free() function that does the right
thing.
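To make the failure mode concrete, below is a minimal, self-contained
sketch of the pattern the new helper implements: one free routine walks
the page array and switches the caching argument when it crosses the
divide between pages whose caching was already converted and pages
still in the default write-back state. This is an illustration only,
not TTM code; all demo_* names are invented for the example.

/*
 * Illustration only (not TTM code): free a range of pages where
 * everything before 'caching_divide' was converted to its final
 * caching state and everything from it onward is still in the
 * default state, so the free routine must switch arguments mid-walk.
 */
#include <stddef.h>
#include <stdio.h>

enum demo_caching { DEMO_CACHED, DEMO_UNCACHED };

/* Stand-in for a real page-freeing call that must match page state. */
static void demo_release_page(int page, enum demo_caching caching)
{
	printf("freeing page %d as %s\n", page,
	       caching == DEMO_CACHED ? "cached" : "uncached");
}

static void demo_free_range(int *pages, size_t num_pages,
			    int *caching_divide,
			    enum demo_caching initial_caching,
			    enum demo_caching subseq_caching)
{
	enum demo_caching caching;
	size_t i;

	/* No divide given: the whole range shares one caching state. */
	caching = caching_divide ? initial_caching : subseq_caching;

	for (i = 0; i < num_pages; i++) {
		/* Crossing the divide: the rest was never converted. */
		if (caching_divide == &pages[i])
			caching = subseq_caching;
		demo_release_page(pages[i], caching);
	}
}

int main(void)
{
	int pages[4] = { 0, 1, 2, 3 };

	/* Pages 0..1 were made uncached; 2..3 are still write-back. */
	demo_free_range(pages, 4, &pages[2], DEMO_UNCACHED, DEMO_CACHED);
	return 0;
}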
Fixes: d099fc8f540a ("drm/ttm: new TT backend allocation pool v3")
Cc: Christian König
Cc: Dave Airlie
Cc: Madhav Chauhan
Cc: Christian Koenig
Cc: Huang Rui
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Thomas Hellström
---
 drivers/gpu/drm/ttm/ttm_pool.c | 74 +++++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index aa116a7bbae3..1cc7591a9542 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -367,6 +367,39 @@ static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order,
 	return 0;
 }
 
+static void __ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt,
+			    struct page **caching_divide,
+			    enum ttm_caching initial_caching,
+			    enum ttm_caching subseq_caching,
+			    pgoff_t num_pages)
+{
+	enum ttm_caching caching = subseq_caching;
+	struct page **pages = tt->pages;
+	unsigned int order;
+	pgoff_t i, nr;
+
+	if (pool && caching_divide)
+		caching = initial_caching;
+
+	for (i = 0; i < num_pages; i += nr, pages += nr) {
+		struct ttm_pool_type *pt = NULL;
+
+		if (unlikely(caching_divide == pages))
+			caching = subseq_caching;
+
+		order = ttm_pool_page_order(pool, *pages);
+		nr = (1UL << order);
+		if (tt->dma_address)
+			ttm_pool_unmap(pool, tt->dma_address[i], nr);
+
+		pt = ttm_pool_select_type(pool, caching, order);
+		if (pt)
+			ttm_pool_type_give(pt, *pages);
+		else
+			ttm_pool_free_page(pool, caching, order, *pages);
+	}
+}
+
 /**
  * ttm_pool_alloc - Fill a ttm_tt object
  *
@@ -386,8 +419,9 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	dma_addr_t *dma_addr = tt->dma_address;
 	struct page **caching = tt->pages;
 	struct page **pages = tt->pages;
+	enum ttm_caching page_caching;
 	gfp_t gfp_flags = GFP_USER;
-	unsigned int i, order;
+	unsigned int order;
 	struct page *p;
 	int r;
 
@@ -410,6 +444,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	     order = min_t(unsigned int, order, __fls(num_pages))) {
 		struct ttm_pool_type *pt;
 
+		page_caching = tt->caching;
 		pt = ttm_pool_select_type(pool, tt->caching, order);
 		p = pt ? ttm_pool_type_take(pt) : NULL;
 		if (p) {
@@ -418,6 +453,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 			if (r)
 				goto error_free_page;
 
+			caching = pages;
 			do {
 				r = ttm_pool_page_allocated(pool, order, p,
 							    &dma_addr,
@@ -426,14 +462,15 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 				if (r)
 					goto error_free_page;
 
+				caching = pages;
 				if (num_pages < (1 << order))
 					break;
 
 				p = ttm_pool_type_take(pt);
 			} while (p);
-			caching = pages;
 		}
 
+		page_caching = ttm_cached;
 		while (num_pages >= (1 << order) &&
 		       (p = ttm_pool_alloc_page(pool, gfp_flags, order))) {
 
@@ -442,6 +479,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 							   tt->caching);
 				if (r)
 					goto error_free_page;
+				caching = pages;
 			}
 			r = ttm_pool_page_allocated(pool, order, p, &dma_addr,
 						    &num_pages, &pages);
@@ -468,15 +506,12 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	return 0;
 
 error_free_page:
-	ttm_pool_free_page(pool, tt->caching, order, p);
+	ttm_pool_free_page(pool, page_caching, order, p);
 
 error_free_all:
 	num_pages = tt->num_pages - num_pages;
-	for (i = 0; i < num_pages; ) {
-		order = ttm_pool_page_order(pool, tt->pages[i]);
-		ttm_pool_free_page(pool, tt->caching, order, tt->pages[i]);
-		i += 1 << order;
-	}
+	__ttm_pool_free(pool, tt, caching, tt->caching, ttm_cached,
+			num_pages);
 
 	return r;
 }
@@ -492,27 +527,8 @@ EXPORT_SYMBOL(ttm_pool_alloc);
  */
 void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt)
 {
-	unsigned int i;
-
-	for (i = 0; i < tt->num_pages; ) {
-		struct page *p = tt->pages[i];
-		unsigned int order, num_pages;
-		struct ttm_pool_type *pt;
-
-		order = ttm_pool_page_order(pool, p);
-		num_pages = 1ULL << order;
-		if (tt->dma_address)
-			ttm_pool_unmap(pool, tt->dma_address[i], num_pages);
-
-		pt = ttm_pool_select_type(pool, tt->caching, order);
-		if (pt)
-			ttm_pool_type_give(pt, tt->pages[i]);
-		else
-			ttm_pool_free_page(pool, tt->caching, order,
-					   tt->pages[i]);
-
-		i += num_pages;
-	}
+	__ttm_pool_free(pool, tt, NULL, tt->caching, tt->caching,
+			tt->num_pages);
 
 	while (atomic_long_read(&allocated_pages) > page_pool_size)
 		ttm_pool_shrink();
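
For clarity, the two call sites in the patch exercise the helper in its
two modes: ttm_pool_free() passes no caching divide because every page
is in tt->caching, while the ttm_pool_alloc() error path passes the
divide pointer so that pages beyond it, whose caching was never
converted, are freed as ttm_cached:

	/* Regular free: one caching state for the whole range. */
	__ttm_pool_free(pool, tt, NULL, tt->caching, tt->caching,
			tt->num_pages);

	/* Alloc error unwind: pages past 'caching' are still write-back. */
	__ttm_pool_free(pool, tt, caching, tt->caching, ttm_cached,
			num_pages);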