From patchwork Tue Dec 3 13:22:38 2019
X-Patchwork-Submitter: Thomas Hellström (Intel)
X-Patchwork-Id: 11271253
From: Thomas Hellström (VMware)
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org
Cc: pv-drivers@vmware.com, linux-graphics-maintainer@vmware.com,
    Thomas Hellstrom, Andrew Morton, Michal Hocko,
    "Matthew Wilcox (Oracle)", "Kirill A. Shutemov", Ralph Campbell,
    Jérôme Glisse, Christian König
Subject: [PATCH 7/8] drm/ttm: Introduce a huge page aligning TTM range manager.
Date: Tue, 3 Dec 2019 14:22:38 +0100
Message-Id: <20191203132239.5910-8-thomas_os@shipmail.org>
In-Reply-To: <20191203132239.5910-1-thomas_os@shipmail.org>
References: <20191203132239.5910-1-thomas_os@shipmail.org>

From: Thomas Hellstrom

Using huge page-table entries requires that the start of a buffer object
is huge page size aligned. So introduce a ttm_bo_man_get_node_huge()
function that attempts to accomplish this for allocations that are larger
than the huge page size, and provide a new range-manager instance that
uses that function.

Cc: Andrew Morton
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill A. Shutemov"
Cc: Ralph Campbell
Cc: Jérôme Glisse
Cc: Christian König
Signed-off-by: Thomas Hellstrom
---
 drivers/gpu/drm/ttm/ttm_bo_manager.c | 92 ++++++++++++++++++++++++++++
 include/drm/ttm/ttm_bo_driver.h      |  1 +
 2 files changed, 93 insertions(+)
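For reference, with the common x86-64 configuration of 4 KiB base pages,
HPAGE_PMD_SIZE (2 MiB) and HPAGE_PUD_SIZE (1 GiB) correspond to align_pages
values of 512 and 262144, respectively. The helper below is an illustrative
sketch only, not part of the patch; huge_align_compatible() is a hypothetical
name that simply mirrors the compatibility test ttm_bo_insert_aligned()
performs in the diff below:

/*
 * Sketch only (not part of the patch): a huge-page alignment is usable
 * only when it is at least as strict as, and a whole multiple of, the
 * alignment the caller already requested in mem->page_alignment
 * (0 means "no alignment requirement").
 */
static inline bool huge_align_compatible(unsigned long align_pages,
					 unsigned long page_alignment)
{
	return align_pages >= page_alignment &&
	       (!page_alignment || align_pages % page_alignment == 0);
}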
Shutemov" Cc: Ralph Campbell Cc: "Jérôme Glisse" Cc: "Christian König" Signed-off-by: Thomas Hellstrom --- drivers/gpu/drm/ttm/ttm_bo_manager.c | 92 ++++++++++++++++++++++++++++ include/drm/ttm/ttm_bo_driver.h | 1 + 2 files changed, 93 insertions(+) diff --git a/drivers/gpu/drm/ttm/ttm_bo_manager.c b/drivers/gpu/drm/ttm/ttm_bo_manager.c index 18d3debcc949..26aa1a2ae7f1 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_manager.c +++ b/drivers/gpu/drm/ttm/ttm_bo_manager.c @@ -89,6 +89,89 @@ static int ttm_bo_man_get_node(struct ttm_mem_type_manager *man, return 0; } +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +static int ttm_bo_insert_aligned(struct drm_mm *mm, struct drm_mm_node *node, + unsigned long align_pages, + const struct ttm_place *place, + struct ttm_mem_reg *mem, + unsigned long lpfn, + enum drm_mm_insert_mode mode) +{ + if (align_pages >= mem->page_alignment && + (!mem->page_alignment || align_pages % mem->page_alignment == 0)) { + return drm_mm_insert_node_in_range(mm, node, + mem->num_pages, + align_pages, 0, + place->fpfn, lpfn, mode); + } + + return -ENOSPC; +} + +static int ttm_bo_man_get_node_huge(struct ttm_mem_type_manager *man, + struct ttm_buffer_object *bo, + const struct ttm_place *place, + struct ttm_mem_reg *mem) +{ + struct ttm_range_manager *rman = (struct ttm_range_manager *) man->priv; + struct drm_mm *mm = &rman->mm; + struct drm_mm_node *node; + unsigned long align_pages; + unsigned long lpfn; + enum drm_mm_insert_mode mode = DRM_MM_INSERT_BEST; + int ret; + + node = kzalloc(sizeof(*node), GFP_KERNEL); + if (!node) + return -ENOMEM; + + lpfn = place->lpfn; + if (!lpfn) + lpfn = man->size; + + mode = DRM_MM_INSERT_BEST; + if (place->flags & TTM_PL_FLAG_TOPDOWN) + mode = DRM_MM_INSERT_HIGH; + + spin_lock(&rman->lock); + if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)) { + align_pages = (HPAGE_PUD_SIZE >> PAGE_SHIFT); + if (mem->num_pages >= align_pages) { + ret = ttm_bo_insert_aligned(mm, node, align_pages, + place, mem, lpfn, mode); + if (!ret) + goto found_unlock; + } + } + + align_pages = (HPAGE_PMD_SIZE >> PAGE_SHIFT); + if (mem->num_pages >= align_pages) { + ret = ttm_bo_insert_aligned(mm, node, align_pages, place, mem, + lpfn, mode); + if (!ret) + goto found_unlock; + } + + ret = drm_mm_insert_node_in_range(mm, node, mem->num_pages, + mem->page_alignment, 0, + place->fpfn, lpfn, mode); +found_unlock: + spin_unlock(&rman->lock); + + if (unlikely(ret)) { + kfree(node); + } else { + mem->mm_node = node; + mem->start = node->start; + } + + return 0; +} +#else +#define ttm_bo_man_get_node_huge ttm_bo_man_get_node +#endif + + static void ttm_bo_man_put_node(struct ttm_mem_type_manager *man, struct ttm_mem_reg *mem) { @@ -154,3 +237,12 @@ const struct ttm_mem_type_manager_func ttm_bo_manager_func = { .debug = ttm_bo_man_debug }; EXPORT_SYMBOL(ttm_bo_manager_func); + +const struct ttm_mem_type_manager_func ttm_bo_manager_huge_func = { + .init = ttm_bo_man_init, + .takedown = ttm_bo_man_takedown, + .get_node = ttm_bo_man_get_node_huge, + .put_node = ttm_bo_man_put_node, + .debug = ttm_bo_man_debug +}; +EXPORT_SYMBOL(ttm_bo_manager_huge_func); diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h index cac7a8a0825a..868bd0d4be6a 100644 --- a/include/drm/ttm/ttm_bo_driver.h +++ b/include/drm/ttm/ttm_bo_driver.h @@ -888,5 +888,6 @@ int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo); pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp); extern const struct ttm_mem_type_manager_func ttm_bo_manager_func; +extern const struct 
ttm_mem_type_manager_func ttm_bo_manager_huge_func; #endif
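As a usage sketch, a driver built on these TTM interfaces would opt in by
pointing a memory type at the new manager instance from its init_mem_type()
callback instead of ttm_bo_manager_func. The function name and the flag,
caching, and placement values below are hypothetical and for illustration
only; this hookup is not part of this patch:

/*
 * Hypothetical driver-side hookup (not part of this patch): select the
 * huge page aligning range manager for a range-managed memory type.
 */
static int my_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
				   struct ttm_mem_type_manager *man)
{
	switch (type) {
	case TTM_PL_VRAM:
		/* Huge page aligning manager instead of ttm_bo_manager_func */
		man->func = &ttm_bo_manager_huge_func;
		man->flags = TTM_MEMTYPE_FLAG_FIXED | TTM_MEMTYPE_FLAG_MAPPABLE;
		man->available_caching = TTM_PL_FLAG_UNCACHED | TTM_PL_FLAG_WC;
		man->default_caching = TTM_PL_FLAG_WC;
		break;
	default:
		return -EINVAL;
	}
	return 0;
}

The memory type is still brought up through ttm_bo_init_mm() as before; only
the manager func pointer changes, so get_node requests for that type go
through ttm_bo_man_get_node_huge() and fall back from PUD to PMD to plain
alignment as described above.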