From patchwork Tue Feb 9 01:07:21 2021
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 12076969
From: Alistair Popple <apopple@nvidia.com>
Subject: [PATCH 8/9] nouveau/dmem: Add support for multiple page types
Date: Tue, 9 Feb 2021 12:07:21 +1100
Message-ID: <20210209010722.13839-9-apopple@nvidia.com>
In-Reply-To: <20210209010722.13839-1-apopple@nvidia.com>
References: <20210209010722.13839-1-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
List-ID: linux-mm.kvack.org

Device private pages are used to track a per-page migrate_to_ram()
callback, which is called when the CPU attempts to access a GPU page.
Currently the same callback is used for all GPU pages tracked by
Nouveau. However, a future patch requires support for calling a
different callback when some GPU pages are accessed.

This patch extends the existing Nouveau device private page allocator
to make it easier to allocate device private pages with different
callbacks. It should not introduce any functional changes.

Signed-off-by: Alistair Popple <apopple@nvidia.com>
---
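As a reviewer aid, here is a stand-alone model of the per-type free
list pattern the patch introduces. This is user-space C rather than
kernel code: dmem_page and free_link stand in for struct page and
page->zone_device_data, the dmem->lock spinlock is omitted, and all
names are invented for the sketch.

#include <assert.h>
#include <stdio.h>

enum dmem_type {
	DMEM_DEFAULT,
	DMEM_NTYPES,	/* number of types, must be last */
};

struct dmem_page {
	enum dmem_type type;
	void *free_link;	/* stands in for page->zone_device_data */
};

/* One free list head per type, as in free_pages[NOUVEAU_DMEM_NTYPES]. */
static struct dmem_page *free_pages[DMEM_NTYPES];

/* Push a page onto the list matching its type (cf. nouveau_dmem_page_free()). */
static void page_free(struct dmem_page *page)
{
	page->free_link = free_pages[page->type];
	free_pages[page->type] = page;
}

/* Pop a page of the requested type (cf. nouveau_dmem_page_alloc_locked()). */
static struct dmem_page *page_alloc(enum dmem_type type)
{
	struct dmem_page *page = free_pages[type];

	if (page)
		free_pages[type] = page->free_link;
	return page;	/* NULL would mean "allocate a new chunk" */
}

int main(void)
{
	struct dmem_page p = { .type = DMEM_DEFAULT };

	page_free(&p);
	assert(page_alloc(DMEM_DEFAULT) == &p);
	assert(page_alloc(DMEM_DEFAULT) == NULL);
	printf("per-type free lists behave as expected\n");
	return 0;
}

Keeping the NTYPES enumerator last means the free_pages[] array sizes
itself automatically as new types are added.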
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 27 +++++++++++++++------------
 drivers/gpu/drm/nouveau/nouveau_dmem.h |  5 +++++
 2 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 9579bd001f11..8fb4949f3778 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -67,6 +67,7 @@ struct nouveau_dmem_chunk {
 	struct nouveau_bo *bo;
 	struct nouveau_drm *drm;
 	unsigned long callocated;
+	enum nouveau_dmem_type type;
 	struct dev_pagemap pagemap;
 };
 
@@ -81,7 +82,7 @@ struct nouveau_dmem {
 	struct nouveau_dmem_migrate migrate;
 	struct list_head chunks;
 	struct mutex mutex;
-	struct page *free_pages;
+	struct page *free_pages[NOUVEAU_DMEM_NTYPES];
 	spinlock_t lock;
 };
 
@@ -112,8 +113,8 @@ static void nouveau_dmem_page_free(struct page *page)
 	struct nouveau_dmem *dmem = chunk->drm->dmem;
 
 	spin_lock(&dmem->lock);
-	page->zone_device_data = dmem->free_pages;
-	dmem->free_pages = page;
+	page->zone_device_data = dmem->free_pages[chunk->type];
+	dmem->free_pages[chunk->type] = page;
 
 	WARN_ON(!chunk->callocated);
 	chunk->callocated--;
@@ -224,7 +225,8 @@ static const struct dev_pagemap_ops nouveau_dmem_pagemap_ops = {
 };
 
 static int
-nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
+nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage,
+			 enum nouveau_dmem_type type)
 {
 	struct nouveau_dmem_chunk *chunk;
 	struct resource *res;
@@ -248,6 +250,7 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 	}
 
 	chunk->drm = drm;
+	chunk->type = type;
 	chunk->pagemap.type = MEMORY_DEVICE_PRIVATE;
 	chunk->pagemap.range.start = res->start;
 	chunk->pagemap.range.end = res->end;
@@ -279,8 +282,8 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 	page = pfn_to_page(pfn_first);
 	spin_lock(&drm->dmem->lock);
 	for (i = 0; i < DMEM_CHUNK_NPAGES - 1; ++i, ++page) {
-		page->zone_device_data = drm->dmem->free_pages;
-		drm->dmem->free_pages = page;
+		page->zone_device_data = drm->dmem->free_pages[type];
+		drm->dmem->free_pages[type] = page;
 	}
 	*ppage = page;
 	chunk->callocated++;
@@ -304,22 +307,22 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 }
 
 static struct page *
-nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm)
+nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm, enum nouveau_dmem_type type)
 {
 	struct nouveau_dmem_chunk *chunk;
 	struct page *page = NULL;
 	int ret;
 
 	spin_lock(&drm->dmem->lock);
-	if (drm->dmem->free_pages) {
-		page = drm->dmem->free_pages;
-		drm->dmem->free_pages = page->zone_device_data;
+	if (drm->dmem->free_pages[type]) {
+		page = drm->dmem->free_pages[type];
+		drm->dmem->free_pages[type] = page->zone_device_data;
 		chunk = nouveau_page_to_chunk(page);
 		chunk->callocated++;
 		spin_unlock(&drm->dmem->lock);
 	} else {
 		spin_unlock(&drm->dmem->lock);
-		ret = nouveau_dmem_chunk_alloc(drm, &page);
+		ret = nouveau_dmem_chunk_alloc(drm, &page, type);
 		if (ret)
 			return NULL;
 	}
@@ -577,7 +580,7 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 	if (!(src & MIGRATE_PFN_MIGRATE))
 		goto out;
 
-	dpage = nouveau_dmem_page_alloc_locked(drm);
+	dpage = nouveau_dmem_page_alloc_locked(drm, NOUVEAU_DMEM);
 	if (!dpage)
 		goto out;
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.h b/drivers/gpu/drm/nouveau/nouveau_dmem.h
index 64da5d3635c8..02e261c4acf1 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.h
@@ -28,6 +28,11 @@ struct nouveau_drm;
 struct nouveau_svmm;
 struct hmm_range;
 
+enum nouveau_dmem_type {
+	NOUVEAU_DMEM,
+	NOUVEAU_DMEM_NTYPES, /* Number of types, must be last */
+};
+
 #if IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM)
 void nouveau_dmem_init(struct nouveau_drm *);
 void nouveau_dmem_fini(struct nouveau_drm *);
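
A closing note on where this is headed: the commit message says a future
patch needs a different migrate_to_ram() callback for some pages. The
stand-alone sketch below models that goal as a callback table indexed by
the owning chunk's type. In the real driver the dispatch would happen
through each chunk's dev_pagemap_ops rather than an explicit table, and
every name here (dmem_chunk, migrate_ops, handle_cpu_fault) is invented
for the illustration, not taken from this series.

#include <stdio.h>

enum dmem_type { DMEM_DEFAULT, DMEM_NTYPES };

struct dmem_chunk {
	enum dmem_type type;
};

struct dmem_page {
	struct dmem_chunk *chunk;
};

typedef int (*migrate_fn)(struct dmem_page *);

/* One migrate_to_ram()-style handler per page type. */
static int migrate_default(struct dmem_page *page)
{
	(void)page;
	printf("default migrate_to_ram handler\n");
	return 0;
}

static const migrate_fn migrate_ops[DMEM_NTYPES] = {
	[DMEM_DEFAULT] = migrate_default,
};

/* Model of a CPU fault on a device-private page: dispatch on the
 * owning chunk's type, which chunk->type makes possible after this
 * patch. */
static int handle_cpu_fault(struct dmem_page *page)
{
	return migrate_ops[page->chunk->type](page);
}

int main(void)
{
	struct dmem_chunk chunk = { .type = DMEM_DEFAULT };
	struct dmem_page page = { .chunk = &chunk };

	return handle_cpu_fault(&page);
}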