From patchwork Sat Mar 10 03:21:36 2018
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 10273149
From: jglisse@redhat.com
To: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org
Cc: Ralph Campbell, Evgeny Baskakov, linux-mm@kvack.org,
 Jérôme Glisse, Ben Skeggs, John Hubbard
Subject: [RFC PATCH 08/13] drm/nouveau: special mapping method for HMM (user interface)
Date: Fri, 9 Mar 2018 22:21:36 -0500
Message-Id: <20180310032141.6096-9-jglisse@redhat.com>
In-Reply-To: <20180310032141.6096-1-jglisse@redhat.com>
References: <20180310032141.6096-1-jglisse@redhat.com>

From: Jérôme Glisse <jglisse@redhat.com>

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Ben Skeggs
---
 drivers/gpu/drm/nouveau/include/nvif/if000c.h     | 17 ++++++++
 drivers/gpu/drm/nouveau/include/nvif/vmm.h        |  2 +
 drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 25 ++++--------
 drivers/gpu/drm/nouveau/nvif/vmm.c                | 29 ++++++++++++++
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c    | 49 ++++++++++++++++++++---
 5 files changed, 99 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/include/nvif/if000c.h b/drivers/gpu/drm/nouveau/include/nvif/if000c.h
index 2928ecd989ad..2c24817ca533 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/if000c.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/if000c.h
@@ -14,6 +14,8 @@ struct nvif_vmm_v0 {
 #define NVIF_VMM_V0_PUT 0x02
 #define NVIF_VMM_V0_MAP 0x03
 #define NVIF_VMM_V0_UNMAP 0x04
+#define NVIF_VMM_V0_HMM_MAP 0x05
+#define NVIF_VMM_V0_HMM_UNMAP 0x06
 
 struct nvif_vmm_page_v0 {
 	__u8 version;
@@ -61,4 +63,19 @@ struct nvif_vmm_unmap_v0 {
 	__u8 pad01[7];
 	__u64 addr;
 };
+
+struct nvif_vmm_hmm_map_v0 {
+	__u8 version;
+	__u8 pad01[7];
+	__u64 addr;
+	__u64 npages;
+	__u64 pages;
+};
+
+struct nvif_vmm_hmm_unmap_v0 {
+	__u8 version;
+	__u8 pad01[7];
+	__u64 addr;
+	__u64 npages;
+};
 #endif
diff --git a/drivers/gpu/drm/nouveau/include/nvif/vmm.h b/drivers/gpu/drm/nouveau/include/nvif/vmm.h
index c5db8a2e82df..c5e4adaa0e3c 100644
--- a/drivers/gpu/drm/nouveau/include/nvif/vmm.h
+++ b/drivers/gpu/drm/nouveau/include/nvif/vmm.h
@@ -39,4 +39,6 @@ void nvif_vmm_put(struct nvif_vmm *, struct nvif_vma *);
 int nvif_vmm_map(struct nvif_vmm *, u64 addr, u64 size, void *argv, u32 argc,
 		 struct nvif_mem *, u64 offset);
 int nvif_vmm_unmap(struct nvif_vmm *, u64);
+int nvif_vmm_hmm_map(struct nvif_vmm *vmm, u64 addr, u64 npages, u64 *pages);
+int nvif_vmm_hmm_unmap(struct nvif_vmm *vmm, u64 addr, u64 npages);
 #endif
diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
index 719d50e6296f..8f08718e05aa 100644
--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
+++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/mmu.h
@@ -4,20 +4,6 @@
 #include
 #include
 
-/* Need to change HMM to be more driver friendly */
-#if IS_ENABLED(CONFIG_HMM)
-#else
-typedef unsigned long hmm_pfn_t;
-#define HMM_PFN_VALID (1 << 0)
-#define HMM_PFN_READ (1 << 1)
-#define HMM_PFN_WRITE (1 << 2)
-#define HMM_PFN_ERROR (1 << 3)
-#define HMM_PFN_EMPTY (1 << 4)
-#define HMM_PFN_SPECIAL (1 << 5)
-#define HMM_PFN_DEVICE_UNADDRESSABLE (1 << 6)
-#define HMM_PFN_SHIFT 7
-#endif
-
 struct nvkm_vma {
 	struct list_head head;
 	struct rb_node tree;
@@ -79,10 +65,13 @@ struct nvkm_vmm_map {
 	struct nvkm_mm_node *mem;
 	struct scatterlist *sgl;
 	dma_addr_t *dma;
-#define NV_HMM_PAGE_FLAG_V HMM_PFN_VALID
-#define NV_HMM_PAGE_FLAG_W HMM_PFN_WRITE
-#define NV_HMM_PAGE_FLAG_E HMM_PFN_ERROR
-#define NV_HMM_PAGE_PFN_SHIFT HMM_PFN_SHIFT
+#define NV_HMM_PAGE_FLAG_V (1 << 0)
+#define NV_HMM_PAGE_FLAG_R 0
+#define NV_HMM_PAGE_FLAG_W (1 << 1)
+#define NV_HMM_PAGE_FLAG_E (-1ULL)
+#define NV_HMM_PAGE_FLAG_N 0
+#define NV_HMM_PAGE_FLAG_S (1ULL << 63)
+#define NV_HMM_PAGE_PFN_SHIFT 8
 	u64 *pages;
 	u64 off;
diff --git a/drivers/gpu/drm/nouveau/nvif/vmm.c b/drivers/gpu/drm/nouveau/nvif/vmm.c
index 31cdb2d2e1ff..27a7b95b4e9c 100644
--- a/drivers/gpu/drm/nouveau/nvif/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvif/vmm.c
@@ -32,6 +32,35 @@ nvif_vmm_unmap(struct nvif_vmm *vmm, u64 addr)
 			       sizeof(struct nvif_vmm_unmap_v0));
 }
 
+int
+nvif_vmm_hmm_map(struct nvif_vmm *vmm, u64 addr, u64 npages, u64 *pages)
+{
+	struct nvif_vmm_hmm_map_v0 args;
+	int ret;
+
+	args.version = 0;
+	args.addr = addr;
+	args.npages = npages;
+	args.pages = (uint64_t)pages;
+	ret = nvif_object_mthd(&vmm->object, NVIF_VMM_V0_HMM_MAP,
+			       &args, sizeof(args));
+	return ret;
+}
+
+int
+nvif_vmm_hmm_unmap(struct nvif_vmm *vmm, u64 addr, u64 npages)
+{
+	struct nvif_vmm_hmm_unmap_v0 args;
+	int ret;
+
+	args.version = 0;
+	args.addr = addr;
+	args.npages = npages;
+	ret = nvif_object_mthd(&vmm->object, NVIF_VMM_V0_HMM_UNMAP,
+			       &args, sizeof(args));
+	return ret;
+}
+
 int
 nvif_vmm_map(struct nvif_vmm *vmm, u64 addr, u64 size, void *argv, u32 argc,
 	     struct nvif_mem *mem, u64 offset)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
index 37b201b95f15..739f2af02552 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
@@ -274,16 +274,55 @@ nvkm_uvmm_mthd_page(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
 	return 0;
 }
 
+static int
+nvkm_uvmm_mthd_hmm_map(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
+{
+	union {
+		struct nvif_vmm_hmm_map_v0 v0;
+	} *args = argv;
+	struct nvkm_vmm *vmm = uvmm->vmm;
+	int ret = -ENOSYS;
+
+	if ((ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false)))
+		return ret;
+
+	mutex_lock(&vmm->mutex);
+	nvkm_vmm_hmm_map(vmm, args->v0.addr, args->v0.npages,
+			 (u64 *)args->v0.pages);
+	mutex_unlock(&vmm->mutex);
+	return 0;
+}
+
+static int
+nvkm_uvmm_mthd_hmm_unmap(struct nvkm_uvmm *uvmm, void *argv, u32 argc)
+{
+	union {
+		struct nvif_vmm_hmm_unmap_v0 v0;
+	} *args = argv;
+	struct nvkm_vmm *vmm = uvmm->vmm;
+	int ret = -ENOSYS;
+
+	if ((ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false)))
+		return ret;
+
+	mutex_lock(&vmm->mutex);
+	nvkm_vmm_hmm_unmap(vmm, args->v0.addr, args->v0.npages);
+	mutex_unlock(&vmm->mutex);
+	return 0;
+}
+
 static int
 nvkm_uvmm_mthd(struct nvkm_object *object, u32 mthd, void *argv, u32 argc)
 {
 	struct nvkm_uvmm *uvmm = nvkm_uvmm(object);
 	switch (mthd) {
-	case NVIF_VMM_V0_PAGE : return nvkm_uvmm_mthd_page (uvmm, argv, argc);
-	case NVIF_VMM_V0_GET : return nvkm_uvmm_mthd_get (uvmm, argv, argc);
-	case NVIF_VMM_V0_PUT : return nvkm_uvmm_mthd_put (uvmm, argv, argc);
-	case NVIF_VMM_V0_MAP : return nvkm_uvmm_mthd_map (uvmm, argv, argc);
-	case NVIF_VMM_V0_UNMAP : return nvkm_uvmm_mthd_unmap (uvmm, argv, argc);
+	case NVIF_VMM_V0_PAGE     : return nvkm_uvmm_mthd_page     (uvmm, argv, argc);
+	case NVIF_VMM_V0_GET      : return nvkm_uvmm_mthd_get      (uvmm, argv, argc);
+	case NVIF_VMM_V0_PUT      : return nvkm_uvmm_mthd_put      (uvmm, argv, argc);
+	case NVIF_VMM_V0_MAP      : return nvkm_uvmm_mthd_map      (uvmm, argv, argc);
+	case NVIF_VMM_V0_UNMAP    : return nvkm_uvmm_mthd_unmap    (uvmm, argv, argc);
+	case NVIF_VMM_V0_HMM_MAP  : return nvkm_uvmm_mthd_hmm_map  (uvmm, argv, argc);
+	case NVIF_VMM_V0_HMM_UNMAP: return nvkm_uvmm_mthd_hmm_unmap(uvmm, argv, argc);
 	default:
 		break;
 	}