From patchwork Sat Mar 10 03:21:38 2018
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 10273137
From: jglisse@redhat.com
To: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org
Cc: Ralph Campbell, Evgeny Baskakov, linux-mm@kvack.org, Jérôme Glisse,
    Ben Skeggs, John Hubbard
Subject: [RFC PATCH 10/13] drm/nouveau: add HMM area creation
Date: Fri, 9 Mar 2018 22:21:38 -0500
Message-Id: <20180310032141.6096-11-jglisse@redhat.com>
In-Reply-To: <20180310032141.6096-1-jglisse@redhat.com>
References: <20180310032141.6096-1-jglisse@redhat.com>

From: Jérôme Glisse <jglisse@redhat.com>

An HMM area is a virtual address range under HMM control; GPU accesses
inside such a range behave like CPU accesses.
For things to work properly, the HMM range should cover everything except
a range reserved for GEM buffer objects.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Ben Skeggs
---
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 63 +++++++++++++++++++++++++++
 drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h |  2 +
 2 files changed, 65 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index 96671987ce53..ef4b839932fa 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -1540,6 +1540,69 @@ nvkm_vmm_get_locked(struct nvkm_vmm *vmm, bool getref, bool mapref, bool sparse,
 	return 0;
 }
 
+int
+nvkm_vmm_hmm_init(struct nvkm_vmm *vmm, u64 start, u64 end,
+		  struct nvkm_vma **pvma)
+{
+	struct nvkm_vma *vma = NULL, *tmp;
+	struct rb_node *node;
+
+	/* Locate smallest block that can possibly satisfy the allocation. */
+	node = vmm->free.rb_node;
+	while (node) {
+		struct nvkm_vma *this = rb_entry(node, typeof(*this), tree);
+
+		if (this->addr <= start && (this->addr + this->size) >= end) {
+			rb_erase(&this->tree, &vmm->free);
+			vma = this;
+			break;
+		}
+		node = node->rb_left;
+	}
+
+	if (vma == NULL) {
+		return -EINVAL;
+	}
+
+	if (start != vma->addr) {
+		if (!(tmp = nvkm_vma_tail(vma, vma->size + vma->addr - start))) {
+			nvkm_vmm_put_region(vmm, vma);
+			return -ENOMEM;
+		}
+		nvkm_vmm_free_insert(vmm, vma);
+		vma = tmp;
+	}
+
+	if (end < (vma->addr + vma->size)) {
+		if (!(tmp = nvkm_vma_tail(vma, vma->size + vma->addr - end))) {
+			nvkm_vmm_put_region(vmm, vma);
+			return -ENOMEM;
+		}
+		nvkm_vmm_free_insert(vmm, tmp);
+	}
+
+	vma->mapref = false;
+	vma->sparse = false;
+	vma->page = NVKM_VMA_PAGE_NONE;
+	vma->refd = NVKM_VMA_PAGE_NONE;
+	vma->used = true;
+	nvkm_vmm_node_insert(vmm, vma);
+	*pvma = vma;
+	return 0;
+}
+
+void
+nvkm_vmm_hmm_fini(struct nvkm_vmm *vmm, u64 start, u64 end)
+{
+	struct nvkm_vma *vma;
+	u64 size = (end - start);
+
+	vma = nvkm_vmm_node_search(vmm, start);
+	if (vma && vma->addr == start && vma->size == size) {
+		nvkm_vmm_put_locked(vmm, vma);
+	}
+}
+
 int
 nvkm_vmm_get(struct nvkm_vmm *vmm, u8 page, u64 size, struct nvkm_vma **pvma)
 {
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
index a630aa2a77e4..04d672a4dccb 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.h
@@ -165,6 +165,8 @@ int nvkm_vmm_get_locked(struct nvkm_vmm *, bool getref, bool mapref, bool sparse,
 			u8 page, u8 align, u64 size, struct nvkm_vma **pvma);
 void nvkm_vmm_put_locked(struct nvkm_vmm *, struct nvkm_vma *);
+int nvkm_vmm_hmm_init(struct nvkm_vmm *, u64, u64, struct nvkm_vma **);
+void nvkm_vmm_hmm_fini(struct nvkm_vmm *, u64, u64);
 void nvkm_vmm_unmap_locked(struct nvkm_vmm *, struct nvkm_vma *);
 void nvkm_vmm_unmap_region(struct nvkm_vmm *vmm, struct nvkm_vma *vma);
 void nvkm_vmm_hmm_map(struct nvkm_vmm *vmm, u64 addr, u64 npages, u64 *pages);
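
A short usage sketch, not part of the patch: the new helpers are meant to be
called once to carve the HMM-managed range out of the VMM address space and
once to release it on teardown. Everything below is illustrative only; the
caller names (example_reserve_hmm_range/example_release_hmm_range), the
1ULL << 40 split between the GEM-reserved window and the HMM range, and the
locking around the calls are my assumptions, not something this series
defines.

	/* Hypothetical caller, for illustration only (not part of this patch).
	 * Assumes the nvkm mmu internals header is available.
	 */
	#include "vmm.h"

	static int
	example_reserve_hmm_range(struct nvkm_vmm *vmm)
	{
		struct nvkm_vma *vma;
		int ret;

		/* Reserve everything above an assumed GEM-only window; the
		 * 1ULL << 40 split point is an example value only.
		 */
		mutex_lock(&vmm->mutex);
		ret = nvkm_vmm_hmm_init(vmm, 1ULL << 40, vmm->limit, &vma);
		mutex_unlock(&vmm->mutex);
		return ret;
	}

	static void
	example_release_hmm_range(struct nvkm_vmm *vmm)
	{
		/* Release the same range; nvkm_vmm_hmm_fini() only drops the
		 * node when the start address and size match exactly what
		 * nvkm_vmm_hmm_init() created.
		 */
		mutex_lock(&vmm->mutex);
		nvkm_vmm_hmm_fini(vmm, 1ULL << 40, vmm->limit);
		mutex_unlock(&vmm->mutex);
	}

Because nvkm_vmm_hmm_fini() matches on the exact start/size pair, callers
would need to tear down the area with the same bounds they created it with.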