From patchwork Fri Apr 9 12:39:23 2010
X-Patchwork-Submitter: Jerome Glisse <jglisse@redhat.com>
X-Patchwork-Id: 91692
From: Jerome Glisse <jglisse@redhat.com>
To: airlied@gmail.com
Cc: Jerome Glisse <jglisse@redhat.com>, dri-devel@lists.sf.net
Subject: [PATCH 05/13] drm/ttm: ttm_fault callback to allow driver to handle bo placement V6
Date: Fri, 9 Apr 2010 14:39:23 +0200
Message-Id: <1270816771-2345-6-git-send-email-jglisse@redhat.com>
In-Reply-To: <1270816771-2345-1-git-send-email-jglisse@redhat.com>
References: <1270816771-2345-1-git-send-email-jglisse@redhat.com>

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 40631e2..b42e3fa 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -632,6 +632,7 @@ static int ttm_bo_evict(struct ttm_buffer_object *bo, bool interruptible,
 
 	evict_mem = bo->mem;
 	evict_mem.mm_node = NULL;
+	evict_mem.bus.io_reserved = false;
 
 	placement.fpfn = 0;
 	placement.lpfn = 0;
@@ -1005,6 +1006,7 @@ int ttm_bo_move_buffer(struct ttm_buffer_object *bo,
 	mem.num_pages = bo->num_pages;
 	mem.size = mem.num_pages << PAGE_SHIFT;
 	mem.page_alignment = bo->mem.page_alignment;
+	mem.bus.io_reserved = false;
 	/*
 	 * Determine where to move the buffer.
 	 */
@@ -1160,6 +1162,7 @@ int ttm_bo_init(struct ttm_bo_device *bdev,
 	bo->mem.num_pages = bo->num_pages;
 	bo->mem.mm_node = NULL;
 	bo->mem.page_alignment = page_alignment;
+	bo->mem.bus.io_reserved = false;
 	bo->buffer_start = buffer_start & PAGE_MASK;
 	bo->priv_flags = 0;
 	bo->mem.placement = (TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED);
@@ -1574,7 +1577,7 @@ int ttm_bo_pci_offset(struct ttm_bo_device *bdev,
 	if (ttm_mem_reg_is_pci(bdev, mem)) {
 		*bus_offset = mem->mm_node->start << PAGE_SHIFT;
 		*bus_size = mem->num_pages << PAGE_SHIFT;
-		*bus_base = man->io_offset;
+		*bus_base = man->io_offset + (uintptr_t)man->io_addr;
 	}
 
 	return 0;
@@ -1588,8 +1591,8 @@ void ttm_bo_unmap_virtual(struct ttm_buffer_object *bo)
 
 	if (!bdev->dev_mapping)
 		return;
-
 	unmap_mapping_range(bdev->dev_mapping, offset, holelen, 1);
+	ttm_mem_io_free(bdev, &bo->mem);
 }
 EXPORT_SYMBOL(ttm_bo_unmap_virtual);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 865b2a8..d58eeb5 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -81,30 +81,62 @@ int ttm_bo_move_ttm(struct ttm_buffer_object *bo,
 }
 EXPORT_SYMBOL(ttm_bo_move_ttm);
 
+int ttm_mem_io_reserve(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
+	int ret;
+
+	if (bdev->driver->io_mem_reserve) {
+		if (!mem->bus.io_reserved) {
+			mem->bus.io_reserved = true;
+			ret = bdev->driver->io_mem_reserve(bdev, mem);
+			if (unlikely(ret != 0))
+				return ret;
+		}
+	} else {
+		ret = ttm_bo_pci_offset(bdev, mem, &mem->bus.base, &mem->bus.offset, &mem->bus.size);
+		if (unlikely(ret != 0))
+			return ret;
+		mem->bus.addr = NULL;
+		if (!(man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP))
+			mem->bus.addr = (void *)(((u8 *)man->io_addr) + mem->bus.offset);
+		mem->bus.is_iomem = (mem->bus.size > 0) ? 1 : 0;
+	}
+	return 0;
+}
+
+void ttm_mem_io_free(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem)
+{
+	if (bdev->driver->io_mem_reserve) {
+		if (mem->bus.io_reserved) {
+			mem->bus.io_reserved = false;
+			bdev->driver->io_mem_free(bdev, mem);
+		}
+	}
+}
+
 int ttm_mem_reg_ioremap(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
 			void **virtual)
 {
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
-	unsigned long bus_offset;
-	unsigned long bus_size;
-	unsigned long bus_base;
 	int ret;
 	void *addr;
 
 	*virtual = NULL;
-	ret = ttm_bo_pci_offset(bdev, mem, &bus_base, &bus_offset, &bus_size);
-	if (ret || bus_size == 0)
+	ret = ttm_mem_io_reserve(bdev, mem);
+	if (ret)
 		return ret;
 
-	if (!(man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP))
-		addr = (void *)(((u8 *) man->io_addr) + bus_offset);
-	else {
+	if (mem->bus.addr) {
+		addr = mem->bus.addr;
+	} else {
 		if (mem->placement & TTM_PL_FLAG_WC)
-			addr = ioremap_wc(bus_base + bus_offset, bus_size);
+			addr = ioremap_wc(mem->bus.base + mem->bus.offset, mem->bus.size);
 		else
-			addr = ioremap_nocache(bus_base + bus_offset, bus_size);
-		if (!addr)
+			addr = ioremap_nocache(mem->bus.base + mem->bus.offset, mem->bus.size);
+		if (!addr) {
+			ttm_mem_io_free(bdev, mem);
 			return -ENOMEM;
+		}
 	}
 	*virtual = addr;
 	return 0;
@@ -117,8 +149,9 @@ void ttm_mem_reg_iounmap(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
 
 	man = &bdev->man[mem->mem_type];
 
-	if (virtual && (man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP))
+	if (virtual && (man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP || mem->bus.addr == NULL))
 		iounmap(virtual);
+	ttm_mem_io_free(bdev, mem);
 }
 
 static int ttm_copy_io_page(void *dst, void *src, unsigned long page)
@@ -370,26 +403,23 @@ pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp)
 EXPORT_SYMBOL(ttm_io_prot);
 
 static int ttm_bo_ioremap(struct ttm_buffer_object *bo,
-			  unsigned long bus_base,
-			  unsigned long bus_offset,
-			  unsigned long bus_size,
+			  unsigned long offset,
+			  unsigned long size,
 			  struct ttm_bo_kmap_obj *map)
 {
-	struct ttm_bo_device *bdev = bo->bdev;
 	struct ttm_mem_reg *mem = &bo->mem;
-	struct ttm_mem_type_manager *man = &bdev->man[mem->mem_type];
 
-	if (!(man->flags & TTM_MEMTYPE_FLAG_NEEDS_IOREMAP)) {
+	if (bo->mem.bus.addr) {
 		map->bo_kmap_type = ttm_bo_map_premapped;
-		map->virtual = (void *)(((u8 *) man->io_addr) + bus_offset);
+		map->virtual = (void *)(((u8 *)bo->mem.bus.addr) + offset);
 	} else {
 		map->bo_kmap_type = ttm_bo_map_iomap;
 		if (mem->placement & TTM_PL_FLAG_WC)
-			map->virtual = ioremap_wc(bus_base + bus_offset,
-						  bus_size);
+			map->virtual = ioremap_wc(bo->mem.bus.base + bo->mem.bus.offset + offset,
+						  size);
 		else
-			map->virtual = ioremap_nocache(bus_base + bus_offset,
-						       bus_size);
+			map->virtual = ioremap_nocache(bo->mem.bus.base + bo->mem.bus.offset + offset,
+						       size);
 	}
 	return (!map->virtual) ? -ENOMEM : 0;
 }
 
@@ -442,13 +472,12 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 		unsigned long start_page, unsigned long num_pages,
 		struct ttm_bo_kmap_obj *map)
 {
+	unsigned long offset, size;
 	int ret;
-	unsigned long bus_base;
-	unsigned long bus_offset;
-	unsigned long bus_size;
 
 	BUG_ON(!list_empty(&bo->swap));
 	map->virtual = NULL;
+	map->bo = bo;
 	if (num_pages > bo->num_pages)
 		return -EINVAL;
 	if (start_page > bo->num_pages)
@@ -457,16 +486,15 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 	if (num_pages > 1 && !DRM_SUSER(DRM_CURPROC))
 		return -EPERM;
 #endif
-	ret = ttm_bo_pci_offset(bo->bdev, &bo->mem, &bus_base,
-				&bus_offset, &bus_size);
+	ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);
 	if (ret)
 		return ret;
-	if (bus_size == 0) {
+	if (!bo->mem.bus.is_iomem) {
 		return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
 	} else {
-		bus_offset += start_page << PAGE_SHIFT;
-		bus_size = num_pages << PAGE_SHIFT;
-		return ttm_bo_ioremap(bo, bus_base, bus_offset, bus_size, map);
+		offset = start_page << PAGE_SHIFT;
+		size = num_pages << PAGE_SHIFT;
+		return ttm_bo_ioremap(bo, offset, size, map);
 	}
 }
 EXPORT_SYMBOL(ttm_bo_kmap);
@@ -478,6 +506,7 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 	switch (map->bo_kmap_type) {
 	case ttm_bo_map_iomap:
 		iounmap(map->virtual);
+		ttm_mem_io_free(map->bo->bdev, &map->bo->mem);
 		break;
 	case ttm_bo_map_vmap:
 		vunmap(map->virtual);
@@ -495,35 +524,6 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 }
 EXPORT_SYMBOL(ttm_bo_kunmap);
 
-int ttm_bo_pfn_prot(struct ttm_buffer_object *bo,
-		    unsigned long dst_offset,
-		    unsigned long *pfn, pgprot_t *prot)
-{
-	struct ttm_mem_reg *mem = &bo->mem;
-	struct ttm_bo_device *bdev = bo->bdev;
-	unsigned long bus_offset;
-	unsigned long bus_size;
-	unsigned long bus_base;
-	int ret;
-	ret = ttm_bo_pci_offset(bdev, mem, &bus_base, &bus_offset,
-				&bus_size);
-	if (ret)
-		return -EINVAL;
-	if (bus_size != 0)
-		*pfn = (bus_base + bus_offset + dst_offset) >> PAGE_SHIFT;
-	else
-		if (!bo->ttm)
-			return -EINVAL;
-		else
-			*pfn = page_to_pfn(ttm_tt_get_page(bo->ttm,
-							   dst_offset >>
-							   PAGE_SHIFT));
-	*prot = (mem->placement & TTM_PL_FLAG_CACHED) ?
-		PAGE_KERNEL : ttm_io_prot(mem->placement, PAGE_KERNEL);
-
-	return 0;
-}
-
 int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
 			      void *sync_obj,
 			      void *sync_obj_arg,
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 668dbe8..fe6cb77 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -74,9 +74,6 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
 	    vma->vm_private_data;
 	struct ttm_bo_device *bdev = bo->bdev;
-	unsigned long bus_base;
-	unsigned long bus_offset;
-	unsigned long bus_size;
 	unsigned long page_offset;
 	unsigned long page_last;
 	unsigned long pfn;
@@ -84,7 +81,6 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	struct page *page;
 	int ret;
 	int i;
-	bool is_iomem;
 	unsigned long address = (unsigned long)vmf->virtual_address;
 	int retval = VM_FAULT_NOPAGE;
 
@@ -101,8 +97,21 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 		return VM_FAULT_NOPAGE;
 	}
 
-	if (bdev->driver->fault_reserve_notify)
-		bdev->driver->fault_reserve_notify(bo);
+	if (bdev->driver->fault_reserve_notify) {
+		ret = bdev->driver->fault_reserve_notify(bo);
+		switch (ret) {
+		case 0:
+			break;
+		case -EBUSY:
+			set_need_resched();
+		case -ERESTARTSYS:
+			retval = VM_FAULT_NOPAGE;
+			goto out_unlock;
+		default:
+			retval = VM_FAULT_SIGBUS;
+			goto out_unlock;
+		}
+	}
 
 	/*
 	 * Wait for buffer data in transit, due to a pipelined
@@ -122,15 +131,12 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 
 	spin_unlock(&bo->lock);
 
-	ret = ttm_bo_pci_offset(bdev, &bo->mem, &bus_base, &bus_offset,
-				&bus_size);
-	if (unlikely(ret != 0)) {
+	ret = ttm_mem_io_reserve(bdev, &bo->mem);
+	if (ret) {
 		retval = VM_FAULT_SIGBUS;
 		goto out_unlock;
 	}
 
-	is_iomem = (bus_size != 0);
-
 	page_offset = ((address - vma->vm_start) >> PAGE_SHIFT) +
 	    bo->vm_node->start - vma->vm_pgoff;
 	page_last = ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) +
@@ -154,8 +160,7 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	 * vma->vm_page_prot when the object changes caching policy, with
 	 * the correct locks held.
 	 */
-
-	if (is_iomem) {
+	if (bo->mem.bus.is_iomem) {
 		vma->vm_page_prot = ttm_io_prot(bo->mem.placement,
 						vma->vm_page_prot);
 	} else {
@@ -171,10 +176,8 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	 */
 
 	for (i = 0; i < TTM_BO_VM_NUM_PREFAULT; ++i) {
-
-		if (is_iomem)
-			pfn = ((bus_base + bus_offset) >> PAGE_SHIFT) +
-			    page_offset;
+		if (bo->mem.bus.is_iomem)
+			pfn = ((bo->mem.bus.base + bo->mem.bus.offset) >> PAGE_SHIFT) + page_offset;
 		else {
 			page = ttm_tt_get_page(ttm, page_offset);
 			if (unlikely(!page && i == 0)) {
@@ -198,7 +201,6 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 			retval = (ret == -ENOMEM) ?
 				VM_FAULT_OOM : VM_FAULT_SIGBUS;
 			goto out_unlock;
-		}
 
 		address += PAGE_SIZE;
@@ -221,8 +223,7 @@ static void ttm_bo_vm_open(struct vm_area_struct *vma)
 
 static void ttm_bo_vm_close(struct vm_area_struct *vma)
 {
-	struct ttm_buffer_object *bo =
-	    (struct ttm_buffer_object *)vma->vm_private_data;
+	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)vma->vm_private_data;
 
 	ttm_bo_unref(&bo);
 	vma->vm_private_data = NULL;
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 8c8005e..3e273e0 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -66,6 +66,26 @@ struct ttm_placement {
 	const uint32_t	*busy_placement;
 };
 
+/**
+ * struct ttm_bus_placement
+ *
+ * @addr:	mapped virtual address
+ * @base:	bus base address
+ * @is_iomem:	is this io memory?
+ * @size:	size in bytes
+ * @offset:	offset from the base address
+ *
+ * Structure indicating the bus placement of an object.
+ */
+struct ttm_bus_placement {
+	void		*addr;
+	unsigned long	base;
+	unsigned long	size;
+	unsigned long	offset;
+	bool		is_iomem;
+	bool		io_reserved;
+};
+
 /**
  * struct ttm_mem_reg
@@ -75,6 +95,7 @@ struct ttm_placement {
  * @num_pages: Actual size of memory region in pages.
  * @page_alignment: Page alignment.
  * @placement: Placement flags.
+ * @bus: Placement on io bus accessible to the CPU
 *
 * Structure indicating the placement and space resources used by a
 * buffer object.
@@ -87,6 +108,7 @@ struct ttm_mem_reg {
 	uint32_t		page_alignment;
 	uint32_t		mem_type;
 	uint32_t		placement;
+	struct ttm_bus_placement bus;
 };
 
 /**
@@ -274,6 +296,7 @@ struct ttm_bo_kmap_obj {
 		ttm_bo_map_kmap = 3,
 		ttm_bo_map_premapped = 4 | TTM_BO_MAP_IOMEM_MASK,
 	} bo_kmap_type;
+	struct ttm_buffer_object *bo;
 };
 
 /**
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 69f70e4..da39865 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -352,12 +352,21 @@ struct ttm_bo_driver {
 			    struct ttm_mem_reg *new_mem);
 
 	/* notify the driver we are taking a fault on this BO
 	 * and have reserved it */
-	void (*fault_reserve_notify)(struct ttm_buffer_object *bo);
+	int (*fault_reserve_notify)(struct ttm_buffer_object *bo);
 
 	/**
 	 * notify the driver that we're about to swap out this bo
 	 */
 	void (*swap_notify) (struct ttm_buffer_object *bo);
+
+	/**
+	 * Driver callback when mapping io memory (for bo_move_memcpy
+	 * for instance). TTM will take care to call io_mem_free whenever
+	 * the mapping is not used anymore. io_mem_reserve & io_mem_free
+	 * are balanced.
+	 */
+	int (*io_mem_reserve)(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem);
+	void (*io_mem_free)(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem);
 };
 
 /**
@@ -685,6 +694,11 @@ extern int ttm_bo_pci_offset(struct ttm_bo_device *bdev,
 			     unsigned long *bus_offset,
 			     unsigned long *bus_size);
 
+extern int ttm_mem_io_reserve(struct ttm_bo_device *bdev,
+			      struct ttm_mem_reg *mem);
+extern void ttm_mem_io_free(struct ttm_bo_device *bdev,
+			    struct ttm_mem_reg *mem);
+
 extern void ttm_bo_global_release(struct ttm_global_reference *ref);
 extern int ttm_bo_global_init(struct ttm_global_reference *ref);
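
For drivers picking up the new hooks, an io_mem_reserve/io_mem_free pair
would look roughly like the sketch below. This is an illustration, not part
of the patch: struct mydrv_device and its aper_base/aper_size fields are
made-up names for a device with a single CPU-visible VRAM aperture; only
the mem->bus fields come from the patch above.

/* Hypothetical driver device wrapping the TTM bo device. */
struct mydrv_device {
	struct ttm_bo_device bdev;
	unsigned long aper_base;	/* bus address of the VRAM aperture */
	unsigned long aper_size;	/* CPU-visible size of the aperture */
};

static int mydrv_ttm_io_mem_reserve(struct ttm_bo_device *bdev,
				    struct ttm_mem_reg *mem)
{
	struct mydrv_device *mdev = container_of(bdev, struct mydrv_device, bdev);

	mem->bus.addr = NULL;	/* no pre-existing kernel mapping */
	mem->bus.offset = 0;
	mem->bus.size = mem->num_pages << PAGE_SHIFT;
	mem->bus.base = 0;
	mem->bus.is_iomem = false;

	switch (mem->mem_type) {
	case TTM_PL_SYSTEM:
	case TTM_PL_TT:
		/* system pages, the CPU reaches them directly,
		 * nothing to reserve */
		return 0;
	case TTM_PL_VRAM:
		mem->bus.offset = mem->mm_node->start << PAGE_SHIFT;
		if (mem->bus.offset + mem->bus.size > mdev->aper_size)
			return -EINVAL;	/* outside the CPU-visible window */
		mem->bus.base = mdev->aper_base;
		mem->bus.is_iomem = true;
		return 0;
	default:
		return -EINVAL;
	}
}

static void mydrv_ttm_io_mem_free(struct ttm_bo_device *bdev,
				  struct ttm_mem_reg *mem)
{
	/* nothing was pinned or remapped in reserve, so nothing to undo;
	 * a driver that takes references in io_mem_reserve releases them
	 * here, since TTM keeps the two calls balanced */
}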
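
The int return of fault_reserve_notify is what lets a driver fix up bo
placement from the fault path. A minimal sketch, again with hypothetical
mydrv_* helpers: per the switch in ttm_bo_vm_fault() above, -EBUSY yields
the CPU (set_need_resched()) and retries the fault via VM_FAULT_NOPAGE,
-ERESTARTSYS retries it silently, and any other error becomes
VM_FAULT_SIGBUS.

static int mydrv_fault_reserve_notify(struct ttm_buffer_object *bo)
{
	/* mydrv_bo_is_cpu_visible() and mydrv_bo_move_to_visible_vram()
	 * are made-up helpers, not TTM API */
	if (mydrv_bo_is_cpu_visible(bo))
		return 0;

	/* migrate the BO into the CPU-visible part of VRAM before the
	 * fault handler fills the page tables */
	return mydrv_bo_move_to_visible_vram(bo);
}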