From patchwork Tue Mar 8 15:39:47 2011
X-Patchwork-Submitter: Konrad Rzeszutek Wilk
X-Patchwork-Id: 618861
From: Konrad Rzeszutek Wilk
To: linux-kernel@vger.kernel.org, Dave Airlie, dri-devel@lists.freedesktop.org,
	Thomas Hellstrom, Alex Deucher, Jerome Glisse
Cc: Konrad Rzeszutek Wilk
Subject: [PATCH] cleanup: Add 'struct dev' in the TTM layer to be passed in for DMA API calls.
Date: Tue, 8 Mar 2011 10:39:47 -0500
Message-Id: <1299598789-20402-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.1

diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 26347b7..3706156 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -412,7 +412,8 @@ nouveau_mem_vram_init(struct drm_device *dev)
 	ret = ttm_bo_device_init(&dev_priv->ttm.bdev,
 				 dev_priv->ttm.bo_global_ref.ref.object,
 				 &nouveau_bo_driver, DRM_FILE_PAGE_OFFSET,
-				 dma_bits <= 32 ? true : false);
+				 dma_bits <= 32 ? true : false,
+				 dev->dev);
 	if (ret) {
 		NV_ERROR(dev, "Error initialising bo driver: %d\n", ret);
 		return ret;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 9ead11f..371890c 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -517,7 +517,8 @@ int radeon_ttm_init(struct radeon_device *rdev)
 	r = ttm_bo_device_init(&rdev->mman.bdev,
 			       rdev->mman.bo_global_ref.ref.object,
 			       &radeon_bo_driver, DRM_FILE_PAGE_OFFSET,
-			       rdev->need_dma32);
+			       rdev->need_dma32,
+			       rdev->dev);
 	if (r) {
 		DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
 		return r;
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index af61fc2..278a2d3 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1526,12 +1526,14 @@
 int ttm_bo_device_init(struct ttm_bo_device *bdev,
 		       struct ttm_bo_global *glob,
 		       struct ttm_bo_driver *driver,
 		       uint64_t file_page_offset,
-		       bool need_dma32)
+		       bool need_dma32,
+		       struct device *dev)
 {
 	int ret = -EINVAL;
 
 	rwlock_init(&bdev->vm_lock);
 	bdev->driver = driver;
+	bdev->dev = dev;
 
 	memset(bdev->man, 0, sizeof(bdev->man));
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 737a2a2..35849db 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -664,7 +664,7 @@ out:
  */
 int ttm_get_pages(struct list_head *pages, int flags,
 		  enum ttm_caching_state cstate, unsigned count,
-		  dma_addr_t *dma_address)
+		  dma_addr_t *dma_address, struct device *dev)
 {
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
 	struct page *p = NULL;
@@ -685,7 +685,7 @@ int ttm_get_pages(struct list_head *pages, int flags,
 		for (r = 0; r < count; ++r) {
 			if ((flags & TTM_PAGE_FLAG_DMA32) && dma_address) {
 				void *addr;
-				addr = dma_alloc_coherent(NULL, PAGE_SIZE,
+				addr = dma_alloc_coherent(dev, PAGE_SIZE,
 							  &dma_address[r],
 							  gfp_flags);
 				if (addr == NULL)
@@ -730,7 +730,7 @@ int ttm_get_pages(struct list_head *pages, int flags,
 			printk(KERN_ERR TTM_PFX
 			       "Failed to allocate extra pages "
 			       "for large request.");
-			ttm_put_pages(pages, 0, flags, cstate, NULL);
+			ttm_put_pages(pages, 0, flags, cstate, NULL, NULL);
 			return r;
 		}
 	}
@@ -741,7 +741,8 @@ int ttm_get_pages(struct list_head *pages, int flags,
 
 /* Put all pages in pages list to correct pool to wait for reuse */
 void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
-		   enum ttm_caching_state cstate, dma_addr_t *dma_address)
+		   enum ttm_caching_state cstate, dma_addr_t *dma_address,
+		   struct device *dev)
 {
 	unsigned long irq_flags;
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
@@ -757,7 +758,7 @@ void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
 			void *addr = page_address(p);
 			WARN_ON(!addr || !dma_address[r]);
 			if (addr)
-				dma_free_coherent(NULL, PAGE_SIZE,
+				dma_free_coherent(dev, PAGE_SIZE,
 						  addr,
 						  dma_address[r]);
 			dma_address[r] = 0;
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 86d5b17..354f9d9 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -110,7 +110,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 		INIT_LIST_HEAD(&h);
 
 		ret = ttm_get_pages(&h, ttm->page_flags, ttm->caching_state, 1,
-				    &ttm->dma_address[index]);
+				    &ttm->dma_address[index], ttm->dev);
 
 		if (ret != 0)
 			return NULL;
@@ -304,7 +304,7 @@ static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm)
 		}
 	}
 	ttm_put_pages(&h, count, ttm->page_flags, ttm->caching_state,
-		      ttm->dma_address);
+		      ttm->dma_address, ttm->dev);
 	ttm->state = tt_unpopulated;
 	ttm->first_himem_page = ttm->num_pages;
 	ttm->last_lomem_page = -1;
@@ -397,6 +397,7 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size,
 	ttm->last_lomem_page = -1;
 	ttm->caching_state = tt_cached;
 	ttm->page_flags = page_flags;
+	ttm->dev = bdev->dev;
 
 	ttm->dummy_read_page = dummy_read_page;
 
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index 10ca97e..803d979 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -322,11 +322,11 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
 	ttm_lock_set_kill(&dev_priv->fbdev_master.lock, false, SIGTERM);
 	dev_priv->active_master = &dev_priv->fbdev_master;
 
-
 	ret = ttm_bo_device_init(&dev_priv->bdev,
 				 dev_priv->bo_global_ref.ref.object,
 				 &vmw_bo_driver, VMWGFX_FILE_PAGE_OFFSET,
-				 false);
+				 false,
+				 dev->dev);
 	if (unlikely(ret != 0)) {
 		DRM_ERROR("Failed initializing TTM buffer object driver.\n");
 		goto out_err1;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index efed082..2024a74 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -177,6 +177,7 @@ struct ttm_tt {
 		tt_unpopulated,
 	} state;
 	dma_addr_t *dma_address;
+	struct device *dev;
 };
 
 #define TTM_MEMTYPE_FLAG_FIXED (1 << 0)	/* Fixed (on-card) PCI memory */
@@ -551,6 +552,7 @@ struct ttm_bo_device {
 	struct list_head device_list;
 	struct ttm_bo_global *glob;
 	struct ttm_bo_driver *driver;
+	struct device *dev;
 	rwlock_t vm_lock;
 	struct ttm_mem_type_manager man[TTM_NUM_MEM_TYPES];
 	spinlock_t fence_lock;
@@ -791,6 +793,8 @@ extern int ttm_bo_device_release(struct ttm_bo_device *bdev);
  * @file_page_offset: Offset into the device address space that is available
  * for buffer data. This ensures compatibility with other users of the
  * address space.
+ * @need_dma32: Allocate pages under 4GB
+ * @dev: 'struct device' of the PCI device.
  *
  * Initializes a struct ttm_bo_device:
  * Returns:
@@ -799,7 +803,8 @@ extern int ttm_bo_device_release(struct ttm_bo_device *bdev);
 extern int ttm_bo_device_init(struct ttm_bo_device *bdev,
 			      struct ttm_bo_global *glob,
 			      struct ttm_bo_driver *driver,
-			      uint64_t file_page_offset, bool need_dma32);
+			      uint64_t file_page_offset, bool need_dma32,
+			      struct device *dev);
 
 /**
  * ttm_bo_unmap_virtual
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
index 8062890..ccb6b7a 100644
--- a/include/drm/ttm/ttm_page_alloc.h
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -37,12 +37,14 @@
  * @cstate: ttm caching state for the page.
  * @count: number of pages to allocate.
  * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set).
+ * @dev: struct device for appropriate DMA accounting.
  */
 int ttm_get_pages(struct list_head *pages,
 		  int flags,
 		  enum ttm_caching_state cstate,
 		  unsigned count,
-		  dma_addr_t *dma_address);
+		  dma_addr_t *dma_address,
+		  struct device *dev);
 /**
  * Put linked list of pages to pool.
  *
@@ -52,12 +54,14 @@ int ttm_get_pages(struct list_head *pages,
  * @flags: ttm flags for page allocation.
  * @cstate: ttm caching state.
  * @dma_address: The DMA (bus) address of pages (if TTM_PAGE_FLAG_DMA32 set).
+ * @dev: struct device for appropriate DMA accounting.
 */
 void ttm_put_pages(struct list_head *pages,
 		   unsigned page_count,
 		   int flags,
 		   enum ttm_caching_state cstate,
-		   dma_addr_t *dma_address);
+		   dma_addr_t *dma_address,
+		   struct device *dev);
 /**
  * Initialize pool allocator.
  */
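For reference, below is a minimal sketch (not part of the patch) of the calling convention after this change, as seen from a driver's init path; the nouveau, radeon and vmwgfx hunks above all follow the same pattern. Every foo_* identifier and FOO_FILE_PAGE_OFFSET is a hypothetical placeholder, not an existing symbol.

/*
 * Illustrative sketch only -- not part of the patch.  It shows a driver
 * handing its struct device to TTM: ttm_bo_device_init() stores it in
 * bdev->dev, ttm_tt_create() copies it into ttm->dev, and the page
 * allocator then uses it for dma_alloc_coherent()/dma_free_coherent()
 * instead of passing NULL.
 */
#include <drm/drmP.h>
#include <drm/ttm/ttm_bo_driver.h>

#define FOO_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)	/* hypothetical */

struct foo_device {					/* hypothetical driver-private state */
	struct ttm_bo_device bdev;
	struct ttm_bo_global_ref bo_global_ref;
	struct drm_device *ddev;
	bool need_dma32;
};

static struct ttm_bo_driver foo_bo_driver;		/* hypothetical, hooks elided */

static int foo_ttm_init(struct foo_device *foo)
{
	/* The new final argument is the struct device used for DMA API calls. */
	return ttm_bo_device_init(&foo->bdev,
				  foo->bo_global_ref.ref.object,
				  &foo_bo_driver,
				  FOO_FILE_PAGE_OFFSET,
				  foo->need_dma32,
				  foo->ddev->dev);
}

The point of the change is simply that the DMA API calls in ttm_page_alloc.c now operate on the device that actually owns the buffers rather than on NULL.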