From patchwork Sat May 10 03:59:19 2014
X-Patchwork-Submitter: Ben Widawsky
X-Patchwork-Id: 4146221
From: Ben Widawsky
To: Intel GFX
Date: Fri, 9 May 2014 20:59:19 -0700
Message-Id: <1399694391-3935-25-git-send-email-benjamin.widawsky@intel.com>
X-Mailer: git-send-email 1.9.2
In-Reply-To: 
<1399694391-3935-1-git-send-email-benjamin.widawsky@intel.com>
References: <1399694391-3935-1-git-send-email-benjamin.widawsky@intel.com>
Cc: Ben Widawsky, Ben Widawsky
Subject: [Intel-gfx] [PATCH 24/56] drm/i915: Consolidate dma mappings

With a little bit of macro magic, and the fact that every page
table/dir/etc. we wish to map has a page and a daddr member, we can
greatly simplify and reduce the code.

The patch introduces i915_dma_map/unmap, which has the same semantics
as pci_map_page, but is one line and doesn't require newlines or local
variables to fit cleanly.

Notice that even the page allocation shares this same attribute.
For now, I am leaving that code untouched because the macro version
would be a bit on the big side, but it would be a nice cleanup as well
(IMO).

Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 56 ++++++++++++-------------------------
 1 file changed, 18 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 92ffee7..bb909e9 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -211,45 +211,33 @@ static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr,
 	return pte;
 }
 
-#define dma_unmap_pt_single(pt, dev) do { \
-	pci_unmap_page((dev)->pdev, (pt)->daddr, 4096, PCI_DMA_BIDIRECTIONAL); \
+#define i915_dma_unmap_single(px, dev) do { \
+	pci_unmap_page((dev)->pdev, (px)->daddr, 4096, PCI_DMA_BIDIRECTIONAL); \
 } while (0);
 
 /**
- * dma_map_pt_single() - Create a dma mapping for a page table
- * @pt: Page table to get a DMA map for
+ * i915_dma_map_px_single() - Create a dma mapping for a page table/dir/etc.
+ * @px: Page table/dir/etc to get a DMA map for
  * @dev: drm device
  *
  * Page table allocations are unified across all gens. They always require a
- * single 4k allocation, as well as a DMA mapping.
+ * single 4k allocation, as well as a DMA mapping. If we keep the structs
+ * symmetric here, the simple macro covers us for every page table type.
  *
  * Return: 0 if success.
  */
-static int dma_map_pt_single(struct i915_pagetab *pt, struct drm_device *dev)
-{
-	struct page *page;
-	dma_addr_t pt_addr;
-	int ret;
-
-	page = pt->page;
-	pt_addr = pci_map_page(dev->pdev, page, 0, 4096,
-			       PCI_DMA_BIDIRECTIONAL);
-
-	ret = pci_dma_mapping_error(dev->pdev, pt_addr);
-	if (ret)
-		return ret;
-
-	pt->daddr = pt_addr;
-
-	return 0;
-}
+#define i915_dma_map_px_single(px, dev) \
+	pci_dma_mapping_error((dev)->pdev, \
+			      (px)->daddr = pci_map_page((dev)->pdev, \
+							 (px)->page, 0, 4096, \
+							 PCI_DMA_BIDIRECTIONAL))
 
 static void free_pt_single(struct i915_pagetab *pt, struct drm_device *dev)
 {
 	if (WARN_ON(!pt->page))
 		return;
 
-	dma_unmap_pt_single(pt, dev);
+	i915_dma_unmap_single(pt, dev);
 	__free_page(pt->page);
 	kfree(pt);
 }
@@ -269,7 +257,7 @@ static struct i915_pagetab *alloc_pt_single(struct drm_device *dev)
 		return ERR_PTR(-ENOMEM);
 	}
 
-	ret = dma_map_pt_single(pt, dev);
+	ret = i915_dma_map_px_single(pt, dev);
 	if (ret) {
 		__free_page(pt->page);
 		kfree(pt);
@@ -519,7 +507,7 @@ static void gen8_ppgtt_free(struct i915_hw_ppgtt *ppgtt)
 
 static void gen8_ppgtt_dma_unmap_pages(struct i915_hw_ppgtt *ppgtt)
 {
-	struct pci_dev *hwdev = ppgtt->base.dev->pdev;
+	struct drm_device *dev = ppgtt->base.dev;
 	int i, j;
 
 	for (i = 0; i < ppgtt->num_pd_pages; i++) {
@@ -528,16 +516,14 @@ static void gen8_ppgtt_dma_unmap_pages(struct i915_hw_ppgtt *ppgtt)
 		if (!ppgtt->pdp.pagedir[i]->daddr)
 			continue;
 
-		pci_unmap_page(hwdev, ppgtt->pdp.pagedir[i]->daddr, PAGE_SIZE,
-			       PCI_DMA_BIDIRECTIONAL);
+		i915_dma_unmap_single(ppgtt->pdp.pagedir[i], dev);
 
 		for (j = 0; j < I915_PDES_PER_PD; j++) {
 			struct i915_pagedir *pd = ppgtt->pdp.pagedir[i];
 			struct i915_pagetab *pt = pd->page_tables[j];
 			dma_addr_t addr = pt->daddr;
 
 			if (addr)
-				pci_unmap_page(hwdev, addr, PAGE_SIZE,
-					       PCI_DMA_BIDIRECTIONAL);
+				i915_dma_unmap_single(pt, dev);
 		}
 	}
 }
@@ -623,19 +609,13 @@ err_out:
 static int gen8_ppgtt_setup_page_directories(struct i915_hw_ppgtt *ppgtt,
 					     const int pdpe)
 {
-	dma_addr_t pd_addr;
 	int ret;
 
-	pd_addr = pci_map_page(ppgtt->base.dev->pdev,
-			       ppgtt->pdp.pagedir[pdpe]->page, 0,
-			       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-
-	ret = pci_dma_mapping_error(ppgtt->base.dev->pdev, pd_addr);
+	ret = i915_dma_map_px_single(ppgtt->pdp.pagedir[pdpe],
+				     ppgtt->base.dev);
 	if (ret)
 		return ret;
 
-	ppgtt->pdp.pagedir[pdpe]->daddr = pd_addr;
-
 	return 0;
 }