From patchwork Sat May 10 03:59:34 2014
From: Ben Widawsky
To: Intel GFX
Subject: [Intel-gfx] [PATCH 39/56] drm/i915/bdw: Scratch unused pages
Date: Fri, 9 May 2014 20:59:34 -0700
Message-Id: <1399694391-3935-40-git-send-email-benjamin.widawsky@intel.com>
In-Reply-To: <1399694391-3935-1-git-send-email-benjamin.widawsky@intel.com>
References: <1399694391-3935-1-git-send-email-benjamin.widawsky@intel.com>

This is probably not required, since BDW is hopefully a bit more robust
than previous generations. Realize also that scratch will not exist for
every entry within the page table structure; backing every entry with
scratch would waste an extraordinary amount of space when we move to
4-level page tables. Therefore, the scratch pages/tables will only be
pointed to by page tables which have less than all of their entries
filled.

I wrote this patch while debugging, so I figured why not include it in
the series.
Signed-off-by: Ben Widawsky
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 66ed943..2b732ca 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -576,6 +576,25 @@ static void gen8_map_pagetable_range(struct i915_pagedir *pd,
 	kunmap_atomic(pagedir);
 }
 
+static void gen8_map_pagedir(struct i915_pagedir *pd,
+			     struct i915_pagetab *pt,
+			     int entry,
+			     struct drm_device *dev)
+{
+	gen8_ppgtt_pde_t *pagedir = kmap_atomic(pd->page);
+	__gen8_do_map_pt(pagedir + entry, pt, dev);
+	kunmap_atomic(pagedir);
+}
+
+static void gen8_unmap_pagetable(struct i915_hw_ppgtt *ppgtt,
+				 struct i915_pagedir *pd,
+				 int pde)
+{
+	pd->page_tables[pde] = NULL;
+	WARN_ON(!test_and_clear_bit(pde, pd->used_pdes));
+	gen8_map_pagedir(pd, ppgtt->scratch_pt, pde, ppgtt->base.dev);
+}
+
 static void gen8_teardown_va_range(struct i915_address_space *vm,
 				   uint64_t start, uint64_t length)
 {
@@ -621,8 +640,10 @@ static void gen8_teardown_va_range(struct i915_address_space *vm,
 
 		if (bitmap_empty(pt->used_ptes, GEN8_PTES_PER_PT)) {
 			free_pt_single(pt, vm->dev);
-			pd->page_tables[pde] = NULL;
-			WARN_ON(!test_and_clear_bit(pde, pd->used_pdes));
+			/* This may be nixed later. Optimize? */
+			gen8_unmap_pagetable(ppgtt, pd, pde);
+		} else {
+			gen8_ppgtt_clear_range(vm, pd_start, pd_len, true);
 		}
 	}