From patchwork Fri Oct 7 13:17:51 2016
From: Michał Winiarski
X-Patchwork-Id: 9366091
To: intel-gfx@lists.freedesktop.org
Cc: Mika Kuoppala
Date: Fri, 7 Oct 2016 15:17:51 +0200
Message-ID: <1475846272-16963-2-git-send-email-michal.winiarski@intel.com>
In-Reply-To: <1475846272-16963-1-git-send-email-michal.winiarski@intel.com>
References: <1475846272-16963-1-git-send-email-michal.winiarski@intel.com>
Subject: [Intel-gfx] [PATCH v2 2/3] drm/i915/gtt: Split gen8_ppgtt_clear_pte_range

Let's use a more top-down approach, where each gen8_ppgtt_clear_*
function is responsible for clearing the struct passed as an argument
and for calling the relevant clear functions on the lower-level tables.
Doing this rather than operating on PTE ranges makes the implementation
of shrinking page tables quite simple.
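To make the shape of the new code easier to follow outside the driver,
here is a minimal, self-contained userspace sketch of the same top-down
pattern (all toy_* names, sizes and the scratch value are invented for
illustration; this is not i915 code): each level clears only the
entries it owns for the requested range and hands the per-table
sub-ranges down to the level below.

/* toy_ppgtt_clear.c - illustration only, not i915 code. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TOY_PAGE_SHIFT	12			/* 4 KiB pages */
#define TOY_PTES	512			/* PTEs per page table */
#define TOY_PDES	512			/* page tables per directory */
#define TOY_PT_SPAN	((uint64_t)TOY_PTES << TOY_PAGE_SHIFT)	/* 2 MiB */
#define TOY_SCRATCH	0xdeadbeefull		/* stand-in for scratch_pte */

struct toy_pt { uint64_t pte[TOY_PTES]; };
struct toy_pd { struct toy_pt *pt[TOY_PDES]; };

/* Lowest level: clear only the PTEs of this one table that the range covers. */
static void toy_clear_pt(struct toy_pt *pt, uint64_t start, uint64_t length)
{
	unsigned int first = (start >> TOY_PAGE_SHIFT) & (TOY_PTES - 1);
	unsigned int count = length >> TOY_PAGE_SHIFT;
	unsigned int i;

	for (i = first; i < first + count; i++)
		pt->pte[i] = TOY_SCRATCH;
}

/* Upper level: walk the tables the range touches and delegate downwards. */
static void toy_clear_pd(struct toy_pd *pd, uint64_t start, uint64_t length)
{
	while (length) {
		struct toy_pt *pt = pd->pt[(start / TOY_PT_SPAN) & (TOY_PDES - 1)];
		uint64_t chunk = TOY_PT_SPAN - (start & (TOY_PT_SPAN - 1));

		if (chunk > length)
			chunk = length;
		if (pt)		/* skip holes, like the WARN_ON() checks */
			toy_clear_pt(pt, start, chunk);
		start += chunk;
		length -= chunk;
	}
}

int main(void)
{
	struct toy_pd pd = { { NULL } };

	pd.pt[0] = calloc(1, sizeof(*pd.pt[0]));
	pd.pt[1] = calloc(1, sizeof(*pd.pt[1]));

	/* 16 pages straddling the first 2 MiB boundary: 8 PTEs are cleared
	 * in table 0 (indices 504..511) and 8 in table 1 (indices 0..7). */
	toy_clear_pd(&pd, TOY_PT_SPAN - (8ull << TOY_PAGE_SHIFT),
		     16ull << TOY_PAGE_SHIFT);

	printf("%#llx %#llx %#llx\n",
	       (unsigned long long)pd.pt[0]->pte[504],
	       (unsigned long long)pd.pt[1]->pte[7],
	       (unsigned long long)pd.pt[1]->pte[8]);
	free(pd.pt[0]);
	free(pd.pt[1]);
	return 0;
}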
v2: Drop min when calculating num_entries, no negation in 48b ppgtt
    check, no newlines in vars block (Joonas)

Cc: Chris Wilson
Cc: Joonas Lahtinen
Cc: Michel Thierry
Cc: Mika Kuoppala
Signed-off-by: Michał Winiarski
Reviewed-by: Mika Kuoppala
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 107 +++++++++++++++++++-----------------
 1 file changed, 58 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 08e2f35..adabf58 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -704,59 +704,78 @@ static int gen8_48b_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	return gen8_write_pdp(req, 0, px_dma(&ppgtt->pml4));
 }
 
-static void gen8_ppgtt_clear_pte_range(struct i915_address_space *vm,
-				       struct i915_page_directory_pointer *pdp,
-				       uint64_t start,
-				       uint64_t length,
-				       gen8_pte_t scratch_pte)
+static void gen8_ppgtt_clear_pt(struct i915_address_space *vm,
+				struct i915_page_table *pt,
+				uint64_t start,
+				uint64_t length)
 {
 	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+
+	unsigned int pte_start = gen8_pte_index(start);
+	unsigned int num_entries = gen8_pte_count(start, length);
+	uint64_t pte;
 	gen8_pte_t *pt_vaddr;
-	unsigned pdpe = gen8_pdpe_index(start);
-	unsigned pde = gen8_pde_index(start);
-	unsigned pte = gen8_pte_index(start);
-	unsigned num_entries = length >> PAGE_SHIFT;
-	unsigned last_pte, i;
+	gen8_pte_t scratch_pte = gen8_pte_encode(vm->scratch_page.daddr,
+						 I915_CACHE_LLC);
 
-	if (WARN_ON(!pdp))
+	if (WARN_ON(!px_page(pt)))
 		return;
 
-	while (num_entries) {
-		struct i915_page_directory *pd;
-		struct i915_page_table *pt;
+	bitmap_clear(pt->used_ptes, pte_start, num_entries);
 
-		if (WARN_ON(!pdp->page_directory[pdpe]))
-			break;
+	pt_vaddr = kmap_px(pt);
+
+	for (pte = pte_start; pte < num_entries; pte++)
+		pt_vaddr[pte] = scratch_pte;
 
-		pd = pdp->page_directory[pdpe];
+	kunmap_px(ppgtt, pt_vaddr);
+}
+
+static void gen8_ppgtt_clear_pd(struct i915_address_space *vm,
+				struct i915_page_directory *pd,
+				uint64_t start,
+				uint64_t length)
+{
+	struct i915_page_table *pt;
+	uint64_t pde;
 
+	gen8_for_each_pde(pt, pd, start, length, pde) {
 		if (WARN_ON(!pd->page_table[pde]))
 			break;
 
-		pt = pd->page_table[pde];
+		gen8_ppgtt_clear_pt(vm, pt, start, length);
+	}
+}
 
-		if (WARN_ON(!px_page(pt)))
-			break;
+static void gen8_ppgtt_clear_pdp(struct i915_address_space *vm,
+				 struct i915_page_directory_pointer *pdp,
+				 uint64_t start,
+				 uint64_t length)
+{
+	struct i915_page_directory *pd;
+	uint64_t pdpe;
 
-		last_pte = pte + num_entries;
-		if (last_pte > GEN8_PTES)
-			last_pte = GEN8_PTES;
+	gen8_for_each_pdpe(pd, pdp, start, length, pdpe) {
+		if (WARN_ON(!pdp->page_directory[pdpe]))
+			break;
 
-		pt_vaddr = kmap_px(pt);
+		gen8_ppgtt_clear_pd(vm, pd, start, length);
+	}
+}
 
-		for (i = pte; i < last_pte; i++) {
-			pt_vaddr[i] = scratch_pte;
-			num_entries--;
-		}
+static void gen8_ppgtt_clear_pml4(struct i915_address_space *vm,
+				  struct i915_pml4 *pml4,
+				  uint64_t start,
+				  uint64_t length)
+{
+	struct i915_page_directory_pointer *pdp;
+	uint64_t pml4e;
 
-		kunmap_px(ppgtt, pt_vaddr);
+	gen8_for_each_pml4e(pdp, pml4, start, length, pml4e) {
+		if (WARN_ON(!pml4->pdps[pml4e]))
+			break;
 
-		pte = 0;
-		if (++pde == I915_PDES) {
-			if (++pdpe == I915_PDPES_PER_PDP(vm->dev))
-				break;
-			pde = 0;
-		}
+		gen8_ppgtt_clear_pdp(vm, pdp, start, length);
 	}
 }
 
@@ -764,21 +783,11 @@ static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
 				   uint64_t start,
 				   uint64_t length)
 {
 	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
-	gen8_pte_t scratch_pte = gen8_pte_encode(vm->scratch_page.daddr,
-						 I915_CACHE_LLC);
-	if (!USES_FULL_48BIT_PPGTT(vm->dev)) {
-		gen8_ppgtt_clear_pte_range(vm, &ppgtt->pdp, start, length,
-					   scratch_pte);
-	} else {
-		uint64_t pml4e;
-		struct i915_page_directory_pointer *pdp;
-
-		gen8_for_each_pml4e(pdp, &ppgtt->pml4, start, length, pml4e) {
-			gen8_ppgtt_clear_pte_range(vm, pdp, start, length,
-						   scratch_pte);
-		}
-	}
+	if (USES_FULL_48BIT_PPGTT(vm->dev))
+		gen8_ppgtt_clear_pml4(vm, &ppgtt->pml4, start, length);
+	else
+		gen8_ppgtt_clear_pdp(vm, &ppgtt->pdp, start, length);
 }
 
 static void
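For reference, the rewritten clear path relies on the existing
gen8_for_each_pde()/gen8_for_each_pdpe()/gen8_for_each_pml4e() walkers
to hand each level only the sub-range that belongs to one table. As a
rough standalone sketch of how such a range-walking macro can be built
(toy_* names and constants are invented here; this is not the i915
definition):

/* toy_for_each.c - sketch of a range-walking iterator macro; not i915 code. */
#include <stdint.h>
#include <stdio.h>

#define TOY_PAGE_SHIFT	12
#define TOY_PDE_SHIFT	21		/* each page table spans 2 MiB */
#define TOY_PDES	512

#define toy_pde_index(addr)	(((addr) >> TOY_PDE_SHIFT) & (TOY_PDES - 1))

/* Bytes of [start, start + length) that fall into the table holding start. */
static inline uint64_t toy_pde_chunk(uint64_t start, uint64_t length)
{
	uint64_t rest = ((uint64_t)1 << TOY_PDE_SHIFT) -
			(start & (((uint64_t)1 << TOY_PDE_SHIFT) - 1));

	return length < rest ? length : rest;
}

/*
 * Visit every PDE slot covered by [start, length).  Each iteration exposes
 * the sub-range (start, chunk) belonging to exactly one page table, then
 * advances start/length past it - the same idea as the gen8_for_each_*
 * walkers used above.
 */
#define toy_for_each_pde(pde, start, length, chunk)			\
	for ((pde) = toy_pde_index(start);				\
	     (length) > 0 && (pde) < TOY_PDES &&			\
		((chunk) = toy_pde_chunk((start), (length)), 1);	\
	     (start) += (chunk), (length) -= (chunk), (pde)++)

int main(void)
{
	/* 16 pages straddling a 2 MiB boundary -> two tables, 8 pages each. */
	uint64_t start = ((uint64_t)2 << TOY_PDE_SHIFT) -
			 ((uint64_t)8 << TOY_PAGE_SHIFT);
	uint64_t length = (uint64_t)16 << TOY_PAGE_SHIFT;
	uint64_t chunk;
	unsigned int pde;

	toy_for_each_pde(pde, start, length, chunk)
		printf("pde %u: %llu pages\n", pde,
		       (unsigned long long)(chunk >> TOY_PAGE_SHIFT));
	return 0;
}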