From patchwork Thu Oct 13 12:02:42 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Michał Winiarski
X-Patchwork-Id: 9374897
From: Michał Winiarski
To: intel-gfx@lists.freedesktop.org
Date: Thu, 13 Oct 2016 14:02:42 +0200
Message-ID: <1476360162-24062-3-git-send-email-michal.winiarski@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1476360162-24062-1-git-send-email-michal.winiarski@intel.com>
References: <1476360162-24062-1-git-send-email-michal.winiarski@intel.com>
Subject: [Intel-gfx] [CI 3/3] drm/i915/gtt: Free unused lower-level page tables
List-Id: Intel graphics driver community testing & development

Since "Dynamic page table allocations" were introduced, our page tables
can grow (being dynamically allocated) with address space range usage.
Unfortunately, their lifetime is bound to the vm. This is not a huge
problem when softpin is not used - drm_mm puts an upper bound on the
used range by eventually reusing addresses for our VMAs. With softpin,
however, long-lived contexts can drain the system of memory even with a
single "small" object. For example:

	bo = bo_alloc(size);
	while (true) {
		offset += size;
		exec(bo, offset);
	}

This loop will keep creating new page-table allocations until all
system memory is used up for tracking GPU pages (even though almost all
PTEs in this vm point to scratch).
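(As a side note, the loop above maps onto the softpin execbuf UAPI
roughly as in the hypothetical sketch below. The exec_at() helper, fd
and bo_handle are made up for illustration, and batch contents, context
setup and error handling are elided - only EXEC_OBJECT_PINNED and the
execbuf2 ioctl structs are real UAPI.)

	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	/* Hypothetical helper: execute bo_handle softpinned at 'offset'. */
	static void exec_at(int fd, uint32_t bo_handle, uint64_t offset)
	{
		struct drm_i915_gem_exec_object2 obj;
		struct drm_i915_gem_execbuffer2 execbuf;

		memset(&obj, 0, sizeof(obj));
		obj.handle = bo_handle;
		obj.offset = offset;            /* we pick the GPU address */
		obj.flags = EXEC_OBJECT_PINNED; /* softpin: no relocation */

		memset(&execbuf, 0, sizeof(execbuf));
		execbuf.buffers_ptr = (uintptr_t)&obj;
		execbuf.buffer_count = 1;

		ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
	}

Every iteration softpins the same object one step further into the
address space, touching PDE/PDPE ranges that were never used before and
forcing the kernel to allocate fresh page tables for them.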
Let's free unused page tables in clear_range to prevent this - if no
entries are used, we can safely free the table and return this
information to the caller (so that the higher-level entry can be
pointed at scratch).

v2: Document return value and free semantics (Joonas)
v3: No newlines in vars block (Joonas)
v4: Drop redundant local 'reduce' variable
v5: Handle CI fail with enable_ppgtt=2

Cc: Michel Thierry
Cc: Mika Kuoppala
Reviewed-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
Signed-off-by: Michał Winiarski
---
 drivers/gpu/drm/i915/i915_gem_gtt.c | 78 +++++++++++++++++++++++++++++++++----
 1 file changed, 70 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c284d8d..f4c80bc 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -704,13 +704,14 @@ static int gen8_48b_mm_switch(struct i915_hw_ppgtt *ppgtt,
 	return gen8_write_pdp(req, 0, px_dma(&ppgtt->pml4));
 }
 
-static void gen8_ppgtt_clear_pt(struct i915_address_space *vm,
+/* Removes entries from a single page table, releasing it if it's empty.
+ * Caller can use the return value to update higher-level entries */
+static bool gen8_ppgtt_clear_pt(struct i915_address_space *vm,
 				struct i915_page_table *pt,
 				uint64_t start,
 				uint64_t length)
 {
 	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
-
 	unsigned int pte_start = gen8_pte_index(start);
 	unsigned int num_entries = gen8_pte_count(start, length);
 	uint64_t pte;
@@ -719,63 +720,124 @@ static void gen8_ppgtt_clear_pt(struct i915_address_space *vm,
 						 I915_CACHE_LLC);
 
 	if (WARN_ON(!px_page(pt)))
-		return;
+		return false;
 
 	bitmap_clear(pt->used_ptes, pte_start, num_entries);
 
+	if (bitmap_empty(pt->used_ptes, GEN8_PTES)) {
+		free_pt(vm->dev, pt);
+		return true;
+	}
+
 	pt_vaddr = kmap_px(pt);
 
 	for (pte = pte_start; pte < num_entries; pte++)
 		pt_vaddr[pte] = scratch_pte;
 
 	kunmap_px(ppgtt, pt_vaddr);
+
+	return false;
 }
 
-static void gen8_ppgtt_clear_pd(struct i915_address_space *vm,
+/* Removes entries from a single page dir, releasing it if it's empty.
+ * Caller can use the return value to update higher-level entries
+ */
+static bool gen8_ppgtt_clear_pd(struct i915_address_space *vm,
 				struct i915_page_directory *pd,
 				uint64_t start,
 				uint64_t length)
 {
+	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
 	struct i915_page_table *pt;
 	uint64_t pde;
+	gen8_pde_t *pde_vaddr;
+	gen8_pde_t scratch_pde = gen8_pde_encode(px_dma(vm->scratch_pt),
+						 I915_CACHE_LLC);
 
 	gen8_for_each_pde(pt, pd, start, length, pde) {
 		if (WARN_ON(!pd->page_table[pde]))
 			break;
 
-		gen8_ppgtt_clear_pt(vm, pt, start, length);
+		if (gen8_ppgtt_clear_pt(vm, pt, start, length)) {
+			__clear_bit(pde, pd->used_pdes);
+			pde_vaddr = kmap_px(pd);
+			pde_vaddr[pde] = scratch_pde;
+			kunmap_px(ppgtt, pde_vaddr);
+		}
+	}
+
+	if (bitmap_empty(pd->used_pdes, I915_PDES)) {
+		free_pd(vm->dev, pd);
+		return true;
 	}
+
+	return false;
 }
 
-static void gen8_ppgtt_clear_pdp(struct i915_address_space *vm,
+/* Removes entries from a single page dir pointer, releasing it if it's empty.
+ * Caller can use the return value to update higher-level entries
+ */
+static bool gen8_ppgtt_clear_pdp(struct i915_address_space *vm,
 				 struct i915_page_directory_pointer *pdp,
 				 uint64_t start,
 				 uint64_t length)
 {
+	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
 	struct i915_page_directory *pd;
 	uint64_t pdpe;
+	gen8_ppgtt_pdpe_t *pdpe_vaddr;
+	gen8_ppgtt_pdpe_t scratch_pdpe =
+		gen8_pdpe_encode(px_dma(vm->scratch_pd), I915_CACHE_LLC);
 
 	gen8_for_each_pdpe(pd, pdp, start, length, pdpe) {
 		if (WARN_ON(!pdp->page_directory[pdpe]))
 			break;
 
-		gen8_ppgtt_clear_pd(vm, pd, start, length);
+		if (gen8_ppgtt_clear_pd(vm, pd, start, length)) {
+			__clear_bit(pdpe, pdp->used_pdpes);
+			if (USES_FULL_48BIT_PPGTT(vm->dev)) {
+				pdpe_vaddr = kmap_px(pdp);
+				pdpe_vaddr[pdpe] = scratch_pdpe;
+				kunmap_px(ppgtt, pdpe_vaddr);
+			}
+		}
 	}
+
+	if (USES_FULL_48BIT_PPGTT(vm->dev) &&
+	    bitmap_empty(pdp->used_pdpes, I915_PDPES_PER_PDP(vm->dev))) {
+		free_pdp(vm->dev, pdp);
+		return true;
+	}
+
+	return false;
 }
 
+/* Removes entries from a single pml4.
+ * This is the top-level structure in 4-level page tables used on gen8+.
+ * Empty entries are always scratch pml4e.
+ */
 static void gen8_ppgtt_clear_pml4(struct i915_address_space *vm,
 				  struct i915_pml4 *pml4,
 				  uint64_t start,
 				  uint64_t length)
 {
+	struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
 	struct i915_page_directory_pointer *pdp;
 	uint64_t pml4e;
+	gen8_ppgtt_pml4e_t *pml4e_vaddr;
+	gen8_ppgtt_pml4e_t scratch_pml4e =
+		gen8_pml4e_encode(px_dma(vm->scratch_pdp), I915_CACHE_LLC);
 
 	gen8_for_each_pml4e(pdp, pml4, start, length, pml4e) {
 		if (WARN_ON(!pml4->pdps[pml4e]))
 			break;
 
-		gen8_ppgtt_clear_pdp(vm, pdp, start, length);
+		if (gen8_ppgtt_clear_pdp(vm, pdp, start, length)) {
+			__clear_bit(pml4e, pml4->used_pml4es);
+			pml4e_vaddr = kmap_px(pml4);
+			pml4e_vaddr[pml4e] = scratch_pml4e;
+			kunmap_px(ppgtt, pml4e_vaddr);
+		}
 	}
 }
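(To see the return-value contract outside the diff context, here is a
minimal two-level model in plain C. It is a sketch, not the i915 code:
struct and field names are stand-ins, a NULL child plays the role of an
entry pointing at scratch, and the real driver manipulates hardware
page tables via kmap_px()/free_pt()/free_pd().)

	#include <stdbool.h>
	#include <stdlib.h>
	#include <string.h>

	#define PTES 512
	#define PDES 512

	struct pt {
		bool used[PTES];        /* stand-in for pt->used_ptes */
	};

	struct pd {
		struct pt *pts[PDES];   /* stand-in for pd->page_table[] */
		bool used[PDES];        /* stand-in for pd->used_pdes */
	};

	static bool all_clear(const bool *map, int n)
	{
		for (int i = 0; i < n; i++)
			if (map[i])
				return false;
		return true;
	}

	/* Mirrors gen8_ppgtt_clear_pt(): returns true if pt was freed. */
	static bool clear_pt(struct pt *pt, int first, int count)
	{
		memset(&pt->used[first], 0, count * sizeof(bool));
		if (all_clear(pt->used, PTES)) {
			free(pt);
			return true;
		}
		/* Otherwise only the cleared PTEs get rewritten to scratch. */
		return false;
	}

	/* Mirrors gen8_ppgtt_clear_pd(): returns true if pd was freed,
	 * so the caller (PDP level) can update its own entry and bitmap. */
	static bool clear_pd(struct pd *pd, int pde, int first, int count)
	{
		if (pd->pts[pde] && clear_pt(pd->pts[pde], first, count)) {
			pd->pts[pde] = NULL;    /* "point the PDE at scratch" */
			pd->used[pde] = false;
		}
		if (all_clear(pd->used, PDES)) {
			free(pd);
			return true;
		}
		return false;
	}

The same pattern simply repeats once per level for the PDP and the
PML4 in the patch above (with the PML4, as the fixed top level, never
freeing itself, which is why gen8_ppgtt_clear_pml4() stays void).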