From patchwork Fri Jun 14 16:43:45 2019
From: Mika Kuoppala <mika.kuoppala@linux.intel.com>
To: intel-gfx@lists.freedesktop.org
Date: Fri, 14 Jun 2019 19:43:45 +0300
Message-Id: <20190614164350.30415-5-mika.kuoppala@linux.intel.com>
In-Reply-To: <20190614164350.30415-1-mika.kuoppala@linux.intel.com>
References: <20190614164350.30415-1-mika.kuoppala@linux.intel.com>
Subject: [Intel-gfx] [PATCH 05/10] drm/i915/gtt: Generalize alloc_pd

Allocate all page directory variants with alloc_pd. As the lvl3 and
lvl4 variants differ in how they are manipulated, we need to check for
the existence of a backing phys page before accessing it.

v2: use err in returns

Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson
---
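The crux of the change: with a single alloc_pd() serving every level,
only some directories own a backing phys page, so anything touching
page contents has to test for it first. A minimal standalone sketch of
that guard follows (simplified types and a plain calloc-backed
allocator; not the actual i915 structures or setup_px()):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for i915_page_directory: the entry[] shadow is
 * always allocated, the backing phys page only for levels the GPU
 * actually reads. */
struct pd_sketch {
	void *page;		/* plays the role of pd->base.page */
	void *entry[512];
};

static bool pd_has_phys_page(const struct pd_sketch *pd)
{
	return pd->page;
}

/* One allocator for all levels; needs_page mimics calling setup_px()
 * only when the level requires a real page. */
static struct pd_sketch *alloc_pd(bool needs_page)
{
	struct pd_sketch *pd = calloc(1, sizeof(*pd));

	if (!pd)
		return NULL;

	if (needs_page) {
		pd->page = calloc(1, 4096);
		if (!pd->page) {
			free(pd);
			return NULL;
		}
	}
	return pd;
}

static void free_pd(struct pd_sketch *pd)
{
	if (pd_has_phys_page(pd))	/* same guard as around cleanup_px() */
		free(pd->page);
	free(pd);
}

int main(void)
{
	struct pd_sketch *with = alloc_pd(true);
	struct pd_sketch *without = alloc_pd(false);

	printf("with: %d, without: %d\n",
	       pd_has_phys_page(with), pd_has_phys_page(without));

	free_pd(with);
	free_pd(without);
	return 0;
}

In the patch itself the same test gates cleanup_px() in free_pd() and
the PDPE write in gen8_ppgtt_set_pdpe().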
 drivers/gpu/drm/i915/i915_gem_gtt.c | 88 ++++++++++++-----------------
 1 file changed, 36 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 25805971f771..de264b3a0105 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -719,10 +719,17 @@ static struct i915_page_directory *alloc_pd(struct i915_address_space *vm)
 	return pd;
 }
 
+static inline bool pd_has_phys_page(const struct i915_page_directory * const pd)
+{
+	return pd->base.page;
+}
+
 static void free_pd(struct i915_address_space *vm,
 		    struct i915_page_directory *pd)
 {
-	cleanup_px(vm, pd);
+	if (likely(pd_has_phys_page(pd)))
+		cleanup_px(vm, pd);
+
 	kfree(pd);
 }
 
@@ -734,37 +741,12 @@ static void init_pd_with_page(struct i915_address_space *vm,
 	memset_p(pd->entry, pt, 512);
 }
 
-static struct i915_page_directory *alloc_pdp(struct i915_address_space *vm)
-{
-	struct i915_page_directory *pdp;
-
-	pdp = __alloc_pd();
-	if (!pdp)
-		return ERR_PTR(-ENOMEM);
-
-	if (i915_vm_is_4lvl(vm)) {
-		if (unlikely(setup_px(vm, pdp))) {
-			kfree(pdp);
-			return ERR_PTR(-ENOMEM);
-		}
-	}
-
-	return pdp;
-}
-
-static void free_pdp(struct i915_address_space *vm,
-		     struct i915_page_directory *pdp)
-{
-	if (i915_vm_is_4lvl(vm))
-		cleanup_px(vm, pdp);
-
-	kfree(pdp);
-}
-
 static void init_pd(struct i915_address_space *vm,
 		    struct i915_page_directory * const pd,
 		    struct i915_page_directory * const to)
 {
+	GEM_DEBUG_BUG_ON(!pd_has_phys_page(pd));
+
 	fill_px(vm, pd, gen8_pdpe_encode(px_dma(to), I915_CACHE_LLC));
 	memset_p(pd->entry, to, 512);
 }
@@ -842,14 +824,13 @@ static bool gen8_ppgtt_clear_pd(struct i915_address_space *vm,
 	return !atomic_read(&pd->used);
 }
 
-static void gen8_ppgtt_set_pdpe(struct i915_address_space *vm,
-				struct i915_page_directory *pdp,
+static void gen8_ppgtt_set_pdpe(struct i915_page_directory *pdp,
 				struct i915_page_directory *pd,
 				unsigned int pdpe)
 {
 	gen8_ppgtt_pdpe_t *vaddr;
 
-	if (!i915_vm_is_4lvl(vm))
+	if (!pd_has_phys_page(pdp))
 		return;
 
 	vaddr = kmap_atomic_px(pdp);
@@ -877,7 +858,7 @@ static bool gen8_ppgtt_clear_pdp(struct i915_address_space *vm,
 
 		spin_lock(&pdp->lock);
 		if (!atomic_read(&pd->used)) {
-			gen8_ppgtt_set_pdpe(vm, pdp, vm->scratch_pd, pdpe);
+			gen8_ppgtt_set_pdpe(pdp, vm->scratch_pd, pdpe);
 			pdp->entry[pdpe] = vm->scratch_pd;
 
 			GEM_BUG_ON(!atomic_read(&pdp->used));
@@ -938,7 +919,7 @@ static void gen8_ppgtt_clear_4lvl(struct i915_address_space *vm,
 		}
 		spin_unlock(&pml4->lock);
 		if (free)
-			free_pdp(vm, pdp);
+			free_pd(vm, pdp);
 	}
 }
 
@@ -1242,7 +1223,7 @@ static int gen8_init_scratch(struct i915_address_space *vm)
 	}
 
 	if (i915_vm_is_4lvl(vm)) {
-		vm->scratch_pdp = alloc_pdp(vm);
+		vm->scratch_pdp = alloc_pd(vm);
 		if (IS_ERR(vm->scratch_pdp)) {
 			ret = PTR_ERR(vm->scratch_pdp);
 			goto free_pd;
@@ -1304,7 +1285,7 @@ static void gen8_free_scratch(struct i915_address_space *vm)
 		return;
 
 	if (i915_vm_is_4lvl(vm))
-		free_pdp(vm, vm->scratch_pdp);
+		free_pd(vm, vm->scratch_pdp);
 	free_pd(vm, vm->scratch_pd);
 	free_pt(vm, vm->scratch_pt);
 	cleanup_scratch_page(vm);
@@ -1324,7 +1305,7 @@ static void gen8_ppgtt_cleanup_3lvl(struct i915_address_space *vm,
 		free_pd(vm, pdp->entry[i]);
 	}
 
-	free_pdp(vm, pdp);
+	free_pd(vm, pdp);
 }
 
 static void gen8_ppgtt_cleanup_4lvl(struct i915_ppgtt *ppgtt)
@@ -1431,7 +1412,7 @@ static int gen8_ppgtt_alloc_pdp(struct i915_address_space *vm,
 
 			old = cmpxchg(&pdp->entry[pdpe], vm->scratch_pd, pd);
 			if (old == vm->scratch_pd) {
-				gen8_ppgtt_set_pdpe(vm, pdp, pd, pdpe);
+				gen8_ppgtt_set_pdpe(pdp, pd, pdpe);
 				atomic_inc(&pdp->used);
 			} else {
 				free_pd(vm, pd);
@@ -1457,7 +1438,7 @@ static int gen8_ppgtt_alloc_pdp(struct i915_address_space *vm,
 unwind_pd:
 	spin_lock(&pdp->lock);
 	if (atomic_dec_and_test(&pd->used)) {
-		gen8_ppgtt_set_pdpe(vm, pdp, vm->scratch_pd, pdpe);
+		gen8_ppgtt_set_pdpe(pdp, vm->scratch_pd, pdpe);
 		GEM_BUG_ON(!atomic_read(&pdp->used));
 		atomic_dec(&pdp->used);
 		free_pd(vm, pd);
@@ -1487,13 +1468,12 @@ static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 
 	spin_lock(&pml4->lock);
 	gen8_for_each_pml4e(pdp, pml4, start, length, pml4e) {
-
 		if (pdp == vm->scratch_pdp) {
 			struct i915_page_directory *old;
 
 			spin_unlock(&pml4->lock);
 
-			pdp = alloc_pdp(vm);
+			pdp = alloc_pd(vm);
 			if (IS_ERR(pdp))
 				goto unwind;
 
@@ -1503,7 +1483,7 @@ static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 			if (old == vm->scratch_pdp) {
 				gen8_ppgtt_set_pml4e(pml4, pdp, pml4e);
 			} else {
-				free_pdp(vm, pdp);
+				free_pd(vm, pdp);
 				pdp = old;
 			}
 
@@ -1527,7 +1507,7 @@ static int gen8_ppgtt_alloc_4lvl(struct i915_address_space *vm,
 	spin_lock(&pml4->lock);
 	if (atomic_dec_and_test(&pdp->used)) {
 		gen8_ppgtt_set_pml4e(pml4, vm->scratch_pdp, pml4e);
-		free_pdp(vm, pdp);
+		free_pd(vm, pdp);
 	}
 	spin_unlock(&pml4->lock);
 unwind:
@@ -1550,7 +1530,7 @@ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
 			goto unwind;
 
 		init_pd_with_page(vm, pd, vm->scratch_pt);
-		gen8_ppgtt_set_pdpe(vm, pdp, pd, pdpe);
+		gen8_ppgtt_set_pdpe(pdp, pd, pdpe);
 		atomic_inc(&pdp->used);
 	}
 
@@ -1562,7 +1542,7 @@ static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
 unwind:
 	start -= from;
 	gen8_for_each_pdpe(pd, pdp, from, start, pdpe) {
-		gen8_ppgtt_set_pdpe(vm, pdp, vm->scratch_pd, pdpe);
+		gen8_ppgtt_set_pdpe(pdp, vm->scratch_pd, pdpe);
 		free_pd(vm, pd);
 	}
 	atomic_set(&pdp->used, 0);
@@ -1620,13 +1600,17 @@ static struct i915_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 	if (err)
 		goto err_free;
 
-	ppgtt->pd = alloc_pdp(&ppgtt->vm);
-	if (IS_ERR(ppgtt->pd)) {
-		err = PTR_ERR(ppgtt->pd);
-		goto err_scratch;
+	ppgtt->pd = __alloc_pd();
+	if (!ppgtt->pd) {
+		err = -ENOMEM;
+		goto err_free_scratch;
 	}
 
 	if (i915_vm_is_4lvl(&ppgtt->vm)) {
+		err = setup_px(&ppgtt->vm, ppgtt->pd);
+		if (err)
+			goto err_free_pdp;
+
 		init_pd(&ppgtt->vm, ppgtt->pd, ppgtt->vm.scratch_pdp);
 
 		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_4lvl;
@@ -1643,7 +1627,7 @@ static struct i915_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 		if (intel_vgpu_active(i915)) {
 			err = gen8_preallocate_top_level_pdp(ppgtt);
 			if (err)
-				goto err_pdp;
+				goto err_free_pdp;
 		}
 
 		ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc_3lvl;
@@ -1658,9 +1642,9 @@ static struct i915_ppgtt *gen8_ppgtt_create(struct drm_i915_private *i915)
 
 	return ppgtt;
 
-err_pdp:
-	free_pdp(&ppgtt->vm, ppgtt->pd);
-err_scratch:
+err_free_pdp:
+	free_pd(&ppgtt->vm, ppgtt->pd);
+err_free_scratch:
 	gen8_free_scratch(&ppgtt->vm);
 err_free:
 	kfree(ppgtt);