From patchwork Tue Nov 3 09:27:48 2020
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11876757
Message-Id: <20201103095900.254775994@linutronix.de>
Date: Tue, 03 Nov 2020 10:27:48 +0100
From: Thomas Gleixner
To: LKML
Cc: Linus Torvalds, Peter Zijlstra, Paul McKenney, Christoph Hellwig,
    Sebastian Andrzej Siewior, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
    David Airlie, Daniel Vetter, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, Andrew Morton, linux-mm@kvack.org,
    Alexander Viro, Benjamin LaHaise, linux-fsdevel@vger.kernel.org,
    linux-aio@kvack.org, Chris Mason, Josef Bacik, David Sterba,
    linux-btrfs@vger.kernel.org, x86@kernel.org, Vineet Gupta,
    linux-snps-arc@lists.infradead.org, Russell King, Arnd Bergmann,
    linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
    Michal Simek, Thomas Bogendoerfer, linux-mips@vger.kernel.org,
    Nick Hu, Greentime Hu, Vincent Chen, Michael Ellerman,
    Benjamin Herrenschmidt, Paul Mackerras, linuxppc-dev@lists.ozlabs.org,
Miller" , sparclinux@vger.kernel.org, Chris Zankel , Max Filippov , linux-xtensa@linux-xtensa.org, Ingo Molnar , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Daniel Bristot de Oliveira , Christian Koenig , Huang Rui , VMware Graphics , Roland Scheidegger , Dave Airlie , Gerd Hoffmann , virtualization@lists.linux-foundation.org, spice-devel@lists.freedesktop.org, Ben Skeggs , nouveau@lists.freedesktop.org Subject: [patch V3 36/37] drm/i915: Replace io_mapping_map_atomic_wc() References: <20201103092712.714480842@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org None of these mapping requires the side effect of disabling pagefaults and preemption. Use io_mapping_map_local_wc() instead, and clean up gtt_user_read() and gtt_user_write() to use a plain copy_from_user() as the local maps are not disabling pagefaults. Signed-off-by: Thomas Gleixner Cc: Jani Nikula Cc: Joonas Lahtinen Cc: Rodrigo Vivi Cc: David Airlie Cc: Daniel Vetter Cc: intel-gfx@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 +--- drivers/gpu/drm/i915/i915_gem.c | 40 ++++++++----------------- drivers/gpu/drm/i915/selftests/i915_gem.c | 4 +- drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 8 ++--- 4 files changed, 22 insertions(+), 37 deletions(-) --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1081,7 +1081,7 @@ static void reloc_cache_reset(struct rel struct i915_ggtt *ggtt = cache_to_ggtt(cache); intel_gt_flush_ggtt_writes(ggtt->vm.gt); - io_mapping_unmap_atomic((void __iomem *)vaddr); + io_mapping_unmap_local((void __iomem *)vaddr); if (drm_mm_node_allocated(&cache->node)) { ggtt->vm.clear_range(&ggtt->vm, @@ -1147,7 +1147,7 @@ static void *reloc_iomap(struct drm_i915 if (cache->vaddr) { intel_gt_flush_ggtt_writes(ggtt->vm.gt); - io_mapping_unmap_atomic((void __force __iomem *) unmask_page(cache->vaddr)); + io_mapping_unmap_local((void __force __iomem *) unmask_page(cache->vaddr)); } else { struct i915_vma *vma; int err; @@ -1195,8 +1195,7 @@ static void *reloc_iomap(struct drm_i915 offset += page << PAGE_SHIFT; } - vaddr = (void __force *)io_mapping_map_atomic_wc(&ggtt->iomap, - offset); + vaddr = (void __force *)io_mapping_map_local_wc(&ggtt->iomap, offset); cache->page = page; cache->vaddr = (unsigned long)vaddr; --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -379,22 +379,15 @@ gtt_user_read(struct io_mapping *mapping char __user *user_data, int length) { void __iomem *vaddr; - unsigned long unwritten; + bool fail = false; /* We can use the cpu mem copy function because this is X86. 
-	vaddr = io_mapping_map_atomic_wc(mapping, base);
-	unwritten = __copy_to_user_inatomic(user_data,
-					    (void __force *)vaddr + offset,
-					    length);
-	io_mapping_unmap_atomic(vaddr);
-	if (unwritten) {
-		vaddr = io_mapping_map_wc(mapping, base, PAGE_SIZE);
-		unwritten = copy_to_user(user_data,
-					 (void __force *)vaddr + offset,
-					 length);
-		io_mapping_unmap(vaddr);
-	}
-	return unwritten;
+	vaddr = io_mapping_map_local_wc(mapping, base);
+	if (copy_to_user(user_data, (void __force *)vaddr + offset, length))
+		fail = true;
+	io_mapping_unmap_local(vaddr);
+
+	return fail;
 }
 
 static int
@@ -557,21 +550,14 @@ ggtt_write(struct io_mapping *mapping,
 	   char __user *user_data, int length)
 {
 	void __iomem *vaddr;
-	unsigned long unwritten;
+	bool fail = false;
 
 	/* We can use the cpu mem copy function because this is X86. */
-	vaddr = io_mapping_map_atomic_wc(mapping, base);
-	unwritten = __copy_from_user_inatomic_nocache((void __force *)vaddr + offset,
-						      user_data, length);
-	io_mapping_unmap_atomic(vaddr);
-	if (unwritten) {
-		vaddr = io_mapping_map_wc(mapping, base, PAGE_SIZE);
-		unwritten = copy_from_user((void __force *)vaddr + offset,
-					   user_data, length);
-		io_mapping_unmap(vaddr);
-	}
-
-	return unwritten;
+	vaddr = io_mapping_map_local_wc(mapping, base);
+	if (copy_from_user((void __force *)vaddr + offset, user_data, length))
+		fail = true;
+	io_mapping_unmap_local(vaddr);
+	return fail;
 }
 
 /**
--- a/drivers/gpu/drm/i915/selftests/i915_gem.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem.c
@@ -57,12 +57,12 @@ static void trash_stolen(struct drm_i915
 
 		ggtt->vm.insert_page(&ggtt->vm, dma, slot, I915_CACHE_NONE, 0);
 
-		s = io_mapping_map_atomic_wc(&ggtt->iomap, slot);
+		s = io_mapping_map_local_wc(&ggtt->iomap, slot);
 		for (x = 0; x < PAGE_SIZE / sizeof(u32); x++) {
 			prng = next_pseudo_random32(prng);
 			iowrite32(prng, &s[x]);
 		}
-		io_mapping_unmap_atomic(s);
+		io_mapping_unmap_local(s);
 	}
 
 	ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE);
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -1200,9 +1200,9 @@ static int igt_ggtt_page(void *arg)
 		u64 offset = tmp.start + order[n] * PAGE_SIZE;
 		u32 __iomem *vaddr;
 
-		vaddr = io_mapping_map_atomic_wc(&ggtt->iomap, offset);
+		vaddr = io_mapping_map_local_wc(&ggtt->iomap, offset);
 		iowrite32(n, vaddr + n);
-		io_mapping_unmap_atomic(vaddr);
+		io_mapping_unmap_local(vaddr);
 	}
 	intel_gt_flush_ggtt_writes(ggtt->vm.gt);
 
@@ -1212,9 +1212,9 @@ static int igt_ggtt_page(void *arg)
 		u32 __iomem *vaddr;
 		u32 val;
 
-		vaddr = io_mapping_map_atomic_wc(&ggtt->iomap, offset);
+		vaddr = io_mapping_map_local_wc(&ggtt->iomap, offset);
 		val = ioread32(vaddr + n);
-		io_mapping_unmap_atomic(vaddr);
+		io_mapping_unmap_local(vaddr);
 
 		if (val != n) {
 			pr_err("insert page failed: found %d, expected %d\n",
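
For readers unfamiliar with the new interface, the pattern the patch
converts to boils down to the following minimal sketch. It is
illustrative only: the example_gtt_read() helper and its parameters are
made up for this note and are not part of the patch; the sketch assumes
a struct io_mapping that has been set up with io_mapping_init_wc().

#include <linux/io-mapping.h>
#include <linux/uaccess.h>

static int example_gtt_read(struct io_mapping *mapping, unsigned long base,
			    unsigned long offset, char __user *user_data,
			    int length)
{
	void __iomem *vaddr;
	int ret = 0;

	/*
	 * io_mapping_map_local_wc() disables neither pagefaults nor
	 * preemption, so a faulting copy_to_user() is legal inside the
	 * mapped section and no _inatomic fast path with a
	 * io_mapping_map_wc() fallback is needed.
	 */
	vaddr = io_mapping_map_local_wc(mapping, base);
	if (copy_to_user(user_data, (void __force *)vaddr + offset, length))
		ret = -EFAULT;
	io_mapping_unmap_local(vaddr);

	return ret;
}

Because the local variant only prevents migration to another CPU rather
than disabling pagefaults and preemption, the mapped section may sleep,
which is what allows the plain user copies above and the removal of the
two-step fast/slow path from gtt_user_read() and ggtt_write().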