From patchwork Wed Mar 29 07:32:15 2023
X-Patchwork-Submitter: Zhao Liu
X-Patchwork-Id: 13192030
From: Zhao Liu
To: Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin,
 David Airlie, Daniel Vetter, Matthew Auld, Thomas Hellström,
 Nirmoy Das, Maarten Lankhorst, Chris Wilson, Christian König,
 intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org
Cc: Fabio M. De Francesco, Ira Weiny, Zhao Liu, Zhenyu Wang, Dave Hansen
Date: Wed, 29 Mar 2023 15:32:15 +0800
Message-Id: <20230329073220.3982460-5-zhao1.liu@linux.intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230329073220.3982460-1-zhao1.liu@linux.intel.com>
References: <20230329073220.3982460-1-zhao1.liu@linux.intel.com>
Subject: [Intel-gfx] [PATCH v2 4/9] drm/i915: Use kmap_local_page() in
 gem/selftests/huge_pages.c

From: Zhao Liu

The use of kmap_atomic() is being deprecated in favor of
kmap_local_page()[1], and this patch converts the call from
kmap_atomic() to kmap_local_page().
The main difference between atomic and local mappings is that local
mappings don't disable page faults or preemption (kmap_atomic() disables
preemption in the !PREEMPT_RT case, and otherwise only disables
migration). With kmap_local_page(), we can avoid the often unwanted side
effects of unnecessary page faults and preemption disables.

In drm/i915/gem/selftests/huge_pages.c, the function __cpu_check_shmem()
mainly uses the mapping to flush the cache and check values. There are
two reasons why __cpu_check_shmem() doesn't need to disable page faults
and preemption around the mapping:

1. The flush operation is safe. __cpu_check_shmem() calls
   drm_clflush_virt_range(), which uses CLFLUSHOPT or WBINVD to flush.
   Since CLFLUSHOPT is global on x86 and WBINVD is issued on each CPU in
   drm_clflush_virt_range(), the flush operation is global.

2. Any context switch caused by preemption or page faults (a page fault
   may cause a sleep) doesn't affect the validity of the local mapping.

Therefore, __cpu_check_shmem() is a function where the use of
kmap_local_page() in place of kmap_atomic() is correctly suited.

Convert the calls of kmap_atomic() / kunmap_atomic() to
kmap_local_page() / kunmap_local().

[1]: https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com

v2:
* Dropped hot plug related description since it has nothing to do with
  kmap_local_page().
* No code change since v1; added a description of the motivation for
  using kmap_local_page().

Suggested-by: Dave Hansen
Suggested-by: Ira Weiny
Suggested-by: Fabio M. De Francesco
Signed-off-by: Zhao Liu
Reviewed-by: Ira Weiny
---
Suggested by credits:
  Dave: Referred to his explanation about cache flush.
  Ira: Referred to his task document, review comments and explanation
       about cache flush.
  Fabio: Referred to his boilerplate commit message and his description
         about why kmap_local_page() should be preferred.
---
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index defece0bcb81..3f9ea48a48d0 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1026,7 +1026,7 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 		goto err_unlock;
 
 	for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
-		u32 *ptr = kmap_atomic(i915_gem_object_get_page(obj, n));
+		u32 *ptr = kmap_local_page(i915_gem_object_get_page(obj, n));
 
 		if (needs_flush & CLFLUSH_BEFORE)
 			drm_clflush_virt_range(ptr, PAGE_SIZE);
@@ -1034,12 +1034,12 @@ __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
 		if (ptr[dword] != val) {
 			pr_err("n=%lu ptr[%u]=%u, val=%u\n",
 			       n, dword, ptr[dword], val);
-			kunmap_atomic(ptr);
+			kunmap_local(ptr);
 			err = -EINVAL;
 			break;
 		}
 
-		kunmap_atomic(ptr);
+		kunmap_local(ptr);
 	}
 
 	i915_gem_object_finish_access(obj);
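
For reference, below is a minimal, self-contained sketch (illustration
only, not part of the patch) of the conversion pattern applied above. The
helper check_page() is hypothetical and stands in for the per-page check
in __cpu_check_shmem(); the point is only how kmap_local_page() /
kunmap_local() replace kmap_atomic() / kunmap_atomic().

#include <linux/highmem.h>

/* Hypothetical helper: check one dword of one page against an expected value. */
static int check_page(struct page *page, u32 dword, u32 val)
{
	/*
	 * kmap_local_page() returns a CPU-local, thread-local mapping.
	 * Unlike kmap_atomic(), it disables neither page faults nor
	 * preemption, so the code between map and unmap may fault or be
	 * preempted without invalidating the mapping.
	 */
	u32 *ptr = kmap_local_page(page);
	int err = 0;

	if (ptr[dword] != val)
		err = -EINVAL;

	/* Local mappings must be released with kunmap_local(). */
	kunmap_local(ptr);

	return err;
}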