From patchwork Tue Aug 23 10:17:22 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12951946
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v8 1/8] overflow: Move and add few utility macros into overflow
Date: Tue, 23 Aug 2022 19:17:22 +0900
Message-Id: <20220823101729.2098841-2-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com>
References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com

This moves the overflows_type() utility macro from the i915_utils header
into the overflow header. The overflows_type() macro can be used to catch
truncation (overflow) between different data types. It also adds a
check_assign() macro which assigns a source value to a destination
pointer along with an overflow check.
The overflows_type() macro has been improved to handle the sign bit via
gcc's built-in overflow check function. An overflows_ptr() helper macro is
also added for checking overflow between a value and a pointer type/value.

v3: Add is_type_unsigned() macro (Mauro)
    Modify overflows_type() macro to consider signed data types (Mauro)
    Fix the problem that safe_conversion() macro always returns true
v4: Fix kernel-doc markups
v6: Move macro addition location so that it can be used by more than the
    drm subsystem (Jani, Mauro, Andi)
    Change is_type_unsigned to is_unsigned_type to have the same name form
    as the is_signed_type macro
v8: Add check_assign() and remove safe_conversion() (Kees)
    Fix overflows_type() to use gcc's built-in overflow function (Andrzej)
    Add overflows_ptr() to allow overflow checking when assigning a value
    into a pointer variable (G.G.)

Signed-off-by: Gwan-gyeong Mun
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Nirmoy Das
Cc: Jani Nikula
Cc: Andi Shyti
Cc: Andrzej Hajda
Cc: Mauro Carvalho Chehab
Cc: Kees Cook
Reviewed-by: Mauro Carvalho Chehab (v5)
---
 drivers/gpu/drm/i915/i915_user_extensions.c |  2 +-
 drivers/gpu/drm/i915/i915_utils.h           |  5 +-
 include/linux/overflow.h                    | 67 +++++++++++++++++++++
 3 files changed, 69 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_user_extensions.c b/drivers/gpu/drm/i915/i915_user_extensions.c index c822d0aafd2d..0fb2fecbcaae 100644 --- a/drivers/gpu/drm/i915/i915_user_extensions.c +++ b/drivers/gpu/drm/i915/i915_user_extensions.c @@ -51,7 +51,7 @@ int i915_user_extensions(struct i915_user_extension __user *ext, return err; if (get_user(next, &ext->next_extension) || - overflows_type(next, ext)) + overflows_ptr(next, ext)) return -EFAULT; ext = u64_to_user_ptr(next); diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h index c10d68cdc3ca..eb0ded23fa9c 100644 --- a/drivers/gpu/drm/i915/i915_utils.h +++ b/drivers/gpu/drm/i915/i915_utils.h @@ -32,6 +32,7 @@ #include #include #include +#include #ifdef CONFIG_X86 #include @@ -111,10 +112,6 @@ bool i915_error_injected(void); #define range_overflows_end_t(type, start, size, max) \ range_overflows_end((type)(start), (type)(size), (type)(max)) -/* Note we don't consider signbits :| */ -#define overflows_type(x, T) \ - (sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T)) - #define ptr_mask_bits(ptr, n) ({ \ unsigned long __v = (unsigned long)(ptr); \ (typeof(ptr))(__v & -BIT(n)); \ diff --git a/include/linux/overflow.h b/include/linux/overflow.h index f1221d11f8e5..4016f1378bcc 100644 --- a/include/linux/overflow.h +++ b/include/linux/overflow.h @@ -52,6 +52,73 @@ static inline bool __must_check __must_check_overflow(bool overflow) return unlikely(overflow); } +/** + * overflows_type - helper for checking the overflows between data types or + * values + * + * @x: Source value or data type for overflow check + * @T: Destination value or data type for overflow check + * + * It compares the values or data type between the first and second argument to + * check whether overflow can occur when assigning the first argument to the + * variable of the second argument. Source and Destination can be signed or + * unsigned data types. + * + * Returns: + * True if overflow can occur, false otherwise.
+ */ +#define overflows_type(x, T) __must_check_overflow(({ \ + typeof(T) v = 0; \ + __builtin_add_overflow_p((x), v, v); \ +})) + +/** + * overflows_ptr - helper for checking the overflows between a value and + * a pointer type/value + * + * @x: Source value for overflow check + * @T: Destination pointer type or value for overflow check + * + * gcc's built-in overflow check functions don't support checking between the + * pointer type and non-pointer type. Therefore it compares the values and + * size of each data type between the first and second argument to check whether + * truncation can occur when assigning the first argument to the variable of the + * second argument. It checks internally the ptr is a pointer type. + * + * Returns: + * True if overflow can occur, false otherwise. + */ +#define overflows_ptr(x, T) __must_check_overflow(({ \ + typecheck_pointer(T); \ + ((x) < 0) ? 1 : (sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T)) ? 1 : 0; \ +})) + +/** + * check_assign - perform an assigning source value into destination ptr along + * with an overflow check. + * + * @value: Source value + * @ptr: Destination pointer address, If the pointer type is not used, + * a warning message is output during build. + * + * It checks internally the ptr is a pointer type. And it uses gcc's built-in + * overflow check function. + * It does not use the check_*() wrapper functions, but directly uses gcc's + * built-in overflow check function so that it can be used even when + * the type of value and the type pointed to by ptr are different without build + * warning messages. + * + * Returns: + * If the value would overflow the destination, it returns true. If not return + * false. When overflow does not occur, the assigning into ptr from value + * succeeds. It follows the return policy as other check_*_overflow() functions + * return non-zero as a failure. 
+ */ +#define check_assign(value, ptr) __must_check_overflow(({ \ + typecheck_pointer(ptr); \ + __builtin_add_overflow(0, value, ptr); \ +})) + /* * For simplicity and code hygiene, the fallback code below insists on * a, b and *d having the same type (similar to the min() and max()
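As a minimal usage sketch of the three helpers added above (struct my_dev, its fields and my_dev_set_size() are invented for illustration; only overflows_type(), overflows_ptr(), check_assign() and u64_to_user_ptr() are real interfaces, the first three as proposed by this patch):

/* Usage sketch only; struct my_dev and my_dev_set_size() are hypothetical. */
#include <linux/kernel.h>	/* u64_to_user_ptr() */
#include <linux/overflow.h>	/* overflows_type(), overflows_ptr(), check_assign() as proposed */

struct my_dev {
	unsigned int npages;	/* narrower than the 64-bit uAPI size below */
	void __user *uext;	/* user-supplied extension pointer */
};

static int my_dev_set_size(struct my_dev *dev, u64 size, u64 user_ext)
{
	/* The second argument can be a plain type: reject anything above 4 GiB. */
	if (overflows_type(size, u32))
		return -E2BIG;

	/* Check and assign in one step; a non-zero (true) result means it would overflow. */
	if (check_assign(size >> PAGE_SHIFT, &dev->npages))
		return -E2BIG;

	/* A user-provided u64 must also fit into a pointer on this build. */
	if (overflows_ptr(user_ext, dev->uext))
		return -EFAULT;
	dev->uext = u64_to_user_ptr(user_ext);

	return 0;
}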
From patchwork Tue Aug 23 10:17:23 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12951953
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v8 2/8] util_macros: Add exact_type macro to catch type mis-match while compiling
Date: Tue, 23 Aug 2022 19:17:23 +0900
Message-Id: <20220823101729.2098841-3-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com>
References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com

This adds the exact_type() and exactly_pgoff_t() macros to catch type
mismatches at compile time. The existing typecheck() macro only emits
build warnings; the newly added exact_type() macro uses BUILD_BUG_ON()
to generate a build break when the types differ, so mismatches become
explicit build errors.

v6: Move macro addition location so that it can be used by more than the
    drm subsystem (Jani, Mauro, Andi)

Signed-off-by: Gwan-gyeong Mun
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Nirmoy Das
Cc: Jani Nikula
Cc: Andi Shyti
Cc: Mauro Carvalho Chehab
---
 include/linux/util_macros.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/include/linux/util_macros.h b/include/linux/util_macros.h index 72299f261b25..b6624b275257 100644 --- a/include/linux/util_macros.h +++ b/include/linux/util_macros.h @@ -2,6 +2,9 @@ #ifndef _LINUX_HELPER_MACROS_H_ #define _LINUX_HELPER_MACROS_H_ +#include +#include + #define __find_closest(x, a, as, op) \ ({ \ typeof(as) __fc_i, __fc_as = (as) - 1; \ @@ -38,4 +41,26 @@ */ #define find_closest_descending(x, a, as) __find_closest(x, a, as, >=) +/** + * exact_type - break compile if source type and destination value's type are + * not the same + * @T: Source type + * @n: Destination value + * + * It is a helper macro for a poor man's -Wconversion: only allow variables of + * an exact type. It determines whether the source type and destination value's + * type are the same while compiling, and it breaks compile if two types are + * not the same + */ +#define exact_type(T, n) \ + BUILD_BUG_ON(!__builtin_constant_p(n) && !__builtin_types_compatible_p(T, typeof(n))) + +/** + * exactly_pgoff_t - helper to check if the type of a value is pgoff_t + * @n: value to compare pgoff_t type + * + * It breaks compile if the argument value's type is not pgoff_t type.
+ */ +#define exactly_pgoff_t(n) exact_type(pgoff_t, n) + #endif
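As a short sketch of how the new macros are meant to be used to reject a wrongly typed page index at build time (struct my_obj, __lookup_page() and lookup_page() are invented; only exact_type() and exactly_pgoff_t() come from this patch, and note that constant expressions are deliberately exempt from the check):

/* Build-time check sketch; struct my_obj and the lookup helpers are hypothetical. */
#include <linux/build_bug.h>
#include <linux/types.h>	/* pgoff_t */
#include <linux/util_macros.h>	/* exact_type(), exactly_pgoff_t() as proposed */

struct my_obj;
struct page *__lookup_page(struct my_obj *obj, pgoff_t n);

/* Wrapper that insists callers pass exactly a pgoff_t page index. */
#define lookup_page(obj, n) ({		\
	exactly_pgoff_t(n);		\
	__lookup_page(obj, n);		\
})

static struct page *example(struct my_obj *obj, pgoff_t idx, unsigned int narrow_idx)
{
	lookup_page(obj, 0);		/* ok: constant expressions are exempt */
	lookup_page(obj, narrow_idx);	/* breaks the build: unsigned int is not pgoff_t */
	return lookup_page(obj, idx);	/* ok: idx is exactly pgoff_t */
}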
From patchwork Tue Aug 23 10:17:24 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12951949
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v8 3/8] drm/i915/gem: Typecheck page lookups
Date: Tue, 23 Aug 2022 19:17:24 +0900
Message-Id: <20220823101729.2098841-4-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com>
References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com

From: Chris Wilson

We need to check that we avoid integer overflows when looking up a page,
and so fix all the instances where we have mistakenly used a plain integer
instead of a more suitable long. Be pedantic and add integer typechecking
to the lookup so that we can be sure that we are safe.

It also uses pgoff_t, as our page lookups must remain compatible with the
page cache; pgoff_t is currently exactly unsigned long.

v2: Move added i915_utils's macro into drm_util header (Jani N)
v3: Do not use the same name for a macro and a function. (Mauro)
    For kernel-doc, macros and functions are handled in the same namespace;
    reusing the name prevents ever adding documentation for it.
v4: Add kernel-doc markups to the kAPI functions and macros (Mauro)
v5: Fix an alignment to match open parenthesis
v6: Rebase

Signed-off-by: Chris Wilson
Signed-off-by: Gwan-gyeong Mun
Cc: Tvrtko Ursulin
Cc: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Nirmoy Das
Reviewed-by: Mauro Carvalho Chehab
Reviewed-by: Andrzej Hajda
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   7 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 293 ++++++++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  27 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |   2 +-
 .../drm/i915/gem/selftests/i915_gem_context.c |  12 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |   8 +-
 .../drm/i915/gem/selftests/i915_gem_object.c  |   8 +-
 drivers/gpu/drm/i915/i915_gem.c               |  18 +-
 drivers/gpu/drm/i915/i915_utils.h             |   1 +
 drivers/gpu/drm/i915/i915_vma.c               |   8 +-
 10 files changed, 323 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 389e9f157ca5..b3861739c1eb 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -413,10 +413,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, static void i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + pgoff_t idx = offset >> PAGE_SHIFT; void *src_map; void *src_ptr; - src_map = kmap_atomic(i915_gem_object_get_page(obj, offset >> PAGE_SHIFT)); + src_map = kmap_atomic(i915_gem_object_get_page(obj, idx)); src_ptr = src_map + offset_in_page(offset); if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)) @@ -429,9 +430,10 @@ i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, static void i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + pgoff_t idx = offset >> PAGE_SHIFT; + dma_addr_t dma = i915_gem_object_get_dma_address(obj, idx); void __iomem *src_map; void __iomem *src_ptr; - dma_addr_t dma = i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT); src_map = io_mapping_map_wc(&obj->mm.region->iomap, dma - obj->mm.region->region.start, @@ -460,6 +462,7 @@ i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset */ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + GEM_BUG_ON(overflows_type(offset >> PAGE_SHIFT, pgoff_t)); GEM_BUG_ON(offset >= obj->base.size); GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size); GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj)); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 6f0a3ce35567..5da872afc4ba 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -27,8 +27,10 @@ enum intel_region_id; * spot such a local variable, please consider fixing!
* * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for num_pages - * - get_user_pages*() mixed ints with longs + * - sg_table embeds unsigned int for nents + * + * We can check for invalidly typed locals with typecheck(), see for example + * i915_gem_object_get_sg(). */ #define GEM_CHECK_SIZE_OVERFLOW(sz) \ GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX) @@ -363,44 +365,289 @@ i915_gem_object_get_tile_row_size(const struct drm_i915_gem_object *obj) int i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, unsigned int tiling, unsigned int stride); +/** + * __i915_gem_object_page_iter_get_sg - helper to find the target scatterlist + * pointer and the target page position using pgoff_t n input argument and + * i915_gem_object_page_iter + * @obj: i915 GEM buffer object + * @iter: i915 GEM buffer object page iterator + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * Context: Takes and releases the mutex lock of the i915_gem_object_page_iter. + * Takes and releases the RCU lock to search the radix_tree of + * i915_gem_object_page_iter. + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * Recommended to use wrapper macro: i915_gem_object_page_iter_get_sg() + */ struct scatterlist * -__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, bool dma); +__i915_gem_object_page_iter_get_sg(struct drm_i915_gem_object *obj, + struct i915_gem_object_page_iter *iter, + pgoff_t n, + unsigned int *offset); +/** + * i915_gem_object_page_iter_get_sg - wrapper macro for + * __i915_gem_object_page_iter_get_sg() + * @obj: i915 GEM buffer object + * @it: i915 GEM buffer object page iterator + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * Context: Takes and releases the mutex lock of the i915_gem_object_page_iter. + * Takes and releases the RCU lock to search the radix_tree of + * i915_gem_object_page_iter. + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_page_iter_get_sg(). + */ +#define i915_gem_object_page_iter_get_sg(obj, it, n, offset) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_page_iter_get_sg(obj, it, n, offset); \ +}) + +/** + * __i915_gem_object_get_sg - helper to find the target scatterlist + * pointer and the target page position using pgoff_t n input argument and + * drm_i915_gem_object. It uses an internal shmem scatterlist lookup function. + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * It uses drm_i915_gem_object's internal shmem scatterlist lookup function as + * i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg(). + * + * Returns: + * The target scatterlist pointer and the target page position. 
+ * + * Recommended to use wrapper macro: i915_gem_object_get_sg() + * See also __i915_gem_object_page_iter_get_sg() + */ static inline struct scatterlist * -i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - unsigned int n, - unsigned int *offset) +__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, pgoff_t n, + unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, false); + return __i915_gem_object_page_iter_get_sg(obj, &obj->mm.get_page, n, offset); } +/** + * i915_gem_object_get_sg - wrapper macro for __i915_gem_object_get_sg() + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_sg(). + * See also __i915_gem_object_page_iter_get_sg() + */ +#define i915_gem_object_get_sg(obj, n, offset) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_get_sg(obj, n, offset); \ +}) + +/** + * __i915_gem_object_get_sg_dma - helper to find the target scatterlist + * pointer and the target page position using pgoff_t n input argument and + * drm_i915_gem_object. It uses an internal DMA mapped scatterlist lookup function + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * It uses drm_i915_gem_object's internal DMA mapped scatterlist lookup function + * as i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg(). + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * Recommended to use wrapper macro: i915_gem_object_get_sg_dma() + * See also __i915_gem_object_page_iter_get_sg() + */ static inline struct scatterlist * -i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, - unsigned int n, - unsigned int *offset) +__i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, pgoff_t n, + unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, true); + return __i915_gem_object_page_iter_get_sg(obj, &obj->mm.get_dma_page, n, offset); } +/** + * i915_gem_object_get_sg_dma - wrapper macro for __i915_gem_object_get_sg_dma() + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_sg_dma(). + * See also __i915_gem_object_page_iter_get_sg() + */ +#define i915_gem_object_get_sg_dma(obj, n, offset) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_get_sg_dma(obj, n, offset); \ +}) + +/** + * __i915_gem_object_get_page - helper to find the target page with a page offset + * @obj: i915 GEM buffer object + * @n: page offset + * + * It uses drm_i915_gem_object's internal shmem scatterlist lookup function as + * i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg() + * internally. + * + * Returns: + * The target page pointer. 
+ * + * Recommended to use wrapper macro: i915_gem_object_get_page() + * See also __i915_gem_object_page_iter_get_sg() + */ struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, - unsigned int n); +__i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n); +/** + * i915_gem_object_get_page - wrapper macro for __i915_gem_object_get_page + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * The target page pointer. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_page(). + * See also __i915_gem_object_page_iter_get_sg() + */ +#define i915_gem_object_get_page(obj, n) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_get_page(obj, n); \ +}) + +/** + * __i915_gem_object_get_dirty_page - helper to find the target page with a page + * offset + * @obj: i915 GEM buffer object + * @n: page offset + * + * It works like i915_gem_object_get_page(), but it marks the returned page dirty. + * + * Returns: + * The target page pointer. + * + * Recommended to use wrapper macro: i915_gem_object_get_dirty_page() + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_page() + */ struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n); +__i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n); + +/** + * i915_gem_object_get_dirty_page - wrapper macro for __i915_gem_object_get_dirty_page + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * The target page pointer. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_dirty_page(). + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_page() + */ +#define i915_gem_object_get_dirty_page(obj, n) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_get_dirty_page(obj, n); \ +}) +/** + * __i915_gem_object_get_dma_address_len - helper to get bus addresses of + * targeted DMA mapped scatterlist from i915 GEM buffer object and it's length + * @obj: i915 GEM buffer object + * @n: page offset + * @len: DMA mapped scatterlist's DMA bus addresses length to return + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * Recommended to use wrapper macro: i915_gem_object_get_dma_address_len() + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_sg_dma() + */ dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, - unsigned int *len); +__i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, pgoff_t n, + unsigned int *len); +/** + * i915_gem_object_get_dma_address_len - wrapper macro for + * __i915_gem_object_get_dma_address_len + * @obj: i915 GEM buffer object + * @n: page offset + * @len: DMA mapped scatterlist's DMA bus addresses length to return + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_dma_address_len(). 
+ * See also __i915_gem_object_page_iter_get_sg() and + * __i915_gem_object_get_dma_address_len() + */ +#define i915_gem_object_get_dma_address_len(obj, n, len) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_get_dma_address_len(obj, n, len); \ +}) + +/** + * __i915_gem_object_get_dma_address - helper to get bus addresses of + * targeted DMA mapped scatterlist from i915 GEM buffer object + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlis + * + * Recommended to use wrapper macro: i915_gem_object_get_dma_address() + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_sg_dma() + */ dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n); +__i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n); + +/** + * i915_gem_object_get_dma_address - wrapper macro for + * __i915_gem_object_get_dma_address + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_dma_address(). + * See also __i915_gem_object_page_iter_get_sg() and + * __i915_gem_object_get_dma_address() + */ +#define i915_gem_object_get_dma_address(obj, n) ({ \ + exactly_pgoff_t(n); \ + __i915_gem_object_get_dma_address(obj, n); \ +}) void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct sg_table *pages, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 8357dbdcab5c..4d925202cae1 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -510,14 +510,16 @@ void __i915_gem_object_release_map(struct drm_i915_gem_object *obj) } struct scatterlist * -__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, - bool dma) +__i915_gem_object_page_iter_get_sg(struct drm_i915_gem_object *obj, + struct i915_gem_object_page_iter *iter, + pgoff_t n, + unsigned int *offset) + { - struct scatterlist *sg; + const bool dma = iter == &obj->mm.get_dma_page || + iter == &obj->ttm.get_io_page; unsigned int idx, count; + struct scatterlist *sg; might_sleep(); GEM_BUG_ON(n >= obj->base.size >> PAGE_SHIFT); @@ -625,7 +627,7 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, } struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) +__i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n) { struct scatterlist *sg; unsigned int offset; @@ -638,8 +640,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) /* Like i915_gem_object_get_page(), but mark the returned page dirty */ struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n) +__i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n) { struct page *page; @@ -651,9 +652,8 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, - unsigned int *len) +__i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, + pgoff_t n, unsigned int *len) { struct scatterlist *sg; unsigned int offset; @@ -667,8 +667,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object 
*obj, } dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n) +__i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n) { return i915_gem_object_get_dma_address_len(obj, n, NULL); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index bc9c432edffe..4148126e7b1b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -685,7 +685,7 @@ static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo, GEM_WARN_ON(bo->ttm); base = obj->mm.region->iomap.base - obj->mm.region->region.start; - sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs, true); + sg = i915_gem_object_page_iter_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs); return ((base + sg_dma_address(sg)) >> PAGE_SHIFT) + ofs; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index c6ad67b90e8a..a18a890e681f 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -455,7 +455,8 @@ static int gpu_fill(struct intel_context *ce, static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) { const bool has_llc = HAS_LLC(to_i915(obj->base.dev)); - unsigned int n, m, need_flush; + unsigned int need_flush; + unsigned long n, m; int err; i915_gem_object_lock(obj, NULL); @@ -485,7 +486,8 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) static noinline int cpu_check(struct drm_i915_gem_object *obj, unsigned int idx, unsigned int max) { - unsigned int n, m, needs_flush; + unsigned int needs_flush; + unsigned long n; int err; i915_gem_object_lock(obj, NULL); @@ -494,7 +496,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, goto out_unlock; for (n = 0; n < real_page_count(obj); n++) { - u32 *map; + u32 *map, m; map = kmap_atomic(i915_gem_object_get_page(obj, n)); if (needs_flush & CLFLUSH_BEFORE) @@ -502,7 +504,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (m = 0; m < max; m++) { if (map[m] != m) { - pr_err("%pS: Invalid value at object %d page %d/%ld, offset %d/%d: found %x expected %x\n", + pr_err("%pS: Invalid value at object %d page %ld/%ld, offset %d/%d: found %x expected %x\n", __builtin_return_address(0), idx, n, real_page_count(obj), m, max, map[m], m); @@ -513,7 +515,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (; m < DW_PER_PAGE; m++) { if (map[m] != STACK_MAGIC) { - pr_err("%pS: Invalid value at object %d page %d, offset %d: found %x expected %x (uninitialised)\n", + pr_err("%pS: Invalid value at object %d page %ld, offset %d: found %x expected %x (uninitialised)\n", __builtin_return_address(0), idx, n, m, map[m], STACK_MAGIC); err = -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 3cff08f04f6c..7512b21b6b4f 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -95,11 +95,11 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt_view view; struct i915_vma *vma; + unsigned long offset; unsigned long page; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; int err; @@ -156,7 +156,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, cpu = 
kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, @@ -212,10 +212,10 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, for_each_prime_number_from(page, 1, npages) { struct i915_ggtt_view view = compute_partial_view(obj, page, MIN_CHUNK_PAGES); + unsigned long offset; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; GEM_BUG_ON(view.partial.size > nreal); @@ -252,7 +252,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c index bdf5bb40ccf1..19e374f68ff7 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c @@ -33,10 +33,10 @@ static int igt_gem_object(void *arg) static int igt_gem_huge(void *arg) { - const unsigned int nreal = 509; /* just to be awkward */ + const unsigned long nreal = 509; /* just to be awkward */ struct drm_i915_private *i915 = arg; struct drm_i915_gem_object *obj; - unsigned int n; + unsigned long n; int err; /* Basic sanitycheck of our huge fake object allocation */ @@ -49,7 +49,7 @@ static int igt_gem_huge(void *arg) err = i915_gem_object_pin_pages_unlocked(obj); if (err) { - pr_err("Failed to allocate %u pages (%lu total), err=%d\n", + pr_err("Failed to allocate %lu pages (%lu total), err=%d\n", nreal, obj->base.size / PAGE_SIZE, err); goto out; } @@ -57,7 +57,7 @@ static int igt_gem_huge(void *arg) for (n = 0; n < obj->base.size / PAGE_SIZE; n++) { if (i915_gem_object_get_page(obj, n) != i915_gem_object_get_page(obj, n % nreal)) { - pr_err("Page lookup mismatch at index %u [%u]\n", + pr_err("Page lookup mismatch at index %lu [%lu]\n", n, n % nreal); err = -EINVAL; goto out_unpin; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 702e5b89be22..dba58a3c3238 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -229,8 +229,9 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, struct drm_i915_gem_pread *args) { unsigned int needs_clflush; - unsigned int idx, offset; char __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; @@ -383,13 +384,17 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj, { struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; void __user *user_data; struct i915_vma 
*vma; - u64 remain, offset; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + wakeref = intel_runtime_pm_get(&i915->runtime_pm); vma = i915_gem_gtt_prepare(obj, &node, false); @@ -540,13 +545,17 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; struct intel_runtime_pm *rpm = &i915->runtime_pm; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; struct i915_vma *vma; - u64 remain, offset; void __user *user_data; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + if (i915_gem_object_has_struct_page(obj)) { /* * Avoid waking the device up if we can fallback, as @@ -654,8 +663,9 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, { unsigned int partial_cacheline_write; unsigned int needs_clflush; - unsigned int offset, idx; void __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h index eb0ded23fa9c..b325b48a66ec 100644 --- a/drivers/gpu/drm/i915/i915_utils.h +++ b/drivers/gpu/drm/i915/i915_utils.h @@ -33,6 +33,7 @@ #include #include #include +#include #ifdef CONFIG_X86 #include diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 260371716490..1dd8a0f51aeb 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -909,7 +909,7 @@ rotate_pages(struct drm_i915_gem_object *obj, unsigned int offset, struct sg_table *st, struct scatterlist *sg) { unsigned int column, row; - unsigned int src_idx; + pgoff_t src_idx; for (column = 0; column < width; column++) { unsigned int left; @@ -1015,7 +1015,7 @@ add_padding_pages(unsigned int count, static struct scatterlist * remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int offset, unsigned int alignment_pad, + unsigned long offset, unsigned int alignment_pad, unsigned int width, unsigned int height, unsigned int src_stride, unsigned int dst_stride, struct sg_table *st, struct scatterlist *sg, @@ -1074,7 +1074,7 @@ remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_contiguous_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, + pgoff_t obj_offset, unsigned int count, struct sg_table *st, struct scatterlist *sg) { @@ -1107,7 +1107,7 @@ remap_contiguous_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_linear_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, unsigned int alignment_pad, + pgoff_t obj_offset, unsigned int alignment_pad, unsigned int size, struct sg_table *st, struct scatterlist *sg, unsigned int *gtt_offset) From patchwork Tue Aug 23 10:17:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12951948 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9A241C32772 for ; Tue, 23 Aug 2022 10:19:46 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by 
gabe.freedesktop.org (Postfix) with ESMTP id BE5EDB3059; Tue, 23 Aug 2022 10:18:52 +0000 (UTC) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by gabe.freedesktop.org (Postfix) with ESMTPS id 30A0EB2FDE; Tue, 23 Aug 2022 10:18:08 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661249889; x=1692785889; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uZyR5G+AOydbxrOOok2HEUuikNWiFlgo84gXHP/S3TI=; b=mvWFClhrIbPZy8kkk4Rz1w0V0IPIk7ASU+Psszyz/6skf+u844h15hA1 3Jj/59LwSNJPefiPevWRG1gvnbm21VOOcho3iHzxHTOSDPPX+PgXtxXSL HCR/As3hPYlejZur2bF6iUpf0X4cDur66eEO3szbeRfZFD05njUGHSDyO +lIb8PP+J1gaYkkScV7RH9wtll45XDRf8VWaMKzE42IY7PdwylUTcUbPW 6fDmQi/W/sjXgxmujVx6vkCo0+y8m17N8qUXORKPVYLw4Gxz/kYHdfi1g Wy7jdZyQRGo40FAfqKrsikC1dLvH4ty7YmIrxHuQTQCtYOoIqeS4WuYDK w==; X-IronPort-AV: E=McAfee;i="6500,9779,10447"; a="293647221" X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="293647221" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:08 -0700 X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="751639951" Received: from jabish-mobl2.amr.corp.intel.com (HELO paris.amr.corp.intel.com) ([10.254.9.209]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:03 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Subject: [PATCH v8 4/8] drm/i915: Check for integer truncation on scatterlist creation Date: Tue, 23 Aug 2022 19:17:25 +0900 Message-Id: <20220823101729.2098841-5-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Chris Wilson There is an impedance mismatch between the scatterlist API using unsigned int and our memory/page accounting in unsigned long. That is we may try to create a scatterlist for a large object that overflows returning a small table into which we try to fit very many pages. As the object size is under control of userspace, we have to be prudent and catch the conversion errors. To catch the implicit truncation as we switch from unsigned long into the scatterlist's unsigned int, we use overflows_type check and report E2BIG prior to the operation. This is already used in our create ioctls to indicate if the uABI request is simply too large for the backing store. Failing that type check, we have a second check at sg_alloc_table time to make sure the values we are passing into the scatterlist API are not truncated. It uses pgoff_t for locals that are dealing with page indices, in this case, the page count is the limit of the page index. 
And it uses safe_conversion() macro which performs a type conversion (cast) of an integer value into a new variable, checking that the destination is large enough to hold the source value. v2: Move added i915_utils's macro into drm_util header (Jani N) v5: Fix macros to be enclosed in parentheses for complex values Fix too long line warning v8: Replace safe_conversion() with check_assign() (Kees) Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 6 ++++-- drivers/gpu/drm/i915/gem/i915_gem_object.h | 3 --- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 5 ++++- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++++- drivers/gpu/drm/i915/gvt/dmabuf.c | 9 +++++---- drivers/gpu/drm/i915/i915_scatterlist.h | 11 +++++++++++ 8 files changed, 36 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index c698f95af15f..53fa27e1c950 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -37,10 +37,13 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) struct sg_table *st; struct scatterlist *sg; unsigned int sg_page_sizes; - unsigned int npages; + pgoff_t npages; /* restricted by sg_alloc_table */ int max_order; gfp_t gfp; + if (check_assign(obj->base.size >> PAGE_SHIFT, &npages)) + return -E2BIG; + max_order = MAX_ORDER; #ifdef CONFIG_SWIOTLB if (is_swiotlb_active(obj->base.dev->dev)) { @@ -67,7 +70,6 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) if (!st) return -ENOMEM; - npages = obj->base.size / PAGE_SIZE; if (sg_alloc_table(st, npages, GFP_KERNEL)) { kfree(st); return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 5da872afc4ba..0cf31adbfd41 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -26,9 +26,6 @@ enum intel_region_id; * this and catch if we ever need to fix it. In the meantime, if you do * spot such a local variable, please consider fixing! * - * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for nents - * * We can check for invalidly typed locals with typecheck(), see for example * i915_gem_object_get_sg(). 
*/ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 0d0e46dae559..88ba7266a3a5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -28,6 +28,10 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) void *dst; int i; + /* Contiguous chunk, with a single scatterlist element */ + if (overflows_type(obj->base.size, sg->length)) + return -E2BIG; + if (GEM_WARN_ON(i915_gem_object_needs_bit17_swizzle(obj))) return -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index f42ca1179f37..339b0a9cf2d0 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -193,13 +193,16 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj) struct drm_i915_private *i915 = to_i915(obj->base.dev); struct intel_memory_region *mem = obj->mm.region; struct address_space *mapping = obj->base.filp->f_mapping; - const unsigned long page_count = obj->base.size / PAGE_SIZE; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; struct sgt_iter sgt_iter; + pgoff_t page_count; struct page *page; int ret; + if (check_assign(obj->base.size >> PAGE_SHIFT, &page_count)) + return -E2BIG; + /* * Assert that the object is not currently in any GPU domain. As it * wasn't in the GTT, there shouldn't be any way it could have been in diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 4148126e7b1b..d708c4c2a9bd 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -783,6 +783,10 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj) { struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS]; struct ttm_placement placement; + pgoff_t num_pages; + + if (check_assign(obj->base.size >> PAGE_SHIFT, &num_pages)) + return -E2BIG; GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 8423df021b71..48237e443863 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -128,13 +128,16 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj) static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj) { - const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; unsigned int sg_page_sizes; struct page **pvec; + pgoff_t num_pages; /* limited by sg_alloc_table_from_pages_segment */ int ret; + if (check_assign(obj->base.size >> PAGE_SHIFT, &num_pages)) + return -E2BIG; + st = kmalloc(sizeof(*st), GFP_KERNEL); if (!st) return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c index 01e54b45c5c1..bc6e823584ad 100644 --- a/drivers/gpu/drm/i915/gvt/dmabuf.c +++ b/drivers/gpu/drm/i915/gvt/dmabuf.c @@ -42,8 +42,7 @@ #define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12)) -static int vgpu_gem_get_pages( - struct drm_i915_gem_object *obj) +static int vgpu_gem_get_pages(struct drm_i915_gem_object *obj) { struct drm_i915_private *dev_priv = to_i915(obj->base.dev); struct intel_vgpu *vgpu; @@ -52,7 +51,10 @@ static int vgpu_gem_get_pages( int i, j, ret; gen8_pte_t __iomem *gtt_entries; struct intel_vgpu_fb_info *fb_info; - u32 page_num; + pgoff_t page_num; + + if 
(check_assign(obj->base.size >> PAGE_SHIFT, &page_num)) + return -E2BIG; fb_info = (struct intel_vgpu_fb_info *)obj->gvt_info; if (drm_WARN_ON(&dev_priv->drm, !fb_info)) @@ -66,7 +68,6 @@ static int vgpu_gem_get_pages( if (unlikely(!st)) return -ENOMEM; - page_num = obj->base.size >> PAGE_SHIFT; ret = sg_alloc_table(st, page_num, GFP_KERNEL); if (ret) { kfree(st); diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h index 9ddb3e743a3e..1d1802beb42b 100644 --- a/drivers/gpu/drm/i915/i915_scatterlist.h +++ b/drivers/gpu/drm/i915/i915_scatterlist.h @@ -220,4 +220,15 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res, u64 region_start, u32 page_alignment); +/* Wrap scatterlist.h to sanity check for integer truncation */ +typedef unsigned int __sg_size_t; /* see linux/scatterlist.h */ +#define sg_alloc_table(sgt, nents, gfp) \ + overflows_type(nents, __sg_size_t) ? -E2BIG \ + : ((sg_alloc_table)(sgt, (__sg_size_t)(nents), gfp)) + +#define sg_alloc_table_from_pages_segment(sgt, pages, npages, offset, size, max_segment, gfp) \ + overflows_type(npages, __sg_size_t) ? -E2BIG \ + : ((sg_alloc_table_from_pages_segment)(sgt, pages, (__sg_size_t)(npages), offset, \ + size, max_segment, gfp)) + #endif From patchwork Tue Aug 23 10:17:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12951950 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 66D0BC32772 for ; Tue, 23 Aug 2022 10:19:58 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7200CB3068; Tue, 23 Aug 2022 10:18:53 +0000 (UTC) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by gabe.freedesktop.org (Postfix) with ESMTPS id 9801CB2FF2; Tue, 23 Aug 2022 10:18:13 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661249894; x=1692785894; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=bNeVdVfZ7gqb0naDT5qkkW6jZqVEr8L6HjhifYEvt0M=; b=VzlPj5xhNIvNXoALMb0/g5geMIOBdnI4/DO2QKV4udn2uOJZ3Hkj7zMi 5MZyu2wdK9s2zjijc6xmugOELw92p77tW7DWT1DVZx3YOdnrL2IXlLqT8 b3mCuaVB6GTWD1WBUb5O40nO57uFz921Xrx+Z+675yXjS3mnZf7gL+B+y QJfSyKwgvswk+LfsDejsHQsM7ublI5kcyHMMUqqTfXQezVKBf8FJRtr8B /m1NKA3Qb+0OtrLxF1l5veQyRMqnNNmRyqtvNNqrOTjoKHGL5lfSRY0hq G9FZiMAIt6p02lM20Vvhed04XGg/a3a6r5NIk2Y/RwunqT+gswy0hjyW7 w==; X-IronPort-AV: E=McAfee;i="6500,9779,10447"; a="293647234" X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="293647234" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:13 -0700 X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="751639968" Received: from jabish-mobl2.amr.corp.intel.com (HELO paris.amr.corp.intel.com) ([10.254.9.209]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:08 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Subject: [PATCH v8 5/8] drm/i915: Check for integer truncation on the configuration of ttm place Date: 
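As a sketch of what the i915_scatterlist.h wrappers above buy a caller: a page count that cannot be represented in the scatterlist API's unsigned int nents is refused with -E2BIG instead of being silently truncated (build_sgt() and the obj_size parameter are invented for this example):

/* Effect of the checking wrapper; build_sgt() is hypothetical. */
#include <linux/scatterlist.h>
#include "i915_scatterlist.h"	/* provides the checking sg_alloc_table() wrapper above */

static int build_sgt(struct sg_table *st, u64 obj_size)
{
	unsigned long npages = obj_size >> PAGE_SHIFT;

	/*
	 * With the wrapper this expands to
	 *   overflows_type(npages, __sg_size_t) ? -E2BIG
	 *     : (sg_alloc_table)(st, (__sg_size_t)npages, GFP_KERNEL)
	 * so a page count that does not fit in unsigned int fails up front.
	 */
	return sg_alloc_table(st, npages, GFP_KERNEL);
}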
Tue, 23 Aug 2022 19:17:26 +0900 Message-Id: <20220823101729.2098841-6-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" There is an impedance mismatch between the first/last valid page frame number of ttm place in unsigned and our memory/page accounting in unsigned long. As the object size is under the control of userspace, we have to be prudent and catch the conversion errors. To catch the implicit truncation as we switch from unsigned long to unsigned, we use overflows_type check and report E2BIG or overflow_type prior to the operation. v3: Not to change execution inside a macro. (Mauro) Add safe_conversion_gem_bug_on() macro and remove temporal SAFE_CONVERSION() macro. v4: Fix unhandled GEM_BUG_ON() macro call from safe_conversion_gem_bug_on() v6: Fix to follow general use case for GEM_BUG_ON(). (Jani) v7: Fix to use WARN_ON() macro where GEM_BUG_ON() macro was used. (Jani) v8: Replace safe_conversion() with check_assign() (Kees) Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Cc: Jani Nikula Reviewed-by: Nirmoy Das (v2) Reviewed-by: Mauro Carvalho Chehab (v3) Reported-by: kernel test robot Reviewed-by: Andrzej Hajda (v5) --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 6 +++--- drivers/gpu/drm/i915/intel_region_ttm.c | 17 ++++++++++++++--- 2 files changed, 17 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index d708c4c2a9bd..615541b650fa 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -140,14 +140,14 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr, if (flags & I915_BO_ALLOC_CONTIGUOUS) place->flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place->fpfn = offset >> PAGE_SHIFT; - place->lpfn = place->fpfn + (size >> PAGE_SHIFT); + WARN_ON(check_assign(offset >> PAGE_SHIFT, &place->fpfn)); + WARN_ON(check_assign(place->fpfn + (size >> PAGE_SHIFT), &place->lpfn)); } else if (mr->io_size && mr->io_size < mr->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place->flags |= TTM_PL_FLAG_TOPDOWN; } else { place->fpfn = 0; - place->lpfn = mr->io_size >> PAGE_SHIFT; + WARN_ON(check_assign(mr->io_size >> PAGE_SHIFT, &place->lpfn)); } } } diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c index 575d67bc6ffe..37a964b20b36 100644 --- a/drivers/gpu/drm/i915/intel_region_ttm.c +++ b/drivers/gpu/drm/i915/intel_region_ttm.c @@ -209,14 +209,23 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, if (flags & I915_BO_ALLOC_CONTIGUOUS) place.flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place.fpfn = offset 
>> PAGE_SHIFT; - place.lpfn = place.fpfn + (size >> PAGE_SHIFT); + if (WARN_ON(check_assign(offset >> PAGE_SHIFT, &place.fpfn))) { + ret = -E2BIG; + goto out; + } + if (WARN_ON(check_assign(place.fpfn + (size >> PAGE_SHIFT), &place.lpfn))) { + ret = -E2BIG; + goto out; + } } else if (mem->io_size && mem->io_size < mem->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place.flags |= TTM_PL_FLAG_TOPDOWN; } else { place.fpfn = 0; - place.lpfn = mem->io_size >> PAGE_SHIFT; + if (WARN_ON(check_assign(mem->io_size >> PAGE_SHIFT, &place.lpfn))) { + ret = -E2BIG; + goto out; + } } } @@ -224,6 +233,8 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, mock_bo.bdev = &mem->i915->bdev; ret = man->func->alloc(man, &mock_bo, &place, &res); + +out: if (ret == -ENOSPC) ret = -ENXIO; if (!ret) From patchwork Tue Aug 23 10:17:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12951952 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 32974C32772 for ; Tue, 23 Aug 2022 10:20:11 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7012AB3094; Tue, 23 Aug 2022 10:18:58 +0000 (UTC) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by gabe.freedesktop.org (Postfix) with ESMTPS id 70B36B2FF7; Tue, 23 Aug 2022 10:18:18 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661249899; x=1692785899; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fusilmdSQD+FIBKYeUukiHJrxKRQ3B5pPSTTQeO8rTk=; b=U9VbX0kzOBMLwDOM1usTenvyrkqtxl6B2DOsU+2zOFMKk8uodWbfyv9y gr31/9eO3gs2w+QMVNllfab7Ponzk0hTzng216jV9Y+Ggn67uZN3fUTbs IigfxlW3+9VyXxH9/rrd1a0LgpsxG3lV+h7Z5hc7kChadiWHW+UX17pYf klAm8r7LxfCeuBxlFqwqw+4u+f1UC54srcYqsUw9SjGRyw+i35SUtJvUS BggCAA8IMegDrcZdnQAlKO4UQwIYr4ZT4gkEbZNY/jGKnkv8wB1Yn/qm9 mzQWQFPZ94A/KL8rOgsxaMBGX+Ik2T9qgk6F5wtaPynmQHao/r+t7qMoK A==; X-IronPort-AV: E=McAfee;i="6500,9779,10447"; a="293647261" X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="293647261" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:18 -0700 X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="751639982" Received: from jabish-mobl2.amr.corp.intel.com (HELO paris.amr.corp.intel.com) ([10.254.9.209]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:13 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Subject: [PATCH v8 6/8] drm/i915: Check if the size is too big while creating shmem file Date: Tue, 23 Aug 2022 19:17:27 +0900 Message-Id: <20220823101729.2098841-7-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The __shmem_file_setup() function returns -EINVAL if size is greater than MAX_LFS_FILESIZE. To handle the same error as other code that returns -E2BIG when the size is too large, it add a code that returns -E2BIG when the size is larger than the size that can be handled. v4: If BITS_PER_LONG is 32, size > MAX_LFS_FILESIZE is always false, so it checks only when BITS_PER_LONG is 64. Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reported-by: kernel test robot Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 339b0a9cf2d0..ca30060e34ab 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -541,6 +541,20 @@ static int __create_shmem(struct drm_i915_private *i915, drm_gem_private_object_init(&i915->drm, obj, size); + /* XXX: The __shmem_file_setup() function returns -EINVAL if size is + * greater than MAX_LFS_FILESIZE. + * To handle the same error as other code that returns -E2BIG when + * the size is too large, we add a code that returns -E2BIG when the + * size is larger than the size that can be handled. + * If BITS_PER_LONG is 32, size > MAX_LFS_FILESIZE is always false, + * so we only needs to check when BITS_PER_LONG is 64. + * If BITS_PER_LONG is 32, E2BIG checks are processed when + * i915_gem_object_size_2big() is called before init_object() callback + * is called. 
+ */ + if (BITS_PER_LONG == 64 && size > MAX_LFS_FILESIZE) + return -E2BIG; + if (i915->mm.gemfs) filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size, flags); From patchwork Tue Aug 23 10:17:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12951951 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B75CEC32772 for ; Tue, 23 Aug 2022 10:20:05 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 1B778B307E; Tue, 23 Aug 2022 10:18:57 +0000 (UTC) Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by gabe.freedesktop.org (Postfix) with ESMTPS id E090EB3002; Tue, 23 Aug 2022 10:18:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661249902; x=1692785902; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xXWYIOHPKolHy/g3/53FzAfI4f56T3QhlqwnvSbksEw=; b=Mhu+nl7WDxVF0/de+BBy6sKJUD4UzNPW1H6eIpUhRSoPwzRrTjm1+1jx Vt+4CYRykfbI7ZQ1d7BkWDIgCDYl6PXF7VTidr/6k34mi2kaH3A8Ue3nb Nc0oO/JB0E6yoeSNLoZ3Zkw0W7hyh2FBp03frSgwYGlPGfr2//3JQHskH BAR06SbWJJX3NIlJURPygpqTrb12ByR3TzwyJIoIzLH2rK+/pzX++eC/e IHdM0BC6SJ78aTLAWRPIrYhApDPz6qOU+TgRCRFh2RH1RyOHHz5utJ9bX Ru2PhXK3mMl4OFMLv+KrcWUXxVDB2aeKHLhTg5YHx1GVQNDSrS0Of9ZY/ A==; X-IronPort-AV: E=McAfee;i="6500,9779,10447"; a="293647280" X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="293647280" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:22 -0700 X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="751639998" Received: from jabish-mobl2.amr.corp.intel.com (HELO paris.amr.corp.intel.com) ([10.254.9.209]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:18 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Subject: [PATCH v8 7/8] drm/i915: Use error code as -E2BIG when the size of gem ttm object is too large Date: Tue, 23 Aug 2022 19:17:28 +0900 Message-Id: <20220823101729.2098841-8-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The ttm_bo_init_reserved() functions returns -ENOSPC if the size is too big to add vma. 
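For illustration, the remapping described here amounts to the pattern below; the helper name is hypothetical and exists only for this sketch, while the actual one-line hunk appears in the diff that follows:

	/*
	 * Illustrative only: convert -ENOSPC into -E2BIG, but only when the
	 * requested size is genuinely oversized (its page count cannot fit
	 * in an int); a real out-of-space failure keeps its original
	 * handling.
	 */
	static int remap_oversize_enospc(int err, u64 size)
	{
		if (err == -ENOSPC && size >> PAGE_SHIFT > INT_MAX)
			return -E2BIG;
		return err;
	}

Gating on the size keeps ordinary allocation failures on their existing error path and reserves -E2BIG for requests that could never succeed.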
The direct function that returns -ENOSPC is drm_mm_insert_node_in_range(). To handle the same error as other code returning -E2BIG when the size is too large, it converts return value to -E2BIG. Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 615541b650fa..e3046e02b47a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -1210,6 +1210,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), bo_type, &i915_sys_placement, page_size >> PAGE_SHIFT, &ctx, NULL, NULL, i915_ttm_bo_destroy); + + /* + * XXX: The ttm_bo_init_reserved() functions returns -ENOSPC if the size + * is too big to add vma. The direct function that returns -ENOSPC is + * drm_mm_insert_node_in_range(). To handle the same error as other code + * that returns -E2BIG when the size is too large, it converts -ENOSPC to + * -E2BIG. + */ + if (size >> PAGE_SHIFT > INT_MAX && ret == -ENOSPC) + ret = -E2BIG; + if (ret) return i915_ttm_err_to_gem(ret); From patchwork Tue Aug 23 10:17:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12953953 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E85D4C3F6B0 for ; Wed, 24 Aug 2022 20:31:30 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id E6BD3C7D66; Wed, 24 Aug 2022 20:31:21 +0000 (UTC) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 4F8D6B3006; Tue, 23 Aug 2022 10:18:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1661249910; x=1692785910; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=rIBijWYDCuJCpKPJprGFsO7FPm8DX5/N35/ZVITEqGo=; b=IW+gYiVLOn0c5H5VDqMubRRl5lJRBTZGkHxdLBLf8AzLWoPSrAaZIxa4 ogsBsihF+ViUHAXfygmJH12tBLj57olzVO9rwutmpX026KRr/9TYx1Qac z3SVyOkxjoUX3SPVt4Nj50gqMZD3D0KA1afgjI+kjPyJtRNdGyeeGvPDB Wu8/H3bIbMdjdnBBwPdKpDDxvj0xlybz/BoJbWwjVsz0NJS94aLmLqxvJ hvxL+Gtlb8tMSUv92zXiicgn9PcmMgyM+5di3BMN18fyBX1gQt2yRaWXY kaVuKOOWqxHVBQL+iHBTyrnBixcLoyjD4qqrTM6RyoBI1OBQ+BoYmS1lH Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10447"; a="355378434" X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="355378434" Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:28 -0700 X-IronPort-AV: E=Sophos;i="5.93,257,1654585200"; d="scan'208";a="751640019" Received: from jabish-mobl2.amr.corp.intel.com (HELO paris.amr.corp.intel.com) ([10.254.9.209]) by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Aug 2022 03:18:23 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Subject: [PATCH 
v8 8/8] drm/i915: Remove truncation warning for large objects Date: Tue, 23 Aug 2022 19:17:29 +0900 Message-Id: <20220823101729.2098841-9-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> References: <20220823101729.2098841-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: thomas.hellstrom@linux.intel.com, mauro.chehab@linux.intel.com, andi.shyti@linux.intel.com, keescook@chromium.org, jani.nikula@intel.com, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, andrzej.hajda@intel.com, matthew.auld@intel.com, intel-gfx-trybot@lists.freedesktop.org, mchehab@kernel.org, nirmoy.das@intel.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" From: Chris Wilson Having addressed the issues surrounding incorrect types for local variables and potential integer truncation in using the scatterlist API, we have closed all the loop holes we had previously identified with dangerously large object creation. As such, we can eliminate the warning put in place to remind us to complete the review. Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström Testcase: igt@gem_create@create-massive Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4991 Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 0cf31adbfd41..dd2762da332f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -20,25 +20,10 @@ enum intel_region_id; -/* - * XXX: There is a prevalence of the assumption that we fit the - * object's page count inside a 32bit _signed_ variable. Let's document - * this and catch if we ever need to fix it. In the meantime, if you do - * spot such a local variable, please consider fixing! - * - * We can check for invalidly typed locals with typecheck(), see for example - * i915_gem_object_get_sg(). - */ -#define GEM_CHECK_SIZE_OVERFLOW(sz) \ - GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX) - static inline bool i915_gem_object_size_2big(u64 size) { struct drm_i915_gem_object *obj; - if (GEM_CHECK_SIZE_OVERFLOW(size)) - return true; - if (overflows_type(size, obj->base.size)) return true;
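For reference, a minimal self-contained sketch of the type-based test that the remaining i915_gem_object_size_2big() check relies on. This is an unsigned-only simplification with hypothetical helper names (fits_into, object_size_2big), not the in-tree overflows_type() from <linux/overflow.h>, which also handles signed types via the compiler's overflow builtins:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stddef.h>

	/*
	 * Simplified, unsigned-only stand-in for overflows_type(x, dst):
	 * round-trip the value through the destination's width and check
	 * whether anything was truncated. typeof() is a GNU extension.
	 */
	#define fits_into(x, dst) ((uint64_t)(typeof(dst))(x) == (uint64_t)(x))

	static bool object_size_2big(uint64_t size, size_t dst_size_field)
	{
		/* mirrors the surviving check in i915_gem_object_size_2big() */
		return !fits_into(size, dst_size_field);
	}

	int main(void)
	{
		size_t s = 0; /* plays the role of obj->base.size */

		/* true (exit 1) only if 2^40 bytes cannot fit in size_t */
		return object_size_2big(1ULL << 40, s) ? 1 : 0;
	}

Built with GCC or Clang, the program exits non-zero only when the requested byte size cannot be represented in the platform's size_t, the same class of truncation the removed GEM_CHECK_SIZE_OVERFLOW() warning guarded against in terms of page counts.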