From patchwork Tue Jul 5 12:24:49 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12906533
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v2 1/7] drm: Move and add a few utility macros into drm util header
Date: Tue, 5 Jul 2022 15:24:49 +0300
Message-Id: <20220705122455.3866745-2-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com

Move the overflows_type() utility macro from the i915_utils header into the drm util header. overflows_type() can be used to catch truncation between data types. Also add a safe_conversion() macro, which performs a type conversion (cast) of a source value into a new variable, checking that the destination is large enough to hold the source value, and add the exact_type() and exactly_pgoff_t() macros to catch type mismatches at compile time.
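For reference, a minimal standalone sketch of how these helpers are intended to be used (this is not the in-tree code: BITS_PER_TYPE() and BUILD_BUG_ON() are stubbed locally so it builds in userspace with GCC or Clang, and exactly_pgoff_t() is left out because pgoff_t is a kernel type):

#include <stdio.h>

/* Local stand-ins for the kernel's BITS_PER_TYPE() and BUILD_BUG_ON(). */
#define BITS_PER_TYPE(T)   (sizeof(T) * 8)
#define BUILD_BUG_ON(expr) ((void)sizeof(char[1 - 2 * !!(expr)]))

/* Same shape as the macros added to drm_util.h; sign bits are not considered. */
#define overflows_type(x, T) \
	(sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T))

#define exact_type(T, n) \
	BUILD_BUG_ON(!__builtin_constant_p(n) && \
		     !__builtin_types_compatible_p(T, typeof(n)))

#define safe_conversion(ptr, value) ({ \
	typeof(value) __v = (value); \
	typeof(ptr) __ptr = (ptr); \
	overflows_type(__v, *__ptr) ? 0 : ((*__ptr = (typeof(*__ptr))__v), 1); \
})

int main(void)
{
	unsigned long big = 1UL << 40;	/* too wide for a 32-bit destination (64-bit build assumed) */
	unsigned int dst;

	/* Truncation would occur, so the check fires. */
	printf("overflows_type: %d\n", overflows_type(big, dst));

	/* The conversion is refused instead of silently truncating. */
	if (!safe_conversion(&dst, big))
		printf("safe_conversion: refused, value would not fit\n");

	exact_type(unsigned long, big);	/* same type: compiles */
	/* exact_type(unsigned int, big); -- would break the build */

	return 0;
}

Note that constants are exempted from exact_type() via __builtin_constant_p(), so callers can still pass literals such as 0 without tripping the check.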
Signed-off-by: Gwan-gyeong Mun
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Nirmoy Das
Cc: Jani Nikula
---
 drivers/gpu/drm/i915/i915_utils.h |  5 +--
 include/drm/drm_util.h            | 54 +++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h
index c10d68cdc3ca..345e5b2dc1cd 100644
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -32,6 +32,7 @@
 #include
 #include
 #include
+#include
 #ifdef CONFIG_X86
 #include
@@ -111,10 +112,6 @@ bool i915_error_injected(void);
 #define range_overflows_end_t(type, start, size, max) \
 	range_overflows_end((type)(start), (type)(size), (type)(max))
 
-/* Note we don't consider signbits :| */
-#define overflows_type(x, T) \
-	(sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T))
-
 #define ptr_mask_bits(ptr, n) ({ \
 	unsigned long __v = (unsigned long)(ptr); \
 	(typeof(ptr))(__v & -BIT(n)); \
diff --git a/include/drm/drm_util.h b/include/drm/drm_util.h
index 79952d8c4bba..c56230e39e37 100644
--- a/include/drm/drm_util.h
+++ b/include/drm/drm_util.h
@@ -62,6 +62,60 @@
  */
 #define for_each_if(condition) if (!(condition)) {} else
 
+/**
+ * overflows_type - helper for checking truncation between data types
+ * @x: Source for overflow type comparison
+ * @T: Destination for overflow type comparison
+ *
+ * It compares the values and size of each data type between the first and
+ * second argument to check whether truncation can occur when assigning the
+ * first argument to the variable of the second argument.
+ * It doesn't consider signbits.
+ *
+ * Returns:
+ * True if truncation can occur, false otherwise.
+ */
+#define overflows_type(x, T) \
+	(sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T))
+
+/**
+ * exact_type - break the build if the source type and the destination value's
+ * type are not the same
+ * @T: Source type
+ * @n: Destination value
+ *
+ * It is a helper macro for a poor man's -Wconversion: only allow variables of
+ * an exact type. It determines at compile time whether the source type and the
+ * destination value's type are the same, and breaks the build if they are not.
+ */
+#define exact_type(T, n) \
+	BUILD_BUG_ON(!__builtin_constant_p(n) && !__builtin_types_compatible_p(T, typeof(n)))
+
+/**
+ * exactly_pgoff_t - helper to check if the type of a value is pgoff_t
+ * @n: value whose type is compared against pgoff_t
+ *
+ * It breaks the build if the argument value's type is not pgoff_t.
+ */
+#define exactly_pgoff_t(n) exact_type(pgoff_t, n)
+
+/*
+ * safe_conversion - perform a type conversion (cast) of a source value into
+ * a new variable, checking that the destination is large enough to hold the
+ * source value.
+ * @ptr: Destination pointer address
+ * @value: Source value
+ *
+ * Returns:
+ * False if the value would overflow the destination, true otherwise.
+ */
+#define safe_conversion(ptr, value) ({ \
+	typeof(value) __v = (value); \
+	typeof(ptr) __ptr = (ptr); \
+	overflows_type(__v, *__ptr) ? 0 : ((*__ptr = (typeof(*__ptr))__v), 1); \
+})
 /**
  * drm_can_sleep - returns true if currently okay to sleep
  *

From patchwork Tue Jul 5 12:24:50 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12906534
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v2 2/7] drm/i915/gem: Typecheck page lookups
Date: Tue, 5 Jul 2022 15:24:50 +0300
Message-Id: <20220705122455.3866745-3-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com

From: Chris Wilson

We need to check that we avoid integer overflows when looking up a page, and so fix all the instances where we have mistakenly used a plain integer instead of a more suitable long. Be pedantic and add integer typechecking to the lookup so that we can be sure that we are safe. It also uses pgoff_t, as our page lookups must remain compatible with the page cache; pgoff_t is currently exactly unsigned long.
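To make the function-wrapping pattern in the diff below easier to follow, here is a standalone sketch of the same idea; lookup_page() is a made-up stand-in for the i915 page-lookup helpers, and only the exact_type()/exactly_pgoff_t() plumbing mirrors the real headers:

#include <stdio.h>

typedef unsigned long pgoff_t;	/* pgoff_t is currently exactly unsigned long */

#define BUILD_BUG_ON(expr) ((void)sizeof(char[1 - 2 * !!(expr)]))
#define exact_type(T, n) \
	BUILD_BUG_ON(!__builtin_constant_p(n) && \
		     !__builtin_types_compatible_p(T, typeof(n)))
#define exactly_pgoff_t(n) exact_type(pgoff_t, n)

/* A stand-in for a page-lookup helper that takes a page index. */
static int lookup_page(pgoff_t n)
{
	return (int)(n & 0xff);
}

/*
 * Wrap the function with a macro of the same name: the parenthesised
 * (lookup_page) call suppresses macro expansion, and every caller now
 * gets a compile-time check that the index really is a pgoff_t.
 */
#define lookup_page(n) ({ \
	exactly_pgoff_t(n); \
	(lookup_page)(n); \
})

int main(void)
{
	pgoff_t idx = 42;

	printf("%d\n", lookup_page(idx));	/* pgoff_t index: accepted */
	printf("%d\n", lookup_page(7));		/* constants are exempted */
	/* unsigned int n = 7; lookup_page(n); -- would break the build */

	return 0;
}

The same trick is what lets the declarations below keep their plain C prototypes while still rejecting int-typed page indices at the call sites.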
v2: Move added i915_utils's macro into drm_util header (Jani N) Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab --- drivers/gpu/drm/i915/gem/i915_gem_object.c | 7 +- drivers/gpu/drm/i915/gem/i915_gem_object.h | 67 ++++++++++++++----- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 25 ++++--- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +- .../drm/i915/gem/selftests/i915_gem_context.c | 12 ++-- .../drm/i915/gem/selftests/i915_gem_mman.c | 8 +-- .../drm/i915/gem/selftests/i915_gem_object.c | 8 +-- drivers/gpu/drm/i915/i915_gem.c | 18 +++-- drivers/gpu/drm/i915/i915_vma.c | 8 +-- 9 files changed, 100 insertions(+), 55 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index ccec4055fde3..90996fe8ad45 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -421,10 +421,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, static void i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + pgoff_t idx = offset >> PAGE_SHIFT; void *src_map; void *src_ptr; - src_map = kmap_atomic(i915_gem_object_get_page(obj, offset >> PAGE_SHIFT)); + src_map = kmap_atomic(i915_gem_object_get_page(obj, idx)); src_ptr = src_map + offset_in_page(offset); if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)) @@ -437,9 +438,10 @@ i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, static void i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + pgoff_t idx = offset >> PAGE_SHIFT; + dma_addr_t dma = i915_gem_object_get_dma_address(obj, idx); void __iomem *src_map; void __iomem *src_ptr; - dma_addr_t dma = i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT); src_map = io_mapping_map_wc(&obj->mm.region->iomap, dma - obj->mm.region->region.start, @@ -468,6 +470,7 @@ i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset */ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + GEM_BUG_ON(overflows_type(offset >> PAGE_SHIFT, pgoff_t)); GEM_BUG_ON(offset >= obj->base.size); GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size); GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj)); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 6f0a3ce35567..a60c6f4517d5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -27,8 +27,10 @@ enum intel_region_id; * spot such a local variable, please consider fixing! * * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for num_pages - * - get_user_pages*() mixed ints with longs + * - sg_table embeds unsigned int for nents + * + * We can check for invalidly typed locals with typecheck(), see for example + * i915_gem_object_get_sg(). 
*/ #define GEM_CHECK_SIZE_OVERFLOW(sz) \ GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX) @@ -366,41 +368,70 @@ int i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, struct scatterlist * __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, bool dma); + pgoff_t n, + unsigned int *offset); + +#define __i915_gem_object_get_sg(obj, it, n, offset) ({ \ + exactly_pgoff_t(n); \ + (__i915_gem_object_get_sg)(obj, it, n, offset); \ +}) static inline struct scatterlist * -i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - unsigned int n, +i915_gem_object_get_sg(struct drm_i915_gem_object *obj, pgoff_t n, unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, false); + return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset); } +#define i915_gem_object_get_sg(obj, n, offset) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_sg)(obj, n, offset); \ +}) + static inline struct scatterlist * -i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, - unsigned int n, +i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, pgoff_t n, unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, true); + return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset); } +#define i915_gem_object_get_sg_dma(obj, n, offset) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_sg_dma)(obj, n, offset); \ +}) + struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, - unsigned int n); +i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n); + +#define i915_gem_object_get_page(obj, n) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_page)(obj, n); \ +}) struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n); +i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n); + +#define i915_gem_object_get_dirty_page(obj, n) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_dirty_page)(obj, n); \ +}) dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, +i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, pgoff_t n, unsigned int *len); +#define i915_gem_object_get_dma_address_len(obj, n, len) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_dma_address_len)(obj, n, len); \ +}) + dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n); +i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n); + +#define i915_gem_object_get_dma_address(obj, n) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_dma_address)(obj, n); \ +}) void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct sg_table *pages, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 97c820eee115..1d1edcb3514b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -503,14 +503,16 @@ void __i915_gem_object_release_map(struct drm_i915_gem_object *obj) } struct scatterlist * -__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, +(__i915_gem_object_get_sg)(struct drm_i915_gem_object *obj, struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, - bool dma) + pgoff_t n, + unsigned int *offset) + { - struct scatterlist *sg; + const bool dma = iter == &obj->mm.get_dma_page || + iter == &obj->ttm.get_io_page; 
unsigned int idx, count; + struct scatterlist *sg; might_sleep(); GEM_BUG_ON(n >= obj->base.size >> PAGE_SHIFT); @@ -618,7 +620,7 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, } struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) +(i915_gem_object_get_page)(struct drm_i915_gem_object *obj, pgoff_t n) { struct scatterlist *sg; unsigned int offset; @@ -631,8 +633,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) /* Like i915_gem_object_get_page(), but mark the returned page dirty */ struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n) +(i915_gem_object_get_dirty_page)(struct drm_i915_gem_object *obj, pgoff_t n) { struct page *page; @@ -644,9 +645,8 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, - unsigned int *len) +(i915_gem_object_get_dma_address_len)(struct drm_i915_gem_object *obj, + pgoff_t n, unsigned int *len) { struct scatterlist *sg; unsigned int offset; @@ -660,8 +660,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n) +(i915_gem_object_get_dma_address)(struct drm_i915_gem_object *obj, pgoff_t n) { return i915_gem_object_get_dma_address_len(obj, n, NULL); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 7e1f8b83077f..50a02d850139 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -717,7 +717,7 @@ static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo, GEM_WARN_ON(bo->ttm); base = obj->mm.region->iomap.base - obj->mm.region->region.start; - sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs, true); + sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs); return ((base + sg_dma_address(sg)) >> PAGE_SHIFT) + ofs; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index c6ad67b90e8a..a18a890e681f 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -455,7 +455,8 @@ static int gpu_fill(struct intel_context *ce, static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) { const bool has_llc = HAS_LLC(to_i915(obj->base.dev)); - unsigned int n, m, need_flush; + unsigned int need_flush; + unsigned long n, m; int err; i915_gem_object_lock(obj, NULL); @@ -485,7 +486,8 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) static noinline int cpu_check(struct drm_i915_gem_object *obj, unsigned int idx, unsigned int max) { - unsigned int n, m, needs_flush; + unsigned int needs_flush; + unsigned long n; int err; i915_gem_object_lock(obj, NULL); @@ -494,7 +496,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, goto out_unlock; for (n = 0; n < real_page_count(obj); n++) { - u32 *map; + u32 *map, m; map = kmap_atomic(i915_gem_object_get_page(obj, n)); if (needs_flush & CLFLUSH_BEFORE) @@ -502,7 +504,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (m = 0; m < max; m++) { if (map[m] != m) { - pr_err("%pS: Invalid value at object %d page %d/%ld, offset %d/%d: found %x expected %x\n", + pr_err("%pS: Invalid value at object %d page %ld/%ld, offset %d/%d: found %x expected 
%x\n", __builtin_return_address(0), idx, n, real_page_count(obj), m, max, map[m], m); @@ -513,7 +515,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (; m < DW_PER_PAGE; m++) { if (map[m] != STACK_MAGIC) { - pr_err("%pS: Invalid value at object %d page %d, offset %d: found %x expected %x (uninitialised)\n", + pr_err("%pS: Invalid value at object %d page %ld, offset %d: found %x expected %x (uninitialised)\n", __builtin_return_address(0), idx, n, m, map[m], STACK_MAGIC); err = -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 3ced9948a331..86e435d42546 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -95,11 +95,11 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt_view view; struct i915_vma *vma; + unsigned long offset; unsigned long page; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; int err; @@ -156,7 +156,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, @@ -212,10 +212,10 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, for_each_prime_number_from(page, 1, npages) { struct i915_ggtt_view view = compute_partial_view(obj, page, MIN_CHUNK_PAGES); + unsigned long offset; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; GEM_BUG_ON(view.partial.size > nreal); @@ -252,7 +252,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c index fe0a890775e2..bf30763ee6bc 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c @@ -33,10 +33,10 @@ static int igt_gem_object(void *arg) static int igt_gem_huge(void *arg) { - const unsigned int nreal = 509; /* just to be awkward */ + const unsigned long nreal = 509; /* just to be awkward */ struct drm_i915_private *i915 = arg; struct drm_i915_gem_object *obj; - unsigned int n; + unsigned long n; int err; /* Basic sanitycheck of our huge fake object allocation */ @@ -49,7 +49,7 @@ static int igt_gem_huge(void *arg) err = i915_gem_object_pin_pages_unlocked(obj); if (err) { - pr_err("Failed to allocate %u pages (%lu total), err=%d\n", + 
pr_err("Failed to allocate %lu pages (%lu total), err=%d\n", nreal, obj->base.size / PAGE_SIZE, err); goto out; } @@ -57,7 +57,7 @@ static int igt_gem_huge(void *arg) for (n = 0; n < obj->base.size / PAGE_SIZE; n++) { if (i915_gem_object_get_page(obj, n) != i915_gem_object_get_page(obj, n % nreal)) { - pr_err("Page lookup mismatch at index %u [%u]\n", + pr_err("Page lookup mismatch at index %lu [%lu]\n", n, n % nreal); err = -EINVAL; goto out_unpin; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 702e5b89be22..dba58a3c3238 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -229,8 +229,9 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, struct drm_i915_gem_pread *args) { unsigned int needs_clflush; - unsigned int idx, offset; char __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; @@ -383,13 +384,17 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj, { struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; void __user *user_data; struct i915_vma *vma; - u64 remain, offset; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + wakeref = intel_runtime_pm_get(&i915->runtime_pm); vma = i915_gem_gtt_prepare(obj, &node, false); @@ -540,13 +545,17 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; struct intel_runtime_pm *rpm = &i915->runtime_pm; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; struct i915_vma *vma; - u64 remain, offset; void __user *user_data; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + if (i915_gem_object_has_struct_page(obj)) { /* * Avoid waking the device up if we can fallback, as @@ -654,8 +663,9 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, { unsigned int partial_cacheline_write; unsigned int needs_clflush; - unsigned int offset, idx; void __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index ef3b04c7e153..28443c77b45a 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -911,7 +911,7 @@ rotate_pages(struct drm_i915_gem_object *obj, unsigned int offset, struct sg_table *st, struct scatterlist *sg) { unsigned int column, row; - unsigned int src_idx; + pgoff_t src_idx; for (column = 0; column < width; column++) { unsigned int left; @@ -1017,7 +1017,7 @@ add_padding_pages(unsigned int count, static struct scatterlist * remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int offset, unsigned int alignment_pad, + unsigned long offset, unsigned int alignment_pad, unsigned int width, unsigned int height, unsigned int src_stride, unsigned int dst_stride, struct sg_table *st, struct scatterlist *sg, @@ -1076,7 +1076,7 @@ remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_contiguous_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, + pgoff_t obj_offset, unsigned int count, struct sg_table *st, struct scatterlist *sg) { @@ -1109,7 +1109,7 @@ remap_contiguous_pages(struct drm_i915_gem_object *obj, static struct scatterlist * 
remap_linear_color_plane_pages(struct drm_i915_gem_object *obj,
-			       unsigned int obj_offset, unsigned int alignment_pad,
+			       pgoff_t obj_offset, unsigned int alignment_pad,
 			       unsigned int size,
 			       struct sg_table *st, struct scatterlist *sg,
 			       unsigned int *gtt_offset)

From patchwork Tue Jul 5 12:24:51 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12906535
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v2 3/7] drm/i915: Check for integer truncation on scatterlist creation
Date: Tue, 5 Jul 2022 15:24:51 +0300
Message-Id: <20220705122455.3866745-4-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com

From: Chris Wilson

There is an impedance mismatch between the scatterlist API using unsigned int and our memory/page accounting in unsigned long.
That is we may try to create a scatterlist for a large object that overflows returning a small table into which we try to fit very many pages. As the object size is under control of userspace, we have to be prudent and catch the conversion errors. To catch the implicit truncation as we switch from unsigned long into the scatterlist's unsigned int, we use overflows_type check and report E2BIG prior to the operation. This is already used in our create ioctls to indicate if the uABI request is simply too large for the backing store. Failing that type check, we have a second check at sg_alloc_table time to make sure the values we are passing into the scatterlist API are not truncated. It uses pgoff_t for locals that are dealing with page indices, in this case, the page count is the limit of the page index. And it uses safe_conversion() macro which performs a type conversion (cast) of an integer value into a new variable, checking that the destination is large enough to hold the source value. v2: Move added i915_utils's macro into drm_util header (Jani N) Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 6 ++++-- drivers/gpu/drm/i915/gem/i915_gem_object.h | 3 --- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 5 ++++- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++++- drivers/gpu/drm/i915/gvt/dmabuf.c | 9 +++++---- drivers/gpu/drm/i915/i915_scatterlist.h | 8 ++++++++ 8 files changed, 33 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index c698f95af15f..ff2e6e780631 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -37,10 +37,13 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) struct sg_table *st; struct scatterlist *sg; unsigned int sg_page_sizes; - unsigned int npages; + pgoff_t npages; /* restricted by sg_alloc_table */ int max_order; gfp_t gfp; + if (!safe_conversion(&npages, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; + max_order = MAX_ORDER; #ifdef CONFIG_SWIOTLB if (is_swiotlb_active(obj->base.dev->dev)) { @@ -67,7 +70,6 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) if (!st) return -ENOMEM; - npages = obj->base.size / PAGE_SIZE; if (sg_alloc_table(st, npages, GFP_KERNEL)) { kfree(st); return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index a60c6f4517d5..31bb09dccf2f 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -26,9 +26,6 @@ enum intel_region_id; * this and catch if we ever need to fix it. In the meantime, if you do * spot such a local variable, please consider fixing! * - * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for nents - * * We can check for invalidly typed locals with typecheck(), see for example * i915_gem_object_get_sg(). 
*/ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 0d0e46dae559..88ba7266a3a5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -28,6 +28,10 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) void *dst; int i; + /* Contiguous chunk, with a single scatterlist element */ + if (overflows_type(obj->base.size, sg->length)) + return -E2BIG; + if (GEM_WARN_ON(i915_gem_object_needs_bit17_swizzle(obj))) return -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 4eed3dd90ba8..604e8829e8ea 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -193,13 +193,16 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj) struct drm_i915_private *i915 = to_i915(obj->base.dev); struct intel_memory_region *mem = obj->mm.region; struct address_space *mapping = obj->base.filp->f_mapping; - const unsigned long page_count = obj->base.size / PAGE_SIZE; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; struct sgt_iter sgt_iter; + pgoff_t page_count; struct page *page; int ret; + if (!safe_conversion(&page_count, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; + /* * Assert that the object is not currently in any GPU domain. As it * wasn't in the GTT, there shouldn't be any way it could have been in diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 50a02d850139..cdcb3ee0c433 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -815,6 +815,10 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj) { struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS]; struct ttm_placement placement; + pgoff_t num_pages; + + if (!safe_conversion(&num_pages, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 094f06b4ce33..25785c3a0083 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -128,13 +128,16 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj) static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj) { - const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; unsigned int sg_page_sizes; struct page **pvec; + pgoff_t num_pages; /* limited by sg_alloc_table_from_pages_segment */ int ret; + if (!safe_conversion(&num_pages, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; + st = kmalloc(sizeof(*st), GFP_KERNEL); if (!st) return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c index 01e54b45c5c1..795270cb4ec2 100644 --- a/drivers/gpu/drm/i915/gvt/dmabuf.c +++ b/drivers/gpu/drm/i915/gvt/dmabuf.c @@ -42,8 +42,7 @@ #define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12)) -static int vgpu_gem_get_pages( - struct drm_i915_gem_object *obj) +static int vgpu_gem_get_pages(struct drm_i915_gem_object *obj) { struct drm_i915_private *dev_priv = to_i915(obj->base.dev); struct intel_vgpu *vgpu; @@ -52,7 +51,10 @@ static int vgpu_gem_get_pages( int i, j, ret; gen8_pte_t __iomem *gtt_entries; struct intel_vgpu_fb_info *fb_info; - u32 page_num; + pgoff_t page_num; + + if 
(!safe_conversion(&page_num, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; fb_info = (struct intel_vgpu_fb_info *)obj->gvt_info; if (drm_WARN_ON(&dev_priv->drm, !fb_info)) @@ -66,7 +68,6 @@ static int vgpu_gem_get_pages( if (unlikely(!st)) return -ENOMEM; - page_num = obj->base.size >> PAGE_SHIFT; ret = sg_alloc_table(st, page_num, GFP_KERNEL); if (ret) { kfree(st); diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h index 12c6a1684081..c4d4d3c84cff 100644 --- a/drivers/gpu/drm/i915/i915_scatterlist.h +++ b/drivers/gpu/drm/i915/i915_scatterlist.h @@ -218,4 +218,12 @@ struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node, struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res, u64 region_start); +/* Wrap scatterlist.h to sanity check for integer truncation */ +typedef unsigned int __sg_size_t; /* see linux/scatterlist.h */ +#define sg_alloc_table(sgt, nents, gfp) \ + overflows_type(nents, __sg_size_t) ? -E2BIG : (sg_alloc_table)(sgt, (__sg_size_t)(nents), gfp) + +#define sg_alloc_table_from_pages_segment(sgt, pages, npages, offset, size, max_segment, gfp) \ + overflows_type(npages, __sg_size_t) ? -E2BIG : (sg_alloc_table_from_pages_segment)(sgt, pages, (__sg_size_t)(npages), offset, size, max_segment, gfp) + #endif From patchwork Tue Jul 5 12:24:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12906536 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 28004C43334 for ; Tue, 5 Jul 2022 12:25:34 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 2443B10F060; Tue, 5 Jul 2022 12:25:15 +0000 (UTC) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by gabe.freedesktop.org (Postfix) with ESMTPS id 3FDDA10ECF1; Tue, 5 Jul 2022 12:25:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1657023912; x=1688559912; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=x2h56ZVkYgfARSbrvbSCNmxORY1+DdaI/tRezkDgizA=; b=ZwnfkgBoxYfcN42CeqCSUhn6DVSCsD4K4rJDyjSUrL7dCTA28JUnHNnS 1JTVNCbentC8bhymDykwWaUSoIomU3w3eyYMazNms639FFhTJWXqRZcaY zo4R+Oc5H8yTx+OxTQi4Cg1aDDc45ZV44XTm0rZMabrigMXvORHFSekUb RsqG4M4WuD4OSr6u6DJTzUVO8sheTwHX+qfDJNBLt4C2XIU2osSAWzVzz var8tQVMFreQS6R/QcirdDt2jaiW+uAyxKNGipwFMAgbtEWdlar88cPdR Q0jlMQY9WspsJTcVnMcBE3PVKWssT7V+sPpkdsKTsQs+mpcjId123oCnU w==; X-IronPort-AV: E=McAfee;i="6400,9594,10398"; a="345019765" X-IronPort-AV: E=Sophos;i="5.92,245,1650956400"; d="scan'208";a="345019765" Received: from fmsmga008.fm.intel.com ([10.253.24.58]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jul 2022 05:25:12 -0700 X-IronPort-AV: E=Sophos;i="5.92,245,1650956400"; d="scan'208";a="650119490" Received: from mmckenzi-mobl.ger.corp.intel.com (HELO hades.ger.corp.intel.com) ([10.252.50.45]) by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Jul 2022 05:25:09 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Subject: [PATCH v2 4/7] drm/i915: Check for integer 
truncation on the configuration of ttm place Date: Tue, 5 Jul 2022 15:24:52 +0300 Message-Id: <20220705122455.3866745-5-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com> References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" There is an impedance mismatch between the first/last valid page frame number of ttm place in unsigned and our memory/page accounting in unsigned long. As the object size is under the control of userspace, we have to be prudent and catch the conversion errors. To catch the implicit truncation as we switch from unsigned long to unsigned, we use overflows_type check and report E2BIG or overflow_type prior to the operation. Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 12 +++++++++--- drivers/gpu/drm/i915/intel_region_ttm.c | 16 +++++++++++++--- 2 files changed, 22 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index cdcb3ee0c433..d579524663b3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -137,19 +137,25 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr, if (mr->type == INTEL_MEMORY_SYSTEM) return; +#define SAFE_CONVERSION(ptr, value) ({ \ + if (!safe_conversion(ptr, value)) { \ + GEM_BUG_ON(overflows_type(value, *ptr)); \ + } \ +}) if (flags & I915_BO_ALLOC_CONTIGUOUS) place->flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place->fpfn = offset >> PAGE_SHIFT; - place->lpfn = place->fpfn + (size >> PAGE_SHIFT); + SAFE_CONVERSION(&place->fpfn, offset >> PAGE_SHIFT); + SAFE_CONVERSION(&place->lpfn, place->fpfn + (size >> PAGE_SHIFT)); } else if (mr->io_size && mr->io_size < mr->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place->flags |= TTM_PL_FLAG_TOPDOWN; } else { place->fpfn = 0; - place->lpfn = mr->io_size >> PAGE_SHIFT; + SAFE_CONVERSION(&place->lpfn, mr->io_size >> PAGE_SHIFT); } } +#undef SAFE_CONVERSION } static void diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c index 62ff77445b01..8fcb8654b978 100644 --- a/drivers/gpu/drm/i915/intel_region_ttm.c +++ b/drivers/gpu/drm/i915/intel_region_ttm.c @@ -202,24 +202,34 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, struct ttm_resource *res; int ret; +#define SAFE_CONVERSION(ptr, value) ({ \ + if (!safe_conversion(ptr, value)) { \ + GEM_BUG_ON(overflows_type(value, *ptr)); \ + ret = -E2BIG; \ + goto out; \ + } \ +}) if (flags & I915_BO_ALLOC_CONTIGUOUS) place.flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place.fpfn = offset >> PAGE_SHIFT; - place.lpfn = place.fpfn + (size >> PAGE_SHIFT); + SAFE_CONVERSION(&place.fpfn, offset >> PAGE_SHIFT); + SAFE_CONVERSION(&place.lpfn, place.fpfn + (size >> PAGE_SHIFT)); } else if (mem->io_size && mem->io_size < mem->total) { if (flags & 
I915_BO_ALLOC_GPU_ONLY) {
 			place.flags |= TTM_PL_FLAG_TOPDOWN;
 		} else {
 			place.fpfn = 0;
-			place.lpfn = mem->io_size >> PAGE_SHIFT;
+			SAFE_CONVERSION(&place.lpfn, mem->io_size >> PAGE_SHIFT);
 		}
 	}
+#undef SAFE_CONVERSION
 	mock_bo.base.size = size;
 	mock_bo.bdev = &mem->i915->bdev;
 	ret = man->func->alloc(man, &mock_bo, &place, &res);
+
+out:
 	if (ret == -ENOSPC)
 		ret = -ENXIO;
 	if (!ret)

From patchwork Tue Jul 5 12:24:53 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12906538
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v2 5/7] drm/i915: Check if the size is too big while creating shmem file
Date: Tue, 5 Jul 2022 15:24:53 +0300
Message-Id: <20220705122455.3866745-6-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com

The __shmem_file_setup() function returns -EINVAL if size is greater than MAX_LFS_FILESIZE.
To handle the same error as other code that returns -E2BIG when the size is too large, add code that returns -E2BIG when the size is larger than what can be handled.

Signed-off-by: Gwan-gyeong Mun
Cc: Chris Wilson
Cc: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Nirmoy Das
Reviewed-by: Mauro Carvalho Chehab
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 604e8829e8ea..8495e87432f6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -541,6 +541,15 @@ static int __create_shmem(struct drm_i915_private *i915,
 	drm_gem_private_object_init(&i915->drm, obj, size);
 
+	/* XXX: The __shmem_file_setup() function returns -EINVAL if size is
+	 * greater than MAX_LFS_FILESIZE.
+	 * To handle the same error as other code that returns -E2BIG when
+	 * the size is too large, we add code that returns -E2BIG when the
+	 * size is larger than what can be handled.
+	 */
+	if (size > MAX_LFS_FILESIZE)
+		return -E2BIG;
+
 	if (i915->mm.gemfs)
 		filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size, flags);

From patchwork Tue Jul 5 12:24:54 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12906537
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v2 6/7] drm/i915: Use error code as -E2BIG when the size of gem ttm object is too large
Date: Tue, 5 Jul 2022 15:24:54 +0300
Message-Id:
<20220705122455.3866745-7-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com

The ttm_bo_init_reserved() function returns -ENOSPC if the size is too big to add a vma. The direct function that returns -ENOSPC is drm_mm_insert_node_in_range(). To handle the same error as other code returning -E2BIG when the size is too large, convert the return value to -E2BIG.

Signed-off-by: Gwan-gyeong Mun
Cc: Chris Wilson
Cc: Matthew Auld
Cc: Thomas Hellström
Reviewed-by: Nirmoy Das
Reviewed-by: Mauro Carvalho Chehab
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 11 +++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index d579524663b3..271f64b7e4f1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1249,6 +1249,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem,
 				   bo_type, &i915_sys_placement,
 				   page_size >> PAGE_SHIFT,
 				   &ctx, NULL, NULL, i915_ttm_bo_destroy);
+
+	/*
+	 * XXX: The ttm_bo_init_reserved() function returns -ENOSPC if the size
+	 * is too big to add a vma. The direct function that returns -ENOSPC is
+	 * drm_mm_insert_node_in_range(). To handle the same error as other code
+	 * that returns -E2BIG when the size is too large, it converts -ENOSPC to
+	 * -E2BIG.
+	 */
+	if (size >> PAGE_SHIFT > INT_MAX && ret == -ENOSPC)
+		ret = -E2BIG;
+
 	if (ret)
 		return i915_ttm_err_to_gem(ret);

From patchwork Tue Jul 5 12:24:55 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12906539
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Subject: [PATCH v2 7/7] drm/i915: Remove truncation warning for large objects
Date: Tue, 5 Jul 2022 15:24:55 +0300
Message-Id: <20220705122455.3866745-8-gwan-gyeong.mun@intel.com>
In-Reply-To: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
References: <20220705122455.3866745-1-gwan-gyeong.mun@intel.com>
Cc: thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk, airlied@linux.ie, matthew.auld@intel.com, nirmoy.das@intel.com

From: Chris Wilson

Having addressed the issues surrounding incorrect types for local variables and potential integer truncation when using the scatterlist API, we have closed all the loopholes we had previously identified with dangerously large object creation. As such, we can eliminate the warning put in place to remind us to complete the review.
Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström Testcase: igt@gem_create@create-massive Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4991 Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 31bb09dccf2f..4d614e4c1c4e 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -20,25 +20,10 @@ enum intel_region_id; -/* - * XXX: There is a prevalence of the assumption that we fit the - * object's page count inside a 32bit _signed_ variable. Let's document - * this and catch if we ever need to fix it. In the meantime, if you do - * spot such a local variable, please consider fixing! - * - * We can check for invalidly typed locals with typecheck(), see for example - * i915_gem_object_get_sg(). - */ -#define GEM_CHECK_SIZE_OVERFLOW(sz) \ - GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX) - static inline bool i915_gem_object_size_2big(u64 size) { struct drm_i915_gem_object *obj; - if (GEM_CHECK_SIZE_OVERFLOW(size)) - return true; - if (overflows_type(size, obj->base.size)) return true;