From patchwork Fri Jun 3 09:24:36 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12868841
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Date: Fri, 3 Jun 2022 18:24:36 +0900
Message-Id: <20220603092441.1526151-2-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
References: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
Subject: [Intel-gfx] [PATCH 1/6] drm/i915/gem: Typecheck page lookups
Cc: thomas.hellstrom@linux.intel.com, matthew.auld@intel.com, chris@chris-wilson.co.uk

From: Chris Wilson

We need to check that we avoid integer overflows when looking up a page,
so fix all the instances where we have mistakenly used a plain integer
instead of the more suitable long. Be pedantic and add integer
typechecking to the lookup so that we can be sure we are safe. Use
pgoff_t for the page index: our page lookups must remain compatible with
the page cache, and pgoff_t is currently exactly unsigned long.
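For illustration, the type check can be exercised as a standalone C program. The sketch below only mirrors the exact_type()/exactly_pgoff_t() shape introduced by this patch; lookup_page() and the harness around it are invented, GCC/Clang statement expressions are assumed, and the in-tree macro additionally accepts integer constants via __builtin_constant_p().

#include <stdio.h>

typedef unsigned long pgoff_t;

/* Refuse to compile unless n is exactly of type T (sketch; the driver's
 * exact_type() also whitelists constant expressions). */
#define exact_type(T, n) \
	_Static_assert(__builtin_types_compatible_p(T, __typeof__(n)), \
		       "page index must be pgoff_t")
#define exactly_pgoff_t(n) exact_type(pgoff_t, n)

/* Invented stand-in for i915_gem_object_get_page() and friends. */
static unsigned long (lookup_page)(pgoff_t n)
{
	return n << 12;			/* pretend: index -> byte offset */
}

/* The macro shadows the function: typecheck the index, then call it. */
#define lookup_page(n) ({ exactly_pgoff_t(n); (lookup_page)(n); })

int main(void)
{
	pgoff_t idx = 3;
	unsigned int narrow = 3;

	printf("%lu\n", lookup_page(idx));	  /* ok: exactly pgoff_t */
	printf("%lu\n", lookup_page((pgoff_t)7)); /* ok after a cast     */
	/* lookup_page(narrow);   would fail to compile: wrong type      */
	(void)narrow;
	return 0;
}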
Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Matthew Auld Cc: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_object.c | 7 +- drivers/gpu/drm/i915/gem/i915_gem_object.h | 67 ++++++++++++++----- drivers/gpu/drm/i915/gem/i915_gem_pages.c | 25 ++++--- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +- .../drm/i915/gem/selftests/i915_gem_context.c | 12 ++-- .../drm/i915/gem/selftests/i915_gem_mman.c | 8 +-- .../drm/i915/gem/selftests/i915_gem_object.c | 8 +-- drivers/gpu/drm/i915/i915_gem.c | 18 +++-- drivers/gpu/drm/i915/i915_utils.h | 6 ++ drivers/gpu/drm/i915/i915_vma.c | 8 +-- 10 files changed, 106 insertions(+), 55 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c index 06b1b188ce5a..a5af74f78d57 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c @@ -421,10 +421,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj, static void i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + pgoff_t idx = offset >> PAGE_SHIFT; void *src_map; void *src_ptr; - src_map = kmap_atomic(i915_gem_object_get_page(obj, offset >> PAGE_SHIFT)); + src_map = kmap_atomic(i915_gem_object_get_page(obj, idx)); src_ptr = src_map + offset_in_page(offset); if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ)) @@ -437,9 +438,10 @@ i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, static void i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + pgoff_t idx = offset >> PAGE_SHIFT; + dma_addr_t dma = i915_gem_object_get_dma_address(obj, idx); void __iomem *src_map; void __iomem *src_ptr; - dma_addr_t dma = i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT); src_map = io_mapping_map_wc(&obj->mm.region->iomap, dma - obj->mm.region->region.start, @@ -468,6 +470,7 @@ i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset */ int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size) { + GEM_BUG_ON(overflows_type(offset >> PAGE_SHIFT, pgoff_t)); GEM_BUG_ON(offset >= obj->base.size); GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size); GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj)); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index e11d82a9f7c3..22c4ba0cd106 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -27,8 +27,10 @@ enum intel_region_id; * spot such a local variable, please consider fixing! * * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for num_pages - * - get_user_pages*() mixed ints with longs + * - sg_table embeds unsigned int for nents + * + * We can check for invalidly typed locals with typecheck(), see for example + * i915_gem_object_get_sg(). 
*/ #define GEM_CHECK_SIZE_OVERFLOW(sz) \ GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX) @@ -366,41 +368,70 @@ int i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, struct scatterlist * __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, bool dma); + pgoff_t n, + unsigned int *offset); + +#define __i915_gem_object_get_sg(obj, it, n, offset) ({ \ + exactly_pgoff_t(n); \ + (__i915_gem_object_get_sg)(obj, it, n, offset); \ +}) static inline struct scatterlist * -i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - unsigned int n, +i915_gem_object_get_sg(struct drm_i915_gem_object *obj, pgoff_t n, unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, false); + return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset); } +#define i915_gem_object_get_sg(obj, n, offset) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_sg)(obj, n, offset); \ +}) + static inline struct scatterlist * -i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, - unsigned int n, +i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, pgoff_t n, unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, true); + return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset); } +#define i915_gem_object_get_sg_dma(obj, n, offset) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_sg_dma)(obj, n, offset); \ +}) + struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, - unsigned int n); +i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n); + +#define i915_gem_object_get_page(obj, n) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_page)(obj, n); \ +}) struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n); +i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n); + +#define i915_gem_object_get_dirty_page(obj, n) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_dirty_page)(obj, n); \ +}) dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, +i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, pgoff_t n, unsigned int *len); +#define i915_gem_object_get_dma_address_len(obj, n, len) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_dma_address_len)(obj, n, len); \ +}) + dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n); +i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n); + +#define i915_gem_object_get_dma_address(obj, n) ({ \ + exactly_pgoff_t(n); \ + (i915_gem_object_get_dma_address)(obj, n); \ +}) void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct sg_table *pages, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 97c820eee115..1d1edcb3514b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -503,14 +503,16 @@ void __i915_gem_object_release_map(struct drm_i915_gem_object *obj) } struct scatterlist * -__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, +(__i915_gem_object_get_sg)(struct drm_i915_gem_object *obj, struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, - bool dma) + pgoff_t n, + unsigned int *offset) + { - struct scatterlist *sg; + const bool dma = iter == &obj->mm.get_dma_page || + iter == &obj->ttm.get_io_page; 
unsigned int idx, count; + struct scatterlist *sg; might_sleep(); GEM_BUG_ON(n >= obj->base.size >> PAGE_SHIFT); @@ -618,7 +620,7 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, } struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) +(i915_gem_object_get_page)(struct drm_i915_gem_object *obj, pgoff_t n) { struct scatterlist *sg; unsigned int offset; @@ -631,8 +633,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) /* Like i915_gem_object_get_page(), but mark the returned page dirty */ struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n) +(i915_gem_object_get_dirty_page)(struct drm_i915_gem_object *obj, pgoff_t n) { struct page *page; @@ -644,9 +645,8 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, - unsigned int *len) +(i915_gem_object_get_dma_address_len)(struct drm_i915_gem_object *obj, + pgoff_t n, unsigned int *len) { struct scatterlist *sg; unsigned int offset; @@ -660,8 +660,7 @@ i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n) +(i915_gem_object_get_dma_address)(struct drm_i915_gem_object *obj, pgoff_t n) { return i915_gem_object_get_dma_address_len(obj, n, NULL); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 4c25d9b2f138..bcdbefd8f269 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -711,7 +711,7 @@ static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo, GEM_WARN_ON(bo->ttm); base = obj->mm.region->iomap.base - obj->mm.region->region.start; - sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs, true); + sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs); return ((base + sg_dma_address(sg)) >> PAGE_SHIFT) + ofs; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index 93a67422ca3b..84d6edd9c204 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -455,7 +455,8 @@ static int gpu_fill(struct intel_context *ce, static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) { const bool has_llc = HAS_LLC(to_i915(obj->base.dev)); - unsigned int n, m, need_flush; + unsigned int need_flush; + unsigned long n, m; int err; i915_gem_object_lock(obj, NULL); @@ -485,7 +486,8 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) static noinline int cpu_check(struct drm_i915_gem_object *obj, unsigned int idx, unsigned int max) { - unsigned int n, m, needs_flush; + unsigned int needs_flush; + unsigned long n; int err; i915_gem_object_lock(obj, NULL); @@ -494,7 +496,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, goto out_unlock; for (n = 0; n < real_page_count(obj); n++) { - u32 *map; + u32 *map, m; map = kmap_atomic(i915_gem_object_get_page(obj, n)); if (needs_flush & CLFLUSH_BEFORE) @@ -502,7 +504,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (m = 0; m < max; m++) { if (map[m] != m) { - pr_err("%pS: Invalid value at object %d page %d/%ld, offset %d/%d: found %x expected %x\n", + pr_err("%pS: Invalid value at object %d page %ld/%ld, offset %d/%d: found %x expected 
%x\n", __builtin_return_address(0), idx, n, real_page_count(obj), m, max, map[m], m); @@ -513,7 +515,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (; m < DW_PER_PAGE; m++) { if (map[m] != STACK_MAGIC) { - pr_err("%pS: Invalid value at object %d page %d, offset %d: found %x expected %x (uninitialised)\n", + pr_err("%pS: Invalid value at object %d page %ld, offset %d: found %x expected %x (uninitialised)\n", __builtin_return_address(0), idx, n, m, map[m], STACK_MAGIC); err = -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 5bc93a1ce3e3..b674c1d7a045 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -93,11 +93,11 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt_view view; struct i915_vma *vma; + unsigned long offset; unsigned long page; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; int err; @@ -154,7 +154,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, @@ -210,10 +210,10 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, for_each_prime_number_from(page, 1, npages) { struct i915_ggtt_view view = compute_partial_view(obj, page, MIN_CHUNK_PAGES); + unsigned long offset; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; GEM_BUG_ON(view.partial.size > nreal); @@ -250,7 +250,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c index fe0a890775e2..bf30763ee6bc 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c @@ -33,10 +33,10 @@ static int igt_gem_object(void *arg) static int igt_gem_huge(void *arg) { - const unsigned int nreal = 509; /* just to be awkward */ + const unsigned long nreal = 509; /* just to be awkward */ struct drm_i915_private *i915 = arg; struct drm_i915_gem_object *obj; - unsigned int n; + unsigned long n; int err; /* Basic sanitycheck of our huge fake object allocation */ @@ -49,7 +49,7 @@ static int igt_gem_huge(void *arg) err = i915_gem_object_pin_pages_unlocked(obj); if (err) { - pr_err("Failed to allocate %u pages (%lu total), err=%d\n", + 
pr_err("Failed to allocate %lu pages (%lu total), err=%d\n", nreal, obj->base.size / PAGE_SIZE, err); goto out; } @@ -57,7 +57,7 @@ static int igt_gem_huge(void *arg) for (n = 0; n < obj->base.size / PAGE_SIZE; n++) { if (i915_gem_object_get_page(obj, n) != i915_gem_object_get_page(obj, n % nreal)) { - pr_err("Page lookup mismatch at index %u [%u]\n", + pr_err("Page lookup mismatch at index %lu [%lu]\n", n, n % nreal); err = -EINVAL; goto out_unpin; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 702e5b89be22..dba58a3c3238 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -229,8 +229,9 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, struct drm_i915_gem_pread *args) { unsigned int needs_clflush; - unsigned int idx, offset; char __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; @@ -383,13 +384,17 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj, { struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; void __user *user_data; struct i915_vma *vma; - u64 remain, offset; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + wakeref = intel_runtime_pm_get(&i915->runtime_pm); vma = i915_gem_gtt_prepare(obj, &node, false); @@ -540,13 +545,17 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; struct intel_runtime_pm *rpm = &i915->runtime_pm; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; struct i915_vma *vma; - u64 remain, offset; void __user *user_data; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + if (i915_gem_object_has_struct_page(obj)) { /* * Avoid waking the device up if we can fallback, as @@ -654,8 +663,9 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, { unsigned int partial_cacheline_write; unsigned int needs_clflush; - unsigned int offset, idx; void __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h index ea7648e3aa0e..b0b24f352fa9 100644 --- a/drivers/gpu/drm/i915/i915_utils.h +++ b/drivers/gpu/drm/i915/i915_utils.h @@ -441,4 +441,10 @@ static inline bool i915_run_as_guest(void) bool i915_vtd_active(struct drm_i915_private *i915); +/* A poor man's -Wconversion: only allow variables of an exact type. 
*/ +#define exact_type(T, n) \ + BUILD_BUG_ON(!__builtin_constant_p(n) && !__builtin_types_compatible_p(T, typeof(n))) + +#define exactly_pgoff_t(n) exact_type(pgoff_t, n) + #endif /* !__I915_UTILS_H */ diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index 0bffb70b3c5f..686aa0214e68 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -902,7 +902,7 @@ rotate_pages(struct drm_i915_gem_object *obj, unsigned int offset, struct sg_table *st, struct scatterlist *sg) { unsigned int column, row; - unsigned int src_idx; + pgoff_t src_idx; for (column = 0; column < width; column++) { unsigned int left; @@ -1008,7 +1008,7 @@ add_padding_pages(unsigned int count, static struct scatterlist * remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int offset, unsigned int alignment_pad, + unsigned long offset, unsigned int alignment_pad, unsigned int width, unsigned int height, unsigned int src_stride, unsigned int dst_stride, struct sg_table *st, struct scatterlist *sg, @@ -1067,7 +1067,7 @@ remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_contiguous_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, + pgoff_t obj_offset, unsigned int count, struct sg_table *st, struct scatterlist *sg) { @@ -1100,7 +1100,7 @@ remap_contiguous_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_linear_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, unsigned int alignment_pad, + pgoff_t obj_offset, unsigned int alignment_pad, unsigned int size, struct sg_table *st, struct scatterlist *sg, unsigned int *gtt_offset) From patchwork Fri Jun 3 09:24:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12868842 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B412DC43334 for ; Fri, 3 Jun 2022 09:26:13 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 7CFCE11225C; Fri, 3 Jun 2022 09:26:11 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 0C134112272 for ; Fri, 3 Jun 2022 09:26:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1654248369; x=1685784369; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=imF+QVK7iGrTFHcXcLRrl94cL7DhRLpdrt4OV7+Lcnk=; b=YH9HwydQj/aVGdY+V3nPA+WE7GR02yf+5s7frKJTopZwgbyf9nO+FQ8z kmKa49j15NFvbCSSt9GXK6hi6eNr5+VB9nAQKGMOucUAq8m5UZBB8xa3/ fFz88B2MKrT5r4d89pxHIAKhUr3TM50QfQc3O+/oMzuGRfdr8EAcP5/j2 XqiRz0mcw271g+uqv+brmoroUKoVrgOFR6EDgv0mVYVUj9SNh2rQuYjvr GLIHCUqKKjuagzsH2P/mejdn5rTX6mdBcF55cO3NV3uD9UYJDXl/D5RbJ TzLdHhCvBBG08JuttS10YjsNgPC+P/wxPv74Q8DtiRFsJQZZNjtYUY+0b g==; X-IronPort-AV: E=McAfee;i="6400,9594,10366"; a="276227443" X-IronPort-AV: E=Sophos;i="5.91,274,1647327600"; d="scan'208";a="276227443" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Jun 2022 02:26:03 -0700 
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Date: Fri, 3 Jun 2022 18:24:37 +0900
Message-Id: <20220603092441.1526151-3-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
References: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
Subject: [Intel-gfx] [PATCH 2/6] drm/i915: Check for integer truncation on scatterlist creation
Cc: thomas.hellstrom@linux.intel.com, matthew.auld@intel.com, chris@chris-wilson.co.uk

From: Chris Wilson

There is an impedance mismatch between the scatterlist API, which uses
unsigned int, and our memory/page accounting, which uses unsigned long:
for a large enough object the page count overflows, we allocate a small
table, and then try to fit very many pages into it. As the object size
is under the control of userspace, we have to be prudent and catch the
conversion errors.

To catch the implicit truncation as we switch from unsigned long to the
scatterlist's unsigned int, we use our overflows_type() check and report
-E2BIG prior to the operation. This is already used in our create ioctls
to indicate that the uABI request is simply too large for the backing
store. Failing that type check, a second check at sg_alloc_table() time
makes sure the values we pass into the scatterlist API are not
truncated.

Use pgoff_t for locals dealing with page indices; here the page count is
the limit of the page index. Also add and use the safe_conversion()
macro, which casts an integer value into a new variable while checking
that the destination is large enough to hold the source value.
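As a standalone illustration of the check described above (not driver code): fake_sg_alloc_table() stands in for sg_alloc_table(), the macros are simplified to unsigned destination types, a 64-bit build is assumed, and only the overflows_type()/safe_conversion() shape follows the patch.

#include <stdio.h>
#include <stdbool.h>

typedef unsigned int __sg_size_t;	/* scatterlist counts are unsigned int */

/* Simplified: true if val does not fit in var's (unsigned) type. */
#define overflows_type(val, var) \
	((unsigned long long)(val) > (__typeof__(var))~0ULL)

/* Assign value to *ptr only if it fits; evaluate to false otherwise. */
#define safe_conversion(ptr, value) ({			\
	__typeof__(value) __v = (value);		\
	__typeof__(ptr) __p = (ptr);			\
	bool __fits = !overflows_type(__v, *__p);	\
	if (__fits)					\
		*__p = (__typeof__(*__p))__v;		\
	__fits;						\
})

static int fake_sg_alloc_table(__sg_size_t nents)
{
	printf("allocating a table for %u entries\n", nents);
	return 0;
}

int main(void)
{
	unsigned long page_count = 1UL << 36;	/* a 256 TiB object's pages */
	__sg_size_t nents;

	if (!safe_conversion(&nents, page_count)) {
		fprintf(stderr, "too large: would return -E2BIG\n");
		return 1;
	}
	return fake_sg_alloc_table(nents);
}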
Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 6 ++++-- drivers/gpu/drm/i915/gem/i915_gem_object.h | 3 --- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 5 ++++- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++++- drivers/gpu/drm/i915/gvt/dmabuf.c | 9 +++++---- drivers/gpu/drm/i915/i915_scatterlist.h | 8 ++++++++ drivers/gpu/drm/i915/i915_utils.h | 12 ++++++++++++ 9 files changed, 45 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index c698f95af15f..ff2e6e780631 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -37,10 +37,13 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) struct sg_table *st; struct scatterlist *sg; unsigned int sg_page_sizes; - unsigned int npages; + pgoff_t npages; /* restricted by sg_alloc_table */ int max_order; gfp_t gfp; + if (!safe_conversion(&npages, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; + max_order = MAX_ORDER; #ifdef CONFIG_SWIOTLB if (is_swiotlb_active(obj->base.dev->dev)) { @@ -67,7 +70,6 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) if (!st) return -ENOMEM; - npages = obj->base.size / PAGE_SIZE; if (sg_alloc_table(st, npages, GFP_KERNEL)) { kfree(st); return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 22c4ba0cd106..551e4293d19c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -26,9 +26,6 @@ enum intel_region_id; * this and catch if we ever need to fix it. In the meantime, if you do * spot such a local variable, please consider fixing! * - * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for nents - * * We can check for invalidly typed locals with typecheck(), see for example * i915_gem_object_get_sg(). 
*/ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 0d0e46dae559..88ba7266a3a5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -28,6 +28,10 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) void *dst; int i; + /* Contiguous chunk, with a single scatterlist element */ + if (overflows_type(obj->base.size, sg->length)) + return -E2BIG; + if (GEM_WARN_ON(i915_gem_object_needs_bit17_swizzle(obj))) return -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 955844f19193..e77f9ada31a3 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -193,13 +193,16 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj) struct drm_i915_private *i915 = to_i915(obj->base.dev); struct intel_memory_region *mem = obj->mm.region; struct address_space *mapping = obj->base.filp->f_mapping; - const unsigned long page_count = obj->base.size / PAGE_SIZE; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; struct sgt_iter sgt_iter; + pgoff_t page_count; struct page *page; int ret; + if (!safe_conversion(&page_count, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; + /* * Assert that the object is not currently in any GPU domain. As it * wasn't in the GTT, there shouldn't be any way it could have been in diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index bcdbefd8f269..52f8c3f4d8a8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -809,6 +809,10 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj) { struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS]; struct ttm_placement placement; + pgoff_t num_pages; + + if (!safe_conversion(&num_pages, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 094f06b4ce33..25785c3a0083 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -128,13 +128,16 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj) static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj) { - const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; unsigned int sg_page_sizes; struct page **pvec; + pgoff_t num_pages; /* limited by sg_alloc_table_from_pages_segment */ int ret; + if (!safe_conversion(&num_pages, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; + st = kmalloc(sizeof(*st), GFP_KERNEL); if (!st) return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c index 01e54b45c5c1..795270cb4ec2 100644 --- a/drivers/gpu/drm/i915/gvt/dmabuf.c +++ b/drivers/gpu/drm/i915/gvt/dmabuf.c @@ -42,8 +42,7 @@ #define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12)) -static int vgpu_gem_get_pages( - struct drm_i915_gem_object *obj) +static int vgpu_gem_get_pages(struct drm_i915_gem_object *obj) { struct drm_i915_private *dev_priv = to_i915(obj->base.dev); struct intel_vgpu *vgpu; @@ -52,7 +51,10 @@ static int vgpu_gem_get_pages( int i, j, ret; gen8_pte_t __iomem *gtt_entries; struct intel_vgpu_fb_info *fb_info; - u32 page_num; + pgoff_t page_num; + + if 
(!safe_conversion(&page_num, obj->base.size >> PAGE_SHIFT)) + return -E2BIG; fb_info = (struct intel_vgpu_fb_info *)obj->gvt_info; if (drm_WARN_ON(&dev_priv->drm, !fb_info)) @@ -66,7 +68,6 @@ static int vgpu_gem_get_pages( if (unlikely(!st)) return -ENOMEM; - page_num = obj->base.size >> PAGE_SHIFT; ret = sg_alloc_table(st, page_num, GFP_KERNEL); if (ret) { kfree(st); diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h index 12c6a1684081..c4d4d3c84cff 100644 --- a/drivers/gpu/drm/i915/i915_scatterlist.h +++ b/drivers/gpu/drm/i915/i915_scatterlist.h @@ -218,4 +218,12 @@ struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node, struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res, u64 region_start); +/* Wrap scatterlist.h to sanity check for integer truncation */ +typedef unsigned int __sg_size_t; /* see linux/scatterlist.h */ +#define sg_alloc_table(sgt, nents, gfp) \ + overflows_type(nents, __sg_size_t) ? -E2BIG : (sg_alloc_table)(sgt, (__sg_size_t)(nents), gfp) + +#define sg_alloc_table_from_pages_segment(sgt, pages, npages, offset, size, max_segment, gfp) \ + overflows_type(npages, __sg_size_t) ? -E2BIG : (sg_alloc_table_from_pages_segment)(sgt, pages, (__sg_size_t)(npages), offset, size, max_segment, gfp) + #endif diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h index b0b24f352fa9..10454f618698 100644 --- a/drivers/gpu/drm/i915/i915_utils.h +++ b/drivers/gpu/drm/i915/i915_utils.h @@ -447,4 +447,16 @@ bool i915_vtd_active(struct drm_i915_private *i915); #define exactly_pgoff_t(n) exact_type(pgoff_t, n) +/* + * Perform a type conversion (cast) of an integer value into a new + * variable, checking that the destination is large enough to hold the source + * value. If the value would overflow the destination leaving a truncated + * result, return false instead. + */ +#define safe_conversion(ptr, value) ({ \ + typeof(value) __v = (value); \ + typeof(ptr) __ptr = (ptr); \ + overflows_type(__v, *__ptr) ? 
0 : ((*__ptr = (typeof(*__ptr))__v), 1); \
+})
+
 #endif /* !__I915_UTILS_H */

From patchwork Fri Jun 3 09:24:38 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12868845
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Date: Fri, 3 Jun 2022 18:24:38 +0900
Message-Id: <20220603092441.1526151-4-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
References: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
Subject: [Intel-gfx] [PATCH 3/6] drm/i915: Check for integer truncation on the configuration of ttm place
Cc: thomas.hellstrom@linux.intel.com, matthew.auld@intel.com, chris@chris-wilson.co.uk

There is an impedance mismatch between the first/last valid page frame
numbers of a ttm place, which are unsigned int, and our memory/page
accounting in unsigned long. As the object size is under the control of
userspace, we have to be prudent and catch the conversion errors. To
catch the implicit truncation as we switch from unsigned long to
unsigned int, we use our overflows_type() check and report -E2BIG (or
hit a GEM_BUG_ON) prior to the operation.
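As a standalone illustration of the truncation being guarded against (not driver code; the struct and values are invented, and only the 32-bit fpfn/lpfn mirrors ttm_place):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12

struct fake_place {
	uint32_t fpfn;	/* first valid page frame number, as in ttm_place */
	uint32_t lpfn;	/* last valid page frame number */
};

int main(void)
{
	uint64_t offset = 1ULL << 45;		/* 32 TiB, userspace-controlled */
	uint64_t pfn = offset >> PAGE_SHIFT;	/* 2^33: needs more than 32 bits */
	struct fake_place place = { .fpfn = (uint32_t)pfn, .lpfn = 0 };

	/* The plain assignment silently drops the high bits. */
	printf("wanted fpfn %llu, stored fpfn %u\n",
	       (unsigned long long)pfn, place.fpfn);

	/* What an overflows_type()-style check catches up front. */
	if (pfn > UINT32_MAX)
		fprintf(stderr, "refusing: would report -E2BIG, not truncate\n");
	return 0;
}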
Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 12 +++++++++--- drivers/gpu/drm/i915/intel_region_ttm.c | 16 +++++++++++++--- 2 files changed, 22 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 52f8c3f4d8a8..8231a6fc5437 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -137,19 +137,25 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr, if (mr->type == INTEL_MEMORY_SYSTEM) return; +#define SAFE_CONVERSION(ptr, value) ({ \ + if (!safe_conversion(ptr, value)) { \ + GEM_BUG_ON(overflows_type(value, *ptr)); \ + } \ +}) if (flags & I915_BO_ALLOC_CONTIGUOUS) place->flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place->fpfn = offset >> PAGE_SHIFT; - place->lpfn = place->fpfn + (size >> PAGE_SHIFT); + SAFE_CONVERSION(&place->fpfn, offset >> PAGE_SHIFT); + SAFE_CONVERSION(&place->lpfn, place->fpfn + (size >> PAGE_SHIFT)); } else if (mr->io_size && mr->io_size < mr->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place->flags |= TTM_PL_FLAG_TOPDOWN; } else { place->fpfn = 0; - place->lpfn = mr->io_size >> PAGE_SHIFT; + SAFE_CONVERSION(&place->lpfn, mr->io_size >> PAGE_SHIFT); } } +#undef SAFE_CONVERSION } static void diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c index 62ff77445b01..8fcb8654b978 100644 --- a/drivers/gpu/drm/i915/intel_region_ttm.c +++ b/drivers/gpu/drm/i915/intel_region_ttm.c @@ -202,24 +202,34 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, struct ttm_resource *res; int ret; +#define SAFE_CONVERSION(ptr, value) ({ \ + if (!safe_conversion(ptr, value)) { \ + GEM_BUG_ON(overflows_type(value, *ptr)); \ + ret = -E2BIG; \ + goto out; \ + } \ +}) if (flags & I915_BO_ALLOC_CONTIGUOUS) place.flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place.fpfn = offset >> PAGE_SHIFT; - place.lpfn = place.fpfn + (size >> PAGE_SHIFT); + SAFE_CONVERSION(&place.fpfn, offset >> PAGE_SHIFT); + SAFE_CONVERSION(&place.lpfn, place.fpfn + (size >> PAGE_SHIFT)); } else if (mem->io_size && mem->io_size < mem->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place.flags |= TTM_PL_FLAG_TOPDOWN; } else { place.fpfn = 0; - place.lpfn = mem->io_size >> PAGE_SHIFT; + SAFE_CONVERSION(&place.lpfn, mem->io_size >> PAGE_SHIFT); } } +#undef SAFE_CONVERSION mock_bo.base.size = size; mock_bo.bdev = &mem->i915->bdev; ret = man->func->alloc(man, &mock_bo, &place, &res); + +out: if (ret == -ENOSPC) ret = -ENXIO; if (!ret) From patchwork Fri Jun 3 09:24:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12868843 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 65F47C433EF for ; Fri, 3 Jun 2022 09:26:15 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DEE4E11225F; Fri, 3 Jun 2022 09:26:11 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Date: Fri, 3 Jun 2022 18:24:39 +0900
Message-Id: <20220603092441.1526151-5-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
References: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
Subject: [Intel-gfx] [PATCH 4/6] drm/i915: Check if the size is too big while creating shmem file
Cc: thomas.hellstrom@linux.intel.com, matthew.auld@intel.com, chris@chris-wilson.co.uk

The __shmem_file_setup() function returns -EINVAL if size is greater
than MAX_LFS_FILESIZE. To report the same error as other paths that
return -E2BIG when the size is too large, add a check that returns
-E2BIG when the size is larger than what can be handled.

Signed-off-by: Gwan-gyeong Mun
Cc: Chris Wilson
Cc: Matthew Auld
Cc: Thomas Hellström
Testcase: igt@gem_create@create-massive
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4991
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index e77f9ada31a3..b4ecca21277e 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -542,6 +542,15 @@ static int __create_shmem(struct drm_i915_private *i915,
 	drm_gem_private_object_init(&i915->drm, obj, size);
 
+	/* XXX: The __shmem_file_setup() function returns -EINVAL if size is
+	 * greater than MAX_LFS_FILESIZE.
+	 * To report the same error as other paths that return -E2BIG when
+	 * the size is too large, add a check that returns -E2BIG when the
+	 * size is larger than what can be handled.
+	 */
+	if (size > MAX_LFS_FILESIZE)
+		return -E2BIG;
+
 	if (i915->mm.gemfs)
 		filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size,
 						 flags);

From patchwork Fri Jun 3 09:24:40 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12868844
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Date: Fri, 3 Jun 2022 18:24:40 +0900
Message-Id: <20220603092441.1526151-6-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
References: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com>
Subject: [Intel-gfx] [PATCH 5/6] drm/i915: Use error code as -E2BIG when the size of gem ttm object is too large
Cc: thomas.hellstrom@linux.intel.com, matthew.auld@intel.com, chris@chris-wilson.co.uk

The ttm_bo_init_reserved() function returns -ENOSPC if the size is too
big to add a vma; the function that actually returns -ENOSPC is
drm_mm_insert_node_in_range(). To report the same error as other paths
that return -E2BIG when the size is too large, convert the return value
to -E2BIG.
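As a standalone sketch of that error-code mapping (not driver code; create_backing_store() is an invented stand-in for ttm_bo_init_reserved(), and only the size-vs-INT_MAX test mirrors the patch):

#include <stdio.h>
#include <errno.h>
#include <limits.h>

#define PAGE_SHIFT 12

/* Invented allocator stand-in: pretend it ran out of address space. */
static int create_backing_store(unsigned long long size)
{
	(void)size;
	return -ENOSPC;
}

static int object_init(unsigned long long size)
{
	int ret = create_backing_store(size);

	/* A size that cannot even be counted in pages is a caller error,
	 * not an out-of-space condition, so report it as such. */
	if (size >> PAGE_SHIFT > INT_MAX)
		ret = -E2BIG;

	return ret;
}

int main(void)
{
	printf("%d\n", object_init(1ULL << 60));	/* -E2BIG  */
	printf("%d\n", object_init(1ULL << 20));	/* -ENOSPC */
	return 0;
}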
Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Testcase: igt@gem_create@create-massive Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4991 --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 8231a6fc5437..5617b100067a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -1243,6 +1243,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, bo_type, &i915_sys_placement, page_size >> PAGE_SHIFT, &ctx, NULL, NULL, i915_ttm_bo_destroy); + + /* + * XXX: The ttm_bo_init_reserved() functions returns -ENOSPC if the size + * is too big to add vma. The direct function that returns -ENOSPC is + * drm_mm_insert_node_in_range(). To handle the same error as other code + * that returns -E2BIG when the size is too large, it converts -ENOSPC to + * -E2BIG. + */ + if (size >> PAGE_SHIFT > INT_MAX) + ret = -E2BIG; + if (ret) return i915_ttm_err_to_gem(ret); From patchwork Fri Jun 3 09:24:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12868846 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2596BC43334 for ; Fri, 3 Jun 2022 09:26:19 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id DB5F911226A; Fri, 3 Jun 2022 09:26:16 +0000 (UTC) Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by gabe.freedesktop.org (Postfix) with ESMTPS id 583B511226A for ; Fri, 3 Jun 2022 09:26:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1654248374; x=1685784374; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=FQLRr32gOdityUisim8Q81Lq9et/vS+wHIpn1Id7OQM=; b=XMiV6mTpA3t9H0Ti7TNxVRxVm45LAGQDodqjz6B3H1FKJGmxgV8LeQ+h +Uu4/W4FuV8FbYZwQ3hrxEH7/S6YLavjXXoaIBdrjMC0kMvDaAuc1ZzFd 0A2ateVC+LBY68VEJlLR+drjsujSqFbP+X+FVNG8RB9yOEKuE9WHo1fbA +5lU0g7Hiob3FUbOo4IhGNxSg8QWRugrtDTfe80lzTnX5tZrjYH6fl7ld G8PZgA68R0N4QjnabSvi783f4tv37aa2OPbGVS+uFWHwe5+pSxxndWaaz jGXYspyK+Wur7BGagGcR7DNSKZEgWbsNI9y9xfDKtsj3aezowNCPnCUnW Q==; X-IronPort-AV: E=McAfee;i="6400,9594,10366"; a="276227462" X-IronPort-AV: E=Sophos;i="5.91,274,1647327600"; d="scan'208";a="276227462" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Jun 2022 02:26:14 -0700 X-IronPort-AV: E=Sophos;i="5.91,274,1647327600"; d="scan'208";a="563712741" Received: from swoon-mobl1.gar.corp.intel.com (HELO hades.gar.corp.intel.com) ([10.213.34.85]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 03 Jun 2022 02:26:11 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Date: Fri, 3 Jun 2022 18:24:41 +0900 Message-Id: <20220603092441.1526151-7-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com> References: <20220603092441.1526151-1-gwan-gyeong.mun@intel.com> 
Subject: [Intel-gfx] [PATCH 6/6] drm/i915: Remove truncation warning for large objects
Cc: thomas.hellstrom@linux.intel.com, matthew.auld@intel.com, chris@chris-wilson.co.uk

From: Chris Wilson

Having addressed the issues surrounding incorrect types for local
variables and potential integer truncation when using the scatterlist
API, we have closed all the loopholes we had previously identified with
dangerously large object creation. As such, we can eliminate the warning
put in place to remind us to complete the review.

Signed-off-by: Chris Wilson
Signed-off-by: Gwan-gyeong Mun
Cc: Tvrtko Ursulin
Cc: Brian Welty
Cc: Matthew Auld
Cc: Thomas Hellström
---
 drivers/gpu/drm/i915/gem/i915_gem_object.h | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 551e4293d19c..ef942dd7039f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -20,25 +20,10 @@
 
 enum intel_region_id;
 
-/*
- * XXX: There is a prevalence of the assumption that we fit the
- * object's page count inside a 32bit _signed_ variable. Let's document
- * this and catch if we ever need to fix it. In the meantime, if you do
- * spot such a local variable, please consider fixing!
- *
- * We can check for invalidly typed locals with typecheck(), see for example
- * i915_gem_object_get_sg().
- */
-#define GEM_CHECK_SIZE_OVERFLOW(sz) \
-	GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX)
-
 static inline bool i915_gem_object_size_2big(u64 size)
 {
 	struct drm_i915_gem_object *obj;
 
-	if (GEM_CHECK_SIZE_OVERFLOW(size))
-		return true;
-
 	if (overflows_type(size, obj->base.size))
 		return true;