From patchwork Sun May 13 07:50:07 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10396111
From: Chris Wilson
To: dri-devel@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Subject: [PATCH 1/4] drm/mm: Reject over-sized allocation requests early
Date: Sun, 13 May 2018 08:50:07 +0100
Message-Id: <20180513075010.23275-1-chris@chris-wilson.co.uk>

As we keep an rbtree of available holes sorted by their size, we can
very easily determine whether any hole is large enough to satisfy the
allocation request. This helps when dealing with a highly fragmented
address space and a request for a search by address.

To cache the largest size, we convert to the cached rbtree variant,
which tracks the leftmost node for us. However, we currently sort in
ascending size order, so the leftmost node is the smallest; to make it
the largest hole we need to invert our sorting.
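For context, a minimal standalone sketch (not part of the patch) of the
idea: if holes are inserted into an rb_root_cached in descending size
order, rb_first_cached() always yields the largest hole, so an
over-sized request can be rejected without walking the tree. The
struct hole type and helper names below are illustrative only; the
rbtree calls follow the kernel's <linux/rbtree.h> cached-rbtree API.

    #include <linux/rbtree.h>
    #include <linux/types.h>

    /* Illustrative only: a node carrying just a hole size. */
    struct hole {
    	struct rb_node rb;
    	u64 size;
    };

    /* Insert in descending size order so the cached leftmost is the largest. */
    static void hole_insert(struct rb_root_cached *root, struct hole *hole)
    {
    	struct rb_node **link = &root->rb_root.rb_node, *parent = NULL;
    	bool leftmost = true;

    	while (*link) {
    		parent = *link;
    		if (hole->size > rb_entry(parent, struct hole, rb)->size) {
    			link = &parent->rb_left;
    		} else {
    			link = &parent->rb_right;
    			leftmost = false;
    		}
    	}

    	rb_link_node(&hole->rb, parent, link);
    	rb_insert_color_cached(&hole->rb, root, leftmost);
    }

    /* O(1) rejection: compare the request against the largest hole. */
    static bool hole_may_fit(struct rb_root_cached *root, u64 size)
    {
    	struct rb_node *first = rb_first_cached(root);

    	return first && rb_entry(first, struct hole, rb)->size >= size;
    }

With that ordering, the early check in drm_mm_insert_node_in_range()
reduces to a single lookup of the cached leftmost node, which is what
rb_to_hole_size_or_zero(rb_first_cached(&mm->holes_size)) does in the
patch below.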
Signed-off-by: Chris Wilson
Cc: Joonas Lahtinen
Reviewed-by: Joonas Lahtinen
---
 drivers/gpu/drm/drm_mm.c | 82 ++++++++++++++++++++++++++++------------
 include/drm/drm_mm.h     |  2 +-
 2 files changed, 58 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 3166026a1874..7b4ad05fe1c0 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -239,6 +239,32 @@ static void drm_mm_interval_tree_add_node(struct drm_mm_node *hole_node,
 #define HOLE_SIZE(NODE) ((NODE)->hole_size)
 #define HOLE_ADDR(NODE) (__drm_mm_hole_node_start(NODE))
 
+static u64 rb_to_hole_size(struct rb_node *rb)
+{
+	return rb_entry(rb, struct drm_mm_node, rb_hole_size)->hole_size;
+}
+
+static void insert_hole_size(struct rb_root_cached *root,
+			     struct drm_mm_node *node)
+{
+	struct rb_node **link = &root->rb_root.rb_node, *rb = NULL;
+	u64 x = node->hole_size;
+	bool first = true;
+
+	while (*link) {
+		rb = *link;
+		if (x > rb_to_hole_size(rb)) {
+			link = &rb->rb_left;
+		} else {
+			link = &rb->rb_right;
+			first = false;
+		}
+	}
+
+	rb_link_node(&node->rb_hole_size, rb, link);
+	rb_insert_color_cached(&node->rb_hole_size, root, first);
+}
+
 static void add_hole(struct drm_mm_node *node)
 {
 	struct drm_mm *mm = node->mm;
@@ -247,7 +273,7 @@ static void add_hole(struct drm_mm_node *node)
 		__drm_mm_hole_node_end(node) - __drm_mm_hole_node_start(node);
 	DRM_MM_BUG_ON(!drm_mm_hole_follows(node));
 
-	RB_INSERT(mm->holes_size, rb_hole_size, HOLE_SIZE);
+	insert_hole_size(&mm->holes_size, node);
 	RB_INSERT(mm->holes_addr, rb_hole_addr, HOLE_ADDR);
 
 	list_add(&node->hole_stack, &mm->hole_stack);
@@ -258,7 +284,7 @@ static void rm_hole(struct drm_mm_node *node)
 	DRM_MM_BUG_ON(!drm_mm_hole_follows(node));
 
 	list_del(&node->hole_stack);
-	rb_erase(&node->rb_hole_size, &node->mm->holes_size);
+	rb_erase_cached(&node->rb_hole_size, &node->mm->holes_size);
 	rb_erase(&node->rb_hole_addr, &node->mm->holes_addr);
 	node->hole_size = 0;
 
@@ -282,38 +308,39 @@ static inline u64 rb_hole_size(struct rb_node *rb)
 
 static struct drm_mm_node *best_hole(struct drm_mm *mm, u64 size)
 {
-	struct rb_node *best = NULL;
-	struct rb_node **link = &mm->holes_size.rb_node;
+	struct rb_node *rb = mm->holes_size.rb_root.rb_node;
+	struct drm_mm_node *best = NULL;
 
-	while (*link) {
-		struct rb_node *rb = *link;
+	do {
+		struct drm_mm_node *node =
+			rb_entry(rb, struct drm_mm_node, rb_hole_size);
 
-		if (size <= rb_hole_size(rb)) {
-			link = &rb->rb_left;
-			best = rb;
+		if (size <= node->hole_size) {
+			best = node;
+			rb = rb->rb_right;
 		} else {
-			link = &rb->rb_right;
+			rb = rb->rb_left;
 		}
-	}
+	} while (rb);
 
-	return rb_hole_size_to_node(best);
+	return best;
 }
 
 static struct drm_mm_node *find_hole(struct drm_mm *mm, u64 addr)
 {
+	struct rb_node *rb = mm->holes_addr.rb_node;
 	struct drm_mm_node *node = NULL;
-	struct rb_node **link = &mm->holes_addr.rb_node;
 
-	while (*link) {
+	while (rb) {
 		u64 hole_start;
 
-		node = rb_hole_addr_to_node(*link);
+		node = rb_hole_addr_to_node(rb);
 		hole_start = __drm_mm_hole_node_start(node);
 
 		if (addr < hole_start)
-			link = &node->rb_hole_addr.rb_left;
+			rb = node->rb_hole_addr.rb_left;
 		else if (addr > hole_start + node->hole_size)
-			link = &node->rb_hole_addr.rb_right;
+			rb = node->rb_hole_addr.rb_right;
 		else
 			break;
 	}
@@ -326,9 +353,6 @@ first_hole(struct drm_mm *mm,
 	   u64 start, u64 end, u64 size,
 	   enum drm_mm_insert_mode mode)
 {
-	if (RB_EMPTY_ROOT(&mm->holes_size))
-		return NULL;
-
 	switch (mode) {
 	default:
 	case DRM_MM_INSERT_BEST:
@@ -355,7 +379,7 @@ next_hole(struct drm_mm *mm,
 	switch (mode) {
 	default:
 	case DRM_MM_INSERT_BEST:
-		return rb_hole_size_to_node(rb_next(&node->rb_hole_size));
+		return rb_hole_size_to_node(rb_prev(&node->rb_hole_size));
 
 	case DRM_MM_INSERT_LOW:
 		return rb_hole_addr_to_node(rb_next(&node->rb_hole_addr));
@@ -426,6 +450,11 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)
 }
 EXPORT_SYMBOL(drm_mm_reserve_node);
 
+static u64 rb_to_hole_size_or_zero(struct rb_node *rb)
+{
+	return rb ? rb_to_hole_size(rb) : 0;
+}
+
 /**
  * drm_mm_insert_node_in_range - ranged search for space and insert @node
  * @mm: drm_mm to allocate from
@@ -457,6 +486,9 @@ int drm_mm_insert_node_in_range(struct drm_mm * const mm,
 	if (unlikely(size == 0 || range_end - range_start < size))
 		return -ENOSPC;
 
+	if (rb_to_hole_size_or_zero(rb_first_cached(&mm->holes_size)) < size)
+		return -ENOSPC;
+
 	if (alignment <= 1)
 		alignment = 0;
 
@@ -587,9 +619,9 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
 
 	if (drm_mm_hole_follows(old)) {
 		list_replace(&old->hole_stack, &new->hole_stack);
-		rb_replace_node(&old->rb_hole_size,
-				&new->rb_hole_size,
-				&mm->holes_size);
+		rb_replace_node_cached(&old->rb_hole_size,
+				       &new->rb_hole_size,
+				       &mm->holes_size);
 		rb_replace_node(&old->rb_hole_addr,
 				&new->rb_hole_addr,
 				&mm->holes_addr);
@@ -885,7 +917,7 @@ void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)
 
 	INIT_LIST_HEAD(&mm->hole_stack);
 	mm->interval_tree = RB_ROOT_CACHED;
-	mm->holes_size = RB_ROOT;
+	mm->holes_size = RB_ROOT_CACHED;
 	mm->holes_addr = RB_ROOT;
 
 	/* Clever trick to avoid a special case in the free hole tracking. */
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 101f566ae43d..e3aa3bfd4860 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -173,7 +173,7 @@ struct drm_mm {
 	struct drm_mm_node head_node;
 	/* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */
 	struct rb_root_cached interval_tree;
-	struct rb_root holes_size;
+	struct rb_root_cached holes_size;
 	struct rb_root holes_addr;
 
 	unsigned long scan_active;