From patchwork Fri Jul  5 09:02:48 2019
X-Patchwork-Submitter: Paul Durrant <paul.durrant@citrix.com>
X-Patchwork-Id: 11032259
From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 5 Jul 2019 10:02:48 +0100
Message-ID: <20190705090249.1935-2-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.20.1.2.gb21ebb671
In-Reply-To: <20190705090249.1935-1-paul.durrant@citrix.com>
References: <20190705090249.1935-1-paul.durrant@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/2] xmalloc: remove struct xmem_pool init_region
List-Id: Xen developer discussion
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Jan Beulich

This patch dispenses with the init_region. It's simply not necessary
(pools will still happily grow and shrink on demand in its absence) and
the code can be shortened by removing it. Doing so also eliminates the
sole evaluation of ADD_REGION() made without the pool lock held (which
was unsafe).

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Suggested-by: Jan Beulich
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Jan Beulich
Cc: Julien Grall
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu

v2:
 - remove init_region instead of fixing the locking
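For reviewers' reference, the unsafe evaluation mentioned above is the
lazy initialisation at the top of xmem_pool_alloc(), quoted below (from
the hunk that removes it) with annotations added. ADD_REGION() rewrites
the pool's bitmaps and free lists, which every other path touches only
while holding pool->lock, so two CPUs performing their first allocation
concurrently could both observe init_region == NULL and both insert a
region:

    if ( pool->init_region == NULL )               /* unlocked check */
    {
        if ( (region = pool->get_mem(pool->init_size)) == NULL )
            goto out;
        ADD_REGION(region, pool->init_size, pool); /* unlocked update */
        pool->init_region = region;                /* racy publish */
    }

With init_region gone, the only remaining ADD_REGION() call site is the
grow path in xmem_pool_alloc(), which re-acquires pool->lock before
adding the region.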
---
 xen/common/xmalloc_tlsf.c | 34 ++++------------------------------
 xen/include/xen/xmalloc.h |  2 --
 2 files changed, 4 insertions(+), 32 deletions(-)

diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index f585388dfa..e4e476a27c 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -101,7 +101,6 @@ struct xmem_pool {
 
     spinlock_t lock;
 
-    unsigned long init_size;
     unsigned long max_size;
     unsigned long grow_size;
 
@@ -115,7 +114,6 @@ struct xmem_pool {
 
     struct list_head list;
 
-    void *init_region;
     char name[MAX_POOL_NAME_LEN];
 };
 
@@ -287,14 +285,13 @@ struct xmem_pool *xmem_pool_create(
     const char *name,
     xmem_pool_get_memory get_mem,
     xmem_pool_put_memory put_mem,
-    unsigned long init_size,
     unsigned long max_size,
     unsigned long grow_size)
 {
     struct xmem_pool *pool;
     int pool_bytes, pool_order;
 
-    BUG_ON(max_size && (max_size < init_size));
+    BUG_ON(max_size && (max_size < grow_size));
 
     pool_bytes = ROUNDUP_SIZE(sizeof(*pool));
     pool_order = get_order_from_bytes(pool_bytes);
@@ -305,23 +302,18 @@ struct xmem_pool *xmem_pool_create(
     memset(pool, 0, pool_bytes);
 
     /* Round to next page boundary */
-    init_size = ROUNDUP_PAGE(init_size);
     max_size = ROUNDUP_PAGE(max_size);
     grow_size = ROUNDUP_PAGE(grow_size);
 
     /* pool global overhead not included in used size */
     pool->used_size = 0;
 
-    pool->init_size = init_size;
     pool->max_size = max_size;
     pool->grow_size = grow_size;
     pool->get_mem = get_mem;
     pool->put_mem = put_mem;
     strlcpy(pool->name, name, sizeof(pool->name));
 
-    /* always obtain init_region lazily now to ensure it is get_mem'd
-     * in the same "context" as all other regions */
-
     spin_lock_init(&pool->lock);
 
     spin_lock(&pool_list_lock);
@@ -340,7 +332,6 @@ unsigned long xmem_pool_get_total_size(struct xmem_pool *pool)
 {
     unsigned long total;
     total = ROUNDUP_SIZE(sizeof(*pool))
-        + pool->init_size
         + (pool->num_regions - 1) * pool->grow_size;
     return total;
 }
 
@@ -352,13 +343,6 @@ void xmem_pool_destroy(struct xmem_pool *pool)
     if ( pool == NULL )
         return;
 
-    /* User is destroying without ever allocating from this pool */
-    if ( xmem_pool_get_used_size(pool) == BHDR_OVERHEAD )
-    {
-        ASSERT(!pool->init_region);
-        pool->used_size -= BHDR_OVERHEAD;
-    }
-
     /* Check for memory leaks in this pool */
     if ( xmem_pool_get_used_size(pool) )
         printk("memory leak in pool: %s (%p). "
@@ -380,14 +364,6 @@ void *xmem_pool_alloc(unsigned long size, struct xmem_pool *pool)
     int fl, sl;
     unsigned long tmp_size;
 
-    if ( pool->init_region == NULL )
-    {
-        if ( (region = pool->get_mem(pool->init_size)) == NULL )
-            goto out;
-        ADD_REGION(region, pool->init_size, pool);
-        pool->init_region = region;
-    }
-
     size = (size < MIN_BLOCK_SIZE) ? MIN_BLOCK_SIZE : ROUNDUP_SIZE(size);
 
     /* Rounding up the requested size and calculating fl and sl */
@@ -401,8 +377,7 @@ void *xmem_pool_alloc(unsigned long size, struct xmem_pool *pool)
         /* Not found */
         if ( size > (pool->grow_size - 2 * BHDR_OVERHEAD) )
             goto out_locked;
-        if ( pool->max_size && (pool->init_size +
-                                pool->num_regions * pool->grow_size
+        if ( pool->max_size && (pool->num_regions * pool->grow_size
                                 > pool->max_size) )
             goto out_locked;
         spin_unlock(&pool->lock);
@@ -551,9 +526,8 @@ static void *xmalloc_whole_pages(unsigned long size, unsigned long align)
 
 static void tlsf_init(void)
 {
-    xenpool = xmem_pool_create(
-        "xmalloc", xmalloc_pool_get, xmalloc_pool_put,
-        PAGE_SIZE, 0, PAGE_SIZE);
+    xenpool = xmem_pool_create("xmalloc", xmalloc_pool_get,
+                               xmalloc_pool_put, 0, PAGE_SIZE);
     BUG_ON(!xenpool);
 }
 
diff --git a/xen/include/xen/xmalloc.h b/xen/include/xen/xmalloc.h
index b486fe4b06..f075d2da91 100644
--- a/xen/include/xen/xmalloc.h
+++ b/xen/include/xen/xmalloc.h
@@ -84,7 +84,6 @@ typedef void (xmem_pool_put_memory)(void *ptr);
  * @name: name of the pool
  * @get_mem: callback function used to expand pool
  * @put_mem: callback function used to shrink pool
- * @init_size: inital pool size (in bytes)
  * @max_size: maximum pool size (in bytes) - set this as 0 for no limit
  * @grow_size: amount of memory (in bytes) added to pool whenever required
  *
@@ -94,7 +93,6 @@ struct xmem_pool *xmem_pool_create(
     const char *name,
     xmem_pool_get_memory get_mem,
     xmem_pool_put_memory put_mem,
-    unsigned long init_size,
     unsigned long max_size,
     unsigned long grow_size);
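
As a usage sketch (not part of the patch): with the init_size parameter
gone, xmem_pool_create() takes five arguments. The pool name and the
my_pool_get/my_pool_put callbacks below are hypothetical, modelled on
the tlsf_init() change above; the snippet assumes it is built inside
the Xen tree with xen/xmalloc.h and xen/mm.h included.

    /* Hypothetical page-at-a-time callbacks, for illustration only. */
    static void *my_pool_get(unsigned long size)
    {
        ASSERT(size == PAGE_SIZE); /* pool created with grow_size == PAGE_SIZE */
        return alloc_xenheap_page();
    }

    static void my_pool_put(void *page)
    {
        free_xenheap_page(page);
    }

    static struct xmem_pool *example_pool;

    static void example_pool_init(void)
    {
        /*
         * No init_size any more: the pool starts empty, the first
         * xmem_pool_alloc() grows it by grow_size (one page here), and
         * max_size == 0 means no upper bound.
         */
        example_pool = xmem_pool_create("example", my_pool_get,
                                        my_pool_put, 0, PAGE_SIZE);
        BUG_ON(!example_pool);
    }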