From patchwork Wed Jan 17 20:21:32 2018
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10172819
From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Date: Wed, 17 Jan 2018 12:21:32 -0800
Message-Id: <20180117202203.19756-69-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>
Cc: linux-s390@vger.kernel.org, David Howells, linux-nilfs@vger.kernel.org,
 Matthew Wilcox, linux-sh@vger.kernel.org, intel-gfx@lists.freedesktop.org,
 linux-usb@vger.kernel.org, linux-remoteproc@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org,
 linux-mm@kvack.org, iommu@lists.linux-foundation.org, Stefano Stabellini,
 linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org, Bjorn Andersson,
 linux-btrfs@vger.kernel.org
Subject: [Intel-gfx] [PATCH v6 68/99] vmalloc: Convert to XArray
From: Matthew Wilcox

The radix tree of vmap blocks is simpler to express as an XArray. Saves
a couple of hundred bytes of text and eliminates a user of the radix
tree preload API.

Signed-off-by: Matthew Wilcox
---
 mm/vmalloc.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 673942094328..b6c138633592 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -23,7 +23,7 @@
 #include <linux/list.h>
 #include <linux/notifier.h>
 #include <linux/rbtree.h>
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
 #include <linux/rcupdate.h>
 #include <linux/pfn.h>
 #include <linux/kmemleak.h>
@@ -821,12 +821,11 @@ struct vmap_block {
 static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
 
 /*
- * Radix tree of vmap blocks, indexed by address, to quickly find a vmap block
+ * XArray of vmap blocks, indexed by address, to quickly find a vmap block
  * in the free path. Could get rid of this if we change the API to return a
  * "cookie" from alloc, to be passed to free. But no big deal yet.
  */
-static DEFINE_SPINLOCK(vmap_block_tree_lock);
-static RADIX_TREE(vmap_block_tree, GFP_ATOMIC);
+static DEFINE_XARRAY(vmap_block_tree);
 
 /*
  * We should probably have a fallback mechanism to allocate virtual memory
@@ -865,8 +864,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	struct vmap_block *vb;
 	struct vmap_area *va;
 	unsigned long vb_idx;
-	int node, err;
-	void *vaddr;
+	int node;
+	void *ret, *vaddr;
 
 	node = numa_node_id();
 
@@ -883,13 +882,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 		return ERR_CAST(va);
 	}
 
-	err = radix_tree_preload(gfp_mask);
-	if (unlikely(err)) {
-		kfree(vb);
-		free_vmap_area(va);
-		return ERR_PTR(err);
-	}
-
 	vaddr = vmap_block_vaddr(va->va_start, 0);
 	spin_lock_init(&vb->lock);
 	vb->va = va;
@@ -902,11 +894,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	INIT_LIST_HEAD(&vb->free_list);
 
 	vb_idx = addr_to_vb_idx(va->va_start);
-	spin_lock(&vmap_block_tree_lock);
-	err = radix_tree_insert(&vmap_block_tree, vb_idx, vb);
-	spin_unlock(&vmap_block_tree_lock);
-	BUG_ON(err);
-	radix_tree_preload_end();
+	ret = xa_store(&vmap_block_tree, vb_idx, vb, gfp_mask);
+	if (xa_is_err(ret)) {
+		kfree(vb);
+		free_vmap_area(va);
+		return ERR_PTR(xa_err(ret));
+	}
 
 	vbq = &get_cpu_var(vmap_block_queue);
 	spin_lock(&vbq->lock);
@@ -923,9 +916,7 @@ static void free_vmap_block(struct vmap_block *vb)
 	unsigned long vb_idx;
 
 	vb_idx = addr_to_vb_idx(vb->va->va_start);
-	spin_lock(&vmap_block_tree_lock);
-	tmp = radix_tree_delete(&vmap_block_tree, vb_idx);
-	spin_unlock(&vmap_block_tree_lock);
+	tmp = xa_erase(&vmap_block_tree, vb_idx);
 	BUG_ON(tmp != vb);
 
 	free_vmap_area_noflush(vb->va);
@@ -1031,7 +1022,6 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 static void vb_free(const void *addr, unsigned long size)
 {
 	unsigned long offset;
-	unsigned long vb_idx;
 	unsigned int order;
 	struct vmap_block *vb;
 
@@ -1045,10 +1035,7 @@ static void vb_free(const void *addr, unsigned long size)
 	offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);
 	offset >>= PAGE_SHIFT;
 
-	vb_idx = addr_to_vb_idx((unsigned long)addr);
-	rcu_read_lock();
-	vb = radix_tree_lookup(&vmap_block_tree, vb_idx);
-	rcu_read_unlock();
+	vb = xa_load(&vmap_block_tree, addr_to_vb_idx((unsigned long)addr));
 	BUG_ON(!vb);
 
 	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);
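
---

For readers coming from the radix tree API, the shape of the conversion
may be worth spelling out. Below is a minimal sketch of the three XArray
calls the patch relies on; example_tree and the example_* functions are
hypothetical names for illustration only (not part of the patch), while
DEFINE_XARRAY(), xa_store(), xa_load(), xa_erase(), xa_is_err() and
xa_err() are the real <linux/xarray.h> API:

#include <linux/xarray.h>

/* Hypothetical stand-in for vmap_block_tree above. */
static DEFINE_XARRAY(example_tree);

/*
 * Insertion: no radix_tree_preload()/radix_tree_preload_end()
 * bracketing and no external spinlock.  xa_store() allocates nodes
 * with the caller's gfp mask and takes the XArray's internal lock
 * itself.
 */
static int example_insert(unsigned long index, void *item, gfp_t gfp)
{
	void *old = xa_store(&example_tree, index, item, gfp);

	/* On failure, xa_store() returns an XA_ERROR entry. */
	if (xa_is_err(old))
		return xa_err(old);	/* e.g. -ENOMEM */
	return 0;
}

/* Lookup: xa_load() handles the RCU read lock internally. */
static void *example_lookup(unsigned long index)
{
	return xa_load(&example_tree, index);
}

/* Removal: xa_erase() locks internally and returns the erased entry. */
static void *example_remove(unsigned long index)
{
	return xa_erase(&example_tree, index);
}

This is also why the patch can drop vmap_block_tree_lock entirely: the
XArray embeds its own spinlock, and the lookup path is RCU-safe without
explicit rcu_read_lock()/rcu_read_unlock() in the caller.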