From patchwork Wed Jan 17 20:21:32 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10171141
From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
    linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org,
    linux-usb@vger.kernel.org, Bjorn Andersson, Stefano Stabellini,
    iommu@lists.linux-foundation.org, linux-remoteproc@vger.kernel.org,
    linux-s390@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    cgroups@vger.kernel.org, linux-sh@vger.kernel.org, David Howells
Subject: [PATCH v6 68/99] vmalloc: Convert to XArray
Date: Wed, 17 Jan 2018 12:21:32 -0800
Message-Id: <20180117202203.19756-69-willy@infradead.org>
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>
X-Mailing-List: linux-sh@vger.kernel.org

From: Matthew Wilcox

The radix tree of vmap blocks is simpler to express as an XArray.  Saves
a couple of hundred bytes of text and eliminates a user of the radix
tree preload API.
Signed-off-by: Matthew Wilcox
---
 mm/vmalloc.c | 39 +++++++++++++-------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 673942094328..b6c138633592 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -23,7 +23,7 @@
 #include
 #include
 #include
-#include <linux/radix-tree.h>
+#include <linux/xarray.h>
 #include
 #include
 #include
@@ -821,12 +821,11 @@ struct vmap_block {
 static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);
 
 /*
- * Radix tree of vmap blocks, indexed by address, to quickly find a vmap block
+ * XArray of vmap blocks, indexed by address, to quickly find a vmap block
  * in the free path. Could get rid of this if we change the API to return a
  * "cookie" from alloc, to be passed to free. But no big deal yet.
  */
-static DEFINE_SPINLOCK(vmap_block_tree_lock);
-static RADIX_TREE(vmap_block_tree, GFP_ATOMIC);
+static DEFINE_XARRAY(vmap_block_tree);
 
 /*
  * We should probably have a fallback mechanism to allocate virtual memory
@@ -865,8 +864,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	struct vmap_block *vb;
 	struct vmap_area *va;
 	unsigned long vb_idx;
-	int node, err;
-	void *vaddr;
+	int node;
+	void *ret, *vaddr;
 
 	node = numa_node_id();
 
@@ -883,13 +882,6 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 		return ERR_CAST(va);
 	}
 
-	err = radix_tree_preload(gfp_mask);
-	if (unlikely(err)) {
-		kfree(vb);
-		free_vmap_area(va);
-		return ERR_PTR(err);
-	}
-
 	vaddr = vmap_block_vaddr(va->va_start, 0);
 	spin_lock_init(&vb->lock);
 	vb->va = va;
@@ -902,11 +894,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	INIT_LIST_HEAD(&vb->free_list);
 
 	vb_idx = addr_to_vb_idx(va->va_start);
-	spin_lock(&vmap_block_tree_lock);
-	err = radix_tree_insert(&vmap_block_tree, vb_idx, vb);
-	spin_unlock(&vmap_block_tree_lock);
-	BUG_ON(err);
-	radix_tree_preload_end();
+	ret = xa_store(&vmap_block_tree, vb_idx, vb, gfp_mask);
+	if (xa_is_err(ret)) {
+		kfree(vb);
+		free_vmap_area(va);
+		return ERR_PTR(xa_err(ret));
+	}
 
 	vbq = &get_cpu_var(vmap_block_queue);
 	spin_lock(&vbq->lock);
@@ -923,9 +916,7 @@ static void free_vmap_block(struct vmap_block *vb)
 	unsigned long vb_idx;
 
 	vb_idx = addr_to_vb_idx(vb->va->va_start);
-	spin_lock(&vmap_block_tree_lock);
-	tmp = radix_tree_delete(&vmap_block_tree, vb_idx);
-	spin_unlock(&vmap_block_tree_lock);
+	tmp = xa_erase(&vmap_block_tree, vb_idx);
 	BUG_ON(tmp != vb);
 
 	free_vmap_area_noflush(vb->va);
@@ -1031,7 +1022,6 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 static void vb_free(const void *addr, unsigned long size)
 {
 	unsigned long offset;
-	unsigned long vb_idx;
 	unsigned int order;
 	struct vmap_block *vb;
 
@@ -1045,10 +1035,7 @@ static void vb_free(const void *addr, unsigned long size)
 	offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);
 	offset >>= PAGE_SHIFT;
 
-	vb_idx = addr_to_vb_idx((unsigned long)addr);
-	rcu_read_lock();
-	vb = radix_tree_lookup(&vmap_block_tree, vb_idx);
-	rcu_read_unlock();
+	vb = xa_load(&vmap_block_tree, addr_to_vb_idx((unsigned long)addr));
 	BUG_ON(!vb);
 
 	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);