From patchwork Mon Jun 11 14:05:48 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10457895
From: Matthew Wilcox
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
    Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
    Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim, Chao Yu,
    linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v13 21/72] page cache: Add and replace pages using the XArray
Date: Mon, 11 Jun 2018 07:05:48 -0700
Message-Id: <20180611140639.17215-22-willy@infradead.org>
In-Reply-To: <20180611140639.17215-1-willy@infradead.org>
References: <20180611140639.17215-1-willy@infradead.org>

From: Matthew Wilcox

Use the XArray APIs to add and replace pages in the page cache.  This
removes two uses of the radix tree preload API and is significantly
shorter code.  It also removes the last user of __radix_tree_create()
outside radix-tree.c itself, so make it static.
Signed-off-by: Matthew Wilcox
---
 include/linux/radix-tree.h |   3 -
 include/linux/swap.h       |   8 ++-
 lib/radix-tree.c           |   6 +-
 mm/filemap.c               | 139 +++++++++++++++----------------------
 4 files changed, 66 insertions(+), 90 deletions(-)

diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index f64beb9ba175..4b6f685309fc 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -231,9 +231,6 @@ static inline int radix_tree_exception(void *arg)
 	return unlikely((unsigned long)arg & RADIX_TREE_ENTRY_MASK);
 }
 
-int __radix_tree_create(struct radix_tree_root *, unsigned long index,
-		unsigned order, struct radix_tree_node **nodep,
-		void __rcu ***slotp);
 int __radix_tree_insert(struct radix_tree_root *, unsigned long index,
 		unsigned order, void *);
 static inline int radix_tree_insert(struct radix_tree_root *root,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index f73eafcaf4e9..1b91e7f7bdeb 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -300,8 +300,12 @@ void *workingset_eviction(struct address_space *mapping, struct page *page);
 bool workingset_refault(void *shadow);
 void workingset_activation(struct page *page);
 
-/* Do not use directly, use workingset_lookup_update */
-void workingset_update_node(struct radix_tree_node *node);
+/* Only track the nodes of mappings with shadow entries */
+void workingset_update_node(struct xa_node *node);
+#define mapping_set_update(xas, mapping) do {				\
+	if (!dax_mapping(mapping) && !shmem_mapping(mapping))		\
+		xas_set_update(xas, workingset_update_node);		\
+} while (0)
 
 /* Returns workingset_update_node() if the mapping has shadow entries. */
 #define workingset_lookup_update(mapping)				\
diff --git a/lib/radix-tree.c b/lib/radix-tree.c
index f7785f7cbd5f..5c8a262f506c 100644
--- a/lib/radix-tree.c
+++ b/lib/radix-tree.c
@@ -740,9 +740,9 @@ static bool delete_node(struct radix_tree_root *root,
  *
  *	Returns -ENOMEM, or 0 for success.
  */
-int __radix_tree_create(struct radix_tree_root *root, unsigned long index,
-		unsigned order, struct radix_tree_node **nodep,
-		void __rcu ***slotp)
+static int __radix_tree_create(struct radix_tree_root *root,
+		unsigned long index, unsigned order,
+		struct radix_tree_node **nodep, void __rcu ***slotp)
 {
 	struct radix_tree_node *node = NULL, *child;
 	void __rcu **slot = (void __rcu **)&root->xa_head;
diff --git a/mm/filemap.c b/mm/filemap.c
index 8de36e14e22f..965ff68e5b8d 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -111,35 +111,6 @@
  *    ->tasklist_lock            (memory_failure, collect_procs_ao)
  */
 
-static int page_cache_tree_insert(struct address_space *mapping,
-				  struct page *page, void **shadowp)
-{
-	struct radix_tree_node *node;
-	void **slot;
-	int error;
-
-	error = __radix_tree_create(&mapping->i_pages, page->index, 0,
-				    &node, &slot);
-	if (error)
-		return error;
-	if (*slot) {
-		void *p;
-
-		p = radix_tree_deref_slot_protected(slot,
-						    &mapping->i_pages.xa_lock);
-		if (!xa_is_value(p))
-			return -EEXIST;
-
-		mapping->nrexceptional--;
-		if (shadowp)
-			*shadowp = p;
-	}
-	__radix_tree_replace(&mapping->i_pages, node, slot, page,
-			     workingset_lookup_update(mapping));
-	mapping->nrpages++;
-	return 0;
-}
-
 static void page_cache_tree_delete(struct address_space *mapping,
 				   struct page *page, void *shadow)
 {
@@ -775,51 +746,44 @@ EXPORT_SYMBOL(file_write_and_wait_range);
  * locked.  This function does not add the new page to the LRU, the
  * caller must do that.
  *
- * The remove + add is atomic.  The only way this function can fail is
- * memory allocation failure.
+ * The remove + add is atomic.  This function cannot fail.
  */
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
-	int error;
+	struct address_space *mapping = old->mapping;
+	void (*freepage)(struct page *) = mapping->a_ops->freepage;
+	pgoff_t offset = old->index;
+	XA_STATE(xas, &mapping->i_pages, offset);
+	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageLocked(old), old);
 	VM_BUG_ON_PAGE(!PageLocked(new), new);
 	VM_BUG_ON_PAGE(new->mapping, new);
 
-	error = radix_tree_preload(gfp_mask & GFP_RECLAIM_MASK);
-	if (!error) {
-		struct address_space *mapping = old->mapping;
-		void (*freepage)(struct page *);
-		unsigned long flags;
-
-		pgoff_t offset = old->index;
-		freepage = mapping->a_ops->freepage;
+	get_page(new);
+	new->mapping = mapping;
+	new->index = offset;
 
-		get_page(new);
-		new->mapping = mapping;
-		new->index = offset;
+	xas_lock_irqsave(&xas, flags);
+	xas_store(&xas, new);
 
-		xa_lock_irqsave(&mapping->i_pages, flags);
-		__delete_from_page_cache(old, NULL);
-		error = page_cache_tree_insert(mapping, new, NULL);
-		BUG_ON(error);
-
-		/*
-		 * hugetlb pages do not participate in page cache accounting.
-		 */
-		if (!PageHuge(new))
-			__inc_node_page_state(new, NR_FILE_PAGES);
-		if (PageSwapBacked(new))
-			__inc_node_page_state(new, NR_SHMEM);
-		xa_unlock_irqrestore(&mapping->i_pages, flags);
-		mem_cgroup_migrate(old, new);
-		radix_tree_preload_end();
-		if (freepage)
-			freepage(old);
-		put_page(old);
-	}
+	old->mapping = NULL;
+	/* hugetlb pages do not participate in page cache accounting. */
+	if (!PageHuge(old))
+		__dec_node_page_state(new, NR_FILE_PAGES);
+	if (!PageHuge(new))
+		__inc_node_page_state(new, NR_FILE_PAGES);
+	if (PageSwapBacked(old))
+		__dec_node_page_state(new, NR_SHMEM);
+	if (PageSwapBacked(new))
+		__inc_node_page_state(new, NR_SHMEM);
+	xas_unlock_irqrestore(&xas, flags);
+	mem_cgroup_migrate(old, new);
+	if (freepage)
+		freepage(old);
+	put_page(old);
 
-	return error;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
@@ -828,12 +792,15 @@ static int __add_to_page_cache_locked(struct page *page,
 				      pgoff_t offset, gfp_t gfp_mask,
 				      void **shadowp)
 {
+	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	mapping_set_update(&xas, mapping);
 
 	if (!huge) {
 		error = mem_cgroup_try_charge(page, current->mm,
@@ -842,39 +809,47 @@ static int __add_to_page_cache_locked(struct page *page,
 			return error;
 	}
 
-	error = radix_tree_maybe_preload(gfp_mask & GFP_RECLAIM_MASK);
-	if (error) {
-		if (!huge)
-			mem_cgroup_cancel_charge(page, memcg, false);
-		return error;
-	}
-
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
 
-	xa_lock_irq(&mapping->i_pages);
-	error = page_cache_tree_insert(mapping, page, shadowp);
-	radix_tree_preload_end();
-	if (unlikely(error))
-		goto err_insert;
+	do {
+		xas_lock_irq(&xas);
+		old = xas_load(&xas);
+		if (old && !xa_is_value(old))
+			xas_set_err(&xas, -EEXIST);
+		xas_store(&xas, page);
+		if (xas_error(&xas))
+			goto unlock;
+
+		if (xa_is_value(old)) {
+			mapping->nrexceptional--;
+			if (shadowp)
+				*shadowp = old;
+		}
+		mapping->nrpages++;
+
+		/* hugetlb pages do not participate in page cache accounting */
+		if (!huge)
+			__inc_node_page_state(page, NR_FILE_PAGES);
+unlock:
+		xas_unlock_irq(&xas);
+	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
+
+	if (xas_error(&xas))
+		goto error;
 
-	/* hugetlb pages do not participate in page cache accounting. */
-	if (!huge)
-		__inc_node_page_state(page, NR_FILE_PAGES);
-	xa_unlock_irq(&mapping->i_pages);
 	if (!huge)
 		mem_cgroup_commit_charge(page, memcg, false, false);
 	trace_mm_filemap_add_to_page_cache(page);
 	return 0;
-err_insert:
+error:
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	xa_unlock_irq(&mapping->i_pages);
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
 	put_page(page);
-	return error;
+	return xas_error(&xas);
 }
 
 /**