From patchwork Mon Jun 11 14:06:26 2018
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 10458015
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
	Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin,
	Ryusuke Konishi, linux-nilfs@vger.kernel.org, Jaegeuk Kim,
	Chao Yu, linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v13 59/72] nilfs2: Convert to XArray
Date: Mon, 11 Jun 2018 07:06:26 -0700
Message-Id: <20180611140639.17215-60-willy@infradead.org>
In-Reply-To: <20180611140639.17215-1-willy@infradead.org>
References: <20180611140639.17215-1-willy@infradead.org>

From: Matthew Wilcox <willy@infradead.org>

This is close to a 1:1 replacement of radix tree APIs with their XArray
equivalents.  It would be possible to optimise nilfs_copy_back_pages(),
but that doesn't seem to be in the performance path.  Also, I think it
has a pre-existing bug, and I've added a note to that effect in the
source code.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
---
 fs/nilfs2/btnode.c | 26 +++++++++-----------------
 fs/nilfs2/page.c   | 29 +++++++++++++----------------
 2 files changed, 22 insertions(+), 33 deletions(-)

diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
index dec98cab729d..e9fcffb3a15c 100644
--- a/fs/nilfs2/btnode.c
+++ b/fs/nilfs2/btnode.c
@@ -177,24 +177,18 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 	ctxt->newbh = NULL;
 
 	if (inode->i_blkbits == PAGE_SHIFT) {
-		lock_page(obh->b_page);
-		/*
-		 * We cannot call radix_tree_preload for the kernels older
-		 * than 2.6.23, because it is not exported for modules.
-		 */
+		struct page *opage = obh->b_page;
+		lock_page(opage);
 retry:
-		err = radix_tree_preload(GFP_NOFS & ~__GFP_HIGHMEM);
-		if (err)
-			goto failed_unlock;
 		/* BUG_ON(oldkey != obh->b_page->index); */
-		if (unlikely(oldkey != obh->b_page->index))
-			NILFS_PAGE_BUG(obh->b_page,
+		if (unlikely(oldkey != opage->index))
+			NILFS_PAGE_BUG(opage,
 				       "invalid oldkey %lld (newkey=%lld)",
 				       (unsigned long long)oldkey,
 				       (unsigned long long)newkey);
 
 		xa_lock_irq(&btnc->i_pages);
-		err = radix_tree_insert(&btnc->i_pages, newkey, obh->b_page);
+		err = __xa_insert(&btnc->i_pages, newkey, opage, GFP_NOFS);
 		xa_unlock_irq(&btnc->i_pages);
 		/*
 		 * Note: page->index will not change to newkey until
@@ -202,7 +196,6 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 		 * To protect the page in intermediate state, the page lock
 		 * is held.
 		 */
-		radix_tree_preload_end();
 		if (!err)
 			return 0;
 		else if (err != -EEXIST)
@@ -212,7 +205,7 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 		if (!err)
 			goto retry;
 		/* fallback to copy mode */
-		unlock_page(obh->b_page);
+		unlock_page(opage);
 	}
 
 	nbh = nilfs_btnode_create_block(btnc, newkey);
@@ -252,9 +245,8 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
 		mark_buffer_dirty(obh);
 
 		xa_lock_irq(&btnc->i_pages);
-		radix_tree_delete(&btnc->i_pages, oldkey);
-		radix_tree_tag_set(&btnc->i_pages, newkey,
-				   PAGECACHE_TAG_DIRTY);
+		__xa_erase(&btnc->i_pages, oldkey);
+		__xa_set_tag(&btnc->i_pages, newkey, PAGECACHE_TAG_DIRTY);
 		xa_unlock_irq(&btnc->i_pages);
 
 		opage->index = obh->b_blocknr = newkey;
@@ -284,7 +276,7 @@ void nilfs_btnode_abort_change_key(struct address_space *btnc,
 	if (nbh == NULL) {	/* blocksize == pagesize */
 		xa_lock_irq(&btnc->i_pages);
-		radix_tree_delete(&btnc->i_pages, newkey);
+		__xa_erase(&btnc->i_pages, newkey);
 		xa_unlock_irq(&btnc->i_pages);
 		unlock_page(ctxt->bh->b_page);
 	} else

diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 4cb850a6f1c2..8384473b98b8 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -298,7 +298,7 @@ int nilfs_copy_dirty_pages(struct address_space *dmap,
  * @dmap: destination page cache
  * @smap: source page cache
  *
- * No pages must no be added to the cache during this process.
+ * No pages must be added to the cache during this process.
  * This must be ensured by the caller.
 */
void nilfs_copy_back_pages(struct address_space *dmap,
@@ -307,7 +307,6 @@ void nilfs_copy_back_pages(struct address_space *dmap,
 	struct pagevec pvec;
 	unsigned int i, n;
 	pgoff_t index = 0;
-	int err;
 
 	pagevec_init(&pvec);
 repeat:
@@ -322,35 +321,34 @@ void nilfs_copy_back_pages(struct address_space *dmap,
 		lock_page(page);
 		dpage = find_lock_page(dmap, offset);
 		if (dpage) {
-			/* override existing page on the destination cache */
+			/* overwrite existing page in the destination cache */
 			WARN_ON(PageDirty(dpage));
 			nilfs_copy_page(dpage, page, 0);
 			unlock_page(dpage);
 			put_page(dpage);
+			/* Do we not need to remove page from smap here? */
 		} else {
-			struct page *page2;
+			struct page *p;
 
 			/* move the page to the destination cache */
 			xa_lock_irq(&smap->i_pages);
-			page2 = radix_tree_delete(&smap->i_pages, offset);
-			WARN_ON(page2 != page);
-
+			p = __xa_erase(&smap->i_pages, offset);
+			WARN_ON(page != p);
 			smap->nrpages--;
 			xa_unlock_irq(&smap->i_pages);
 
 			xa_lock_irq(&dmap->i_pages);
-			err = radix_tree_insert(&dmap->i_pages, offset, page);
-			if (unlikely(err < 0)) {
-				WARN_ON(err == -EEXIST);
+			p = __xa_store(&dmap->i_pages, offset, page, GFP_NOFS);
+			if (unlikely(p)) {
+				/* Probably -ENOMEM */
 				page->mapping = NULL;
-				put_page(page); /* for cache */
+				put_page(page);
 			} else {
 				page->mapping = dmap;
 				dmap->nrpages++;
 				if (PageDirty(page))
-					radix_tree_tag_set(&dmap->i_pages,
-							   offset,
-							   PAGECACHE_TAG_DIRTY);
+					__xa_set_tag(&dmap->i_pages, offset,
+							PAGECACHE_TAG_DIRTY);
 			}
 			xa_unlock_irq(&dmap->i_pages);
 		}
@@ -476,8 +474,7 @@ int __nilfs_clear_page_dirty(struct page *page)
 	if (mapping) {
 		xa_lock_irq(&mapping->i_pages);
 		if (test_bit(PG_dirty, &page->flags)) {
-			radix_tree_tag_clear(&mapping->i_pages,
-					     page_index(page),
+			__xa_clear_tag(&mapping->i_pages, page_index(page),
 				       PAGECACHE_TAG_DIRTY);
 			xa_unlock_irq(&mapping->i_pages);
 			return clear_page_dirty_for_io(page);