From patchwork Wed Jan 17 20:21:09 2018
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10172785
From: Matthew Wilcox
To: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org, David Howells, linux-nilfs@vger.kernel.org, Matthew Wilcox, linux-sh@vger.kernel.org, intel-gfx@lists.freedesktop.org, linux-usb@vger.kernel.org, linux-remoteproc@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-xfs@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org, Stefano Stabellini, linux-fsdevel@vger.kernel.org, cgroups@vger.kernel.org, Bjorn Andersson, linux-btrfs@vger.kernel.org
Date: Wed, 17 Jan 2018 12:21:09 -0800
Message-Id: <20180117202203.19756-46-willy@infradead.org>
In-Reply-To: <20180117202203.19756-1-willy@infradead.org>
References: <20180117202203.19756-1-willy@infradead.org>
Subject: [Intel-gfx] [PATCH v6 45/99] shmem: Convert shmem_wait_for_pins to XArray
From: Matthew Wilcox

As with shmem_tag_pins(), hold the lock around the entire loop instead
of acquiring & dropping it for each entry we're going to untag.

Signed-off-by: Matthew Wilcox
---
 mm/shmem.c | 59 ++++++++++++++++++++++++-----------------------------------
 1 file changed, 24 insertions(+), 35 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 2f41c7ceea18..e4a2eb1336be 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2636,9 +2636,7 @@ static void shmem_tag_pins(struct address_space *mapping)
  */
 static int shmem_wait_for_pins(struct address_space *mapping)
 {
-	struct radix_tree_iter iter;
-	void **slot;
-	pgoff_t start;
+	XA_STATE(xas, &mapping->pages, 0);
 	struct page *page;
 	int error, scan;
 
@@ -2646,7 +2644,9 @@ static int shmem_wait_for_pins(struct address_space *mapping)
 
 	error = 0;
 	for (scan = 0; scan <= LAST_SCAN; scan++) {
-		if (!radix_tree_tagged(&mapping->pages, SHMEM_TAG_PINNED))
+		unsigned int tagged = 0;
+
+		if (!xas_tagged(&xas, SHMEM_TAG_PINNED))
 			break;
 
 		if (!scan)
@@ -2654,45 +2654,34 @@ static int shmem_wait_for_pins(struct address_space *mapping)
 		else if (schedule_timeout_killable((HZ << scan) / 200))
 			scan = LAST_SCAN;
 
-		start = 0;
-		rcu_read_lock();
-		radix_tree_for_each_tagged(slot, &mapping->pages, &iter,
-					   start, SHMEM_TAG_PINNED) {
-
-			page = radix_tree_deref_slot(slot);
-			if (radix_tree_exception(page)) {
-				if (radix_tree_deref_retry(page)) {
-					slot = radix_tree_iter_retry(&iter);
-					continue;
-				}
-
-				page = NULL;
-			}
-
-			if (page &&
-			    page_count(page) - page_mapcount(page) != 1) {
-				if (scan < LAST_SCAN)
-					goto continue_resched;
-
+		xas_set(&xas, 0);
+		xas_lock_irq(&xas);
+		xas_for_each_tag(&xas, page, ULONG_MAX, SHMEM_TAG_PINNED) {
+			bool clear = true;
+			if (xa_is_value(page))
+				continue;
+			if (page_count(page) - page_mapcount(page) != 1) {
 				/*
 				 * On the last scan, we clean up all those tags
 				 * we inserted; but make a note that we still
 				 * found pages pinned.
 				 */
-				error = -EBUSY;
+				if (scan == LAST_SCAN)
+					error = -EBUSY;
+				else
+					clear = false;
 			}
+			if (clear)
+				xas_clear_tag(&xas, SHMEM_TAG_PINNED);
+			if (++tagged % XA_CHECK_SCHED)
+				continue;
 
-			xa_lock_irq(&mapping->pages);
-			radix_tree_tag_clear(&mapping->pages,
-					     iter.index, SHMEM_TAG_PINNED);
-			xa_unlock_irq(&mapping->pages);
-continue_resched:
-			if (need_resched()) {
-				slot = radix_tree_iter_resume(slot, &iter);
-				cond_resched_rcu();
-			}
+			xas_pause(&xas);
+			xas_unlock_irq(&xas);
+			cond_resched();
+			xas_lock_irq(&xas);
 		}
-		rcu_read_unlock();
+		xas_unlock_irq(&xas);
 	}
 
 	return error;
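
For reference, the locking pattern the commit message describes (take the
xa_lock once around the whole tagged iteration, dropping it only
periodically to reschedule) boils down to the sketch below. It reuses the
API names exactly as they appear in this series (XA_STATE, xas_for_each_tag(),
xas_clear_tag(), xas_pause(), XA_CHECK_SCHED, mapping->pages, SHMEM_TAG_PINNED);
the clear_pinned() helper itself is purely illustrative and not part of the
patch:

	/*
	 * Sketch only: assumes the XArray API of this patch series and
	 * that SHMEM_TAG_PINNED / XA_CHECK_SCHED are visible here (they
	 * are local to mm/shmem.c in the real code).
	 */
	#include <linux/fs.h>
	#include <linux/sched.h>
	#include <linux/xarray.h>

	static void clear_pinned(struct address_space *mapping)
	{
		XA_STATE(xas, &mapping->pages, 0);
		struct page *page;
		unsigned int tagged = 0;

		/* Take the lock once, outside the loop... */
		xas_lock_irq(&xas);
		xas_for_each_tag(&xas, page, ULONG_MAX, SHMEM_TAG_PINNED) {
			xas_clear_tag(&xas, SHMEM_TAG_PINNED);
			if (++tagged % XA_CHECK_SCHED)
				continue;

			/*
			 * ...and drop it only every XA_CHECK_SCHED entries,
			 * pausing the iteration so it can resume safely
			 * after cond_resched().
			 */
			xas_pause(&xas);
			xas_unlock_irq(&xas);
			cond_resched();
			xas_lock_irq(&xas);
		}
		xas_unlock_irq(&xas);
	}

Compared with the old radix-tree loop, which re-acquired xa_lock_irq() for
every entry it untagged, the lock is held across the walk and only released
at the XA_CHECK_SCHED rescheduling points.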