From patchwork Mon Jun 11 14:05:47 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 10457893
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
    Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin, Ryusuke Konishi,
    linux-nilfs@vger.kernel.org, Jaegeuk Kim, Chao Yu,
    linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v13 20/72] page cache: Convert hole search to XArray
Date: Mon, 11 Jun 2018 07:05:47 -0700
Message-Id: <20180611140639.17215-21-willy@infradead.org>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180611140639.17215-1-willy@infradead.org>
References: <20180611140639.17215-1-willy@infradead.org>

From: Matthew Wilcox

The page cache offers the ability to search for a miss in the previous or
next N locations.  Rather than teach the XArray about the page cache's
definition of a miss, use xas_prev() and xas_next() to search the page
array.  This should be more efficient as it does not have to start the
lookup from the top for each index.
Signed-off-by: Matthew Wilcox
---
 fs/nfs/blocklayout/blocklayout.c |   2 +-
 include/linux/pagemap.h          |   4 +-
 mm/filemap.c                     | 110 ++++++++++++++-----------------
 mm/readahead.c                   |   4 +-
 4 files changed, 55 insertions(+), 65 deletions(-)

diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
index 7cb5c38c19e4..961901685007 100644
--- a/fs/nfs/blocklayout/blocklayout.c
+++ b/fs/nfs/blocklayout/blocklayout.c
@@ -895,7 +895,7 @@ static u64 pnfs_num_cont_bytes(struct inode *inode, pgoff_t idx)
 
 	end = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
 	if (end != inode->i_mapping->nrpages) {
 		rcu_read_lock();
-		end = page_cache_next_hole(mapping, idx + 1, ULONG_MAX);
+		end = page_cache_next_gap(mapping, idx + 1, ULONG_MAX);
 		rcu_read_unlock();
 	}
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b1bd2186e6d2..2f5d2d3ebaac 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -241,9 +241,9 @@ static inline gfp_t readahead_gfp_mask(struct address_space *x)
 
 typedef int filler_t(void *, struct page *);
 
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan);
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan);
 
 #define FGP_ACCESSED		0x00000001
diff --git a/mm/filemap.c b/mm/filemap.c
index 4de14e75c4ec..8de36e14e22f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1326,86 +1326,76 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 }
 
 /**
- * page_cache_next_hole - find the next hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search the set [index, min(index+max_scan-1, MAX_INDEX)] for the
- * lowest indexed hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'return - index >=
- * max_scan' will be true). In rare cases of index wrap-around, 0 will
- * be returned.
- *
- * page_cache_next_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 5, then subsequently a hole is created at
- * index 10, page_cache_next_hole covering both indexes may return 10
- * if called under rcu_read_lock.
+ * page_cache_next_gap() - Find the next gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [index, min(index + max_scan - 1, ULONG_MAX)] for the
+ * gap with the lowest index.
+ *
+ * This function may be called under the rcu_read_lock. However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 5, then subsequently a gap is
+ * created at index 10, page_cache_next_gap covering both indices may
+ * return 10 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'return - index >= max_scan' will be true).
+ * In the rare case of index wrap-around, 0 will be returned.
 */
-pgoff_t page_cache_next_hole(struct address_space *mapping,
+pgoff_t page_cache_next_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
 {
-	unsigned long i;
+	XA_STATE(xas, &mapping->i_pages, index);
 
-	for (i = 0; i < max_scan; i++) {
-		struct page *page;
-
-		page = radix_tree_lookup(&mapping->i_pages, index);
-		if (!page || xa_is_value(page))
+	while (max_scan--) {
+		void *entry = xas_next(&xas);
+		if (!entry || xa_is_value(entry))
 			break;
-		index++;
-		if (index == 0)
+		if (xas.xa_index == 0)
 			break;
 	}
 
-	return index;
+	return xas.xa_index;
 }
-EXPORT_SYMBOL(page_cache_next_hole);
+EXPORT_SYMBOL(page_cache_next_gap);
 
 /**
- * page_cache_prev_hole - find the prev hole (not-present entry)
- * @mapping: mapping
- * @index: index
- * @max_scan: maximum range to search
- *
- * Search backwards in the range [max(index-max_scan+1, 0), index] for
- * the first hole.
- *
- * Returns: the index of the hole if found, otherwise returns an index
- * outside of the set specified (in which case 'index - return >=
- * max_scan' will be true). In rare cases of wrap-around, ULONG_MAX
- * will be returned.
- *
- * page_cache_prev_hole may be called under rcu_read_lock. However,
- * like radix_tree_gang_lookup, this will not atomically search a
- * snapshot of the tree at a single point in time. For example, if a
- * hole is created at index 10, then subsequently a hole is created at
- * index 5, page_cache_prev_hole covering both indexes may return 5 if
- * called under rcu_read_lock.
+ * page_cache_prev_gap() - Find the previous gap in the page cache.
+ * @mapping: Mapping.
+ * @index: Index.
+ * @max_scan: Maximum range to search.
+ *
+ * Search the range [max(index - max_scan + 1, 0), index] for the
+ * gap with the highest index.
+ *
+ * This function may be called under the rcu_read_lock. However, this will
+ * not atomically search a snapshot of the cache at a single point in time.
+ * For example, if a gap is created at index 10, then subsequently a gap is
+ * created at index 5, page_cache_prev_gap() covering both indices may
+ * return 5 if called under the rcu_read_lock.
+ *
+ * Return: The index of the gap if found, otherwise an index outside the
+ * range specified (in which case 'index - return >= max_scan' will be true).
+ * In the rare case of wrap-around, ULONG_MAX will be returned.
 */
-pgoff_t page_cache_prev_hole(struct address_space *mapping,
+pgoff_t page_cache_prev_gap(struct address_space *mapping,
 			     pgoff_t index, unsigned long max_scan)
 {
-	unsigned long i;
-
-	for (i = 0; i < max_scan; i++) {
-		struct page *page;
+	XA_STATE(xas, &mapping->i_pages, index);
 
-		page = radix_tree_lookup(&mapping->i_pages, index);
-		if (!page || xa_is_value(page))
+	while (max_scan--) {
+		void *entry = xas_prev(&xas);
+		if (!entry || xa_is_value(entry))
 			break;
-		index--;
-		if (index == ULONG_MAX)
+		if (xas.xa_index == ULONG_MAX)
 			break;
 	}
 
-	return index;
+	return xas.xa_index;
 }
-EXPORT_SYMBOL(page_cache_prev_hole);
+EXPORT_SYMBOL(page_cache_prev_gap);
 
 /**
  * find_get_entry - find and get a page cache entry
diff --git a/mm/readahead.c b/mm/readahead.c
index 3757aa549709..59998ca31f2a 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -334,7 +334,7 @@ static pgoff_t count_history_pages(struct address_space *mapping,
 	pgoff_t head;
 
 	rcu_read_lock();
-	head = page_cache_prev_hole(mapping, offset - 1, max);
+	head = page_cache_prev_gap(mapping, offset - 1, max);
 	rcu_read_unlock();
 
 	return offset - 1 - head;
@@ -422,7 +422,7 @@ ondemand_readahead(struct address_space *mapping,
 	pgoff_t start;
 
 	rcu_read_lock();
-	start = page_cache_next_hole(mapping, offset + 1, max_pages);
+	start = page_cache_next_gap(mapping, offset + 1, max_pages);
 	rcu_read_unlock();
 
 	if (!start || start - offset > max_pages)