From patchwork Tue Feb 11 01:03:38 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)" <willy@infradead.org>
X-Patchwork-Id: 11374563
From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-btrfs@vger.kernel.org,
 linux-erofs@lists.ozlabs.org, linux-ext4@vger.kernel.org,
 linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
 ocfs2-devel@oss.oracle.com, linux-xfs@vger.kernel.org
Subject: [PATCH v5 03/13] mm: Put readahead pages in cache earlier
Date: Mon, 10 Feb 2020 17:03:38 -0800
Message-Id: <20200211010348.6872-4-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200211010348.6872-1-willy@infradead.org>
References: <20200211010348.6872-1-willy@infradead.org>
List-ID: <linux-btrfs.vger.kernel.org>
X-Mailing-List: linux-btrfs@vger.kernel.org

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

At allocation time, put the pages in the cache unless we're using
->readpages.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/readahead.c | 66 ++++++++++++++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 24 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index fc77d13af556..96c6ca68a174 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -114,10 +114,10 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 EXPORT_SYMBOL(read_cache_pages);
 
 static void read_pages(struct address_space *mapping, struct file *filp,
-		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
+		struct list_head *pages, pgoff_t start,
+		unsigned int nr_pages)
 {
 	struct blk_plug plug;
-	unsigned page_idx;
 
 	blk_start_plug(&plug);
 
@@ -125,18 +125,17 @@ static void read_pages(struct address_space *mapping, struct file *filp,
 	if (mapping->a_ops->readpages) {
 		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-		goto out;
-	}
+	} else {
+		struct page *page;
+		unsigned long index;
 
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = lru_to_page(pages);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
+		xa_for_each_range(&mapping->i_pages, index, page, start,
+				start + nr_pages - 1) {
 			mapping->a_ops->readpage(filp, page);
-		put_page(page);
+			put_page(page);
+		}
 	}
 
-out:
 	blk_finish_plug(&plug);
 }
 
@@ -149,17 +148,18 @@ static void read_pages(struct address_space *mapping, struct file *filp,
  * Returns the number of pages requested, or the maximum amount of I/O allowed.
  */
 unsigned long __do_page_cache_readahead(struct address_space *mapping,
-		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
+		struct file *filp, pgoff_t start, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
 	struct inode *inode = mapping->host;
-	struct page *page;
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
+	pgoff_t page_offset = start;
 	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	bool use_list = mapping->a_ops->readpages;
 
 	if (isize == 0)
 		goto out;
@@ -170,7 +170,7 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	 * Preallocate as many pages as we will need.
 	 */
 	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
-		pgoff_t page_offset = offset + page_idx;
+		struct page *page;
 
 		if (page_offset > end_index)
 			break;
@@ -178,25 +178,43 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		page = xa_load(&mapping->i_pages, page_offset);
 		if (page && !xa_is_value(page)) {
 			/*
-			 * Page already present? Kick off the current batch of
-			 * contiguous pages before continuing with the next
-			 * batch.
+			 * Page already present? Kick off the current batch
+			 * of contiguous pages before continuing with the
+			 * next batch.
+			 * It's possible this page is the page we should
+			 * be marking with PageReadahead. However, we
+			 * don't have a stable ref to this page so it might
+			 * be reallocated to another user before we can set
+			 * the bit. There's probably another page in the
+			 * cache marked with PageReadahead from the other
+			 * process which accessed this file.
			 */
-			if (nr_pages)
-				read_pages(mapping, filp, &page_pool, nr_pages,
-						gfp_mask);
-			nr_pages = 0;
-			continue;
+			goto skip;
 		}
 
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
 			break;
-		page->index = page_offset;
-		list_add(&page->lru, &page_pool);
+		if (use_list) {
+			page->index = page_offset;
+			list_add(&page->lru, &page_pool);
+		} else if (add_to_page_cache_lru(page, mapping, page_offset,
+					gfp_mask) < 0) {
+			put_page(page);
+			goto skip;
+		}
+
 		if (page_idx == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
 		nr_pages++;
+		page_offset++;
+		continue;
+skip:
+		if (nr_pages)
+			read_pages(mapping, filp, &page_pool, start, nr_pages);
+		nr_pages = 0;
+		page_offset++;
+		start = page_offset;
 	}
 
 	/*
@@ -205,7 +223,7 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	 * will then handle the error.
 	 */
 	if (nr_pages)
-		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);
+		read_pages(mapping, filp, &page_pool, start, nr_pages);
 	BUG_ON(!list_empty(&page_pool));
 out:
 	return nr_pages;
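
[Note for readers, not part of the patch: the reworked loop batches runs
of contiguous absent pages and submits each run, via the 'skip' label,
as soon as it hits an already-cached page or an allocation failure.
Below is a minimal standalone userspace C sketch of that batching
pattern only; cached[], flush_batch() and the sizes are hypothetical
stand-ins for the page cache, read_pages() and the readahead window.]

#include <stdbool.h>
#include <stdio.h>

#define NR_TO_READ 16

static bool cached[NR_TO_READ];	/* true == page already in the cache */

/* Stand-in for read_pages(): submit one contiguous batch of reads. */
static void flush_batch(unsigned long start, unsigned long nr)
{
	if (nr)
		printf("read pages %lu..%lu\n", start, start + nr - 1);
}

int main(void)
{
	unsigned long start = 0, nr_pages = 0, idx;

	/* Pretend another process already cached pages 3, 4 and 9. */
	cached[3] = cached[4] = cached[9] = true;

	for (idx = 0; idx < NR_TO_READ; idx++) {
		if (cached[idx]) {
			/* Present page ends the run, like 'goto skip'. */
			flush_batch(start, nr_pages);
			nr_pages = 0;
			start = idx + 1;
			continue;
		}
		nr_pages++;	/* absent page joins the current batch */
	}
	/* Submit the trailing batch, as the code after the loop does. */
	flush_batch(start, nr_pages);
	return 0;
}

This prints "read pages 0..2", "read pages 5..8" and "read pages
10..15": three contiguous batches, matching the (start, nr_pages)
pairs that read_pages() now receives.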