From patchwork Fri Aug 12 18:38:23 2016
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 9277621
From: "Kirill A. Shutemov"
To: "Theodore Ts'o", Andreas Dilger, Jan Kara, Andrew Morton
Cc: Alexander Viro, Hugh Dickins, Andrea Arcangeli, Dave Hansen,
    Vlastimil Babka, Matthew Wilcox, Ross Zwisler,
    linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-block@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv2 40/41] mm, fs, ext4: expand use of page_mapping() and page_to_pgoff()
Date: Fri, 12 Aug 2016 21:38:23 +0300
Message-Id: <1471027104-115213-41-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1471027104-115213-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1471027104-115213-1-git-send-email-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

With huge pages in the page cache we see tail pages in more code paths.
This patch replaces direct access to struct page fields with macros
which can handle tail pages properly.

Signed-off-by: Kirill A. Shutemov
---
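For context: only the head page of a compound page carries a meaningful
->mapping and ->index, so reading those fields directly on a tail page
does not give the file's address_space or page offset. Below is a minimal
sketch of what the two helpers do for tail pages (simplified, and the
sketch_* names are illustrative only; the real page_mapping() also has to
deal with anon and swap-cache pages):

	/* Resolve a tail page to its head before reading ->mapping. */
	static inline struct address_space *sketch_page_mapping(struct page *page)
	{
		return compound_head(page)->mapping;
	}

	/* Head page's index plus the tail's offset inside the compound page. */
	static inline pgoff_t sketch_page_to_pgoff(struct page *page)
	{
		struct page *head = compound_head(page);

		return head->index + (page - head);
	}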
 fs/buffer.c         |  2 +-
 fs/ext4/inode.c     |  4 ++--
 mm/filemap.c        | 26 ++++++++++++++------------
 mm/memory.c         |  4 ++--
 mm/page-writeback.c |  2 +-
 mm/truncate.c       |  5 +++--
 6 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 20898b051044..56323862dad3 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -630,7 +630,7 @@ static void __set_page_dirty(struct page *page, struct address_space *mapping,
 	unsigned long flags;
 	spin_lock_irqsave(&mapping->tree_lock, flags);
-	if (page->mapping) { /* Race with truncate? */
+	if (page_mapping(page)) { /* Race with truncate? */
 		WARN_ON_ONCE(warn && !PageUptodate(page));
 		account_page_dirtied(page, mapping);
 		radix_tree_tag_set(&mapping->page_tree,
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index cd8d03559896..e9bfffbf22ed 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1223,7 +1223,7 @@ retry_journal:
 	}
 	lock_page(page);
-	if (page->mapping != mapping) {
+	if (page_mapping(page) != mapping) {
 		/* The page got truncated from under us */
 		unlock_page(page);
 		put_page(page);
@@ -2962,7 +2962,7 @@ retry_journal:
 	}
 	lock_page(page);
-	if (page->mapping != mapping) {
+	if (page_mapping(page) != mapping) {
 		/* The page got truncated from under us */
 		unlock_page(page);
 		put_page(page);
diff --git a/mm/filemap.c b/mm/filemap.c
index 71c0bfdcab05..1514192086c3 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -369,7 +369,7 @@ static int __filemap_fdatawait_range(struct address_space *mapping,
 			struct page *page = pvec.pages[i];
 			/* until radix tree lookup accepts end_index */
-			if (page->index > end)
+			if (page_to_pgoff(page) > end)
 				continue;
 			page = compound_head(page);
@@ -1307,12 +1307,12 @@ repeat:
 		}
 		/* Has the page been truncated? */
-		if (unlikely(page->mapping != mapping)) {
+		if (unlikely(page_mapping(page) != mapping)) {
 			unlock_page(page);
 			put_page(page);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(page->index != offset, page);
+		VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
 	}
 	if (page && (fgp_flags & FGP_ACCESSED))
@@ -1606,7 +1606,8 @@ repeat:
 		 * otherwise we can get both false positives and false
 		 * negatives, which is just confusing to the caller.
 		 */
-		if (page->mapping == NULL || page_to_pgoff(page) != index) {
+		if (page_mapping(page) == NULL ||
+				page_to_pgoff(page) != index) {
 			put_page(page);
 			break;
 		}
@@ -1907,7 +1908,7 @@ find_page:
 		if (!trylock_page(page))
 			goto page_not_up_to_date;
 		/* Did it get truncated before we got the lock? */
-		if (!page->mapping)
+		if (!page_mapping(page))
 			goto page_not_up_to_date_locked;
 		if (!mapping->a_ops->is_partially_uptodate(page,
 							offset, iter->count))
@@ -1987,7 +1988,7 @@ page_not_up_to_date:
 page_not_up_to_date_locked:
 		/* Did it get truncated before we got the lock? */
-		if (!page->mapping) {
+		if (!page_mapping(page)) {
 			unlock_page(page);
 			put_page(page);
 			continue;
 		}
@@ -2023,7 +2024,7 @@ readpage:
 			if (unlikely(error))
 				goto readpage_error;
 			if (!PageUptodate(page)) {
-				if (page->mapping == NULL) {
+				if (page_mapping(page) == NULL) {
 					/*
 					 * invalidate_mapping_pages got it
 					 */
@@ -2324,12 +2325,12 @@ retry_find:
 	}
 	/* Did it get truncated? */
-	if (unlikely(page->mapping != mapping)) {
+	if (unlikely(page_mapping(page) != mapping)) {
 		unlock_page(page);
 		put_page(page);
 		goto retry_find;
 	}
-	VM_BUG_ON_PAGE(page->index != offset, page);
+	VM_BUG_ON_PAGE(page_to_pgoff(page) != offset, page);
 	/*
 	 * We have a locked page in the page cache, now we need to check
@@ -2505,7 +2506,7 @@ int filemap_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
 	sb_start_pagefault(inode->i_sb);
 	file_update_time(vma->vm_file);
 	lock_page(page);
-	if (page->mapping != inode->i_mapping) {
+	if (page_mapping(page) != inode->i_mapping) {
 		unlock_page(page);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
@@ -2654,7 +2655,7 @@ filler:
 		lock_page(page);
 		/* Case c or d, restart the operation */
-		if (!page->mapping) {
+		if (!page_mapping(page)) {
 			unlock_page(page);
 			put_page(page);
 			goto repeat;
@@ -3110,12 +3111,13 @@ EXPORT_SYMBOL(generic_file_write_iter);
  */
 int try_to_release_page(struct page *page, gfp_t gfp_mask)
 {
-	struct address_space * const mapping = page->mapping;
+	struct address_space * const mapping = page_mapping(page);
 	BUG_ON(!PageLocked(page));
 	if (PageWriteback(page))
 		return 0;
+	page = compound_head(page);
 	if (mapping && mapping->a_ops->releasepage)
 		return mapping->a_ops->releasepage(page, gfp_mask);
 	return try_to_free_buffers(page);
diff --git a/mm/memory.c b/mm/memory.c
index 5b7f0ce44a27..24d012571d32 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2052,7 +2052,7 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
 		return ret;
 	if (unlikely(!(ret & VM_FAULT_LOCKED))) {
 		lock_page(page);
-		if (!page->mapping) {
+		if (!page_mapping(page)) {
 			unlock_page(page);
 			return 0; /* retry */
 		}
@@ -2100,7 +2100,7 @@ static inline int wp_page_reuse(struct fault_env *fe, pte_t orig_pte,
 		dirtied = set_page_dirty(page);
 		VM_BUG_ON_PAGE(PageAnon(page), page);
-		mapping = page->mapping;
+		mapping = page_mapping(page);
 		unlock_page(page);
 		put_page(page);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 6390c9488e29..3bfa158aa784 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2878,7 +2878,7 @@ EXPORT_SYMBOL(mapping_tagged);
  */
 void wait_for_stable_page(struct page *page)
 {
-	if (bdi_cap_stable_pages_required(inode_to_bdi(page->mapping->host)))
+	if (bdi_cap_stable_pages_required(inode_to_bdi(page_mapping(page)->host)))
 		wait_on_page_writeback(page);
 }
 EXPORT_SYMBOL_GPL(wait_for_stable_page);
diff --git a/mm/truncate.c b/mm/truncate.c
index 6a445278aaaf..87b47de58b50 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -627,6 +627,7 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
 {
 	unsigned long flags;
+	page = compound_head(page);
 	if (page->mapping != mapping)
 		return 0;
@@ -655,7 +656,7 @@ static int do_launder_page(struct address_space *mapping, struct page *page)
 {
 	if (!PageDirty(page))
 		return 0;
-	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
+	if (page_mapping(page) != mapping || mapping->a_ops->launder_page == NULL)
 		return 0;
 	return mapping->a_ops->launder_page(page);
 }
@@ -703,7 +704,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		lock_page(page);
 		WARN_ON(page_to_pgoff(page) != index);
-		if (page->mapping != mapping) {
+		if (page_mapping(page) != mapping) {
 			unlock_page(page);
 			continue;
 		}
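
Most of the hunks above convert the same lock-then-recheck pattern that
catches a truncate race. For illustration only (the function below is a
made-up example, not part of this patch), the pattern looks like this
once the tail-page-safe helper is used:

	static int example_recheck_after_lock(struct page *page,
					      struct address_space *mapping)
	{
		lock_page(page);
		/*
		 * The page may have been truncated while we slept on the
		 * page lock; page_mapping() keeps this check correct even
		 * when 'page' is a tail page of a huge page.
		 */
		if (page_mapping(page) != mapping) {
			unlock_page(page);
			return -EAGAIN;	/* caller retries the lookup */
		}
		return 0;
	}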