From patchwork Mon Nov 16 10:18:17 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mauro Carvalho Chehab
X-Patchwork-Id: 11907941
From: Mauro Carvalho Chehab
To: Mike Rapoport
Cc: Mauro Carvalho Chehab, "Jonathan Corbet", "Linux Doc Mailing List",
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Mike Rapoport
Subject: [PATCH v4 21/27] memblock: fix kernel-doc markups
Date: Mon, 16 Nov 2020 11:18:17 +0100
Message-Id: <261dbe3eb58c66ba6bc289593e70a4399d111448.1605521731.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.28.0

Some identifiers have different names between their prototypes and the
kernel-doc markup.

Acked-by: Mike Rapoport
Signed-off-by: Mauro Carvalho Chehab
---
 include/linux/memblock.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index ef131255cedc..95fe3cb71c54 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -255,61 +255,61 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 /**
  * for_each_mem_pfn_range - early memory pfn range iterator
  * @i: an integer used as loop variable
  * @nid: node selector, %MAX_NUMNODES for all nodes
  * @p_start: ptr to ulong for start pfn of the range, can be %NULL
  * @p_end: ptr to ulong for end pfn of the range, can be %NULL
  * @p_nid: ptr to int for nid of the range, can be %NULL
  *
  * Walks over configured memory ranges.
  */
 #define for_each_mem_pfn_range(i, nid, p_start, p_end, p_nid)		\
	for (i = -1, __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid); \
	     i >= 0; __next_mem_pfn_range(&i, nid, p_start, p_end, p_nid))

 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 void __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
				   unsigned long *out_spfn,
				   unsigned long *out_epfn);
 /**
- * for_each_free_mem_range_in_zone - iterate through zone specific free
+ * for_each_free_mem_pfn_range_in_zone - iterate through zone specific free
  *	memblock areas
  * @i: u64 used as loop variable
  * @zone: zone in which all of the memory blocks reside
  * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
  * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
  *
  * Walks over free (memory && !reserved) areas of memblock in a specific
  * zone. Available once memblock and an empty zone is initialized. The main
  * assumption is that the zone start, end, and pgdat have been associated.
  * This way we can use the zone to determine NUMA node, and if a given part
  * of the memblock is valid for the zone.
  */
 #define for_each_free_mem_pfn_range_in_zone(i, zone, p_start, p_end)	\
	for (i = 0,							\
	     __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end);	\
	     i != U64_MAX;						\
	     __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end))

 /**
- * for_each_free_mem_range_in_zone_from - iterate through zone specific
+ * for_each_free_mem_pfn_range_in_zone_from - iterate through zone specific
  *	free memblock areas from a given point
  * @i: u64 used as loop variable
  * @zone: zone in which all of the memory blocks reside
  * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
  * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
  *
  * Walks over free (memory && !reserved) areas of memblock in a specific
  * zone, continuing from current position. Available as soon as memblock is
  * initialized.
  */
 #define for_each_free_mem_pfn_range_in_zone_from(i, zone, p_start, p_end) \
	for (; i != U64_MAX;						   \
	     __next_mem_pfn_range_in_zone(&i, zone, p_start, p_end))

 int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask);

 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */

 /**
  * for_each_free_mem_range - iterate through free memblock areas
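
As a usage illustration (not part of the patch), a caller walking the free
PFN ranges of a zone with the renamed iterator would look roughly like the
sketch below; the surrounding function and variable names are made up for
this example, and the iterator is only available under
CONFIG_DEFERRED_STRUCT_PAGE_INIT during early init:

/* Hypothetical sketch only; not taken from the patch. */
static void walk_zone_free_ranges(struct zone *zone)
{
	unsigned long spfn, epfn;
	u64 i;

	for_each_free_mem_pfn_range_in_zone(i, zone, &spfn, &epfn) {
		/* [spfn, epfn) is a free (memory && !reserved) PFN range */
		pr_info("free pfn range: %lu-%lu\n", spfn, epfn);
	}
}
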
From patchwork Mon Nov 16 10:18:21 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mauro Carvalho Chehab
X-Patchwork-Id: 11907943
From: Mauro Carvalho Chehab
To: Andrew Morton
Cc: Mauro Carvalho Chehab, "Jonathan Corbet", "Linux Doc Mailing List",
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, Matthew Wilcox
Subject: [PATCH v4 25/27] mm: fix kernel-doc markups
Date: Mon, 16 Nov 2020 11:18:21 +0100
Message-Id: <80e85dddc92d333bc2159ee8a2294921612e8745.1605521731.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.28.0

Kernel-doc markups should use this format:

	identifier - description

(An illustrative sketch of this layout follows the list below.)

Fix some issues in mm files:

1) The definition for get_user_pages_locked() doesn't follow it. Also,
   kernel-doc expects a short description in the header, followed by a
   longer one after the parameters. Fix it.

2) Kernel-doc requires that a kernel-doc markup be immediately below the
   function prototype, as otherwise it will rename it. So, move the
   get_pfnblock_flags_mask() description to the right place.

3) Make invalidate_mapping_pagevec() also follow the expected kernel-doc
   format.
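
For reference, the layout kernel-doc expects looks roughly like the sketch
below; foo_do_something() and its parameters are invented purely to
illustrate the format and are not part of the diff:

/**
 * foo_do_something - short one-line description of the identifier
 * @arg1: description of the first parameter
 * @arg2: description of the second parameter
 *
 * The longer description comes here, after the parameters, and may span
 * multiple lines.
 *
 * Return: description of the return value.
 */
int foo_do_something(struct foo *arg1, unsigned long arg2);
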
While here, fix a few minor English syntax issues, as suggested by
Matthew:
	will used -> will be used
	similar with -> similar to

Suggested-by: Matthew Wilcox  # English fixes
Signed-off-by: Mauro Carvalho Chehab
---
 mm/gup.c        | 24 +++++++++++++-----------
 mm/page_alloc.c | 16 ++++++++--------
 mm/truncate.c   | 10 ++++++++--
 3 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 49c4eabca271..f3751bf28326 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1924,66 +1924,68 @@ static long __get_user_pages_remote(struct mm_struct *mm,
  * Or NULL if the caller does not require them.
  *
  * This is the same as get_user_pages_remote(), just with a less-flexible
  * calling convention where we assume that the mm being operated on belongs to
  * the current task, and doesn't allow passing of a locked parameter.  We also
  * obviously don't pass FOLL_REMOTE in here.
  */
 long get_user_pages(unsigned long start, unsigned long nr_pages,
		    unsigned int gup_flags, struct page **pages,
		    struct vm_area_struct **vmas)
 {
	if (!is_valid_gup_flags(gup_flags))
		return -EINVAL;

	return __gup_longterm_locked(current->mm, start, nr_pages,
				     pages, vmas, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages);

 /**
- * get_user_pages_locked() is suitable to replace the form:
+ * get_user_pages_locked() - variant of get_user_pages()
+ *
+ * @start: starting user address
+ * @nr_pages: number of pages from start to pin
+ * @gup_flags: flags modifying lookup behaviour
+ * @pages: array that receives pointers to the pages pinned.
+ *         Should be at least nr_pages long. Or NULL, if caller
+ *         only intends to ensure the pages are faulted in.
+ * @locked: pointer to lock flag indicating whether lock is held and
+ *          subsequently whether VM_FAULT_RETRY functionality can be
+ *          utilised. Lock must initially be held.
+ *
+ * It is suitable to replace the form:
  *
  *      mmap_read_lock(mm);
  *      do_something()
  *      get_user_pages(mm, ..., pages, NULL);
  *      mmap_read_unlock(mm);
  *
  *  to:
  *
  *      int locked = 1;
  *      mmap_read_lock(mm);
  *      do_something()
  *      get_user_pages_locked(mm, ..., pages, &locked);
  *      if (locked)
  *          mmap_read_unlock(mm);
  *
- * @start: starting user address
- * @nr_pages: number of pages from start to pin
- * @gup_flags: flags modifying lookup behaviour
- * @pages: array that receives pointers to the pages pinned.
- *         Should be at least nr_pages long. Or NULL, if caller
- *         only intends to ensure the pages are faulted in.
- * @locked: pointer to lock flag indicating whether lock is held and
- *          subsequently whether VM_FAULT_RETRY functionality can be
- *          utilised. Lock must initially be held.
- *
  * We can leverage the VM_FAULT_RETRY functionality in the page fault
  * paths better by using either get_user_pages_locked() or
  * get_user_pages_unlocked().
  *
  */
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
			   unsigned int gup_flags, struct page **pages,
			   int *locked)
 {
	/*
	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
	 * vmas.  As there are no users of this flag in this call we simply
	 * disallow this option for now.
	 */
	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
		return -EINVAL;

	/*
	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
	 * never directly by the caller, so enforce that:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63d8d8b72c10..7e4d1e4bdee9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -478,66 +478,66 @@ static inline bool defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 static inline unsigned long *get_pageblock_bitmap(struct page *page, unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
	return section_to_usemap(__pfn_to_section(pfn));
 #else
	return page_zone(page)->pageblock_flags;
 #endif /* CONFIG_SPARSEMEM */
 }

 static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
	pfn &= (PAGES_PER_SECTION-1);
 #else
	pfn = pfn - round_down(page_zone(page)->zone_start_pfn, pageblock_nr_pages);
 #endif /* CONFIG_SPARSEMEM */
	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 }

-/**
- * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
- * @page: The page within the block of interest
- * @pfn: The target page frame number
- * @mask: mask of bits that the caller is interested in
- *
- * Return: pageblock_bits flags
- */
 static __always_inline
 unsigned long __get_pfnblock_flags_mask(struct page *page, unsigned long pfn, unsigned long mask)
 {
	unsigned long *bitmap;
	unsigned long bitidx, word_bitidx;
	unsigned long word;

	bitmap = get_pageblock_bitmap(page, pfn);
	bitidx = pfn_to_bitidx(page, pfn);
	word_bitidx = bitidx / BITS_PER_LONG;
	bitidx &= (BITS_PER_LONG-1);

	word = bitmap[word_bitidx];
	return (word >> bitidx) & mask;
 }

+/**
+ * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ * @mask: mask of bits that the caller is interested in
+ *
+ * Return: pageblock_bits flags
+ */
 unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn, unsigned long mask)
 {
	return __get_pfnblock_flags_mask(page, pfn, mask);
 }

 static __always_inline int get_pfnblock_migratetype(struct page *page, unsigned long pfn)
 {
	return __get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
 }

 /**
  * set_pfnblock_flags_mask - Set the requested group of flags for a pageblock_nr_pages block of pages
  * @page: The page within the block of interest
  * @flags: The flags to set
  * @pfn: The target page frame number
  * @mask: mask of bits that the caller is interested in
  */
 void set_pfnblock_flags_mask(struct page *page, unsigned long flags, unsigned long pfn,
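
As an aside (not part of the patch), once the kernel-doc block sits directly
above get_pfnblock_flags_mask(), a caller reading pageblock flags looks
essentially like the internal get_pfnblock_migratetype() helper shown above.
A hypothetical sketch, with a made-up wrapper name:

/* Hypothetical sketch only; mirrors get_pfnblock_migratetype() above. */
static int example_pageblock_migratetype(struct page *page)
{
	unsigned long pfn = page_to_pfn(page);

	return get_pfnblock_flags_mask(page, pfn, MIGRATETYPE_MASK);
}
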
diff --git a/mm/truncate.c b/mm/truncate.c
index 960edf5803ca..604eaabc6d06 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -620,43 +620,49 @@ static unsigned long __invalidate_mapping_pages(struct address_space *mapping,
  * @start: the offset 'from' which to invalidate
  * @end: the offset 'to' which to invalidate (inclusive)
  *
  * This function only removes the unlocked pages, if you want to
  * remove all the pages of one inode, you must call truncate_inode_pages.
  *
  * invalidate_mapping_pages() will not block on IO activity. It will not
  * invalidate pages which are dirty, locked, under writeback or mapped into
  * pagetables.
  *
  * Return: the number of the pages that were invalidated
  */
 unsigned long invalidate_mapping_pages(struct address_space *mapping,
		pgoff_t start, pgoff_t end)
 {
	return __invalidate_mapping_pages(mapping, start, end, NULL);
 }
 EXPORT_SYMBOL(invalidate_mapping_pages);

 /**
- * This helper is similar with the above one, except that it accounts for pages
- * that are likely on a pagevec and count them in @nr_pagevec, which will used by
+ * invalidate_mapping_pagevec - This helper is similar to
+ * invalidate_mapping_pages(), except that it accounts for pages that are
+ * likely on a pagevec and count them in @nr_pagevec, which will be used by
  * the caller.
+ *
+ * @mapping: the address_space which holds the pages to invalidate
+ * @start: the offset 'from' which to invalidate
+ * @end: the offset 'to' which to invalidate (inclusive)
+ *
  */
 void invalidate_mapping_pagevec(struct address_space *mapping,
		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
 {
	__invalidate_mapping_pages(mapping, start, end, nr_pagevec);
 }

 /*
  * This is like invalidate_complete_page(), except it ignores the page's
  * refcount.  We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave pages behind because
  * shrink_page_list() has a temp ref on them, or because they're transiently
  * sitting in the lru_cache_add() pagevecs.
  */
 static int invalidate_complete_page2(struct address_space *mapping,
					struct page *page)
 {
	unsigned long flags;

	if (page->mapping != mapping