From patchwork Fri Oct 23 16:33:41 2020
X-Patchwork-Submitter: Mauro Carvalho Chehab
X-Patchwork-Id: 11854043
From: Mauro Carvalho Chehab
To: Linux Doc Mailing List
Cc: Mauro Carvalho Chehab, "Jonathan Corbet", Andrew Morton,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 54/56] mm: fix kernel-doc markups
Date: Fri, 23 Oct 2020 18:33:41 +0200
Message-Id: <61d452a7006c9aca9bb352bfa6ed52537dba5060.1603469755.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0

Kernel-doc markups should use this format:

	identifier - description

Fix some issues in mm files:

1) The definition for get_user_pages_locked() doesn't follow it.
   Kernel-doc also expects a short description at the header, followed
   by a long one after the parameters. Fix it.

2) Kernel-doc requires the markup to be placed immediately above the
   prototype of the function it documents; otherwise kernel-doc
   attributes the description to whatever function actually follows it.
   So, move the get_pfnblock_flags_mask() description to the right
   place.

3) Make invalidate_mapping_pagevec() also follow the expected
   kernel-doc format.

Signed-off-by: Mauro Carvalho Chehab
---
A minimal sketch of the expected kernel-doc layout is appended after
the diff, for reference.

 mm/gup.c        | 24 +++++++++++++-----------
 mm/page_alloc.c | 16 ++++++++--------
 mm/truncate.c   | 10 ++++++++--
 3 files changed, 29 insertions(+), 21 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 102877ed77a4..3dc7c0fe9231 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1926,7 +1926,19 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 EXPORT_SYMBOL(get_user_pages);
 
 /**
- * get_user_pages_locked() is suitable to replace the form:
+ * get_user_pages_locked() - variant of get_user_pages()
+ *
+ * @start: starting user address
+ * @nr_pages: number of pages from start to pin
+ * @gup_flags: flags modifying lookup behaviour
+ * @pages: array that receives pointers to the pages pinned.
+ *         Should be at least nr_pages long. Or NULL, if caller
+ *         only intends to ensure the pages are faulted in.
+ * @locked: pointer to lock flag indicating whether lock is held and
+ *          subsequently whether VM_FAULT_RETRY functionality can be
+ *          utilised. Lock must initially be held.
+ *
+ * It is suitable to replace the form:
  *
  *      mmap_read_lock(mm);
  *      do_something()
@@ -1942,16 +1954,6 @@ EXPORT_SYMBOL(get_user_pages);
  *      if (locked)
  *          mmap_read_unlock(mm);
  *
- * @start: starting user address
- * @nr_pages: number of pages from start to pin
- * @gup_flags: flags modifying lookup behaviour
- * @pages: array that receives pointers to the pages pinned.
- *         Should be at least nr_pages long. Or NULL, if caller
- *         only intends to ensure the pages are faulted in.
- * @locked: pointer to lock flag indicating whether lock is held and
- *          subsequently whether VM_FAULT_RETRY functionality can be
- *          utilised. Lock must initially be held.
- *
  * We can leverage the VM_FAULT_RETRY functionality in the page fault
  * paths better by using either get_user_pages_locked() or
  * get_user_pages_unlocked().
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..c94094ce1621 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -495,14 +495,6 @@ static inline int pfn_to_bitidx(struct page *page, unsigned long pfn)
 	return (pfn >> pageblock_order) * NR_PAGEBLOCK_BITS;
 }
 
-/**
- * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
- * @page: The page within the block of interest
- * @pfn: The target page frame number
- * @mask: mask of bits that the caller is interested in
- *
- * Return: pageblock_bits flags
- */
 static __always_inline
 unsigned long __get_pfnblock_flags_mask(struct page *page,
 					unsigned long pfn,
@@ -521,6 +513,14 @@ unsigned long __get_pfnblock_flags_mask(struct page *page,
 	return (word >> bitidx) & mask;
 }
 
+/**
+ * get_pfnblock_flags_mask - Return the requested group of flags for the pageblock_nr_pages block of pages
+ * @page: The page within the block of interest
+ * @pfn: The target page frame number
+ * @mask: mask of bits that the caller is interested in
+ *
+ * Return: pageblock_bits flags
+ */
 unsigned long get_pfnblock_flags_mask(struct page *page, unsigned long pfn,
 					unsigned long mask)
 {
diff --git a/mm/truncate.c b/mm/truncate.c
index 18cec39a9f53..58fa634cf4f8 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -637,9 +637,15 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 EXPORT_SYMBOL(invalidate_mapping_pages);
 
 /**
- * This helper is similar with the above one, except that it accounts for pages
- * that are likely on a pagevec and count them in @nr_pagevec, which will used by
+ * invalidate_mapping_pagevec - This helper is similar with
+ * invalidate_mapping_pages(), except that it accounts for pages that are
+ * likely on a pagevec and count them in @nr_pagevec, which will be used by
  * the caller.
+ *
+ * @mapping: the address_space which holds the pages to invalidate
+ * @start: the offset 'from' which to invalidate
+ * @end: the offset 'to' which to invalidate (inclusive)
+ *
  */
 void invalidate_mapping_pagevec(struct address_space *mapping,
 		pgoff_t start, pgoff_t end, unsigned long *nr_pagevec)
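
For reference, here is a minimal sketch of the kernel-doc layout the patch
converges on: the "identifier - description" short line first, then the
parameter descriptions, then the long description and Return section, with
the whole block sitting immediately above the prototype it documents. The
struct foo_ctx and foo_count_pages() names are hypothetical, used only to
illustrate the markup.

/* Hypothetical type, defined only to keep this sketch self-contained. */
struct foo_ctx {
	unsigned long nr_pages;
};

/**
 * foo_count_pages - count the pages tracked by a foo context
 * @ctx: the foo context to inspect
 *
 * The short description above sits on the "identifier - description"
 * line; this longer description follows the parameter list. Because the
 * block is placed immediately above the prototype, kernel-doc attributes
 * it to foo_count_pages() rather than to some other function.
 *
 * Return: the number of pages currently tracked by @ctx.
 */
static unsigned long foo_count_pages(const struct foo_ctx *ctx)
{
	return ctx->nr_pages;
}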