From patchwork Sat Jun 30 14:55:02 2018
X-Patchwork-Submitter: Mike Rapoport <rppt@linux.vnet.ibm.com>
X-Patchwork-Id: 10498241
From: Mike Rapoport <rppt@linux.vnet.ibm.com>
To: Jonathan Corbet
Cc: Randy Dunlap, linux-doc, linux-mm, lkml, Mike Rapoport
Subject: [PATCH v2 07/11] docs/mm: memblock: update kernel-doc comments
Date: Sat, 30 Jun 2018 17:55:02 +0300
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1530370506-21751-1-git-send-email-rppt@linux.vnet.ibm.com>
References: <1530370506-21751-1-git-send-email-rppt@linux.vnet.ibm.com>
Message-Id: <1530370506-21751-8-git-send-email-rppt@linux.vnet.ibm.com>

* make memblock_discard description kernel-doc compatible
* add brief description for memblock_setclr_flag and describe its parameters
* fixup return value descriptions

Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
---
 include/linux/memblock.h | 17 +++++++---
 mm/memblock.c            | 84 +++++++++++++++++++++++++++---------------------
 2 files changed, 59 insertions(+), 42 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 8b8fbce..63704c6 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -239,7 +239,6 @@ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
 /**
  * for_each_resv_unavail_range - iterate through reserved and unavailable memory
  * @i: u64 used as loop variable
- * @flags: pick from blocks based on memory attributes
  * @p_start: ptr to phys_addr_t for start address of the range, can be %NULL
  * @p_end: ptr to phys_addr_t for end address of the range, can be %NULL
  *
@@ -367,8 +366,10 @@ phys_addr_t memblock_get_current_limit(void);
  */
 
 /**
- * memblock_region_memory_base_pfn - Return the lowest pfn intersecting with the memory region
+ * memblock_region_memory_base_pfn - get the lowest pfn of the memory region
  * @reg: memblock_region structure
+ *
+ * Return: the lowest pfn intersecting with the memory region
  */
 static inline unsigned long memblock_region_memory_base_pfn(const struct memblock_region *reg)
 {
@@ -376,8 +377,10 @@ static inline unsigned long memblock_region_memory_base_pfn(const struct membloc
 }
 
 /**
- * memblock_region_memory_end_pfn - Return the end_pfn this region
+ * memblock_region_memory_end_pfn - get the end pfn of the memory region
  * @reg: memblock_region structure
+ *
+ * Return: the end_pfn of the reserved region
  */
 static inline unsigned long memblock_region_memory_end_pfn(const struct memblock_region *reg)
 {
@@ -385,8 +388,10 @@ static inline unsigned long memblock_region_memory_end_pfn(const struct memblock
 }
 
 /**
- * memblock_region_reserved_base_pfn - Return the lowest pfn intersecting with the reserved region
+ * memblock_region_reserved_base_pfn - get the lowest pfn of the reserved region
  * @reg: memblock_region structure
+ *
+ * Return: the lowest pfn intersecting with the reserved region
  */
 static inline unsigned long memblock_region_reserved_base_pfn(const struct memblock_region *reg)
 {
@@ -394,8 +399,10 @@ static inline unsigned long memblock_region_reserved_base_pfn(const struct membl
 }
 
 /**
- * memblock_region_reserved_end_pfn - Return the end_pfn this region
+ * memblock_region_reserved_end_pfn - get the end pfn of the reserved region
  * @reg: memblock_region structure
+ *
+ * Return: the end_pfn of the reserved region
  */
 static inline unsigned long memblock_region_reserved_end_pfn(const struct memblock_region *reg)
 {
diff --git a/mm/memblock.c b/mm/memblock.c
index 4f5aecb..8159869 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -93,10 +93,11 @@ bool __init_memblock memblock_overlaps_region(struct memblock_type *type,
 	return i < type->cnt;
 }
 
-/*
+/**
  * __memblock_find_range_bottom_up - find free area utility in bottom-up
  * @start: start of candidate range
- * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}
+ * @end: end of candidate range, can be %MEMBLOCK_ALLOC_ANYWHERE or
+ *       %MEMBLOCK_ALLOC_ACCESSIBLE
  * @size: size of free area to find
  * @align: alignment of free area to find
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
@@ -104,7 +105,7 @@ bool __init_memblock memblock_overlaps_region(struct memblock_type *type,
  *
  * Utility called from memblock_find_in_range_node(), find free area bottom-up.
  *
- * RETURNS:
+ * Return:
  * Found address on success, 0 on failure.
  */
 static phys_addr_t __init_memblock
@@ -130,7 +131,8 @@ __memblock_find_range_bottom_up(phys_addr_t start, phys_addr_t end,
 /**
  * __memblock_find_range_top_down - find free area utility, in top-down
  * @start: start of candidate range
- * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}
+ * @end: end of candidate range, can be %MEMBLOCK_ALLOC_ANYWHERE or
+ *       %MEMBLOCK_ALLOC_ACCESSIBLE
  * @size: size of free area to find
  * @align: alignment of free area to find
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
@@ -138,7 +140,7 @@ __memblock_find_range_bottom_up(phys_addr_t start, phys_addr_t end,
  *
  * Utility called from memblock_find_in_range_node(), find free area top-down.
  *
- * RETURNS:
+ * Return:
  * Found address on success, 0 on failure.
  */
 static phys_addr_t __init_memblock
@@ -170,7 +172,8 @@ __memblock_find_range_top_down(phys_addr_t start, phys_addr_t end,
  * @size: size of free area to find
  * @align: alignment of free area to find
  * @start: start of candidate range
- * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}
+ * @end: end of candidate range, can be %MEMBLOCK_ALLOC_ANYWHERE or
+ *       %MEMBLOCK_ALLOC_ACCESSIBLE
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  * @flags: pick from blocks based on memory attributes
  *
@@ -184,7 +187,7 @@ __memblock_find_range_top_down(phys_addr_t start, phys_addr_t end,
  *
  * If bottom-up allocation failed, will try to allocate memory top-down.
  *
- * RETURNS:
+ * Return:
  * Found address on success, 0 on failure.
  */
 phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
@@ -239,13 +242,14 @@ phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
 /**
  * memblock_find_in_range - find free area in given range
  * @start: start of candidate range
- * @end: end of candidate range, can be %MEMBLOCK_ALLOC_{ANYWHERE|ACCESSIBLE}
+ * @end: end of candidate range, can be %MEMBLOCK_ALLOC_ANYWHERE or
+ *       %MEMBLOCK_ALLOC_ACCESSIBLE
  * @size: size of free area to find
  * @align: alignment of free area to find
  *
  * Find @size free area aligned to @align in the specified range.
  *
- * RETURNS:
+ * Return:
  * Found address on success, 0 on failure.
  */
 phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
@@ -289,7 +293,7 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
 
 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
 /**
- * Discard memory and reserved arrays if they were allocated
+ * memblock_discard - discard memory and reserved arrays if they were allocated
  */
 void __init memblock_discard(void)
 {
@@ -319,11 +323,11 @@ void __init memblock_discard(void)
  *
  * Double the size of the @type regions array. If memblock is being used to
  * allocate memory for a new reserved regions array and there is a previously
- * allocated memory range [@new_area_start,@new_area_start+@new_area_size]
+ * allocated memory range [@new_area_start, @new_area_start + @new_area_size]
  * waiting to be reserved, ensure the memory used by the new array does
  * not overlap.
  *
- * RETURNS:
+ * Return:
  * 0 on success, -1 on failure.
  */
 static int __init_memblock memblock_double_array(struct memblock_type *type,
@@ -468,7 +472,7 @@ static void __init_memblock memblock_merge_regions(struct memblock_type *type)
  * @nid: node id of the new region
  * @flags: flags of the new region
  *
- * Insert new memblock region [@base,@base+@size) into @type at @idx.
+ * Insert new memblock region [@base, @base + @size) into @type at @idx.
  * @type must already have extra room to accommodate the new region.
  */
 static void __init_memblock memblock_insert_region(struct memblock_type *type,
@@ -497,12 +501,12 @@ static void __init_memblock memblock_insert_region(struct memblock_type *type,
  * @nid: nid of the new region
  * @flags: flags of the new region
  *
- * Add new memblock region [@base,@base+@size) into @type. The new region
+ * Add new memblock region [@base, @base + @size) into @type. The new region
  * is allowed to overlap with existing ones - overlaps don't affect already
  * existing regions. @type is guaranteed to be minimal (all neighbouring
  * compatible regions are merged) after the addition.
  *
- * RETURNS:
+ * Return:
  * 0 on success, -errno on failure.
  */
 int __init_memblock memblock_add_range(struct memblock_type *type,
@@ -616,11 +620,11 @@ int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
  * @end_rgn: out parameter for the end of isolated region
  *
  * Walk @type and ensure that regions don't cross the boundaries defined by
- * [@base,@base+@size). Crossing regions are split at the boundaries,
+ * [@base, @base + @size). Crossing regions are split at the boundaries,
  * which may create at most two more regions. The index of the first
  * region inside the range is returned in *@start_rgn and end in *@end_rgn.
  *
- * RETURNS:
+ * Return:
  * 0 on success, -errno on failure.
  */
 static int __init_memblock memblock_isolate_range(struct memblock_type *type,
@@ -731,10 +735,15 @@ int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
 }
 
 /**
+ * memblock_setclr_flag - set or clear flag for a memory region
+ * @base: base address of the region
+ * @size: size of the region
+ * @set: set or clear the flag
+ * @flag: the flag to udpate
  *
  * This function isolates region [@base, @base + @size), and sets/clears flag
  *
- * Return 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 static int __init_memblock memblock_setclr_flag(phys_addr_t base,
 				phys_addr_t size, int set, int flag)
@@ -761,7 +770,7 @@ static int __init_memblock memblock_setclr_flag(phys_addr_t base,
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
- * Return 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_mark_hotplug(phys_addr_t base, phys_addr_t size)
 {
@@ -773,7 +782,7 @@ int __init_memblock memblock_mark_hotplug(phys_addr_t base, phys_addr_t size)
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
- * Return 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_clear_hotplug(phys_addr_t base, phys_addr_t size)
 {
@@ -785,7 +794,7 @@ int __init_memblock memblock_clear_hotplug(phys_addr_t base, phys_addr_t size)
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
- * Return 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
 {
@@ -799,7 +808,7 @@ int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
- * Return 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
 {
@@ -811,7 +820,7 @@ int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
  * @base: the base phys addr of the region
  * @size: the size of the region
  *
- * Return 0 on success, -errno on failure.
+ * Return: 0 on success, -errno on failure.
  */
 int __init_memblock memblock_clear_nomap(phys_addr_t base, phys_addr_t size)
 {
@@ -972,9 +981,6 @@ void __init_memblock __next_mem_range(u64 *idx, int nid,
 /**
  * __next_mem_range_rev - generic next function for for_each_*_range_rev()
  *
- * Finds the next range from type_a which is not marked as unsuitable
- * in type_b.
- *
  * @idx: pointer to u64 loop variable
  * @nid: node selector, %NUMA_NO_NODE for all nodes
  * @flags: pick from blocks based on memory attributes
@@ -984,6 +990,9 @@ void __init_memblock __next_mem_range(u64 *idx, int nid,
  * @out_end: ptr to phys_addr_t for end address of the range, can be %NULL
  * @out_nid: ptr to int for nid of the range, can be %NULL
  *
+ * Finds the next range from type_a which is not marked as unsuitable
+ * in type_b.
+ *
  * Reverse of __next_mem_range().
  */
 void __init_memblock __next_mem_range_rev(u64 *idx, int nid,
@@ -1119,10 +1128,10 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
  * @type: memblock type to set node ID for
  * @nid: node ID to set
  *
- * Set the nid of memblock @type regions in [@base,@base+@size) to @nid.
+ * Set the nid of memblock @type regions in [@base, @base + @size) to @nid.
  * Regions which cross the area boundaries are split as necessary.
  *
- * RETURNS:
+ * Return:
  * 0 on success, -errno on failure.
  */
 int __init_memblock memblock_set_node(phys_addr_t base, phys_addr_t size,
@@ -1246,7 +1255,7 @@ phys_addr_t __init memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align, i
  * The allocation is performed from memory region limited by
  * memblock.current_limit if @max_addr == %BOOTMEM_ALLOC_ACCESSIBLE.
  *
- * The memory block is aligned on SMP_CACHE_BYTES if @align == 0.
+ * The memory block is aligned on %SMP_CACHE_BYTES if @align == 0.
  *
  * The phys address of allocated boot memory block is converted to virtual and
  * allocated memory is reset to 0.
@@ -1254,7 +1263,7 @@ phys_addr_t __init memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align, i
  * In addition, function sets the min_count to 0 using kmemleak_alloc for
  * allocated boot memory block, so that it is never reported as leaks.
  *
- * RETURNS:
+ * Return:
  * Virtual address of allocated memory block on success, NULL on failure.
  */
 static void * __init memblock_virt_alloc_internal(
@@ -1339,7 +1348,7 @@ static void * __init memblock_virt_alloc_internal(
  * info), if enabled. Does not zero allocated memory, does not panic if request
  * cannot be satisfied.
  *
- * RETURNS:
+ * Return:
  * Virtual address of allocated memory block on success, NULL on failure.
  */
 void * __init memblock_virt_alloc_try_nid_raw(
@@ -1376,7 +1385,7 @@ void * __init memblock_virt_alloc_try_nid_raw(
  * Public function, provides additional debug information (including caller
  * info), if enabled. This function zeroes the allocated memory.
  *
- * RETURNS:
+ * Return:
  * Virtual address of allocated memory block on success, NULL on failure.
  */
 void * __init memblock_virt_alloc_try_nid_nopanic(
@@ -1412,7 +1421,7 @@ void * __init memblock_virt_alloc_try_nid_nopanic(
  * which provides debug information (including caller info), if enabled,
  * and panics if the request can not be satisfied.
  *
- * RETURNS:
+ * Return:
  * Virtual address of allocated memory block on success, NULL on failure.
  */
 void * __init memblock_virt_alloc_try_nid(
@@ -1669,9 +1678,9 @@ int __init_memblock memblock_search_pfn_nid(unsigned long pfn,
  * @base: base of region to check
  * @size: size of region to check
  *
- * Check if the region [@base, @base+@size) is a subset of a memory block.
+ * Check if the region [@base, @base + @size) is a subset of a memory block.
  *
- * RETURNS:
+ * Return:
  * 0 if false, non-zero if true
  */
 bool __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size)
@@ -1690,9 +1699,10 @@ bool __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t siz
  * @base: base of region to check
  * @size: size of region to check
  *
- * Check if the region [@base, @base+@size) intersects a reserved memory block.
+ * Check if the region [@base, @base + @size) intersects a reserved
+ * memory block.
  *
- * RETURNS:
+ * Return:
  * True if they intersect, false if not.
  */
 bool __init_memblock memblock_is_region_reserved(phys_addr_t base, phys_addr_t size)
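
For reference, a comment in the kernel-doc layout that this patch converts the memblock descriptions to looks like the minimal sketch below: a one-line summary after the function name, one "@param:" line per argument, free-form body text, and a "Return:" section in place of the older free-form "RETURNS:" heading. The function itself is hypothetical and only illustrates the layout, not anything added by the patch.

/**
 * example_region_size - compute the size of a hypothetical region
 * @base: base address of the region
 * @end: end address of the region, must not be below @base
 *
 * This helper is made up for illustration; only the comment layout
 * mirrors what the patch applies to the memblock functions.
 *
 * Return: the number of bytes between @base and @end.
 */
static unsigned long example_region_size(unsigned long base, unsigned long end)
{
	return end - base;
}

Comments in this form can then be picked up by the kernel-doc machinery and included in the generated documentation, which is what the rest of the docs/mm series relies on.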