From patchwork Sat Mar 2 19:12:18 2019
X-Patchwork-Submitter: James Simmons
X-Patchwork-Id: 10836723
From: James Simmons
To: Andreas Dilger, Oleg Drokin, NeilBrown
Cc: Lustre Development List
Date: Sat, 2 Mar 2019 14:12:18 -0500
Message-Id: <1551553944-6419-2-git-send-email-jsimmons@infradead.org>
In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org>
References: <1551553944-6419-1-git-send-email-jsimmons@infradead.org>
Subject: [lustre-devel] [PATCH 1/7] lnet: move comments to sphinx format
List-Id: "For discussing Lustre software development."

Lustre comments were written for DocBook, which is no longer used by the
Linux kernel. Move all the DocBook handling to Sphinx.

Signed-off-by: James Simmons
---
 .../lustre/include/linux/libcfs/libcfs_cpu.h       |  50 ++++----
 .../lustre/include/linux/libcfs/libcfs_crypto.h    |  20 ++--
 .../lustre/include/uapi/linux/lnet/lnetctl.h       |   2 +-
 .../staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h    |   2 +-
 drivers/staging/lustre/lnet/libcfs/debug.c         |   2 +-
 drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c    |   2 +-
 drivers/staging/lustre/lnet/libcfs/libcfs_lock.c   |   8 +-
 drivers/staging/lustre/lnet/libcfs/libcfs_string.c |  67 +++++------
 drivers/staging/lustre/lnet/libcfs/linux-crypto.c  | 131 +++++++++++----------
 drivers/staging/lustre/lnet/lnet/api-ni.c          |  34 +++---
 drivers/staging/lustre/lnet/lnet/lib-eq.c          | 100 ++++++++--------
 drivers/staging/lustre/lnet/lnet/lib-md.c          |  67 +++++------
 drivers/staging/lustre/lnet/lnet/lib-me.c          |  76 ++++++------
 drivers/staging/lustre/lnet/lnet/lib-move.c        | 113 ++++++++++--------
 drivers/staging/lustre/lnet/lnet/lib-ptl.c         |  12 +-
 drivers/staging/lustre/lnet/lnet/net_fault.c       |  30 ++---
 drivers/staging/lustre/lnet/lnet/nidstrings.c      | 125 +++++++++-----------
 17 files changed, 425 insertions(+), 416
deletions(-) diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h index 3e51752..84c6682 100644 --- a/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h +++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h @@ -85,7 +85,7 @@ extern struct cfs_cpt_table *cfs_cpt_tab; /** - * return cpumask of CPU partition \a cpt + * return cpumask of CPU partition @cpt */ cpumask_var_t *cfs_cpt_cpumask(struct cfs_cpt_table *cptab, int cpt); /** @@ -97,83 +97,83 @@ */ int cfs_cpt_distance_print(struct cfs_cpt_table *cptab, char *buf, int len); /** - * return total number of CPU partitions in \a cptab + * return total number of CPU partitions in @cptab */ int cfs_cpt_number(struct cfs_cpt_table *cptab); /** - * return number of HW cores or hyper-threadings in a CPU partition \a cpt + * return number of HW cores or hyper-threadings in a CPU partition @cpt */ int cfs_cpt_weight(struct cfs_cpt_table *cptab, int cpt); /** - * is there any online CPU in CPU partition \a cpt + * is there any online CPU in CPU partition @cpt */ int cfs_cpt_online(struct cfs_cpt_table *cptab, int cpt); /** - * return nodemask of CPU partition \a cpt + * return nodemask of CPU partition @cpt */ nodemask_t *cfs_cpt_nodemask(struct cfs_cpt_table *cptab, int cpt); /** - * shadow current HW processor ID to CPU-partition ID of \a cptab + * shadow current HW processor ID to CPU-partition ID of @cptab */ int cfs_cpt_current(struct cfs_cpt_table *cptab, int remap); /** - * shadow HW processor ID \a CPU to CPU-partition ID by \a cptab + * shadow HW processor ID @CPU to CPU-partition ID by @cptab */ int cfs_cpt_of_cpu(struct cfs_cpt_table *cptab, int cpu); /** - * shadow HW node ID \a NODE to CPU-partition ID by \a cptab + * shadow HW node ID @NODE to CPU-partition ID by @cptab */ int cfs_cpt_of_node(struct cfs_cpt_table *cptab, int node); /** - * NUMA distance between \a cpt1 and \a cpt2 in \a cptab + * NUMA distance between 
@cpt1 and @cpt2 in @cptab */ unsigned int cfs_cpt_distance(struct cfs_cpt_table *cptab, int cpt1, int cpt2); /** - * bind current thread on a CPU-partition \a cpt of \a cptab + * bind current thread on a CPU-partition @cpt of @cptab */ int cfs_cpt_bind(struct cfs_cpt_table *cptab, int cpt); /** - * add \a cpu to CPU partition @cpt of \a cptab, return 1 for success, + * add @cpu to CPU partition @cpt of @cptab, return 1 for success, * otherwise 0 is returned */ int cfs_cpt_set_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu); /** - * remove \a cpu from CPU partition \a cpt of \a cptab + * remove @cpu from CPU partition @cpt of @cptab */ void cfs_cpt_unset_cpu(struct cfs_cpt_table *cptab, int cpt, int cpu); /** - * add all cpus in \a mask to CPU partition \a cpt + * add all cpus in @mask to CPU partition @cpt * return 1 if successfully set all CPUs, otherwise return 0 */ int cfs_cpt_set_cpumask(struct cfs_cpt_table *cptab, int cpt, const cpumask_t *mask); /** - * remove all cpus in \a mask from CPU partition \a cpt + * remove all cpus in @mask from CPU partition @cpt */ void cfs_cpt_unset_cpumask(struct cfs_cpt_table *cptab, int cpt, const cpumask_t *mask); /** - * add all cpus in NUMA node \a node to CPU partition \a cpt + * add all cpus in NUMA node @node to CPU partition @cpt * return 1 if successfully set all CPUs, otherwise return 0 */ int cfs_cpt_set_node(struct cfs_cpt_table *cptab, int cpt, int node); /** - * remove all cpus in NUMA node \a node from CPU partition \a cpt + * remove all cpus in NUMA node @node from CPU partition @cpt */ void cfs_cpt_unset_node(struct cfs_cpt_table *cptab, int cpt, int node); /** - * add all cpus in node mask \a mask to CPU partition \a cpt + * add all cpus in node mask @mask to CPU partition @cpt * return 1 if successfully set all CPUs, otherwise return 0 */ int cfs_cpt_set_nodemask(struct cfs_cpt_table *cptab, int cpt, const nodemask_t *mask); /** - * remove all cpus in node mask \a mask from CPU partition \a cpt + * remove 
all cpus in node mask @mask from CPU partition @cpt */ void cfs_cpt_unset_nodemask(struct cfs_cpt_table *cptab, int cpt, const nodemask_t *mask); /** - * convert partition id \a cpt to numa node id, if there are more than one + * convert partition id @cpt to numa node id, if there are more than one * nodes in this partition, it might return a different node id each time. */ int cfs_cpt_spread_node(struct cfs_cpt_table *cptab, int cpt); @@ -329,7 +329,7 @@ static inline void cfs_cpu_fini(void) */ void cfs_cpt_table_free(struct cfs_cpt_table *cptab); /** - * create a cfs_cpt_table with \a ncpt number of partitions + * create a cfs_cpt_table with @ncpt number of partitions */ struct cfs_cpt_table *cfs_cpt_table_alloc(unsigned int ncpt); @@ -383,18 +383,18 @@ struct cfs_percpt_lock { #define cfs_percpt_lock_num(pcl) cfs_cpt_number(pcl->pcl_cptab) /* - * create a cpu-partition lock based on CPU partition table \a cptab, - * each private lock has extra \a psize bytes padding data + * create a cpu-partition lock based on CPU partition table @cptab, + * each private lock has extra @psize bytes padding data */ struct cfs_percpt_lock *cfs_percpt_lock_create(struct cfs_cpt_table *cptab, struct lock_class_key *keys); /* destroy a cpu-partition lock */ void cfs_percpt_lock_free(struct cfs_percpt_lock *pcl); -/* lock private lock \a index of \a pcl */ +/* lock private lock @index of @pcl */ void cfs_percpt_lock(struct cfs_percpt_lock *pcl, int index); -/* unlock private lock \a index of \a pcl */ +/* unlock private lock @index of @pcl */ void cfs_percpt_unlock(struct cfs_percpt_lock *pcl, int index); #define CFS_PERCPT_LOCK_KEYS 256 @@ -413,7 +413,7 @@ struct cfs_percpt_lock *cfs_percpt_lock_create(struct cfs_cpt_table *cptab, }) /** - * iterate over all CPU partitions in \a cptab + * iterate over all CPU partitions in @cptab */ #define cfs_cpt_for_each(i, cptab) \ for (i = 0; i < cfs_cpt_number(cptab); i++) diff --git 
a/drivers/staging/lustre/include/linux/libcfs/libcfs_crypto.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_crypto.h
index ca8620b..497e24d 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_crypto.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_crypto.h
@@ -119,8 +119,8 @@ enum cfs_crypto_hash_alg {
 *
 * Hash information includes algorithm name, initial seed, hash size.
 *
- * \retval cfs_crypto_hash_type for valid ID (CFS_HASH_ALG_*)
- * \retval NULL for unknown algorithm identifier
+ * Return: cfs_crypto_hash_type for valid ID (CFS_HASH_ALG_*)
+ * NULL for unknown algorithm identifier
 */
static inline const struct cfs_crypto_hash_type *
cfs_crypto_hash_type(enum cfs_crypto_hash_alg hash_alg)
@@ -138,10 +138,10 @@ enum cfs_crypto_hash_alg {
/**
 * Return hash name for hash algorithm identifier
 *
- * \param[in] hash_alg hash alrgorithm id (CFS_HASH_ALG_*)
+ * @hash_alg hash algorithm id (CFS_HASH_ALG_*)
 *
- * \retval string name of known hash algorithm
- * \retval "unknown" if hash algorithm is unknown
+ * Return: string name of known hash algorithm
+ * "unknown" if hash algorithm is unknown
 */
static inline const char *
cfs_crypto_hash_name(enum cfs_crypto_hash_alg hash_alg)
@@ -157,10 +157,10 @@ enum cfs_crypto_hash_alg {
/**
 * Return digest size for hash algorithm type
 *
- * \param[in] hash_alg hash alrgorithm id (CFS_HASH_ALG_*)
+ * @hash_alg hash algorithm id (CFS_HASH_ALG_*)
 *
- * \retval hash algorithm digest size in bytes
- * \retval 0 if hash algorithm type is unknown
+ * Return: hash algorithm digest size in bytes
+ * 0 if hash algorithm type is unknown
 */
static inline int cfs_crypto_hash_digestsize(enum cfs_crypto_hash_alg hash_alg)
{
@@ -175,8 +175,8 @@ static inline int cfs_crypto_hash_digestsize(enum cfs_crypto_hash_alg hash_alg)
/**
 * Find hash algorithm ID for the specified algorithm name
 *
- * \retval hash algorithm ID for valid ID (CFS_HASH_ALG_*)
- * \retval CFS_HASH_ALG_UNKNOWN for unknown algorithm name
+ *
Return: hash algorithm ID for valid ID (CFS_HASH_ALG_*) + * CFS_HASH_ALG_UNKNOWN for unknown algorithm name */ static inline unsigned char cfs_crypto_hash_alg(const char *algname) { diff --git a/drivers/staging/lustre/include/uapi/linux/lnet/lnetctl.h b/drivers/staging/lustre/include/uapi/linux/lnet/lnetctl.h index 9d53c51..e9fc57c 100644 --- a/drivers/staging/lustre/include/uapi/linux/lnet/lnetctl.h +++ b/drivers/staging/lustre/include/uapi/linux/lnet/lnetctl.h @@ -46,7 +46,7 @@ struct lnet_fault_attr { * 255.255.255.255@net is wildcard for all addresses from @net */ lnet_nid_t fa_src; - /** destination NID of drop rule, see \a dr_src for details */ + /** destination NID of drop rule, see @dr_src for details */ lnet_nid_t fa_dst; /** * Portal mask to drop, -1 means all portals, for example: diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h index 2bf1228..044b7b6 100644 --- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h +++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h @@ -346,7 +346,7 @@ struct kib_data { /* peers wait for reconnection */ struct list_head kib_reconn_wait; /** - * The second that peers are pulled out from \a kib_reconn_wait + * The second that peers are pulled out from @kib_reconn_wait * for reconnection. */ time64_t kib_reconn_sec; diff --git a/drivers/staging/lustre/lnet/libcfs/debug.c b/drivers/staging/lustre/lnet/libcfs/debug.c index b7f0c73..5c9690e 100644 --- a/drivers/staging/lustre/lnet/libcfs/debug.c +++ b/drivers/staging/lustre/lnet/libcfs/debug.c @@ -335,7 +335,7 @@ static const char *libcfs_debug_dbg2str(int debug) /** * Upcall function once a Lustre log has been dumped. 
* - * \param file path of the dumped log + * @file path of the dumped log */ static void libcfs_run_debug_log_upcall(char *file) { diff --git a/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c b/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c index 262469f..5f0d7a2 100644 --- a/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c +++ b/drivers/staging/lustre/lnet/libcfs/libcfs_cpu.c @@ -759,7 +759,7 @@ int cfs_cpt_bind(struct cfs_cpt_table *cptab, int cpt) EXPORT_SYMBOL(cfs_cpt_bind); /** - * Choose max to \a number CPUs from \a node and set them in \a cpt. + * Choose max to @number CPUs from @node and set them in @cpt. * We always prefer to choose CPU in the same core/socket. */ static int cfs_cpt_choose_ncpus(struct cfs_cpt_table *cptab, int cpt, diff --git a/drivers/staging/lustre/lnet/libcfs/libcfs_lock.c b/drivers/staging/lustre/lnet/libcfs/libcfs_lock.c index 223505c..3d5157f 100644 --- a/drivers/staging/lustre/lnet/libcfs/libcfs_lock.c +++ b/drivers/staging/lustre/lnet/libcfs/libcfs_lock.c @@ -88,11 +88,11 @@ struct cfs_percpt_lock * /** * lock a CPU partition * - * \a index != CFS_PERCPT_LOCK_EX - * hold private lock indexed by \a index + * @index != CFS_PERCPT_LOCK_EX + * hold private lock indexed by @index * - * \a index == CFS_PERCPT_LOCK_EX - * exclusively lock @pcl and nobody can take private lock + * @index == CFS_PERCPT_LOCK_EX + * exclusively lock @pcl and nobody can take private lock */ void cfs_percpt_lock(struct cfs_percpt_lock *pcl, int index) diff --git a/drivers/staging/lustre/lnet/libcfs/libcfs_string.c b/drivers/staging/lustre/lnet/libcfs/libcfs_string.c index ae17b4d4..02814d3 100644 --- a/drivers/staging/lustre/lnet/libcfs/libcfs_string.c +++ b/drivers/staging/lustre/lnet/libcfs/libcfs_string.c @@ -146,12 +146,12 @@ char *cfs_firststr(char *str, size_t size) /** * Extracts tokens from strings. 
 *
- * Looks for \a delim in string \a next, sets \a res to point to
- * substring before the delimiter, sets \a next right after the found
+ * Looks for @delim in string @next, sets @res to point to
+ * substring before the delimiter, sets @next right after the found
 * delimiter.
 *
- * \retval 1 if \a res points to a string of non-whitespace characters
- * \retval 0 otherwise
+ * Return: 1 if @res points to a string of non-whitespace characters
+ * 0 otherwise
 */
int
cfs_gettok(struct cfs_lstr *next, char delim, struct cfs_lstr *res)
@@ -204,9 +204,9 @@ char *cfs_firststr(char *str, size_t size)
 *
 * Accepts decimal and hexadecimal number recordings.
 *
- * \retval 1 if first \a nob chars of \a str convert to decimal or
- * hexadecimal integer in the range [\a min, \a max]
- * \retval 0 otherwise
+ * Return: 1 if first @nob chars of @str convert to decimal or
+ * hexadecimal integer in the range [@min, @max]
+ * 0 otherwise
 */
int
cfs_str2num_check(char *str, int nob, unsigned int *num,
@@ -247,17 +247,18 @@ char *cfs_firststr(char *str, size_t size)
 EXPORT_SYMBOL(cfs_str2num_check);

 /**
- * Parses \<range_expr\> token of the syntax. If \a bracketed is false,
- * \a src should only have a single token which can be \<number\> or \*
+ * Parses \<range_expr\> token of the syntax. If @bracketed is false,
+ * @src should only have a single token which can be \<number\> or \*
 *
- * \retval pointer to allocated range_expr and initialized
- * range_expr::re_lo, range_expr::re_hi and range_expr:re_stride if \a
- `* src parses to
+ * Return: pointer to allocated range_expr and initialized
+ * range_expr::re_lo, range_expr::re_hi and range_expr:re_stride if
+ * @src parses to
 * \<number\> |
 * \<number\> '-' \<number\> |
 * \<number\> '-' \<number\> '/' \<number\>
- * \retval 0 will be returned if it can be parsed, otherwise -EINVAL or
- * -ENOMEM will be returned.
+ *
+ * Return: 0 if it can be parsed, otherwise -EINVAL or
+ * -ENOMEM will be returned.
*/ static int cfs_range_expr_parse(struct cfs_lstr *src, unsigned int min, unsigned int max, @@ -324,11 +325,11 @@ char *cfs_firststr(char *str, size_t size) } /** - * Print the range expression \a re into specified \a buffer. - * If \a bracketed is true, expression does not need additional + * Print the range expression @expr into specified @buffer. + * If @bracketed is true, expression does not need additional * brackets. * - * \retval number of characters written + * Return: number of characters written */ static int cfs_range_expr_print(char *buffer, int count, struct cfs_range_expr *expr, @@ -355,11 +356,11 @@ char *cfs_firststr(char *str, size_t size) } /** - * Print a list of range expressions (\a expr_list) into specified \a buffer. + * Print a list of range expressions (@expr_list) into specified @buffer. * If the list contains several expressions, separate them with comma * and surround the list with brackets. * - * \retval number of characters written + * Return: number of characters written */ int cfs_expr_list_print(char *buffer, int count, struct cfs_expr_list *expr_list) @@ -392,10 +393,10 @@ char *cfs_firststr(char *str, size_t size) EXPORT_SYMBOL(cfs_expr_list_print); /** - * Matches value (\a value) against ranges expression list \a expr_list. + * Matches value (@value) against ranges expression list @expr_list. 
* - * \retval 1 if \a value matches - * \retval 0 otherwise + * Return: 1 if @value matches + * 0 otherwise */ int cfs_expr_list_match(u32 value, struct cfs_expr_list *expr_list) @@ -413,11 +414,11 @@ char *cfs_firststr(char *str, size_t size) EXPORT_SYMBOL(cfs_expr_list_match); /** - * Convert express list (\a expr_list) to an array of all matched values + * Convert express list (@expr_list) to an array of all matched values * - * \retval N N is total number of all matched values - * \retval 0 if expression list is empty - * \retval < 0 for failure + * Return: N is total number of all matched values + * 0 if expression list is empty + * < 0 for failure */ int cfs_expr_list_values(struct cfs_expr_list *expr_list, int max, u32 **valpp) @@ -461,9 +462,7 @@ char *cfs_firststr(char *str, size_t size) EXPORT_SYMBOL(cfs_expr_list_values); /** - * Frees cfs_range_expr structures of \a expr_list. - * - * \retval none + * Frees cfs_range_expr structures of @expr_list. */ void cfs_expr_list_free(struct cfs_expr_list *expr_list) @@ -484,8 +483,8 @@ char *cfs_firststr(char *str, size_t size) /** * Parses \ token of the syntax. * - * \retval 0 if \a str parses to \ | \ - * \retval -errno otherwise + * Return: 0 if @str parses to \ | \ + * -errno otherwise */ int cfs_expr_list_parse(char *str, int len, unsigned int min, unsigned int max, @@ -541,12 +540,10 @@ char *cfs_firststr(char *str, size_t size) EXPORT_SYMBOL(cfs_expr_list_parse); /** - * Frees cfs_expr_list structures of \a list. + * Frees cfs_expr_list structures of @list. * - * For each struct cfs_expr_list structure found on \a list it frees + * For each struct cfs_expr_list structure found on @list it frees * range_expr list attached to it and frees the cfs_expr_list itself. 
- *
- * \retval none
 */
void
cfs_expr_list_free_list(struct list_head *list)
diff --git a/drivers/staging/lustre/lnet/libcfs/linux-crypto.c b/drivers/staging/lustre/lnet/libcfs/linux-crypto.c
index a0b1377..1c96dc6 100644
--- a/drivers/staging/lustre/lnet/libcfs/linux-crypto.c
+++ b/drivers/staging/lustre/lnet/libcfs/linux-crypto.c
@@ -44,21 +44,21 @@
 /**
 * Initialize the state descriptor for the specified hash algorithm.
 *
- * An internal routine to allocate the hash-specific state in \a req for
+ * An internal routine to allocate the hash-specific state in @req for
 * use with cfs_crypto_hash_digest() to compute the hash of a single message,
 * though possibly in multiple chunks. The descriptor internal state should
 * be freed with cfs_crypto_hash_final().
 *
- * \param[in] hash_alg hash algorithm id (CFS_HASH_ALG_*)
- * \param[out] type pointer to the hash description in hash_types[]
- * array
- * \param[in,out] req hash state descriptor to be initialized
- * \param[in] key initial hash value/state, NULL to use default
- * value
- * \param[in] key_len length of \a key
+ * @hash_alg hash algorithm id (CFS_HASH_ALG_*)
+ * @type pointer to the hash description in hash_types[]
+ * array
+ * @req hash state descriptor to be initialized
+ * @key initial hash value/state, NULL to use default
+ * value
+ * @key_len length of @key
 *
- * \retval 0 on success
- * \retval negative errno on failure
+ * Return: 0 on success
+ * negative errno on failure
 */
static int cfs_crypto_hash_alloc(enum cfs_crypto_hash_alg hash_alg,
				 const struct cfs_crypto_hash_type **type,
@@ -125,25 +125,25 @@ static int cfs_crypto_hash_alloc(enum cfs_crypto_hash_alg hash_alg,
 * This should be used when computing the hash on a single contiguous buffer.
 * It combines the hash initialization, computation, and cleanup.
* - * \param[in] hash_alg id of hash algorithm (CFS_HASH_ALG_*) - * \param[in] buf data buffer on which to compute hash - * \param[in] buf_len length of \a buf in bytes - * \param[in] key initial value/state for algorithm, - * if \a key = NULL use default initial value - * \param[in] key_len length of \a key in bytes - * \param[out] hash pointer to computed hash value, - * if \a hash = NULL then \a hash_len is to digest - * size in bytes, retval -ENOSPC - * \param[in,out] hash_len size of \a hash buffer - * - * \retval -EINVAL \a buf, \a buf_len, \a hash_len, - * \a hash_alg invalid - * \retval -ENOENT \a hash_alg is unsupported - * \retval -ENOSPC \a hash is NULL, or \a hash_len less than - * digest size - * \retval 0 for success - * \retval negative errno for other errors from lower - * layers. + * @hash_alg id of hash algorithm (CFS_HASH_ALG_*) + * @buf data buffer on which to compute hash + * @buf_len length of @buf in bytes + * @key initial value/state for algorithm, + * if @key = NULL use default initial value + * @key_len length of @key in bytes + * @hash pointer to computed hash value, + * if @hash = NULL then @hash_len is to digest + * size in bytes, returns -ENOSPC + * @hash_len size of @hash buffer + * + * Return: + * -EINVAL @buf, @buf_len, @hash_len, + * @hash_alg invalid + * -ENOENT @hash_alg is unsupported + * -ENOSPC @hash is NULL, or @hash_len less than + * digest size + * 0 for success + * negative errno for other errors from lower layers. */ int cfs_crypto_hash_digest(enum cfs_crypto_hash_alg hash_alg, const void *buf, unsigned int buf_len, @@ -188,13 +188,13 @@ int cfs_crypto_hash_digest(enum cfs_crypto_hash_alg hash_alg, * * The hash descriptor should be freed with cfs_crypto_hash_final(). 
* - * \param[in] hash_alg algorithm id (CFS_HASH_ALG_*) - * \param[in] key initial value/state for algorithm, if \a key = NULL - * use default initial value - * \param[in] key_len length of \a key in bytes + * @hash_alg algorithm id (CFS_HASH_ALG_*) + * @key initial value/state for algorithm, if @key = NULL + * use default initial value + * @key_len length of @key in bytes * - * \retval pointer to descriptor of hash instance - * \retval ERR_PTR(errno) in case of error + * Return: pointer to descriptor of hash instance + * ERR_PTR(errno) in case of error */ struct ahash_request * cfs_crypto_hash_init(enum cfs_crypto_hash_alg hash_alg, @@ -212,15 +212,15 @@ struct ahash_request * EXPORT_SYMBOL(cfs_crypto_hash_init); /** - * Update hash digest computed on data within the given \a page + * Update hash digest computed on data within the given @page * - * \param[in] hreq hash state descriptor - * \param[in] page data page on which to compute the hash - * \param[in] offset offset within \a page at which to start hash - * \param[in] len length of data on which to compute hash + * @hreq hash state descriptor + * @page data page on which to compute the hash + * @offset offset within @page at which to start hash + * @len length of data on which to compute hash * - * \retval 0 for success - * \retval negative errno on failure + * Return: 0 for success + * negative errno on failure */ int cfs_crypto_hash_update_page(struct ahash_request *req, struct page *page, unsigned int offset, @@ -239,12 +239,12 @@ int cfs_crypto_hash_update_page(struct ahash_request *req, /** * Update hash digest computed on the specified data * - * \param[in] req hash state descriptor - * \param[in] buf data buffer on which to compute the hash - * \param[in] buf_len length of \buf on which to compute hash + * @req hash state descriptor + * @buf data buffer on which to compute the hash + * @buf_len length of @buf on which to compute hash * - * \retval 0 for success - * \retval negative errno on failure + 
* Return: 0 for success + * negative errno on failure */ int cfs_crypto_hash_update(struct ahash_request *req, const void *buf, unsigned int buf_len) @@ -261,14 +261,15 @@ int cfs_crypto_hash_update(struct ahash_request *req, /** * Finish hash calculation, copy hash digest to buffer, clean up hash descriptor * - * \param[in] req hash descriptor - * \param[out] hash pointer to hash buffer to store hash digest - * \param[in,out] hash_len pointer to hash buffer size, if \a req = NULL - * only free \a req instead of computing the hash + * @req hash descriptor + * @hash pointer to hash buffer to store hash digest + * @hash_len pointer to hash buffer size, if @req = NULL + * only free @req instead of computing the hash * - * \retval 0 for success - * \retval -EOVERFLOW if hash_len is too small for the hash digest - * \retval negative errno for other errors from lower layers + * Return: + * 0 for success + * -EOVERFLOW if hash_len is too small for the hash digest + * negative errno for other errors from lower layers */ int cfs_crypto_hash_final(struct ahash_request *req, unsigned char *hash, unsigned int *hash_len) @@ -306,9 +307,9 @@ int cfs_crypto_hash_final(struct ahash_request *req, * The speed is stored internally in the cfs_crypto_hash_speeds[] array, and * is available through the cfs_crypto_hash_speed() function. 
* - * \param[in] hash_alg hash algorithm id (CFS_HASH_ALG_*) - * \param[in] buf data buffer on which to compute the hash - * \param[in] buf_len length of \buf on which to compute hash + * @hash_alg hash algorithm id (CFS_HASH_ALG_*) + * @buf data buffer on which to compute the hash + * @buf_len length of @buf on which to compute hash */ static void cfs_crypto_performance_test(enum cfs_crypto_hash_alg hash_alg) { @@ -375,18 +376,18 @@ static void cfs_crypto_performance_test(enum cfs_crypto_hash_alg hash_alg) /** * hash speed in Mbytes per second for valid hash algorithm * - * Return the performance of the specified \a hash_alg that was + * Return the performance of the specified @hash_alg that was * computed using cfs_crypto_performance_test(). If the performance * has not yet been computed, do that when it is first requested. * That avoids computing the speed when it is not actually needed. * To avoid competing threads computing the checksum speed at the * same time, only compute a single checksum speed at one time. * - * \param[in] hash_alg hash algorithm id (CFS_HASH_ALG_*) + * @hash_alg hash algorithm id (CFS_HASH_ALG_*) * - * \retval positive speed of the hash function in MB/s - * \retval -ENOENT if \a hash_alg is unsupported - * \retval negative errno if \a hash_alg speed is unavailable + * Return: positive speed of the hash function in MB/s + * -ENOENT if @hash_alg is unsupported + * negative errno if @hash_alg speed is unavailable */ int cfs_crypto_hash_speed(enum cfs_crypto_hash_alg hash_alg) { @@ -420,8 +421,8 @@ int cfs_crypto_hash_speed(enum cfs_crypto_hash_alg hash_alg) * The actual speeds are available via cfs_crypto_hash_speed() for later * comparison. 
* - * \retval 0 on success - * \retval -ENOMEM if no memory is available for test buffer + * Return: 0 on success + * -ENOMEM if no memory is available for test buffer */ static int cfs_crypto_test_hashes(void) { @@ -438,7 +439,7 @@ static int cfs_crypto_test_hashes(void) /** * Register available hash functions * - * \retval 0 + * Return: 0 */ int cfs_crypto_register(void) { diff --git a/drivers/staging/lustre/lnet/lnet/api-ni.c b/drivers/staging/lustre/lnet/lnet/api-ni.c index 671591a..12b3c44 100644 --- a/drivers/staging/lustre/lnet/lnet/api-ni.c +++ b/drivers/staging/lustre/lnet/lnet/api-ni.c @@ -1950,8 +1950,8 @@ static void lnet_push_target_fini(void) * lnet_lib_exit() after a call to lnet_lib_init(), if and only if the * latter returned 0. It must be called exactly once. * - * \retval 0 on success - * \retval -ve on failures. + * Return: 0 on success + * -ve on failures. */ int lnet_lib_init(void) { @@ -2031,15 +2031,15 @@ void lnet_lib_exit(void) * * Users must call this function at least once before any other functions. * For each successful call there must be a corresponding call to - * LNetNIFini(). For subsequent calls to LNetNIInit(), \a requested_pid is + * LNetNIFini(). For subsequent calls to LNetNIInit(), @requested_pid is * ignored. * * The PID used by LNet may be different from the one requested. * See LNetGetId(). * - * \param requested_pid PID requested by the caller. + * @requested_pid PID requested by the caller. * - * \return >= 0 on success, and < 0 error code on failures. + * Return: >= 0 on success, and < 0 error code on failures. */ int LNetNIInit(lnet_pid_t requested_pid) @@ -2185,7 +2185,7 @@ void lnet_lib_exit(void) * Once the LNetNIFini() operation has been started, the results of pending * API operations are undefined. * - * \return always 0 for current implementation. + * Return: always 0 for current implementation. 
*/ int LNetNIFini(void) @@ -2224,9 +2224,9 @@ void lnet_lib_exit(void) * Grabs the ni data from the ni structure and fills the out * parameters * - * \param[in] ni network interface structure - * \param[out] cfg_ni NI config information - * \param[out] tun network and LND tunables + * @ni network interface structure + * @cfg_ni NI config information + * @tun network and LND tunables */ static void lnet_fill_ni_info(struct lnet_ni *ni, struct lnet_ioctl_config_ni *cfg_ni, @@ -2302,8 +2302,8 @@ void lnet_lib_exit(void) * Grabs the ni data from the ni structure and fills the out * parameters * - * \param[in] ni network interface structure - * \param[out] config config information + * @ni network interface structure + * @config config information */ static void lnet_fill_ni_info_legacy(struct lnet_ni *ni, @@ -3308,15 +3308,15 @@ void LNetDebugPeer(struct lnet_process_id id) EXPORT_SYMBOL(LNetDebugPeer); /** - * Retrieve the lnet_process_id ID of LNet interface at \a index. Note that + * Retrieve the lnet_process_id ID of LNet interface at @index. Note that * all interfaces share a same PID, as requested by LNetNIInit(). * - * \param index Index of the interface to look up. - * \param id On successful return, this location will hold the - * lnet_process_id ID of the interface. + * @index Index of the interface to look up. + * @id On successful return, this location will hold the + * lnet_process_id ID of the interface. * - * \retval 0 If an interface exists at \a index. - * \retval -ENOENT If no interface has been found. + * Return: 0 If an interface exists at @index. + * -ENOENT If no interface has been found. 
*/ int LNetGetId(unsigned int index, struct lnet_process_id *id) diff --git a/drivers/staging/lustre/lnet/lnet/lib-eq.c b/drivers/staging/lustre/lnet/lnet/lib-eq.c index f500b49..3d99f0a 100644 --- a/drivers/staging/lustre/lnet/lnet/lib-eq.c +++ b/drivers/staging/lustre/lnet/lnet/lib-eq.c @@ -40,27 +40,27 @@ #include /** - * Create an event queue that has room for \a count number of events. + * Create an event queue that has room for @count number of events. * * The event queue is circular and older events will be overwritten by new * ones if they are not removed in time by the user using the functions * LNetEQGet(), LNetEQWait(), or LNetEQPoll(). It is up to the user to * determine the appropriate size of the event queue to prevent this loss - * of events. Note that when EQ handler is specified in \a callback, no + * of events. Note that when EQ handler is specified in @callback, no * event loss can happen, since the handler is run for each event deposited * into the EQ. * - * \param count The number of events to be stored in the event queue. It - * will be rounded up to the next power of two. - * \param callback A handler function that runs when an event is deposited - * into the EQ. The constant value LNET_EQ_HANDLER_NONE can be used to - * indicate that no event handler is desired. - * \param handle On successful return, this location will hold a handle for - * the newly created EQ. + * @count The number of events to be stored in the event queue. It + * will be rounded up to the next power of two. + * @callback A handler function that runs when an event is deposited + * into the EQ. The constant value LNET_EQ_HANDLER_NONE can + * be used to indicate that no event handler is desired. + * @handle On successful return, this location will hold a handle for + * the newly created EQ. * - * \retval 0 On success. - * \retval -EINVAL If an parameter is not valid. - * \retval -ENOMEM If memory for the EQ can't be allocated. + * Return: 0 On success. 
+ * -EINVAL If a parameter is not valid. + * -ENOMEM If memory for the EQ can't be allocated. * * \see lnet_eq_handler_t for the discussion on EQ handler semantics. */ @@ -147,11 +147,11 @@ * Release the resources associated with an event queue if it's idle; * otherwise do nothing and it's up to the user to try again. * - * \param eqh A handle for the event queue to be released. + * @eqh A handle for the event queue to be released. * - * \retval 0 If the EQ is not in use and freed. - * \retval -ENOENT If \a eqh does not point to a valid EQ. - * \retval -EBUSY If the EQ is still in use by some MDs. + * Return: 0 If the EQ is not in use and freed. + * -ENOENT If @eqh does not point to a valid EQ. + * -EBUSY If the EQ is still in use by some MDs. */ int LNetEQFree(struct lnet_handle_eq eqh) @@ -278,16 +278,17 @@ * If an event handler is associated with the EQ, the handler will run before * this function returns successfully. The event is removed from the queue. * - * \param eventq A handle for the event queue. - * \param event On successful return (1 or -EOVERFLOW), this location will - * hold the next event in the EQ. + * @eventq A handle for the event queue. + * @event On successful return (1 or -EOVERFLOW), this location will + * hold the next event in the EQ. * - * \retval 0 No pending event in the EQ. - * \retval 1 Indicates success. - * \retval -ENOENT If \a eventq does not point to a valid EQ. - * \retval -EOVERFLOW Indicates success (i.e., an event is returned) and that - * at least one event between this event and the last event obtained from the - * EQ has been dropped due to limited space in the EQ. + * Return: 0 No pending event in the EQ. + * 1 Indicates success. + * -ENOENT If @eventq does not point to a valid EQ. + * -EOVERFLOW Indicates success (i.e., an event is returned) + * and that at least one event between this event and the last + * event obtained from the EQ has been dropped due to limited + * space in the EQ.
*/ /** @@ -296,17 +297,17 @@ * this function returns successfully. This function returns the next event * in the EQ and removes it from the EQ. * - * \param eventq A handle for the event queue. - * \param event On successful return (1 or -EOVERFLOW), this location will - * hold the next event in the EQ. + * @eventq A handle for the event queue. + * @event On successful return (1 or -EOVERFLOW), this location will + * hold the next event in the EQ. * - * \retval 1 Indicates success. - * \retval -ENOENT If \a eventq does not point to a valid EQ. - * \retval -EOVERFLOW Indicates success (i.e., an event is returned) and that - * at least one event between this event and the last event obtained from the - * EQ has been dropped due to limited space in the EQ. + * Return: 1 Indicates success. + * -ENOENT If @eventq does not point to a valid EQ. + * -EOVERFLOW Indicates success (i.e., an event is returned) + * and that at least one event between this event and the last + * event obtained from the EQ has been dropped due to limited + * space in the EQ. */ - static int lnet_eq_wait_locked(signed long *timeout, long state) __must_hold(&the_lnet.ln_eq_wait_lock) @@ -345,21 +346,24 @@ * LNetEQPoll() provides a timeout to allow applications to poll, block for a * fixed period, or block indefinitely. * - * \param eventqs,neq An array of EQ handles, and size of the array. - * \param timeout Time in jiffies to wait for an event to occur on - * one of the EQs. The constant MAX_SCHEDULE_TIMEOUT can be used to indicate an - * infinite timeout. - * \param interruptible, if true, use TASK_INTERRUPTIBLE, else TASK_IDLE - * \param event,which On successful return (1 or -EOVERFLOW), \a event will - * hold the next event in the EQs, and \a which will contain the index of the - * EQ from which the event was taken. + * @eventqs,neq An array of EQ handles, and size of the array. + * @timeout Time in jiffies to wait for an event to occur on + * one of the EQs. 
The constant MAX_SCHEDULE_TIMEOUT + * can be used to indicate an infinite timeout. + * @interruptible if true, use TASK_INTERRUPTIBLE, else TASK_IDLE + * @event,which On successful return (1 or -EOVERFLOW), @event will + * hold the next event in the EQs, and @which will + * contain the index of the EQ from which the event + * was taken. * - * \retval 0 No pending event in the EQs after timeout. - * \retval 1 Indicates success. - * \retval -EOVERFLOW Indicates success (i.e., an event is returned) and that - * at least one event between this event and the last event obtained from the - * EQ indicated by \a which has been dropped due to limited space in the EQ. - * \retval -ENOENT If there's an invalid handle in \a eventqs. + * Return: 0 No pending event in the EQs after timeout. + * 1 Indicates success. + * -EOVERFLOW Indicates success (i.e., an event is + * returned) and that at least one event between + * this event and the last event obtained from the + * EQ indicated by @which has been dropped due to + * limited space in the EQ. + * -ENOENT If there's an invalid handle in @eventqs. */ int LNetEQPoll(struct lnet_handle_eq *eventqs, int neq, signed long timeout, diff --git a/drivers/staging/lustre/lnet/lnet/lib-md.c b/drivers/staging/lustre/lnet/lnet/lib-md.c index 2ab985e..33a59fb 100644 --- a/drivers/staging/lustre/lnet/lnet/lib-md.c +++ b/drivers/staging/lustre/lnet/lnet/lib-md.c @@ -333,27 +333,28 @@ int lnet_cpt_of_md(struct lnet_libmd *md, unsigned int offset) /** * Create a memory descriptor and attach it to a ME * - * \param meh A handle for a ME to associate the new MD with. - * \param umd Provides initial values for the user-visible parts of a MD. - * Other than its use for initialization, there is no linkage between this - * structure and the MD maintained by the LNet. 
- * \param unlink A flag to indicate whether the MD is automatically unlinked - * when it becomes inactive, either because the operation threshold drops to - * zero or because the available memory becomes less than \a umd.max_size. - * (Note that the check for unlinking a MD only occurs after the completion - * of a successful operation on the MD.) The value LNET_UNLINK enables auto - * unlinking; the value LNET_RETAIN disables it. - * \param handle On successful returns, a handle to the newly created MD is - * saved here. This handle can be used later in LNetMDUnlink(). + * @meh A handle for a ME to associate the new MD with. + * @umd Provides initial values for the user-visible parts of a MD. + * Other than its use for initialization, there is no linkage + * between this structure and the MD maintained by the LNet. + * @unlink A flag to indicate whether the MD is automatically unlinked + * when it becomes inactive, either because the operation + * threshold drops to zero or because the available memory + * becomes less than @umd.max_size. (Note that the check for + * unlinking a MD only occurs after the completion of a + * successful operation on the MD.) The value LNET_UNLINK + * enables auto unlinking; the value LNET_RETAIN disables it. + * @handle On successful returns, a handle to the newly created MD is + * saved here. This handle can be used later in LNetMDUnlink(). * - * \retval 0 On success. - * \retval -EINVAL If \a umd is not valid. - * \retval -ENOMEM If new MD cannot be allocated. - * \retval -ENOENT Either \a meh or \a umd.eq_handle does not point to a - * valid object. Note that it's OK to supply a NULL \a umd.eq_handle by - * calling LNetInvalidateHandle() on it. - * \retval -EBUSY If the ME pointed to by \a meh is already associated with - * a MD. + * Return: 0 on success. + * -EINVAL If @umd is not valid. + * -ENOMEM If new MD cannot be allocated. + * -ENOENT Either @meh or @umd.eq_handle does not point to a + * valid object. 
Note that it's OK to supply a NULL @umd.eq_handle + * by calling LNetInvalidateHandle() on it. + * -EBUSY if the ME pointed to by @meh is already associated with + * a MD. */ int LNetMDAttach(struct lnet_handle_me meh, struct lnet_md umd, @@ -426,17 +427,17 @@ int lnet_cpt_of_md(struct lnet_libmd *md, unsigned int offset) * Create a "free floating" memory descriptor - a MD that is not associated * with a ME. Such MDs are usually used in LNetPut() and LNetGet() operations. * - * \param umd,unlink See the discussion for LNetMDAttach(). - * \param handle On successful returns, a handle to the newly created MD is - * saved here. This handle can be used later in LNetMDUnlink(), LNetPut(), - * and LNetGet() operations. + * @umd,unlink See the discussion for LNetMDAttach(). + * @handle On successful returns, a handle to the newly created + * MD is saved here. This handle can be used later in + * LNetMDUnlink(), LNetPut(), and LNetGet() operations. * - * \retval 0 On success. - * \retval -EINVAL If \a umd is not valid. - * \retval -ENOMEM If new MD cannot be allocated. - * \retval -ENOENT \a umd.eq_handle does not point to a valid EQ. Note that - * it's OK to supply a NULL \a umd.eq_handle by calling - * LNetInvalidateHandle() on it. + * Return: 0 On success. + * -EINVAL If @umd is not valid. + * -ENOMEM If new MD cannot be allocated. + * -ENOENT @umd.eq_handle does not point to a valid EQ. + * Note that it's OK to supply a NULL @umd.eq_handle by + * calling LNetInvalidateHandle() on it. */ int LNetMDBind(struct lnet_md umd, enum lnet_unlink unlink, @@ -509,10 +510,10 @@ int lnet_cpt_of_md(struct lnet_libmd *md, unsigned int offset) * Note that in both cases the unlinked field of the event is always set; no * more event will happen on the MD after such an event is logged. * - * \param mdh A handle for the MD to be unlinked. + * @mdh A handle for the MD to be unlinked. * - * \retval 0 On success. - * \retval -ENOENT If \a mdh does not point to a valid MD object. 
+ * Return: 0 On success. + * -ENOENT If @mdh does not point to a valid MD object. */ int LNetMDUnlink(struct lnet_handle_md mdh) diff --git a/drivers/staging/lustre/lnet/lnet/lib-me.c b/drivers/staging/lustre/lnet/lnet/lib-me.c index 4a5ffb1..f0365ea 100644 --- a/drivers/staging/lustre/lnet/lnet/lib-me.c +++ b/drivers/staging/lustre/lnet/lnet/lib-me.c @@ -40,34 +40,35 @@ #include /** - * Create and attach a match entry to the match list of \a portal. The new + * Create and attach a match entry to the match list of @portal. The new * ME is empty, i.e. not associated with a memory descriptor. LNetMDAttach() * can be used to attach a MD to an empty ME. * - * \param portal The portal table index where the ME should be attached. - * \param match_id Specifies the match criteria for the process ID of - * the requester. The constants LNET_PID_ANY and LNET_NID_ANY can be - * used to wildcard either of the identifiers in the lnet_process_id - * structure. - * \param match_bits,ignore_bits Specify the match criteria to apply - * to the match bits in the incoming request. The ignore bits are used - * to mask out insignificant bits in the incoming match bits. The resulting - * bits are then compared to the ME's match bits to determine if the - * incoming request meets the match criteria. - * \param unlink Indicates whether the ME should be unlinked when the memory - * descriptor associated with it is unlinked (Note that the check for - * unlinking a ME only occurs when the memory descriptor is unlinked.). - * Valid values are LNET_RETAIN and LNET_UNLINK. - * \param pos Indicates whether the new ME should be prepended or - * appended to the match list. Allowed constants: LNET_INS_BEFORE, - * LNET_INS_AFTER. - * \param handle On successful returns, a handle to the newly created ME - * object is saved here. This handle can be used later in LNetMEInsert(), - * LNetMEUnlink(), or LNetMDAttach() functions. + * @portal The portal table index where the ME should be attached. 
+ * @match_id Specifies the match criteria for the process ID of + * the requester. The constants LNET_PID_ANY and LNET_NID_ANY + * can be used to wildcard either of the identifiers in the + * lnet_process_id structure. + * @match_bits + * @ignore_bits Specify the match criteria to apply to the match bits in the + * incoming request. The ignore bits are used to mask out + * insignificant bits in the incoming match bits. The resulting + * bits are then compared to the ME's match bits to determine if + * the incoming request meets the match criteria. + * @unlink Indicates whether the ME should be unlinked when the memory + * descriptor associated with it is unlinked (Note that the check + * for unlinking a ME only occurs when the memory descriptor is + * unlinked.). Valid values are LNET_RETAIN and LNET_UNLINK. + * @pos Indicates whether the new ME should be prepended or + * appended to the match list. Allowed constants: LNET_INS_BEFORE, + * LNET_INS_AFTER. + * @handle On successful returns, a handle to the newly created ME object + * is saved here. This handle can be used later in LNetMEInsert(), + * LNetMEUnlink(), or LNetMDAttach() functions. * - * \retval 0 On success. - * \retval -EINVAL If \a portal is invalid. - * \retval -ENOMEM If new ME object cannot be allocated. + * Return: 0 On success. + * -EINVAL If @portal is invalid. + * -ENOMEM If new ME object cannot be allocated. */ int LNetMEAttach(unsigned int portal, @@ -125,20 +126,24 @@ /** * Create and a match entry and insert it before or after the ME pointed to by - * \a current_meh. The new ME is empty, i.e. not associated with a memory + * @current_meh. The new ME is empty, i.e. not associated with a memory * descriptor. LNetMDAttach() can be used to attach a MD to an empty ME. * * This function is identical to LNetMEAttach() except for the position * where the new ME is inserted. * - * \param current_meh A handle for a ME. The new ME will be inserted - * immediately before or immediately after this ME. 
- * \param match_id,match_bits,ignore_bits,unlink,pos,handle See the discussion - * for LNetMEAttach(). + * @current_meh A handle for a ME. The new ME will be inserted + * immediately before or immediately after this ME. + * @match_id See the discussion for LNetMEAttach(). + * @match_bits + * @ignore_bits + * @unlink + * @pos + * @handle * - * \retval 0 On success. - * \retval -ENOMEM If new ME object cannot be allocated. - * \retval -ENOENT If \a current_meh does not point to a valid match entry. + * Return: 0 On success. + * -ENOMEM If new ME object cannot be allocated. + * -ENOENT If @current_meh does not point to a valid match entry. */ int LNetMEInsert(struct lnet_handle_me current_meh, @@ -214,10 +219,11 @@ * and an unlink event will be generated. It is an error to use the ME handle * after calling LNetMEUnlink(). * - * \param meh A handle for the ME to be unlinked. + * @meh A handle for the ME to be unlinked. + * + * Return: 0 On success. + * -ENOENT If @meh does not point to a valid ME. * - * \retval 0 On success. - * \retval -ENOENT If \a meh does not point to a valid ME. * \see LNetMDUnlink() for the discussion on delivering unlink event. */ int diff --git a/drivers/staging/lustre/lnet/lnet/lib-move.c b/drivers/staging/lustre/lnet/lnet/lib-move.c index 185ea51..875d289 100644 --- a/drivers/staging/lustre/lnet/lnet/lib-move.c +++ b/drivers/staging/lustre/lnet/lnet/lib-move.c @@ -701,15 +701,16 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats, } /** - * \param msg The message to be sent. - * \param do_send True if lnet_ni_send() should be called in this function.
+ * lnet_send() is going to lnet_net_unlock immediately after this, + * so it sets do_send FALSE and I don't do the unlock/send/lock + * bit. * - * \retval LNET_CREDIT_OK If \a msg sent or OK to send. - * \retval LNET_CREDIT_WAIT If \a msg blocked for credit. - * \retval -EHOSTUNREACH If the next hop of the message appears dead. - * \retval -ECANCELED If the MD of the message has been unlinked. + * Return: LNET_CREDIT_OK If @msg sent or OK to send. + * LNET_CREDIT_WAIT If @msg blocked for credit. + * -EHOSTUNREACH If the next hop of the message appears dead. + * -ECANCELED If the MD of the message has been unlinked. */ static int lnet_post_send_locked(struct lnet_msg *msg, int do_send) @@ -2239,9 +2240,9 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats, } /** - * \retval LNET_CREDIT_OK If \a msg is forwarded - * \retval LNET_CREDIT_WAIT If \a msg is blocked because w/o buffer - * \retval -ve error code + * Return: LNET_CREDIT_OK if @msg is forwarded + * LNET_CREDIT_WAIT if @msg is blocked because no buffer is available + * -ve error code */ int lnet_parse_forward_locked(struct lnet_ni *ni, struct lnet_msg *msg) @@ -2706,7 +2707,7 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats, * delivery. * * The local events will be logged in the EQ associated with the MD pointed to - * by \a mdh handle. Using a MD without an associated EQ results in these + * by @mdh handle. Using a MD without an associated EQ results in these * events being discarded. In this case, the caller must have another * mechanism (e.g., a higher level protocol) for determining when it is safe * to modify the memory region associated with the MD. @@ -2714,28 +2715,29 @@ void lnet_usr_translate_stats(struct lnet_ioctl_element_msg_stats *msg_stats, * Note that LNet does not guarantee the order of LNET_EVENT_SEND and * LNET_EVENT_ACK, though intuitively ACK should happen after SEND.
* - * \param self Indicates the NID of a local interface through which to send - * the PUT request. Use LNET_NID_ANY to let LNet choose one by itself. - * \param mdh A handle for the MD that describes the memory to be sent. The MD - * must be "free floating" (See LNetMDBind()). - * \param ack Controls whether an acknowledgment is requested. - * Acknowledgments are only sent when they are requested by the initiating - * process and the target MD enables them. - * \param target A process identifier for the target process. - * \param portal The index in the \a target's portal table. - * \param match_bits The match bits to use for MD selection at the target - * process. - * \param offset The offset into the target MD (only used when the target - * MD has the LNET_MD_MANAGE_REMOTE option set). - * \param hdr_data 64 bits of user data that can be included in the message - * header. This data is written to an event queue entry at the target if an - * EQ is present on the matching MD. + * @self Indicates the NID of a local interface through which to send + * the PUT request. Use LNET_NID_ANY to let LNet choose one by + * itself. + * @mdh A handle for the MD that describes the memory to be sent. + * The MD must be "free floating" (See LNetMDBind()). + * @ack Controls whether an acknowledgment is requested. + * Acknowledgments are only sent when they are requested by + * the initiating process and the target MD enables them. + * @target A process identifier for the target process. + * @portal The index in the @target's portal table. + * @match_bits The match bits to use for MD selection at the target + * process. + * @offset The offset into the target MD (only used when the target + * MD has the LNET_MD_MANAGE_REMOTE option set). + * @hdr_data 64 bits of user data that can be included in the message + * header. This data is written to an event queue entry at + * the target if an EQ is present on the matching MD. 
* - * \retval 0 Success, and only in this case events will be generated - * and logged to EQ (if it exists). - * \retval -EIO Simulated failure. - * \retval -ENOMEM Memory allocation failure. - * \retval -ENOENT Invalid MD object. + * Return: 0 Success, and only in this case events will be generated + * and logged to EQ (if it exists). + * -EIO Simulated failure. + * -ENOMEM Memory allocation failure. + * -ENOENT Invalid MD object. * * \see lnet_event::hdr_data and lnet_event_kind. */ @@ -2935,16 +2937,21 @@ struct lnet_msg * * On the target node, an LNET_EVENT_GET is logged when the GET request * arrives and is accepted into a MD. * - * \param self,target,portal,match_bits,offset See the discussion in LNetPut(). - * \param mdh A handle for the MD that describes the memory into which the - * requested data will be received. The MD must be "free floating" - * (See LNetMDBind()). + * @self See the discussion in LNetPut(). + * @target + * @portal + * @match_bits + * @offset * - * \retval 0 Success, and only in this case events will be generated - * and logged to EQ (if it exists) of the MD. - * \retval -EIO Simulated failure. - * \retval -ENOMEM Memory allocation failure. - * \retval -ENOENT Invalid MD object. + * @mdh A handle for the MD that describes the memory into which the + * requested data will be received. The MD must be "free floating" + * (See LNetMDBind()). + * + * Return: 0 Success, and only in this case events will be generated + * and logged to EQ (if it exists) of the MD. + * -EIO Simulated failure. + * -ENOMEM Memory allocation failure. + * -ENOENT Invalid MD object. */ int LNetGet(lnet_nid_t self, struct lnet_handle_md mdh, @@ -3024,18 +3031,20 @@ struct lnet_msg * EXPORT_SYMBOL(LNetGet); /** - * Calculate distance to node at \a dstnid. + * Calculate distance to node at @dstnid. + * + * @dstnid Target NID. + * @srcnidp If not NULL, NID of the local interface to reach @dstnid + * is saved here. 
+ * @orderp If not NULL, order of the route to reach @dstnid is saved + * here. + * + * Return: 0 If @dstnid belongs to a local interface, and reserved + * option local_nid_dist_zero is set, which is the default. * - * \param dstnid Target NID. - * \param srcnidp If not NULL, NID of the local interface to reach \a dstnid - * is saved here. - * \param orderp If not NULL, order of the route to reach \a dstnid is saved - * here. + * positives Distance to target NID, i.e. number of hops plus one. * - * \retval 0 If \a dstnid belongs to a local interface, and reserved option - * local_nid_dist_zero is set, which is the default. - * \retval positives Distance to target NID, i.e. number of hops plus one. - * \retval -EHOSTUNREACH If \a dstnid is not reachable. + * -EHOSTUNREACH If @dstnid is not reachable. */ int LNetDist(lnet_nid_t dstnid, lnet_nid_t *srcnidp, u32 *orderp) diff --git a/drivers/staging/lustre/lnet/lnet/lib-ptl.c b/drivers/staging/lustre/lnet/lnet/lib-ptl.c index 4a12d86..ae061e8 100644 --- a/drivers/staging/lustre/lnet/lnet/lib-ptl.c +++ b/drivers/staging/lustre/lnet/lnet/lib-ptl.c @@ -894,10 +894,10 @@ struct list_head * * especially vulnerable since the connections to its neighbor routers are * shared among all clients. * - * \param portal Index of the portal to enable the lazy attribute on. + * @portal Index of the portal to enable the lazy attribute on. * - * \retval 0 On success. - * \retval -EINVAL If \a portal is not a valid index. + * Return: 0 On success. + * -EINVAL If @portal is not a valid index. */ int LNetSetLazyPortal(int portal) @@ -975,10 +975,10 @@ struct list_head * * Turn off the lazy portal attribute. Delayed requests on the portal, * if any, will be all dropped when this function returns. * - * \param portal Index of the portal to disable the lazy attribute on. + * @portal Index of the portal to disable the lazy attribute on. * - * \retval 0 On success. - * \retval -EINVAL If \a portal is not a valid index. + * Return: 0 On success. 
+ * -EINVAL If @portal is not a valid index. */ int LNetClearLazyPortal(int portal) diff --git a/drivers/staging/lustre/lnet/lnet/net_fault.c b/drivers/staging/lustre/lnet/lnet/net_fault.c index 130a7c9..76fb61d4 100644 --- a/drivers/staging/lustre/lnet/lnet/net_fault.c +++ b/drivers/staging/lustre/lnet/lnet/net_fault.c @@ -47,7 +47,7 @@ struct lnet_drop_rule { struct list_head dr_link; /** attributes of this rule */ struct lnet_fault_attr dr_attr; - /** lock to protect \a dr_drop_at and \a dr_stat */ + /** lock to protect @dr_drop_at and @dr_stat */ spinlock_t dr_lock; /** * the message sequence to drop, which means message is dropped when @@ -188,10 +188,10 @@ struct lnet_drop_rule { } /** - * Remove matched drop rules from lnet, all rules that can match \a src and - * \a dst will be removed. - * If \a src is zero, then all rules have \a dst as destination will be remove - * If \a dst is zero, then all rules have \a src as source will be removed + * Remove matched drop rules from lnet, all rules that can match @src and + * @dst will be removed. 
+ * If @src is zero, then all rules that have @dst as destination will be removed + * If @dst is zero, then all rules that have @src as source will be removed * If both of them are zero, all rules will be removed */ static int @@ -233,7 +233,7 @@ struct lnet_drop_rule { } /** - * List drop rule at position of \a pos + * List drop rule at position of @pos */ static int lnet_drop_rule_list(int pos, struct lnet_fault_attr *attr, @@ -349,7 +349,7 @@ struct lnet_drop_rule { } /** - * Check if message from \a src to \a dst can match any existed drop rule + * Check if message from @src to @dst can match any existing drop rule */ bool lnet_drop_rule_match(struct lnet_hdr *hdr) @@ -395,7 +395,7 @@ struct lnet_delay_rule { struct list_head dl_sched_link; /** attributes of this rule */ struct lnet_fault_attr dl_attr; - /** lock to protect \a below members */ + /** lock to protect @below members */ spinlock_t dl_lock; /** refcount of delay rule */ atomic_t dl_refcount; @@ -423,7 +423,7 @@ struct lnet_delay_rule { struct delay_daemon_data { /** serialise rule add/remove */ struct mutex dd_mutex; - /** protect rules on \a dd_sched_rules */ + /** protect rules on @dd_sched_rules */ spinlock_t dd_lock; /** scheduled delay rules (by timer) */ struct list_head dd_sched_rules; @@ -520,7 +520,7 @@ struct delay_daemon_data { } /** - * check if \a msg can match any Delay Rule, receiving of this message + * check if @msg can match any Delay Rule, receiving of this message * will be delayed if there is a match. */ bool @@ -792,11 +792,11 @@ struct delay_daemon_data { } /** - * Remove matched Delay Rules from lnet, if \a shutdown is true or both \a src - * and \a dst are zero, all rules will be removed, otherwise only matched rules + * Remove matched Delay Rules from lnet, if @shutdown is true or both @src + * and @dst are zero, all rules will be removed, otherwise only matched rules * will be removed.
- * If \a src is zero, then all rules have \a dst as destination will be remove - * If \a dst is zero, then all rules have \a src as source will be removed + * If @src is zero, then all rules that have @dst as destination will be removed + * If @dst is zero, then all rules that have @src as source will be removed * * When a delay rule is removed, all delayed messages of this rule will be * processed immediately. @@ -871,7 +871,7 @@ struct delay_daemon_data { } /** - * List Delay Rule at position of \a pos + * List Delay Rule at position of @pos */ int lnet_delay_rule_list(int pos, struct lnet_fault_attr *attr, diff --git a/drivers/staging/lustre/lnet/lnet/nidstrings.c b/drivers/staging/lustre/lnet/lnet/nidstrings.c index 2df9ce4..892b28e 100644 --- a/drivers/staging/lustre/lnet/lnet/nidstrings.c +++ b/drivers/staging/lustre/lnet/lnet/nidstrings.c @@ -153,11 +153,11 @@ struct addrrange { /** * Parses \ token on the syntax. * - * Allocates struct addrrange and links to \a nidrange via + * Allocates struct addrrange and links to nidrange via * (nidrange::nr_addrranges) * - * \retval 0 if \a src parses to '*' | \ | \ - * \retval -errno otherwise + * Return: 0 if @src parses to '*' | \ | \ + * -errno otherwise */ static int parse_addrange(const struct cfs_lstr *src, struct nidrange *nidrange) @@ -183,12 +183,12 @@ struct addrrange { /** * Finds or creates struct nidrange. * - * Checks if \a src is a valid network name, looks for corresponding - * nidrange on the ist of nidranges (\a nidlist), creates new struct + * Checks if @src is a valid network name, looks for corresponding + * nidrange on the list of nidranges (@nidlist), creates new struct * nidrange if it is not found.
* - * \retval pointer to struct nidrange matching network specified via \a src - * \retval NULL if \a src does not match any network + * Return: pointer to struct nidrange matching network specified via @src + * NULL if @src does not match any network */ static struct nidrange * add_nidrange(const struct cfs_lstr *src, @@ -243,8 +243,8 @@ struct addrrange { /** * Parses \ token of the syntax. * - * \retval 1 if \a src parses to \ '@' \ - * \retval 0 otherwise + * Return: 1 if @src parses to \ '@' \ + * 0 otherwise */ static int parse_nidrange(struct cfs_lstr *src, struct list_head *nidlist) @@ -272,12 +272,10 @@ struct addrrange { } /** - * Frees addrrange structures of \a list. + * Frees addrrange structures of @list. * - * For each struct addrrange structure found on \a list it frees + * For each struct addrrange structure found on @list it frees * cfs_expr_list list attached to it and frees the addrrange itself. - * - * \retval none */ static void free_addrranges(struct list_head *list) @@ -295,12 +293,10 @@ struct addrrange { } /** - * Frees nidrange strutures of \a list. + * Frees nidrange structures of @list. * - * For each struct nidrange structure found on \a list it frees + * For each struct nidrange structure found on @list it frees * addrrange list attached to it and frees the nidrange itself. - * - * \retval none */ void cfs_free_nidlist(struct list_head *list) @@ -320,15 +316,13 @@ struct addrrange { /** * Parses nid range list. * - * Parses with rigorous syntax and overflow checking \a str into - * \ [ ' ' \ ], compiles \a str into set of - * structures and links that structure to \a nidlist. The resulting - * list can be used to match a NID againts set of NIDS defined by \a - * str. - * \see cfs_match_nid + * Parses with rigorous syntax and overflow checking @str into + * \ [ ' ' \ ], compiles @str into set of + * structures and links that structure to @nidlist. The resulting + * list can be used to match a NID against a set of NIDs defined by + * @str.
See cfs_match_nid * - * \retval 1 on success - * \retval 0 otherwise + * Return: 1 on success, 0 otherwise */ int cfs_parse_nidlist(char *str, int len, struct list_head *nidlist) @@ -357,12 +351,11 @@ struct addrrange { EXPORT_SYMBOL(cfs_parse_nidlist); /** - * Matches a nid (\a nid) against the compiled list of nidranges (\a nidlist). + * Matches a nid (@nid) against the compiled list of nidranges (@nidlist). * * \see cfs_parse_nidlist() * - * \retval 1 on match - * \retval 0 otherwises + * Return: 1 on match, 0 otherwise */ int cfs_match_nid(lnet_nid_t nid, struct list_head *nidlist) { @@ -386,9 +379,9 @@ int cfs_match_nid(lnet_nid_t nid, struct list_head *nidlist) EXPORT_SYMBOL(cfs_match_nid); /** - * Print the network part of the nidrange \a nr into the specified \a buffer. + * Print the network part of the nidrange @nr into the specified @buffer. * - * \retval number of characters written + * Return: number of characters written */ static int cfs_print_network(char *buffer, int count, struct nidrange *nr) @@ -403,10 +396,10 @@ int cfs_match_nid(lnet_nid_t nid, struct list_head *nidlist) } /** - * Print a list of addrrange (\a addrranges) into the specified \a buffer. - * At max \a count characters can be printed into \a buffer. + * Print a list of addrrange (@addrranges) into the specified @buffer. + * At max @count characters can be printed into @buffer. * - * \retval number of characters written + * Return: number of characters written */ static int cfs_print_addrranges(char *buffer, int count, struct list_head *addrranges, @@ -427,11 +420,11 @@ int cfs_match_nid(lnet_nid_t nid, struct list_head *nidlist) } /** - * Print a list of nidranges (\a nidlist) into the specified \a buffer. - * At max \a count characters can be printed into \a buffer. + * Print a list of nidranges (@nidlist) into the specified @buffer. + * At max @count characters can be printed into @buffer. * Nidranges are separated by a space character.
* - * \retval number of characters written + * Return: number of characters written */ int cfs_print_nidlist(char *buffer, int count, struct list_head *nidlist) { @@ -462,9 +455,9 @@ int cfs_print_nidlist(char *buffer, int count, struct list_head *nidlist) * Determines minimum and maximum addresses for a single * numeric address range * - * \param ar - * \param min_nid - * \param max_nid + * @ar + * @min_nid + * @max_nid */ static void cfs_ip_ar_min_max(struct addrrange *ar, u32 *min_nid, u32 *max_nid) @@ -501,9 +494,9 @@ static void cfs_ip_ar_min_max(struct addrrange *ar, u32 *min_nid, * Determines minimum and maximum addresses for a single * numeric address range * - * \param ar - * \param min_nid - * \param max_nid + * @ar + * @min_nid + * @max_nid */ static void cfs_num_ar_min_max(struct addrrange *ar, u32 *min_nid, u32 *max_nid) @@ -532,10 +525,10 @@ static void cfs_num_ar_min_max(struct addrrange *ar, u32 *min_nid, * Determines whether an expression list in an nidrange contains exactly * one contiguous address range. Calls the correct netstrfns for the LND * - * \param *nidlist + * @nidlist * - * \retval true if contiguous - * \retval false if not contiguous + * Return: true if contiguous + * false if not contiguous */ bool cfs_nidrange_is_contiguous(struct list_head *nidlist) { @@ -570,10 +563,10 @@ bool cfs_nidrange_is_contiguous(struct list_head *nidlist) * Determines whether an expression list in an num nidrange contains exactly * one contiguous address range. * - * \param *nidlist + * @nidlist * - * \retval true if contiguous - * \retval false if not contiguous + * Return: true if contiguous + * false if not contiguous */ static bool cfs_num_is_contiguous(struct list_head *nidlist) { @@ -616,10 +609,10 @@ static bool cfs_num_is_contiguous(struct list_head *nidlist) * Determines whether an expression list in an ip nidrange contains exactly * one contiguous address range. 
* - * \param *nidlist + * @nidlist * - * \retval true if contiguous - * \retval false if not contiguous + * Return: true if contiguous + * false if not contiguous */ static bool cfs_ip_is_contiguous(struct list_head *nidlist) { @@ -669,9 +662,9 @@ static bool cfs_ip_is_contiguous(struct list_head *nidlist) * Takes a linked list of nidrange expressions, determines the minimum * and maximum nid and creates appropriate nid structures * - * \param *nidlist - * \param *min_nid - * \param *max_nid + * @nidlist + * @min_nid + * @max_nid */ void cfs_nidrange_find_min_max(struct list_head *nidlist, char *min_nid, char *max_nid, size_t nidstr_length) @@ -706,9 +699,9 @@ void cfs_nidrange_find_min_max(struct list_head *nidlist, char *min_nid, /** * Determines the min and max NID values for num LNDs * - * \param *nidlist - * \param *min_nid - * \param *max_nid + * @nidlist + * @min_nid + * @max_nid */ static void cfs_num_min_max(struct list_head *nidlist, u32 *min_nid, u32 *max_nid) @@ -738,9 +731,9 @@ static void cfs_num_min_max(struct list_head *nidlist, u32 *min_nid, * Takes an nidlist and determines the minimum and maximum * ip addresses. * - * \param *nidlist - * \param *min_nid - * \param *max_nid + * @nidlist + * @min_nid + * @max_nid */ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid, u32 *max_nid) @@ -868,10 +861,9 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid, } /** - * Matches address (\a addr) against address set encoded in \a list. + * Matches address (@addr) against address set encoded in @list. * - * \retval 1 if \a addr matches - * \retval 0 otherwise + * Return: 1 if @addr matches, 0 otherwise */ int cfs_ip_addr_match(u32 addr, struct list_head *list) @@ -920,8 +912,8 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid, * * Examples of such networks are gm and elan. 
* - * \retval 0 if \a str parsed to numeric address - * \retval errno otherwise + * Return: 0 if @str parsed to numeric address + * errno otherwise */ static int libcfs_num_parse(char *str, int len, struct list_head *list) @@ -952,8 +944,7 @@ static void cfs_ip_min_max(struct list_head *nidlist, u32 *min_nid, /* * Nf_match_addr method for networks using numeric addresses * - * \retval 1 on match - * \retval 0 otherwise + * Return: 1 on match, 0 otherwise */ static int libcfs_num_match(u32 addr, struct list_head *numaddr)

From patchwork Sat Mar 2 19:12:19 2019
X-Patchwork-Submitter: James Simmons
X-Patchwork-Id: 10836731
From: James Simmons
To: Andreas Dilger, Oleg Drokin, NeilBrown
Cc: Lustre Development List
Date: Sat, 2 Mar 2019 14:12:19 -0500
Message-Id: <1551553944-6419-3-git-send-email-jsimmons@infradead.org>
In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org>
References: <1551553944-6419-1-git-send-email-jsimmons@infradead.org>
Subject: [lustre-devel] [PATCH 2/7] lustre: move header file comments to sphinx format

Lustre comments were written for DocBook, which is no longer used by the Linux kernel. Move all the DocBook handling to sphinx.
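As an illustration of the target comment style (the function below is a hypothetical example, not code touched by this series): the old DocBook-style `\param`/`\retval` markup becomes kernel-doc `@name:` parameter lines and a `Return:` section that Sphinx can render.

```c
/**
 * example_seq_in_range() - test whether a sequence number falls in a
 * hypothetical valid range
 * @seq: the sequence number to test
 *
 * Return: 1 if @seq lies in the range, 0 otherwise
 */
int example_seq_in_range(unsigned long long seq)
{
	/* purely illustrative bounds for the kernel-doc example */
	return seq >= 0x100000000ULL && seq <= 0x1ffffffffULL;
}
```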
Signed-off-by: James Simmons --- .../lustre/include/uapi/linux/lustre/lustre_fid.h | 20 ++-- .../lustre/include/uapi/linux/lustre/lustre_idl.h | 2 +- .../lustre/include/uapi/linux/lustre/lustre_user.h | 13 +-- drivers/staging/lustre/lustre/include/cl_object.h | 91 ++++++++-------- drivers/staging/lustre/lustre/include/lu_object.h | 8 +- drivers/staging/lustre/lustre/include/lustre_dlm.h | 10 +- .../staging/lustre/lustre/include/lustre_import.h | 4 +- drivers/staging/lustre/lustre/include/lustre_mdc.h | 4 +- drivers/staging/lustre/lustre/include/lustre_net.h | 76 ++++++------- drivers/staging/lustre/lustre/include/lustre_nrs.h | 118 ++++++++++----------- drivers/staging/lustre/lustre/include/lustre_sec.h | 83 ++++++++------- drivers/staging/lustre/lustre/include/obd_class.h | 9 +- drivers/staging/lustre/lustre/include/seq_range.h | 32 +++--- 13 files changed, 240 insertions(+), 230 deletions(-) diff --git a/drivers/staging/lustre/include/uapi/linux/lustre/lustre_fid.h b/drivers/staging/lustre/include/uapi/linux/lustre/lustre_fid.h index 746bf7a..9f7959c 100644 --- a/drivers/staging/lustre/include/uapi/linux/lustre/lustre_fid.h +++ b/drivers/staging/lustre/include/uapi/linux/lustre/lustre_fid.h @@ -137,8 +137,9 @@ static inline bool fid_is_mdt0(const struct lu_fid *fid) /** * Check if a fid is igif or not. - * \param fid the fid to be tested. - * \return true if the fid is an igif; otherwise false. + * + * @fid the fid to be tested. + * Return: true if the fid is an igif; otherwise false. */ static inline bool fid_seq_is_igif(__u64 seq) { @@ -152,8 +153,9 @@ static inline bool fid_is_igif(const struct lu_fid *fid) /** * Check if a fid is idif or not. - * \param fid the fid to be tested. - * \return true if the fid is an idif; otherwise false. + * + * @fid the fid to be tested. + * Return: true if the fid is an idif; otherwise false. 
*/ static inline bool fid_seq_is_idif(__u64 seq) { @@ -205,8 +207,9 @@ static inline __u32 fid_idif_ost_idx(const struct lu_fid *fid) /** * Get inode number from an igif. - * \param fid an igif to get inode number from. - * \return inode number for the igif. + * + * @fid an igif to get inode number from. + * Return: inode number for the igif. */ static inline ino_t lu_igif_ino(const struct lu_fid *fid) { @@ -215,8 +218,9 @@ static inline ino_t lu_igif_ino(const struct lu_fid *fid) /** * Get inode generation from an igif. - * \param fid an igif to get inode generation from. - * \return inode generation for the igif. + * + * @fid an igif to get inode generation from. + * Return: inode generation for the igif. */ static inline __u32 lu_igif_gen(const struct lu_fid *fid) { diff --git a/drivers/staging/lustre/include/uapi/linux/lustre/lustre_idl.h b/drivers/staging/lustre/include/uapi/linux/lustre/lustre_idl.h index bffe62e..a86190d 100644 --- a/drivers/staging/lustre/include/uapi/linux/lustre/lustre_idl.h +++ b/drivers/staging/lustre/include/uapi/linux/lustre/lustre_idl.h @@ -2746,7 +2746,7 @@ struct lustre_capa_key { __u8 lk_key[CAPA_HMAC_KEY_MAX_LEN]; /**< key */ } __packed; -/** The link ea holds 1 \a link_ea_entry for each hardlink */ +/** The link ea holds 1 @link_ea_entry for each hardlink */ #define LINK_EA_MAGIC 0x11EAF1DFUL struct link_ea_header { __u32 leh_magic; diff --git a/drivers/staging/lustre/include/uapi/linux/lustre/lustre_user.h b/drivers/staging/lustre/include/uapi/linux/lustre/lustre_user.h index 178837c..8bc756f 100644 --- a/drivers/staging/lustre/include/uapi/linux/lustre/lustre_user.h +++ b/drivers/staging/lustre/include/uapi/linux/lustre/lustre_user.h @@ -1018,8 +1018,8 @@ static inline char *changelog_rec_sname(struct changelog_rec *rec) * - CLF_RENAME will not be removed * - CLF_JOBID will not be added without CLF_RENAME being added too * - * @param[in,out] rec The record to remap. 
- * @param[in] crf_wanted Flags describing the desired extensions. + * @rec The record to remap. + * @crf_wanted Flags describing the desired extensions. */ static inline void changelog_remap_rec(struct changelog_rec *rec, enum changelog_rec_flags crf_wanted) @@ -1297,10 +1297,11 @@ struct hsm_action_item { /* * helper function which print in hexa the first bytes of * hai opaque field - * \param hai [IN] record to print - * \param buffer [OUT] output buffer - * \param len [IN] max buffer len - * \retval buffer + * + * @hai record to print + * @buffer output buffer + * @len max buffer len + * Return: buffer */ static inline char *hai_dump_data_field(struct hsm_action_item *hai, char *buffer, size_t len) diff --git a/drivers/staging/lustre/lustre/include/cl_object.h b/drivers/staging/lustre/lustre/include/cl_object.h index 05be853..691c2f5 100644 --- a/drivers/staging/lustre/lustre/include/cl_object.h +++ b/drivers/staging/lustre/lustre/include/cl_object.h @@ -303,15 +303,15 @@ struct cl_object_operations { * every object layer when a new cl_page is instantiated. Layer * keeping private per-page data, or requiring its own page operations * vector should allocate these data here, and attach then to the page - * by calling cl_page_slice_add(). \a vmpage is locked (in the VM + * by calling cl_page_slice_add(). @vmpage is locked (in the VM * sense). Optional. * - * \retval NULL success. + * Return: NULL success. * - * \retval ERR_PTR(errno) failure code. + * ERR_PTR(errno) failure code. * - * \retval valid-pointer pointer to already existing referenced page - * to be used instead of newly created. + * valid-pointer pointer to already existing referenced + * page to be used instead of newly created. 
*/ int (*coo_page_init)(const struct lu_env *env, struct cl_object *obj, struct cl_page *page, pgoff_t index); @@ -337,27 +337,27 @@ struct cl_object_operations { int (*coo_io_init)(const struct lu_env *env, struct cl_object *obj, struct cl_io *io); /** - * Fill portion of \a attr that this layer controls. This method is + * Fill portion of @attr that this layer controls. This method is * called top-to-bottom through all object layers. * * \pre cl_object_header::coh_attr_guard of the top-object is locked. * - * \return 0: to continue - * \return +ve: to stop iterating through layers (but 0 is returned - * from enclosing cl_object_attr_get()) - * \return -ve: to signal error + * Return: 0 to continue + * +ve to stop iterating through layers (but 0 is returned + * from enclosing cl_object_attr_get()) + * -ve to signal error */ int (*coo_attr_get)(const struct lu_env *env, struct cl_object *obj, struct cl_attr *attr); /** * Update attributes. * - * \a valid is a bitmask composed from enum #cl_attr_valid, and + * @valid is a bitmask composed from enum #cl_attr_valid, and * indicating what attributes are to be set. * * \pre cl_object_header::coh_attr_guard of the top-object is locked. * - * \return the same convention as for + * Return: the same convention as for * cl_object_operations::coo_attr_get() is used. */ int (*coo_attr_update)(const struct lu_env *env, struct cl_object *obj, @@ -372,7 +372,7 @@ struct cl_object_operations { const struct cl_object_conf *conf); /** * Glimpse ast. Executed when glimpse ast arrives for a lock on this - * object. Layers are supposed to fill parts of \a lvb that will be + * object. Layers are supposed to fill parts of @lvb that will be * shipped to the glimpse originator as a glimpse result. * * \see vvp_object_glimpse(), lovsub_object_glimpse(), @@ -451,16 +451,16 @@ struct cl_object_header { }; /** - * Helper macro: iterate over all layers of the object \a obj, assigning every - * layer top-to-bottom to \a slice. 
+ * Helper macro: iterate over all layers of the object @obj, assigning every + * layer top-to-bottom to @slice. */ #define cl_object_for_each(slice, obj) \ list_for_each_entry((slice), \ &(obj)->co_lu.lo_header->loh_layers, \ co_lu.lo_linkage) /** - * Helper macro: iterate over all layers of the object \a obj, assigning every - * layer bottom-to-top to \a slice. + * Helper macro: iterate over all layers of the object @obj, assigning every + * layer bottom-to-top to @slice. */ #define cl_object_for_each_reverse(slice, obj) \ list_for_each_entry_reverse((slice), \ @@ -793,8 +793,8 @@ enum cl_req_type { /** * Per-layer page operations. * - * Methods taking an \a io argument are for the activity happening in the - * context of given \a io. Page is assumed to be owned by that io, except for + * Methods taking an @io argument are for the activity happening in the + * context of given @io. Page is assumed to be owned by that io, except for * the obvious cases (like cl_page_operations::cpo_own()). * * \see vvp_page_ops, lov_page_ops, osc_page_ops @@ -807,7 +807,7 @@ struct cl_page_operations { */ /** - * Called when \a io acquires this page into the exclusive + * Called when @io acquires this page into the exclusive * ownership. When this method returns, it is guaranteed that the is * not owned by other io, and no transfer is going on against * it. Optional. @@ -826,7 +826,7 @@ struct cl_page_operations { void (*cpo_disown)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io); /** - * Called for a page that is already "owned" by \a io from VM point of + * Called for a page that is already "owned" by @io from VM point of * view. Optional. * * \see cl_page_assume() @@ -845,7 +845,7 @@ struct cl_page_operations { const struct cl_page_slice *slice, struct cl_io *io); /** - * Announces whether the page contains valid data or not by \a uptodate. + * Announces whether the page contains valid data or not by @uptodate. 
* * \see cl_page_export() * \see vvp_page_export() @@ -856,9 +856,10 @@ struct cl_page_operations { * Checks whether underlying VM page is locked (in the suitable * sense). Used for assertions. * - * \retval -EBUSY: page is protected by a lock of a given mode; - * \retval -ENODATA: page is not protected by a lock; - * \retval 0: this layer cannot decide. (Should never happen.) + * Return: -EBUSY means page is protected by a lock of a given + * mode; + * -ENODATA when page is not protected by a lock; + * 0 this layer cannot decide. (Should never happen.) */ int (*cpo_is_vmlocked)(const struct lu_env *env, const struct cl_page_slice *slice); @@ -918,9 +919,9 @@ struct cl_page_operations { * Called when a page is submitted for a transfer as a part of * cl_page_list. * - * \return 0 : page is eligible for submission; - * \return -EALREADY : skip this page; - * \return -ve : error. + * Return: 0 if page is eligible for submission; + * -EALREADY skip this page; + * -ve if error. * * \see cl_page_prep() */ @@ -946,9 +947,9 @@ struct cl_page_operations { * Called when cached page is about to be added to the * ptlrpc request as a part of req formation. * - * \return 0 : proceed with this page; - * \return -EAGAIN : skip this page; - * \return -ve : error. + * Return 0 proceed with this page; + * -EAGAIN skip this page; + * -ve error. * * \see cl_page_make_ready() */ @@ -984,7 +985,7 @@ struct cl_page_operations { }; /** - * Helper macro, dumping detailed information about \a page into a log. + * Helper macro, dumping detailed information about @page into a log. */ #define CL_PAGE_DEBUG(mask, env, page, format, ...) \ do { \ @@ -996,7 +997,7 @@ struct cl_page_operations { } while (0) /** - * Helper macro, dumping shorter information about \a page into a log. + * Helper macro, dumping shorter information about @page into a log. */ #define CL_PAGE_HEADER(mask, env, page, format, ...) 
\ do { \ @@ -1203,10 +1204,10 @@ struct cl_lock_operations { /** * Attempts to enqueue the lock. Called top-to-bottom. * - * \retval 0 this layer has enqueued the lock successfully - * \retval >0 this layer has enqueued the lock, but need to wait on - * @anchor for resources - * \retval -ve failure + * Return: 0 this layer has enqueued the lock successfully + * >0 this layer has enqueued the lock, but need to + * wait on @anchor for resources + * -ve for failure * * \see vvp_lock_enqueue(), lov_lock_enqueue(), lovsub_lock_enqueue(), * \see osc_lock_enqueue() @@ -1537,7 +1538,7 @@ struct cl_io_operations { const struct cl_io_slice *slice); /** * Called bottom-to-top to notify layers that read/write IO - * iteration finished, with \a nob bytes transferred. + * iteration finished, with @nob bytes transferred. */ void (*cio_advance)(const struct lu_env *env, const struct cl_io_slice *slice, @@ -1550,11 +1551,11 @@ struct cl_io_operations { } op[CIT_OP_NR]; /** - * Submit pages from \a queue->c2_qin for IO, and move - * successfully submitted pages into \a queue->c2_qout. Return + * Submit pages from @queue->c2_qin for IO, and move + * successfully submitted pages into @queue->c2_qout. Return * non-zero if failed to submit even the single page. If - * submission failed after some pages were moved into \a - * queue->c2_qout, completion callback with non-zero ioret is + * submission failed after some pages were moved into + * @queue->c2_qout, completion callback with non-zero ioret is * executed on them. */ int (*cio_submit)(const struct lu_env *env, @@ -2049,7 +2050,7 @@ int cl_object_layout_get(const struct lu_env *env, struct cl_object *obj, loff_t cl_object_maxbytes(struct cl_object *obj); /** - * Returns true, iff \a o0 and \a o1 are slices of the same object. + * Returns true, iff @o0 and @o1 are slices of the same object. 
*/ static inline int cl_object_same(struct cl_object *o0, struct cl_object *o1) { @@ -2280,7 +2281,7 @@ int cl_io_read_ahead(const struct lu_env *env, struct cl_io *io, pgoff_t start, struct cl_read_ahead *ra); /** - * True, iff \a io is an O_APPEND write(2). + * True, if @io is an O_APPEND write(2). */ static inline int cl_io_is_append(const struct cl_io *io) { @@ -2298,7 +2299,7 @@ static inline int cl_io_is_mkwrite(const struct cl_io *io) } /** - * True, iff \a io is a truncate(2). + * True, if @io is a truncate(2). */ static inline int cl_io_is_trunc(const struct cl_io *io) { diff --git a/drivers/staging/lustre/lustre/include/lu_object.h b/drivers/staging/lustre/lustre/include/lu_object.h index 68aa0d0..8137628 100644 --- a/drivers/staging/lustre/lustre/include/lu_object.h +++ b/drivers/staging/lustre/lustre/include/lu_object.h @@ -739,7 +739,7 @@ static inline const struct lu_fid *lu_object_fid(const struct lu_object *o) /** * Given a compound object, find its slice, corresponding to the device type - * \a dtype. + * @dtype. */ struct lu_object *lu_object_locate(struct lu_object_header *h, const struct lu_device_type *dtype); @@ -1058,7 +1058,7 @@ struct lu_context_key { struct lu_context_key *key); /** * Value destructor. Called when context with previously allocated - * value of this slot is destroyed. \a data is a value that was returned + * value of this slot is destroyed. @data is a value that was returned * by a matching call to lu_context_key::lct_init(). */ void (*lct_fini)(const struct lu_context *ctx, @@ -1247,8 +1247,8 @@ struct lu_name { /** * Validate names (path components) * - * To be valid \a name must be non-empty, '\0' terminated of length \a - * name_len, and not contain '/'. The maximum length of a name (before + * To be valid @name must be non-empty, '\0' terminated of length + * @name_len, and not contain '/'. The maximum length of a name (before * say -ENAMETOOLONG will be returned) is really controlled by llite * and the server. 
We only check for something insane coming from bad * integer handling here. diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm.h b/drivers/staging/lustre/lustre/include/lustre_dlm.h index c561d61..1bd5119 100644 --- a/drivers/staging/lustre/lustre/include/lustre_dlm.h +++ b/drivers/staging/lustre/lustre/include/lustre_dlm.h @@ -203,9 +203,9 @@ static inline int lockmode_compat(enum ldlm_mode exist_mode, * can trigger freeing of locks from the pool */ struct ldlm_pool_ops { - /** Recalculate pool \a pl usage */ + /** Recalculate pool @pl usage */ int (*po_recalc)(struct ldlm_pool *pl); - /** Cancel at least \a nr locks from pool \a pl */ + /** Cancel at least @nr locks from pool @pl */ int (*po_shrink)(struct ldlm_pool *pl, int nr, gfp_t gfp_mask); }; @@ -429,7 +429,7 @@ struct ldlm_namespace { /** * Used by filter code to store pointer to OBD of the service. - * Should be dropped in favor of \a ns_obd + * Should be dropped in favor of @ns_obd */ void *ns_lvbp; @@ -466,7 +466,7 @@ struct ldlm_namespace { }; /** - * Returns 1 if namespace \a ns supports early lock cancel (ELC). + * Returns 1 if namespace @ns supports early lock cancel (ELC). 
*/ static inline int ns_connect_cancelset(struct ldlm_namespace *ns) { @@ -1082,7 +1082,7 @@ static inline struct ldlm_lock *ldlm_handle2lock(const struct lustre_handle *h) /** * Update Lock Value Block Operations (LVBO) on a resource taking into account - * data from request \a r + * data from request @r */ static inline int ldlm_res_lvbo_update(struct ldlm_resource *res, struct ptlrpc_request *r, int increase) diff --git a/drivers/staging/lustre/lustre/include/lustre_import.h b/drivers/staging/lustre/lustre/include/lustre_import.h index 7d52665..0c78708 100644 --- a/drivers/staging/lustre/lustre/include/lustre_import.h +++ b/drivers/staging/lustre/lustre/include/lustre_import.h @@ -98,7 +98,7 @@ enum lustre_imp_state { LUSTRE_IMP_EVICTED = 10, }; -/** Returns test string representation of numeric import state \a state */ +/** Returns test string representation of numeric import state @state */ static inline char *ptlrpc_import_state_name(enum lustre_imp_state state) { static char *import_state_names[] = { @@ -257,7 +257,7 @@ struct obd_import { /** List of all possible connection for import. */ struct list_head imp_conn_list; /** - * Current connection. \a imp_connection is imp_conn_current->oic_conn + * Current connection. 
@imp_connection is imp_conn_current->oic_conn */ struct obd_import_conn *imp_conn_current; diff --git a/drivers/staging/lustre/lustre/include/lustre_mdc.h b/drivers/staging/lustre/lustre/include/lustre_mdc.h index 90fcbae..63a7413 100644 --- a/drivers/staging/lustre/lustre/include/lustre_mdc.h +++ b/drivers/staging/lustre/lustre/include/lustre_mdc.h @@ -190,8 +190,8 @@ static inline void mdc_put_mod_rpc_slot(struct ptlrpc_request *req, * * \see client_obd::cl_default_mds_easize * - * \param[in] exp export for MDC device - * \param[in] body body of ptlrpc reply from MDT + * @exp: export for MDC device + * @body: body of ptlrpc reply from MDT * */ static inline void mdc_update_max_ea_from_body(struct obd_export *exp, diff --git a/drivers/staging/lustre/lustre/include/lustre_net.h b/drivers/staging/lustre/lustre/include/lustre_net.h index 47b9632..f6d1be1 100644 --- a/drivers/staging/lustre/lustre/include/lustre_net.h +++ b/drivers/staging/lustre/lustre/include/lustre_net.h @@ -358,16 +358,16 @@ struct ptlrpc_request_set { struct list_head set_requests; /** * List of completion callbacks to be called when the set is completed - * This is only used if \a set_interpret is NULL. + * This is only used if @set_interpret is NULL. * Links struct ptlrpc_set_cbdata. */ struct list_head set_cblist; /** Completion callback, if only one. */ set_interpreter_func set_interpret; - /** opaq argument passed to completion \a set_interpret callback. */ + /** opaq argument passed to completion @set_interpret callback. */ void *set_arg; /** - * Lock for \a set_new_requests manipulations + * Lock for @set_new_requests manipulations * locked so that any old caller can communicate requests to * the set holder who can then fold them into the lock-free set */ @@ -476,13 +476,13 @@ struct ptlrpc_reply_state { /** * Actual reply message. Its content is encrypted (if needed) to * produce reply buffer for actual sending. 
In simple case - * of no network encryption we just set \a rs_repbuf to \a rs_msg + * of no network encryption we just set @rs_repbuf to @rs_msg */ struct lustre_msg *rs_msg; /* reply message */ /** Handles of locks awaiting client reply ACK */ struct lustre_handle rs_locks[RS_MAX_LOCKS]; - /** Lock modes of locks in \a rs_locks */ + /** Lock modes of locks in @rs_locks */ enum ldlm_mode rs_modes[RS_MAX_LOCKS]; }; @@ -818,7 +818,7 @@ struct ptlrpc_request { /** * List item to for replay list. Not yet committed requests get linked * there. - * Also see \a rq_replay comment above. + * Also see @rq_replay comment above. * It's also link chain on obd_export::exp_req_replay_queue */ struct list_head rq_replay_list; @@ -941,7 +941,7 @@ static inline bool ptlrpc_nrs_req_can_move(struct ptlrpc_request *req) /** @} nrs */ /** - * Returns 1 if request buffer at offset \a index was already swabbed + * Returns 1 if request buffer at offset @index was already swabbed */ static inline int lustre_req_swabbed(struct ptlrpc_request *req, size_t index) { @@ -950,7 +950,7 @@ static inline int lustre_req_swabbed(struct ptlrpc_request *req, size_t index) } /** - * Returns 1 if request reply buffer at offset \a index was already swabbed + * Returns 1 if request reply buffer at offset @index was already swabbed */ static inline int lustre_rep_swabbed(struct ptlrpc_request *req, size_t index) { @@ -975,7 +975,7 @@ static inline int ptlrpc_rep_need_swab(struct ptlrpc_request *req) } /** - * Mark request buffer at offset \a index that it was already swabbed + * Mark request buffer at offset @index that it was already swabbed */ static inline void lustre_set_req_swabbed(struct ptlrpc_request *req, size_t index) @@ -986,7 +986,7 @@ static inline void lustre_set_req_swabbed(struct ptlrpc_request *req, } /** - * Mark request reply buffer at offset \a index that it was already swabbed + * Mark request reply buffer at offset @index that it was already swabbed */ static inline void 
lustre_set_rep_swabbed(struct ptlrpc_request *req, size_t index) @@ -997,7 +997,7 @@ static inline void lustre_set_rep_swabbed(struct ptlrpc_request *req, } /** - * Convert numerical request phase value \a phase into text string description + * Convert numerical request phase value @phase into text string description */ static inline const char * ptlrpc_phase2str(enum rq_phase phase) @@ -1023,7 +1023,7 @@ static inline void lustre_set_rep_swabbed(struct ptlrpc_request *req, } /** - * Convert numerical request phase of the request \a req into text stringi + * Convert numerical request phase of the request @req into text string * description */ static inline const char * @@ -1096,7 +1096,7 @@ struct ptlrpc_bulk_page { /** Linkage to list of pages in a bulk */ struct list_head bp_link; /** - * Number of bytes in a page to transfer starting from \a bp_pageoffset + * Number of bytes in a page to transfer starting from @bp_pageoffset */ int bp_buflen; /** offset within a page */ @@ -1169,22 +1169,22 @@ static inline bool ptlrpc_is_bulk_op_passive(enum ptlrpc_bulk_op_type type) struct ptlrpc_bulk_frag_ops { /** - * Add a page \a page to the bulk descriptor \a desc - * Data to transfer in the page starts at offset \a pageoffset and - * amount of data to transfer from the page is \a len + * Add a page @page to the bulk descriptor @desc + * Data to transfer in the page starts at offset @pageoffset and + * amount of data to transfer from the page is @len */ void (*add_kiov_frag)(struct ptlrpc_bulk_desc *desc, struct page *page, int pageoffset, int len); /* - * Add a \a fragment to the bulk descriptor \a desc.
+ * Data to transfer in the fragment is pointed to by @frag + * The size of the fragment is @len */ int (*add_iov_frag)(struct ptlrpc_bulk_desc *desc, void *frag, int len); /** - * Uninitialize and free bulk descriptor \a desc. + * Uninitialize and free bulk descriptor @desc. * Works on bulk descriptors both from server and client side. */ void (*release_frags)(struct ptlrpc_bulk_desc *desc); @@ -1499,14 +1499,14 @@ struct ptlrpc_service { * will have multiple instances very soon (instance per CPT). * * it has four locks: - * \a scp_lock - * serialize operations on rqbd and requests waiting for preprocess - * \a scp_req_lock - * serialize operations active requests sent to this portal - * \a scp_at_lock - * serialize adaptive timeout stuff - * \a scp_rep_lock - * serialize operations on RS list (reply states) + * @scp_lock + * serialize operations on rqbd and requests waiting for preprocess + * @scp_req_lock + * serialize operations active requests sent to this portal + * @scp_at_lock + * serialize adaptive timeout stuff + * @scp_rep_lock + * serialize operations on RS list (reply states) * * We don't have any use-case to take two or more locks at the same time * for now, so there is no lock order issue. @@ -1708,10 +1708,10 @@ enum ptlrpcd_ctl_flags { * * Service compatibility function; the policy is compatible with all services. * - * \param[in] svc The service the policy is attempting to register with. - * \param[in] desc The policy descriptor + * @svc: The service the policy is attempting to register with. + * @desc: The policy descriptor * - * \retval true The policy is compatible with the service + * Returns: true The policy is compatible with the service * * \see ptlrpc_nrs_pol_desc::pd_compat() */ @@ -1726,11 +1726,11 @@ static inline bool nrs_policy_compat_all(const struct ptlrpc_service *svc, * service which is identified by its human-readable name at * ptlrpc_service::srv_name. * - * \param[in] svc The service the policy is attempting to register with. 
- * \param[in] desc The policy descriptor + * @svc: The service the policy is attempting to register with. + * @desc: The policy descriptor * - * \retval false The policy is not compatible with the service - * \retval true The policy is compatible with the service + * Returns: false The policy is not compatible with the service + * true The policy is compatible with the service * * \see ptlrpc_nrs_pol_desc::pd_compat() */ @@ -2130,7 +2130,7 @@ static inline int ptlrpc_status_ntoh(int n) #endif /** @} */ -/** Change request phase of \a req to \a new_phase */ +/** Change request phase of @req to @new_phase */ static inline void ptlrpc_rqphase_move(struct ptlrpc_request *req, enum rq_phase new_phase) { @@ -2162,7 +2162,7 @@ static inline int ptlrpc_status_ntoh(int n) } /** - * Returns true if request \a req got early reply and hard deadline is not met + * Returns true if request @req got early reply and hard deadline is not met */ static inline int ptlrpc_client_early(struct ptlrpc_request *req) @@ -2181,7 +2181,7 @@ static inline int ptlrpc_status_ntoh(int n) return req->rq_replied; } -/** Returns true if request \a req is in process of receiving server reply */ +/** Returns true if request @req is in process of receiving server reply */ static inline int ptlrpc_client_recv(struct ptlrpc_request *req) { diff --git a/drivers/staging/lustre/lustre/include/lustre_nrs.h b/drivers/staging/lustre/lustre/include/lustre_nrs.h index 822eeb3..f57756a 100644 --- a/drivers/staging/lustre/lustre/include/lustre_nrs.h +++ b/drivers/staging/lustre/lustre/include/lustre_nrs.h @@ -77,20 +77,20 @@ struct ptlrpc_nrs_pol_ops { /** * Called during policy registration; this operation is optional. * - * \param[in,out] policy The policy being initialized + * @policy: The policy being initialized */ int (*op_policy_init)(struct ptlrpc_nrs_policy *policy); /** * Called during policy unregistration; this operation is optional. 
* - * \param[in,out] policy The policy being unregistered/finalized + * @policy: The policy being unregistered/finalized */ void (*op_policy_fini)(struct ptlrpc_nrs_policy *policy); /** * Called when activating a policy via lprocfs; policies allocate and * initialize their resources here; this operation is optional. * - * \param[in,out] policy The policy being started + * @policy: The policy being started * * \see nrs_policy_start_locked() */ @@ -99,7 +99,7 @@ struct ptlrpc_nrs_pol_ops { * Called when deactivating a policy via lprocfs; policies deallocate * their resources here; this operation is optional * - * \param[in,out] policy The policy being stopped + * @policy: The policy being stopped * * \see __nrs_policy_stop() */ @@ -109,13 +109,13 @@ struct ptlrpc_nrs_pol_ops { * \e PTLRPC_NRS_CTL_START and \e PTLRPC_NRS_CTL_GET_INFO; analogous * to an ioctl; this operation is optional. * - * \param[in,out] policy The policy carrying out operation \a opc - * \param[in] opc The command operation being carried out - * \param[in,out] arg An generic buffer for communication between the - * user and the control operation + * @policy: The policy carrying out operation opc + * @opc: The command operation being carried out + * @arg: An generic buffer for communication between the + * user and the control operation * - * \retval -ve error - * \retval 0 success + * Return: -ve error + * 0 success * * \see ptlrpc_nrs_policy_control() */ @@ -128,31 +128,31 @@ struct ptlrpc_nrs_pol_ops { * service. Policies should return -ve for requests they do not wish * to handle. This operation is mandatory. * - * \param[in,out] policy The policy we're getting resources for. - * \param[in,out] nrq The request we are getting resources for. - * \param[in] parent The parent resource of the resource being - * requested; set to NULL if none. - * \param[out] resp The resource is to be returned here; the - * fallback policy in an NRS head should - * \e always return a non-NULL pointer value. 
- * \param[in] moving_req When set, signifies that this is an attempt - * to obtain resources for a request being moved - * to the high-priority NRS head by - * ldlm_lock_reorder_req(). - * This implies two things: - * 1. We are under obd_export::exp_rpc_lock and - * so should not sleep. - * 2. We should not perform non-idempotent or can - * skip performing idempotent operations that - * were carried out when resources were first - * taken for the request when it was initialized - * in ptlrpc_nrs_req_initialize(). - * - * \retval 0, +ve The level of the returned resource in the resource - * hierarchy; currently only 0 (for a non-leaf resource) - * and 1 (for a leaf resource) are supported by the - * framework. - * \retval -ve error + * @policy: The policy we're getting resources for. + * @nrq: The request we are getting resources for. + * @parent: The parent resource of the resource being + * requested; set to NULL if none. + * @resp: The resource is to be returned here; the + * fallback policy in an NRS head should + * \e always return a non-NULL pointer value. + * @moving_req: When set, signifies that this is an attempt + * to obtain resources for a request being moved + * to the high-priority NRS head by + * ldlm_lock_reorder_req(). + * This implies two things: + * 1. We are under obd_export::exp_rpc_lock and + * so should not sleep. + * 2. We should not perform non-idempotent or can + * skip performing idempotent operations that + * were carried out when resources were first + * taken for the request when it was initialized + * in ptlrpc_nrs_req_initialize(). + * + * Return: 0, +ve The level of the returned resource in the resource + * hierarchy; currently only 0 (for a non-leaf resource) + * and 1 (for a leaf resource) are supported by the + * framework. 
+ * -ve error * * \see ptlrpc_nrs_req_initialize() * \see ptlrpc_nrs_hpreq_add_nolock() @@ -167,8 +167,8 @@ struct ptlrpc_nrs_pol_ops { * Called when releasing references taken for resources in the resource * hierarchy for the request; this operation is optional. * - * \param[in,out] policy The policy the resource belongs to - * \param[in] res The resource to be freed + * @policy: The policy the resource belongs to + * @res: The resource to be freed * * \see ptlrpc_nrs_req_finalize() * \see ptlrpc_nrs_hpreq_add_nolock() @@ -181,15 +181,15 @@ struct ptlrpc_nrs_pol_ops { * Obtains a request for handling from the policy, and optionally * removes the request from the policy; this operation is mandatory. * - * \param[in,out] policy The policy to poll - * \param[in] peek When set, signifies that we just want to - * examine the request, and not handle it, so the - * request is not removed from the policy. - * \param[in] force When set, it will force a policy to return a - * request if it has one queued. + * @policy: The policy to poll + * @peek: When set, signifies that we just want to + * examine the request, and not handle it, so the + * request is not removed from the policy. + * @force: When set, it will force a policy to return a + * request if it has one queued. * - * \retval NULL No request available for handling - * \retval valid-pointer The request polled for handling + * Return: NULL No request available for handling + * valid-pointer The request polled for handling * * \see ptlrpc_nrs_req_get_nolock() */ @@ -200,11 +200,11 @@ struct ptlrpc_nrs_pol_ops { * Called when attempting to add a request to a policy for later * handling; this operation is mandatory. 
* - * \param[in,out] policy The policy on which to enqueue \a nrq - * \param[in,out] nrq The request to enqueue + * @policy: The policy on which to enqueue @nrq + * @nrq: The request to enqueue * - * \retval 0 success - * \retval != 0 error + * Return: 0 on success + * != 0 error * * \see ptlrpc_nrs_req_add_nolock() */ @@ -215,8 +215,8 @@ struct ptlrpc_nrs_pol_ops { * called after a request has been polled successfully from the policy * for handling; this operation is mandatory. * - * \param[in,out] policy The policy the request \a nrq belongs to - * \param[in,out] nrq The request to dequeue + * @policy: The policy the request @nrq belongs to + * @nrq: The request to dequeue * * \see ptlrpc_nrs_req_del_nolock() */ @@ -226,9 +226,9 @@ struct ptlrpc_nrs_pol_ops { * Called after the request being carried out. Could be used for * job/resource control; this operation is optional. * - * \param[in,out] policy The policy which is stopping to handle request - * \a nrq - * \param[in,out] nrq The request + * @policy: The policy which is stopping to handle request @nrq + * + * @nrq: The request * * \pre assert_spin_locked(&svcpt->scp_req_lock) * @@ -239,10 +239,10 @@ struct ptlrpc_nrs_pol_ops { /** * Registers the policy's lprocfs interface with a PTLRPC service. * - * \param[in] svc The service + * @svc: The service * - * \retval 0 success - * \retval != 0 error + * Return: 0 success + * != 0 error */ int (*op_lprocfs_init)(struct ptlrpc_service *svc); /** @@ -254,7 +254,7 @@ struct ptlrpc_nrs_pol_ops { * implementations of this method should make sure their operations are * safe in such cases. * - * \param[in] svc The service + * @svc: The service */ void (*op_lprocfs_fini)(struct ptlrpc_service *svc); }; @@ -410,7 +410,7 @@ struct ptlrpc_nrs_pol_conf { nrs_pol_desc_compat_t nc_compat; /** * Set for policies that support a single ptlrpc service, i.e. ones that - * have \a pd_compat set to nrs_policy_compat_one(). 
The variable value + * have @pd_compat set to nrs_policy_compat_one(). The variable value * depicts the name of the single service that such policies are * compatible with. */ diff --git a/drivers/staging/lustre/lustre/include/lustre_sec.h b/drivers/staging/lustre/lustre/include/lustre_sec.h index 5a5625e..66054d5 100644 --- a/drivers/staging/lustre/lustre/include/lustre_sec.h +++ b/drivers/staging/lustre/lustre/include/lustre_sec.h @@ -350,28 +350,28 @@ struct vfs_cred { struct ptlrpc_ctx_ops { /** - * To determine whether it's suitable to use the \a ctx for \a vcred. + * To determine whether it's suitable to use the @ctx for @vcred. */ int (*match)(struct ptlrpc_cli_ctx *ctx, struct vfs_cred *vcred); /** - * To bring the \a ctx uptodate. + * To bring the @ctx uptodate. */ int (*refresh)(struct ptlrpc_cli_ctx *ctx); /** - * Validate the \a ctx. + * Validate the @ctx. */ int (*validate)(struct ptlrpc_cli_ctx *ctx); /** - * Force the \a ctx to die. + * Force the @ctx to die. */ void (*force_die)(struct ptlrpc_cli_ctx *ctx, int grace); int (*display)(struct ptlrpc_cli_ctx *ctx, char *buf, int bufsize); /** - * Sign the request message using \a ctx. + * Sign the request message using @ctx. * * \pre req->rq_reqmsg point to request message. * \pre req->rq_reqlen is the request message length. @@ -383,7 +383,7 @@ struct ptlrpc_ctx_ops { int (*sign)(struct ptlrpc_cli_ctx *ctx, struct ptlrpc_request *req); /** - * Verify the reply message using \a ctx. + * Verify the reply message using @ctx. * * \pre req->rq_repdata point to reply message with signature. * \pre req->rq_repdata_len is the total reply message length. @@ -395,7 +395,7 @@ struct ptlrpc_ctx_ops { int (*verify)(struct ptlrpc_cli_ctx *ctx, struct ptlrpc_request *req); /** - * Encrypt the request message using \a ctx. + * Encrypt the request message using @ctx. * * \pre req->rq_reqmsg point to request message in clear text. * \pre req->rq_reqlen is the request message length. 
@@ -407,7 +407,7 @@ struct ptlrpc_ctx_ops { int (*seal)(struct ptlrpc_cli_ctx *ctx, struct ptlrpc_request *req); /** - * Decrypt the reply message using \a ctx. + * Decrypt the reply message using @ctx. * * \pre req->rq_repdata point to encrypted reply message. * \pre req->rq_repdata_len is the total cipher text length. @@ -498,11 +498,11 @@ struct ptlrpc_cli_ctx { */ struct ptlrpc_sec_cops { /** - * Given an \a imp, create and initialize a ptlrpc_sec structure. - * \param ctx service context: - * - regular import: \a ctx should be NULL; - * - reverse import: \a ctx is obtained from incoming request. - * \param flavor specify what flavor to use. + * Given an @imp, create and initialize a ptlrpc_sec structure. + * @ctx service context: + * - regular import: @ctx should be NULL; + * - reverse import: @ctx is obtained from incoming request. + * @flavor specify what flavor to use. * * When necessary, policy module is responsible for taking reference * on the import. @@ -531,9 +531,9 @@ struct ptlrpc_sec_cops { void (*kill_sec)(struct ptlrpc_sec *sec); /** - * Given \a vcred, lookup and/or create its context. The policy module + * Given @vcred, lookup and/or create its context. The policy module * is supposed to maintain its own context cache. - * XXX currently \a create and \a remove_dead is always 1, perhaps + * XXX currently @create and @remove_dead is always 1, perhaps * should be removed completely. * * \see null_lookup_ctx(), plain_lookup_ctx(), gss_sec_lookup_ctx_kr(). @@ -543,11 +543,11 @@ struct ptlrpc_sec_cops { int create, int remove_dead); /** - * Called then the reference of \a ctx dropped to 0. The policy module + * Called when the reference of @ctx drops to 0. The policy module * is supposed to destroy this context or whatever else according to * its cache maintenance mechanism. * - * \param sync if zero, we shouldn't wait for the context being + * @sync if zero, we shouldn't wait for the context being * destroyed completely. 
* * \see plain_release_ctx(), gss_sec_release_ctx_kr(). @@ -558,10 +558,10 @@ struct ptlrpc_sec_cops { /** * Flush the context cache. * - * \param uid context of which user, -1 means all contexts. - * \param grace if zero, the PTLRPC_CTX_UPTODATE_BIT of affected + * @uid context of which user, -1 means all contexts. + * @grace if zero, the PTLRPC_CTX_UPTODATE_BIT of affected * contexts should be cleared immediately. - * \param force if zero, only idle contexts will be flushed. + * @force if zero, only idle contexts will be flushed. * * \see plain_flush_ctx_cache(), gss_sec_flush_ctx_cache_kr(). */ @@ -577,7 +577,7 @@ struct ptlrpc_sec_cops { void (*gc_ctx)(struct ptlrpc_sec *sec); /** - * Given an context \a ctx, install a corresponding reverse service + * Given a context @ctx, install a corresponding reverse service * context on client side. * XXX currently it's only used by GSS module, maybe we should remove * this from general API. @@ -586,13 +586,13 @@ struct ptlrpc_sec_cops { struct ptlrpc_cli_ctx *ctx); /** - * To allocate request buffer for \a req. + * To allocate request buffer for @req. * * \pre req->rq_reqmsg == NULL. * \pre req->rq_reqbuf == NULL, otherwise it must be pre-allocated, * we are not supposed to free it. * \post if success, req->rq_reqmsg point to a buffer with size - * at least \a lustre_msg_size. + * at least @lustre_msg_size. * * \see null_alloc_reqbuf(), plain_alloc_reqbuf(), gss_alloc_reqbuf(). */ @@ -600,7 +600,7 @@ struct ptlrpc_sec_cops { int lustre_msg_size); /** - * To free request buffer for \a req. + * To free request buffer for @req. * * \pre req->rq_reqbuf != NULL. * @@ -609,12 +609,12 @@ struct ptlrpc_sec_cops { void (*free_reqbuf)(struct ptlrpc_sec *sec, struct ptlrpc_request *req); /** - * To allocate reply buffer for \a req. + * To allocate reply buffer for @req. * * \pre req->rq_repbuf == NULL. 
* \post if success, req->rq_repbuf point to a buffer with size * req->rq_repbuf_len, the size should be large enough to receive - * reply which be transformed from \a lustre_msg_size of clear text. + * reply which be transformed from @lustre_msg_size of clear text. * * \see null_alloc_repbuf(), plain_alloc_repbuf(), gss_alloc_repbuf(). */ @@ -622,7 +622,7 @@ struct ptlrpc_sec_cops { int lustre_msg_size); /** - * To free reply buffer for \a req. + * To free reply buffer for @req. * * \pre req->rq_repbuf != NULL. * \post req->rq_repbuf == NULL. @@ -633,9 +633,9 @@ struct ptlrpc_sec_cops { void (*free_repbuf)(struct ptlrpc_sec *sec, struct ptlrpc_request *req); /** - * To expand the request buffer of \a req, thus the \a segment in + * To expand the request buffer of @req, thus the @segment in * the request message pointed by req->rq_reqmsg can accommodate - * at least \a newsize of data. + * at least @newsize of data. * * \pre req->rq_reqmsg->lm_buflens[segment] < newsize. * @@ -662,13 +662,16 @@ struct ptlrpc_sec_sops { * req->rq_reqdata_len; and the message has been unpacked to * host byte order. * - * \retval SECSVC_OK success, req->rq_reqmsg point to request message - * in clear text, size is req->rq_reqlen; req->rq_svc_ctx is set; - * req->rq_sp_from is decoded from request. - * \retval SECSVC_COMPLETE success, the request has been fully - * processed, and reply message has been prepared; req->rq_sp_from is - * decoded from request. - * \retval SECSVC_DROP failed, this request should be dropped. + * Return: SECSVC_OK success, req->rq_reqmsg point to request + * message in clear text, size is req->rq_reqlen; + * req->rq_svc_ctx is set; req->rq_sp_from is decoded + * from request. + * + * SECSVC_COMPLETE success, the request has been fully + * processed, and reply message has been prepared; + * req->rq_sp_from is decoded from request. + * + * SECSVC_DROP failed, this request should be dropped. * * \see null_accept(), plain_accept(), gss_svc_accept_kr(). 
*/ @@ -687,7 +690,7 @@ struct ptlrpc_sec_sops { int (*authorize)(struct ptlrpc_request *req); /** - * Invalidate server context \a ctx. + * Invalidate server context @ctx. * * \see gss_svc_invalidate_ctx(). */ @@ -696,7 +699,7 @@ struct ptlrpc_sec_sops { /** * Allocate a ptlrpc_reply_state. * - * \param msgsize size of the reply message in clear text. + * @msgsize size of the reply message in clear text. * \pre if req->rq_reply_state != NULL, then it's pre-allocated, we * should simply use it; otherwise we'll responsible for allocating * a new one. @@ -713,14 +716,14 @@ struct ptlrpc_sec_sops { void (*free_rs)(struct ptlrpc_reply_state *rs); /** - * Release the server context \a ctx. + * Release the server context @ctx. * * \see gss_svc_free_ctx(). */ void (*free_ctx)(struct ptlrpc_svc_ctx *ctx); /** - * Install a reverse context based on the server context \a ctx. + * Install a reverse context based on the server context @ctx. * * \see gss_svc_install_rctx_kr(). */ diff --git a/drivers/staging/lustre/lustre/include/obd_class.h b/drivers/staging/lustre/lustre/include/obd_class.h index 32d4ab6..e4cde19 100644 --- a/drivers/staging/lustre/lustre/include/obd_class.h +++ b/drivers/staging/lustre/lustre/include/obd_class.h @@ -677,10 +677,11 @@ static inline struct obd_uuid *obd_get_uuid(struct obd_export *exp) } /* - * Create a new /a exp on device /a obd for the uuid /a cluuid - * @param exp New export handle - * @param d Connect data, supported flags are set, flags also understood - * by obd are returned. + * Create a new @exp on device @obd for the uuid @cluuid + * + * @exp: New export handle + * @d: Connect data, supported flags are set, flags also understood + * by obd are returned. 
*/ static inline int obd_connect(const struct lu_env *env, struct obd_export **exp, struct obd_device *obd, diff --git a/drivers/staging/lustre/lustre/include/seq_range.h b/drivers/staging/lustre/lustre/include/seq_range.h index 884d4d4..dbf73ea 100644 --- a/drivers/staging/lustre/lustre/include/seq_range.h +++ b/drivers/staging/lustre/lustre/include/seq_range.h @@ -38,7 +38,7 @@ #include /** - * computes the sequence range type \a range + * computes the sequence range type @range */ static inline unsigned int fld_range_type(const struct lu_seq_range *range) @@ -47,7 +47,7 @@ static inline unsigned int fld_range_type(const struct lu_seq_range *range) } /** - * Is this sequence range an OST? \a range + * Is this sequence range an OST? @range */ static inline bool fld_range_is_ost(const struct lu_seq_range *range) @@ -56,7 +56,7 @@ static inline bool fld_range_is_ost(const struct lu_seq_range *range) } /** - * Is this sequence range an MDT? \a range + * Is this sequence range an MDT? @range */ static inline bool fld_range_is_mdt(const struct lu_seq_range *range) @@ -68,7 +68,7 @@ static inline bool fld_range_is_mdt(const struct lu_seq_range *range) * ANY range is only used when the fld client sends a fld query request, * but it does not know whether the seq is an MDT or OST, so it will send the * request with ANY type, which means any seq type from the lookup can be - * expected. /a range + * expected. 
@range */ static inline unsigned int fld_range_is_any(const struct lu_seq_range *range) { @@ -76,7 +76,7 @@ static inline unsigned int fld_range_is_any(const struct lu_seq_range *range) } /** - * Apply flags to range \a range \a flags + * Apply flags @flags to range @range */ static inline void fld_range_set_type(struct lu_seq_range *range, @@ -86,7 +86,7 @@ static inline void fld_range_set_type(struct lu_seq_range *range, } /** - * Add MDT to range type \a range + * Add MDT to range type @range */ static inline void fld_range_set_mdt(struct lu_seq_range *range) @@ -95,7 +95,7 @@ static inline void fld_range_set_mdt(struct lu_seq_range *range) } /** - * Add OST to range type \a range + * Add OST to range type @range */ static inline void fld_range_set_ost(struct lu_seq_range *range) @@ -104,7 +104,7 @@ static inline void fld_range_set_ost(struct lu_seq_range *range) } /** - * Add ANY to range type \a range + * Add ANY to range type @range */ static inline void fld_range_set_any(struct lu_seq_range *range) @@ -113,7 +113,7 @@ static inline void fld_range_set_any(struct lu_seq_range *range) } /** - * computes width of given sequence range \a range + * computes width of given sequence range @range */ static inline u64 lu_seq_range_space(const struct lu_seq_range *range) @@ -122,7 +122,7 @@ static inline u64 lu_seq_range_space(const struct lu_seq_range *range) } /** - * initialize range to zero \a range + * initialize range to zero @range */ static inline void lu_seq_range_init(struct lu_seq_range *range) @@ -131,7 +131,7 @@ static inline void lu_seq_range_init(struct lu_seq_range *range) } /** - * check if given seq id \a s is within given range \a range + * check if given seq id @s is within given range @range */ static inline bool lu_seq_range_within(const struct lu_seq_range *range, @@ -141,7 +141,7 @@ static inline bool lu_seq_range_within(const struct lu_seq_range *range, } /** - * Is the range sane? Is the end after the beginning? \a range + * Is the range sane? 
Is the end after the beginning? @range */ static inline bool lu_seq_range_is_sane(const struct lu_seq_range *range) @@ -150,7 +150,7 @@ static inline bool lu_seq_range_is_sane(const struct lu_seq_range *range) } /** - * Is the range 0? \a range + * Is the range 0? @range */ static inline bool lu_seq_range_is_zero(const struct lu_seq_range *range) @@ -159,7 +159,7 @@ static inline bool lu_seq_range_is_zero(const struct lu_seq_range *range) } /** - * Is the range out of space? \a range + * Is the range out of space? @range */ static inline bool lu_seq_range_is_exhausted(const struct lu_seq_range *range) @@ -169,7 +169,7 @@ static inline bool lu_seq_range_is_exhausted(const struct lu_seq_range *range) /** * return 0 if two ranges have the same location, nonzero if they are - * different \a r1 \a r2 + * different @r1 @r2 */ static inline int lu_seq_range_compare_loc(const struct lu_seq_range *r1, @@ -181,7 +181,7 @@ static inline int lu_seq_range_compare_loc(const struct lu_seq_range *r1, #if !defined(__REQ_LAYOUT_USER__) /** - * byte swap range structure \a range + * byte swap range structure @range */ void lustre_swab_lu_seq_range(struct lu_seq_range *range); From patchwork Sat Mar 2 19:12:20 2019 X-Patchwork-Submitter: James Simmons X-Patchwork-Id: 10836727
From: James Simmons To: Andreas Dilger , Oleg Drokin , NeilBrown Date: Sat, 2 Mar 2019 14:12:20 -0500 Message-Id: <1551553944-6419-4-git-send-email-jsimmons@infradead.org> In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> References: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> Subject: [lustre-devel] [PATCH 3/7] ptlrpc: move comments to sphinix format
Cc: Lustre Development List Lustre comments were written for DocBook, which is no longer used by the Linux kernel. Move all the DocBook handling to Sphinx. Signed-off-by: James Simmons --- drivers/staging/lustre/lustre/ptlrpc/client.c | 72 ++--- drivers/staging/lustre/lustre/ptlrpc/import.c | 6 +- drivers/staging/lustre/lustre/ptlrpc/layout.c | 102 +++---- .../staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c | 10 +- drivers/staging/lustre/lustre/ptlrpc/niobuf.c | 14 +- drivers/staging/lustre/lustre/ptlrpc/nrs.c | 305 +++++++++++---------- drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c | 68 ++--- .../staging/lustre/lustre/ptlrpc/pack_generic.c | 8 +- drivers/staging/lustre/lustre/ptlrpc/sec.c | 104 +++---- drivers/staging/lustre/lustre/ptlrpc/sec_config.c | 2 +- drivers/staging/lustre/lustre/ptlrpc/service.c | 4 +- 11 files changed, 350 insertions(+), 345 deletions(-) diff --git a/drivers/staging/lustre/lustre/ptlrpc/client.c b/drivers/staging/lustre/lustre/ptlrpc/client.c index b2b1104..eb5d22a 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/client.c +++ b/drivers/staging/lustre/lustre/ptlrpc/client.c @@ -65,7 +65,7 @@ static int ptlrpc_unregister_reply(struct ptlrpc_request *request, int async); /** - * Initialize passed in client structure \a cl. + * Initialize passed in client structure @cl. 
 */ void ptlrpc_init_client(int req_portal, int rep_portal, char *name, struct ptlrpc_client *cl) @@ -77,7 +77,7 @@ void ptlrpc_init_client(int req_portal, int rep_portal, char *name, EXPORT_SYMBOL(ptlrpc_init_client); /** - * Return PortalRPC connection for remote uud \a uuid + * Return PortalRPC connection for remote uuid @uuid */ struct ptlrpc_connection *ptlrpc_uuid_to_connection(struct obd_uuid *uuid, lnet_nid_t nid4refnet) @@ -167,8 +167,8 @@ struct ptlrpc_bulk_desc *ptlrpc_new_bulk(unsigned int nfrags, } /** - * Prepare bulk descriptor for specified outgoing request \a req that - * can fit \a nfrags * pages. \a type is bulk type. \a portal is where + * Prepare bulk descriptor for specified outgoing request @req that + * can fit @nfrags * pages. @type is bulk type. @portal is where * the bulk to be sent. Used on client-side. * Returns pointer to newly allocated initialized bulk descriptor or NULL on * error. @@ -296,7 +296,7 @@ void ptlrpc_at_set_req_timeout(struct ptlrpc_request *req) /* * non-AT settings * - * \a imp_server_timeout means this is reverse import and + * @imp_server_timeout means this is reverse import and * we send (currently only) ASTs to the client and cannot afford * to wait too long for the reply, otherwise the other client * (because of which we are sending this request) would @@ -505,7 +505,7 @@ void ptlrpc_request_cache_free(struct ptlrpc_request *req) } /** - * Wind down request pool \a pool. 
 * Frees all requests from the pool too */ void ptlrpc_free_rq_pool(struct ptlrpc_request_pool *pool) @@ -525,7 +525,7 @@ void ptlrpc_free_rq_pool(struct ptlrpc_request_pool *pool) EXPORT_SYMBOL(ptlrpc_free_rq_pool); /** - * Allocates, initializes and adds \a num_rq requests to the pool \a pool + * Allocates, initializes and adds @num_rq requests to the pool @pool */ int ptlrpc_add_rqs_to_pool(struct ptlrpc_request_pool *pool, int num_rq) { @@ -568,9 +568,9 @@ int ptlrpc_add_rqs_to_pool(struct ptlrpc_request_pool *pool, int num_rq) /** * Create and initialize new request pool with given attributes: - * \a num_rq - initial number of requests to create for the pool - * \a msgsize - maximum message size possible for requests in thid pool - * \a populate_pool - function to be called when more requests need to be added + * @num_rq - initial number of requests to create for the pool + * @msgsize - maximum message size possible for requests in this pool + * @populate_pool - function to be called when more requests need to be added * to the pool * Returns pointer to newly created pool or NULL on error. */ @@ -601,7 +601,7 @@ struct ptlrpc_request_pool * EXPORT_SYMBOL(ptlrpc_init_rq_pool); /** - * Fetches one request from pool \a pool + * Fetches one request from pool @pool */ static struct ptlrpc_request * ptlrpc_prep_req_from_pool(struct ptlrpc_request_pool *pool) @@ -643,7 +643,7 @@ struct ptlrpc_request_pool * } /** - * Returns freed \a request to pool. + * Returns freed @request to pool. */ static void __ptlrpc_free_req_to_pool(struct ptlrpc_request *request) { @@ -816,8 +816,8 @@ int ptlrpc_request_pack(struct ptlrpc_request *request, EXPORT_SYMBOL(ptlrpc_request_pack); /** - * Helper function to allocate new request on import \a imp - * and possibly using existing request from pool \a pool if provided. 
* Returns allocated request structure with import field filled or * NULL on error. */ @@ -852,7 +852,7 @@ struct ptlrpc_request *__ptlrpc_request_alloc(struct obd_import *imp, /** * Helper function for creating a request. * Calls __ptlrpc_request_alloc to allocate new request structure and inits - * buffer structures according to capsule template \a format. + * buffer structures according to capsule template @format. * Returns allocated request structure pointer or NULL on error. */ static struct ptlrpc_request * @@ -872,8 +872,8 @@ struct ptlrpc_request *__ptlrpc_request_alloc(struct obd_import *imp, } /** - * Allocate new request structure for import \a imp and initialize its - * buffer structure according to capsule template \a format. + * Allocate new request structure for import @imp and initialize its + * buffer structure according to capsule template @format. */ struct ptlrpc_request *ptlrpc_request_alloc(struct obd_import *imp, const struct req_format *format) @@ -883,8 +883,8 @@ struct ptlrpc_request *ptlrpc_request_alloc(struct obd_import *imp, EXPORT_SYMBOL(ptlrpc_request_alloc); /** - * Allocate new request structure for import \a imp from pool \a pool and - * initialize its buffer structure according to capsule template \a format. + * Allocate new request structure for import @imp from pool @pool and + * initialize its buffer structure according to capsule template @format. */ struct ptlrpc_request *ptlrpc_request_alloc_pool(struct obd_import *imp, struct ptlrpc_request_pool *pool, @@ -908,7 +908,7 @@ void ptlrpc_request_free(struct ptlrpc_request *request) EXPORT_SYMBOL(ptlrpc_request_free); /** - * Allocate new request for operation \a opcode and immediately pack it for + * Allocate new request for operation @opcode and immediately pack it for * network transfer. * Only used for simple requests like OBD_PING where the only important * part of the request is operation itself. 
@@ -1178,10 +1178,10 @@ static int ptlrpc_import_delay_req(struct obd_import *imp, * Decide if the error message should be printed to the console or not. * Makes its decision based on request type, status, and failure frequency. * - * \param[in] req request that failed and may need a console message + * @req: request that failed and may need a console message * - * \retval false if no message should be printed - * \retval true if console message should be printed + * Return: false if no message should be printed + * true if console message should be printed */ static bool ptlrpc_console_allow(struct ptlrpc_request *req) { @@ -1285,7 +1285,7 @@ u64 ptlrpc_known_replied_xid(struct obd_import *imp) } /** - * Callback function called when client receives RPC reply for \a req. + * Callback function called when client receives RPC reply for @req. * Returns 0 on success or error code. * The return value would be assigned to req->rq_status by the caller * as request processing status. @@ -1482,7 +1482,7 @@ static int after_reply(struct ptlrpc_request *req) } /** - * Helper function to send request \a req over the network for the first time + * Helper function to send request @req over the network for the first time * Also adjusts request phase. * Returns 0 on success or error code. */ @@ -1629,7 +1629,7 @@ static inline int ptlrpc_set_producer(struct ptlrpc_request_set *set) } /** - * this sends any unsent RPCs in \a set and returns 1 if all are sent + * this sends any unsent RPCs in @set and returns 1 if all are sent * and no more replies are expected. * (it is possible to get less replies than requests sent e.g. due to timed out * requests or requests that we had trouble to send out) @@ -2046,7 +2046,7 @@ int ptlrpc_check_set(const struct lu_env *env, struct ptlrpc_request_set *set) EXPORT_SYMBOL(ptlrpc_check_set); /** - * Time out request \a req. is \a async_unlink is set, that means do not wait + * Time out request @req. 
if @async_unlink is set, that means do not wait * until LNet actually confirms network buffer unlinking. * Return 1 if we should give up further retrying attempts or 0 otherwise. */ @@ -2119,7 +2119,7 @@ int ptlrpc_expire_one_request(struct ptlrpc_request *req, int async_unlink) } /** - * Time out all uncompleted requests in request set pointed by \a data + * Time out all uncompleted requests in request set pointed by @data * Called when wait_event_idle_timeout times out. */ void ptlrpc_expired_set(struct ptlrpc_request_set *set) @@ -2153,7 +2153,7 @@ void ptlrpc_expired_set(struct ptlrpc_request_set *set) /** * Interrupts (sets interrupted flag) all uncompleted requests in - * a set \a data. Called when l_wait_event_abortable_timeout receives signal. + * a set @data. Called when l_wait_event_abortable_timeout receives signal. */ static void ptlrpc_interrupted_set(struct ptlrpc_request_set *set) { @@ -2343,7 +2343,7 @@ int ptlrpc_set_wait(struct ptlrpc_request_set *set) * Called when request count reached zero and request needs to be freed. * Removes request from all sorts of sending/replay lists it might be on, * frees network buffers if any are present. - * If \a locked is set, that means caller is already holding import imp_lock + * If @locked is set, that means caller is already holding import imp_lock * and so we no longer need to reobtain it (for certain lists manipulations) */ static void __ptlrpc_free_req(struct ptlrpc_request *request, int locked) @@ -2403,8 +2403,8 @@ static void __ptlrpc_free_req(struct ptlrpc_request *request, int locked) /** * Helper function - * Drops one reference count for request \a request. - * \a locked set indicates that caller holds import imp_lock. + * Drops one reference count for request @request. + * @locked set indicates that caller holds import imp_lock. * Frees the request when reference count reaches zero. 
* * RETURN 1 the request is freed @@ -2466,7 +2466,7 @@ void ptlrpc_req_finished(struct ptlrpc_request *request) EXPORT_SYMBOL(ptlrpc_req_finished); /** - * Returns xid of a \a request + * Returns xid of a @request */ u64 ptlrpc_req_xid(struct ptlrpc_request *request) { @@ -2699,7 +2699,7 @@ void ptlrpc_resend_req(struct ptlrpc_request *req) } /** - * Grab additional reference on a request \a req + * Grab additional reference on a request @req */ struct ptlrpc_request *ptlrpc_request_addref(struct ptlrpc_request *req) { @@ -2949,7 +2949,7 @@ int ptlrpc_replay_req(struct ptlrpc_request *req) } /** - * Aborts all in-flight request on import \a imp sending and delayed lists + * Aborts all in-flight requests on import @imp sending and delayed lists */ void ptlrpc_abort_inflight(struct obd_import *imp) { @@ -3002,7 +3002,7 @@ void ptlrpc_abort_inflight(struct obd_import *imp) } /** - * Abort all uncompleted requests in request set \a set + * Abort all uncompleted requests in request set @set */ void ptlrpc_abort_set(struct ptlrpc_request_set *set) { diff --git a/drivers/staging/lustre/lustre/ptlrpc/import.c b/drivers/staging/lustre/lustre/ptlrpc/import.c index 7bb2e06..18823d5 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/import.c +++ b/drivers/staging/lustre/lustre/ptlrpc/import.c @@ -56,7 +56,7 @@ struct ptlrpc_connect_async_args { }; /** - * Updates import \a imp current state to provided \a state value + * Updates import @imp current state to provided @state value * Helper function. Must be called under imp_lock. */ static void __import_set_state(struct obd_import *imp, @@ -435,7 +435,7 @@ int ptlrpc_reconnect_import(struct obd_import *imp) EXPORT_SYMBOL(ptlrpc_reconnect_import); /** - * Connection on import \a imp is changed to another one (if more than one is + * Connection on import @imp is changed to another one (if more than one is * present). 
We typically chose connection that we have not tried to connect to * the longest */ @@ -579,7 +579,7 @@ static int ptlrpc_first_transno(struct obd_import *imp, u64 *transno) } /** - * Attempt to (re)connect import \a imp. This includes all preparations, + * Attempt to (re)connect import @imp. This includes all preparations, * initializing CONNECT RPC request and passing it to ptlrpcd for * actual sending. * Returns 0 on success or error code. diff --git a/drivers/staging/lustre/lustre/ptlrpc/layout.c b/drivers/staging/lustre/lustre/ptlrpc/layout.c index d9f2b3d..3bebfd7 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/layout.c +++ b/drivers/staging/lustre/lustre/ptlrpc/layout.c @@ -771,8 +771,8 @@ struct req_msg_field { const char *rmf_name; /** * Field length. (-1) means "variable length". If the - * \a RMF_F_STRUCT_ARRAY flag is set the field is also variable-length, - * but the actual size must be a whole multiple of \a rmf_size. + * @RMF_F_STRUCT_ARRAY flag is set the field is also variable-length, + * but the actual size must be a whole multiple of @rmf_size. */ const int rmf_size; void (*rmf_swabber)(void *); @@ -786,13 +786,13 @@ enum rmf_flags { */ RMF_F_STRING = BIT(0), /** - * The field's buffer size need not match the declared \a rmf_size. + * The field's buffer size need not match the declared @rmf_size. */ RMF_F_NO_SIZE_CHECK = BIT(1), /** - * The field's buffer size must be a whole multiple of the declared \a - * rmf_size and the \a rmf_swabber function must work on the declared \a - * rmf_size worth of bytes. + * The field's buffer size must be a whole multiple of the declared + * @rmf_size and the @rmf_swabber function must work on the declared + * @rmf_size worth of bytes. 
*/ RMF_F_STRUCT_ARRAY = BIT(2) }; @@ -1603,8 +1603,8 @@ struct req_format RQF_OST_LADVISE = #define FMT_FIELD(fmt, i, j) ((fmt)->rf_fields[(i)].d[(j)]) /** - * Initializes the capsule abstraction by computing and setting the \a rf_idx - * field of RQFs and the \a rmf_offset field of RMFs. + * Initializes the capsule abstraction by computing and setting the @rf_idx + * field of RQFs and the @rmf_offset field of RMFs. */ int req_layout_init(void) { @@ -1643,11 +1643,11 @@ void req_layout_fini(void) EXPORT_SYMBOL(req_layout_fini); /** - * Initializes the expected sizes of each RMF in a \a pill (\a rc_area) to -1. + * Initializes the expected sizes of each RMF in a @pill (@rc_area) to -1. * * Actual/expected field sizes are set elsewhere in functions in this file: * req_capsule_init(), req_capsule_server_pack(), req_capsule_set_size() and - * req_capsule_msg_size(). The \a rc_area information is used by. + * req_capsule_msg_size(). The @rc_area information is used by * ptlrpc_request_set_replen(). */ static void req_capsule_init_area(struct req_capsule *pill) @@ -1663,7 +1663,7 @@ static void req_capsule_init_area(struct req_capsule *pill) /** * Initialize a pill. * - * The \a location indicates whether the caller is executing on the client side + * The @location indicates whether the caller is executing on the client side * (RCL_CLIENT) or server side (RCL_SERVER).. */ void req_capsule_init(struct req_capsule *pill, @@ -1717,7 +1717,7 @@ static struct lustre_msg *__req_msg(const struct req_capsule *pill, } /** - * Set the format (\a fmt) of a \a pill; format changes are not allowed here + * Set the format (@fmt) of a @pill; format changes are not allowed here * (see req_capsule_extend()). 
*/ void req_capsule_set(struct req_capsule *pill, const struct req_format *fmt) @@ -1730,12 +1730,12 @@ void req_capsule_set(struct req_capsule *pill, const struct req_format *fmt) EXPORT_SYMBOL(req_capsule_set); /** - * Fills in any parts of the \a rc_area of a \a pill that haven't been filled in + * Fills in any parts of the @rc_area of a @pill that haven't been filled in * yet. - * \a rc_area is an array of REQ_MAX_FIELD_NR elements, used to store sizes of - * variable-sized fields. The field sizes come from the declared \a rmf_size - * field of a \a pill's \a rc_fmt's RMF's. + * @rc_area is an array of REQ_MAX_FIELD_NR elements, used to store sizes of + * variable-sized fields. The field sizes come from the declared @rmf_size + * field of a @pill's @rc_fmt's RMF's. */ size_t req_capsule_filled_sizes(struct req_capsule *pill, enum req_location loc) @@ -1766,7 +1766,7 @@ size_t req_capsule_filled_sizes(struct req_capsule *pill, /** * Capsule equivalent of lustre_pack_request() and lustre_pack_reply(). * - * This function uses the \a pill's \a rc_area as filled in by + * This function uses the @pill's @rc_area as filled in by * req_capsule_set_size() or req_capsule_filled_sizes() (the latter is called by * this function). */ @@ -1793,8 +1793,8 @@ int req_capsule_server_pack(struct req_capsule *pill) EXPORT_SYMBOL(req_capsule_server_pack); /** - * Returns the PTLRPC request or reply (\a loc) buffer offset of a \a pill - * corresponding to the given RMF (\a field). + * Returns the PTLRPC request or reply (@loc) buffer offset of a @pill + * corresponding to the given RMF (@field). */ u32 __req_capsule_offset(const struct req_capsule *pill, const struct req_msg_field *field, @@ -1886,13 +1886,13 @@ u32 __req_capsule_offset(const struct req_capsule *pill, } /** - * Returns the pointer to a PTLRPC request or reply (\a loc) buffer of a \a pill - * corresponding to the given RMF (\a field). 
+ * Returns the pointer to a PTLRPC request or reply (@loc) buffer of a @pill + * corresponding to the given RMF (@field). * - * The buffer will be swabbed using the given \a swabber. If \a swabber == NULL - * then the \a rmf_swabber from the RMF will be used. Soon there will be no - * calls to __req_capsule_get() with a non-NULL \a swabber; \a swabber will then - * be removed. Fields with the \a RMF_F_STRUCT_ARRAY flag set will have each + * The buffer will be swabbed using the given @swabber. If @swabber == NULL + * then the @rmf_swabber from the RMF will be used. Soon there will be no + * calls to __req_capsule_get() with a non-NULL @swabber; @swabber will then + * be removed. Fields with the @RMF_F_STRUCT_ARRAY flag set will have each * element of the array swabbed. */ static void *__req_capsule_get(struct req_capsule *pill, @@ -1960,7 +1960,7 @@ static void *__req_capsule_get(struct req_capsule *pill, /** * Trivial wrapper around __req_capsule_get(), that returns the PTLRPC request - * buffer corresponding to the given RMF (\a field) of a \a pill. + * buffer corresponding to the given RMF (@field) of a @pill. */ void *req_capsule_client_get(struct req_capsule *pill, const struct req_msg_field *field) @@ -1970,7 +1970,7 @@ void *req_capsule_client_get(struct req_capsule *pill, EXPORT_SYMBOL(req_capsule_client_get); /** - * Same as req_capsule_client_get(), but with a \a swabber argument. + * Same as req_capsule_client_get(), but with a @swabber argument. * * Currently unused; will be removed when req_capsule_server_swab_get() is * unused too. @@ -1986,8 +1986,8 @@ void *req_capsule_client_swab_get(struct req_capsule *pill, /** * Utility that combines req_capsule_set_size() and req_capsule_client_get(). * - * First the \a pill's request \a field's size is set (\a rc_area) using - * req_capsule_set_size() with the given \a len. 
Then the actual buffer is + * First the @pill's request @field's size is set (@rc_area) using + * req_capsule_set_size() with the given @len. Then the actual buffer is * returned. */ void *req_capsule_client_sized_get(struct req_capsule *pill, @@ -2001,7 +2001,7 @@ void *req_capsule_client_sized_get(struct req_capsule *pill, /** * Trivial wrapper around __req_capsule_get(), that returns the PTLRPC reply - * buffer corresponding to the given RMF (\a field) of a \a pill. + * buffer corresponding to the given RMF (@field) of a @pill. */ void *req_capsule_server_get(struct req_capsule *pill, const struct req_msg_field *field) @@ -2011,7 +2011,7 @@ void *req_capsule_server_get(struct req_capsule *pill, EXPORT_SYMBOL(req_capsule_server_get); /** - * Same as req_capsule_server_get(), but with a \a swabber argument. + * Same as req_capsule_server_get(), but with a @swabber argument. * * Ideally all swabbing should be done pursuant to RMF definitions, with no * swabbing done outside this capsule abstraction. @@ -2027,8 +2027,8 @@ void *req_capsule_server_swab_get(struct req_capsule *pill, /** * Utility that combines req_capsule_set_size() and req_capsule_server_get(). * - * First the \a pill's request \a field's size is set (\a rc_area) using - * req_capsule_set_size() with the given \a len. Then the actual buffer is + * First the @pill's request @field's size is set (@rc_area) using + * req_capsule_set_size() with the given @len. Then the actual buffer is * returned. */ void *req_capsule_server_sized_get(struct req_capsule *pill, @@ -2050,8 +2050,8 @@ void *req_capsule_server_sized_swab_get(struct req_capsule *pill, EXPORT_SYMBOL(req_capsule_server_sized_swab_get); /** - * Set the size of the PTLRPC request/reply (\a loc) buffer for the given \a - * field of the given \a pill. + * Set the size of the PTLRPC request/reply (@loc) buffer for the given + * @field of the given @pill. * * This function must be used when constructing variable sized fields of a * request or reply. 
@@ -2086,8 +2086,8 @@ void req_capsule_set_size(struct req_capsule *pill, EXPORT_SYMBOL(req_capsule_set_size); /** - * Return the actual PTLRPC buffer length of a request or reply (\a loc) - * for the given \a pill's given \a field. + * Return the actual PTLRPC buffer length of a request or reply (@loc) + * for the given @pill's given @field. * * NB: this function doesn't correspond with req_capsule_set_size(), which * actually sets the size in pill.rc_area[loc][offset], but this function @@ -2106,8 +2106,8 @@ u32 req_capsule_get_size(const struct req_capsule *pill, /** * Wrapper around lustre_msg_size() that returns the PTLRPC size needed for the - * given \a pill's request or reply (\a loc) given the field size recorded in - * the \a pill's rc_area. + * given @pill's request or reply (@loc) given the field size recorded in + * the @pill's rc_area. * * See also req_capsule_set_size(). */ @@ -2120,8 +2120,8 @@ u32 req_capsule_msg_size(struct req_capsule *pill, enum req_location loc) /** * While req_capsule_msg_size() computes the size of a PTLRPC request or reply - * (\a loc) given a \a pill's \a rc_area, this function computes the size of a - * PTLRPC request or reply given only an RQF (\a fmt). + * (@loc) given a @pill's @rc_area, this function computes the size of a + * PTLRPC request or reply given only an RQF (@fmt). * * This function should not be used for formats which contain variable size * fields. @@ -2154,19 +2154,19 @@ u32 req_capsule_fmt_size(u32 magic, const struct req_format *fmt, * Changes the format of an RPC. * * The pill must already have been initialized, which means that it already has - * a request format. The new format \a fmt must be an extension of the pill's + * a request format. The new format @fmt must be an extension of the pill's * old format. Specifically: the new format must have as many request and reply * fields as the old one, and all fields shared by the old and new format must * be at least as large in the new format. 
* * The new format's fields may be of different "type" than the old format, but * only for fields that are "opaque" blobs: fields which have a) have no - * \a rmf_swabber, b) \a rmf_flags == 0 or RMF_F_NO_SIZE_CHECK, and c) \a - * rmf_size == -1 or \a rmf_flags == RMF_F_NO_SIZE_CHECK. For example, + * @rmf_swabber, b) @rmf_flags == 0 or RMF_F_NO_SIZE_CHECK, and c) + * @rmf_size == -1 or @rmf_flags == RMF_F_NO_SIZE_CHECK. For example, * OBD_SET_INFO has a key field and an opaque value field that gets interpreted * according to the key field. When the value, according to the key, contains a * structure (or array thereof) to be swabbed, the format should be changed to - * one where the value field has \a rmf_size/rmf_flags/rmf_swabber set + * one where the value field has @rmf_size/rmf_flags/rmf_swabber set * accordingly. */ void req_capsule_extend(struct req_capsule *pill, const struct req_format *fmt) @@ -2207,8 +2207,8 @@ void req_capsule_extend(struct req_capsule *pill, const struct req_format *fmt) EXPORT_SYMBOL(req_capsule_extend); /** - * This function returns a non-zero value if the given \a field is present in - * the format (\a rc_fmt) of \a pill's PTLRPC request or reply (\a loc), else it + * This function returns a non-zero value if the given @field is present in + * the format (@rc_fmt) of @pill's PTLRPC request or reply (@loc), else it * returns 0. */ int req_capsule_has_field(const struct req_capsule *pill, @@ -2222,8 +2222,8 @@ int req_capsule_has_field(const struct req_capsule *pill, EXPORT_SYMBOL(req_capsule_has_field); /** - * Returns a non-zero value if the given \a field is present in the given \a - * pill's PTLRPC request or reply (\a loc), else it returns 0. + * Returns a non-zero value if the given @field is present in the given + * @pill's PTLRPC request or reply (@loc), else it returns 0. 
*/ static int req_capsule_field_present(const struct req_capsule *pill, const struct req_msg_field *field, @@ -2239,8 +2239,8 @@ static int req_capsule_field_present(const struct req_capsule *pill, } /** - * This function shrinks the size of the _buffer_ of the \a pill's PTLRPC - * request or reply (\a loc). + * This function shrinks the size of the _buffer_ of the @pill's PTLRPC + * request or reply (@loc). * * This is not the opposite of req_capsule_extend(). */ diff --git a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c index 25858b8..08f9282 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c +++ b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c @@ -415,7 +415,7 @@ static ssize_t threads_max_store(struct kobject *kobj, struct attribute *attr, /** * Translates \e ptlrpc_nrs_pol_state values to human-readable strings. * - * \param[in] state The policy state + * @state: The policy state */ static const char *nrs_state2str(enum ptlrpc_nrs_pol_state state) { @@ -436,12 +436,12 @@ static const char *nrs_state2str(enum ptlrpc_nrs_pol_state state) } /** - * Obtains status information for \a policy. + * Obtains status information for @policy. * - * Information is copied in \a info. + * Information is copied in @info. * - * \param[in] policy The policy - * \param[out] info Holds returned status information + * @policy: The policy + * @info: Holds returned status information */ static void nrs_policy_get_info_locked(struct ptlrpc_nrs_policy *policy, struct ptlrpc_nrs_pol_info *info) diff --git a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c index ea7a7f9..c279fb1 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c +++ b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c @@ -41,8 +41,8 @@ #include "ptlrpc_internal.h" /** - * Helper function. Sends \a len bytes from \a base at offset \a offset - * over \a conn connection to portal \a portal. 
+ * Helper function. Sends @len bytes from @base at offset @offset + * over @conn connection to portal @portal. * Returns 0 on success or error code. */ static int ptl_send_buf(struct lnet_handle_md *mdh, void *base, int len, @@ -343,8 +343,8 @@ static void ptlrpc_at_set_reply(struct ptlrpc_request *req, int flags) } /** - * Send request reply from request \a req reply buffer. - * \a flags defines reply types + * Send request reply from request @req reply buffer. + * @flags defines reply types * Returns 0 on success or error code */ int ptlrpc_send_reply(struct ptlrpc_request *req, int flags) @@ -443,7 +443,7 @@ int ptlrpc_reply(struct ptlrpc_request *req) } /** - * For request \a req send an error reply back. Create empty + * For request @req send an error reply back. Create empty * reply buffers if necessary. */ int ptlrpc_send_error(struct ptlrpc_request *req, int may_be_difficult) @@ -474,8 +474,8 @@ int ptlrpc_error(struct ptlrpc_request *req) } /** - * Send request \a request. - * if \a noreply is set, don't expect any reply back and don't set up + * Send request @request. + * if @noreply is set, don't expect any reply back and don't set up * reply buffers. * Returns 0 on success or error code. */ diff --git a/drivers/staging/lustre/lustre/ptlrpc/nrs.c b/drivers/staging/lustre/lustre/ptlrpc/nrs.c index ef7dd5d..a56b7b3 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/nrs.c +++ b/drivers/staging/lustre/lustre/ptlrpc/nrs.c @@ -131,11 +131,11 @@ static int nrs_policy_stop_locked(struct ptlrpc_nrs_policy *policy) } /** - * Transitions the \a nrs NRS head's primary policy to + * Transitions the @nrs NRS head's primary policy to * ptlrpc_nrs_pol_state::NRS_POL_STATE_STOPPING and if the policy has no * pending usage references, to ptlrpc_nrs_pol_state::NRS_POL_STATE_STOPPED. 
* - * \param[in] nrs the NRS head to carry out this operation on + * @nrs: the NRS head to carry out this operation on */ static void nrs_policy_stop_primary(struct ptlrpc_nrs *nrs) { @@ -347,17 +347,17 @@ static void nrs_resource_put(struct ptlrpc_nrs_resource *res) /** * Obtains references for each resource in the resource hierarchy for request - * \a nrq if it is to be handled by \a policy. + * @nrq if it is to be handled by @policy. * - * \param[in] policy the policy - * \param[in] nrq the request - * \param[in] moving_req denotes whether this is a call to the function by - * ldlm_lock_reorder_req(), in order to move \a nrq to - * the high-priority NRS head; we should not sleep when - * set. + * @policy: the policy + * @nrq: the request + * @moving_req: denotes whether this is a call to the function by + * ldlm_lock_reorder_req(), in order to move @nrq to + * the high-priority NRS head; we should not sleep when + * set. * - * \retval NULL resource hierarchy references not obtained - * \retval valid-pointer the bottom level of the resource hierarchy + * Returns: NULL resource hierarchy references not obtained + * valid-pointer the bottom level of the resource hierarchy * * \see ptlrpc_nrs_pol_ops::op_res_get() */ @@ -398,19 +398,19 @@ struct ptlrpc_nrs_resource *nrs_resource_get(struct ptlrpc_nrs_policy *policy, /** * Obtains resources for the resource hierarchies and policy references for * the fallback and current primary policy (if any), that will later be used - * to handle request \a nrq. - * - * \param[in] nrs the NRS head instance that will be handling request \a nrq. - * \param[in] nrq the request that is being handled. - * \param[out] resp the array where references to the resource hierarchy are - * stored. - * \param[in] moving_req is set when obtaining resources while moving a - * request from a policy on the regular NRS head to a - * policy on the HP NRS head (via - * ldlm_lock_reorder_req()). 
It signifies that - * allocations to get resources should be atomic; for - * a full explanation, see comment in - * ptlrpc_nrs_pol_ops::op_res_get(). + * to handle request @nrq. + * + * @nrs: the NRS head instance that will be handling request @nrq. + * @nrq: the request that is being handled. + * @resp: the array where references to the resource hierarchy are + * stored. + * @moving_req: is set when obtaining resources while moving a + * request from a policy on the regular NRS head to a + * policy on the HP NRS head (via + * ldlm_lock_reorder_req()). It signifies that + * allocations to get resources should be atomic; for + * a full explanation, see comment in + * ptlrpc_nrs_pol_ops::op_res_get(). */ static void nrs_resource_get_safe(struct ptlrpc_nrs *nrs, struct ptlrpc_nrs_request *nrq, @@ -461,7 +461,7 @@ static void nrs_resource_get_safe(struct ptlrpc_nrs *nrs, * longer required; used when request handling has been completed, or the * request is moving to the high priority NRS head. * - * \param resp the resource hierarchy that is being released + * @resp: the resource hierarchy that is being released * * \see ptlrpc_nrs_req_finalize() */ @@ -487,20 +487,20 @@ static void nrs_resource_put_safe(struct ptlrpc_nrs_resource **resp) } /** - * Obtains an NRS request from \a policy for handling or examination; the + * Obtains an NRS request from @policy for handling or examination; the * request should be removed in the 'handling' case. * * Calling into this function implies we already know the policy has a request * waiting to be handled. * - * \param[in] policy the policy from which a request - * \param[in] peek when set, signifies that we just want to examine the - * request, and not handle it, so the request is not removed - * from the policy. 
- * \param[in] force when set, it will force a policy to return a request if it - * has one pending + * @policy: the policy from which a request is obtained + * @peek: when set, signifies that we just want to examine the + * request, and not handle it, so the request is not removed + * from the policy. + * @force: when set, it will force a policy to return a request if it + * has one pending - * \retval the NRS request to be handled + * Returns: the NRS request to be handled */ static inline struct ptlrpc_nrs_request *nrs_request_get(struct ptlrpc_nrs_policy *policy, @@ -518,12 +518,12 @@ struct ptlrpc_nrs_request *nrs_request_get(struct ptlrpc_nrs_policy *policy, } /** - * Enqueues request \a nrq for later handling, via one one the policies for + * Enqueues request @nrq for later handling, via one of the policies for * which resources where earlier obtained via nrs_resource_get_safe(). The * function attempts to enqueue the request first on the primary policy * (if any), since this is the preferred choice. * - * \param nrq the request being enqueued + * @nrq: the request being enqueued * * \see nrs_resource_get_safe() */ @@ -562,8 +562,8 @@ static inline void nrs_request_enqueue(struct ptlrpc_nrs_request *nrq) /** * Called when a request has been handled * - * \param[in] nrs the request that has been handled; can be used for - * job/resource control. + * @nrq: the request that has been handled; can be used for + * job/resource control. * * \see ptlrpc_nrs_req_stop_nolock() */ @@ -587,17 +587,17 @@ static inline void nrs_request_stop(struct ptlrpc_nrs_request *nrq) * Handles opcodes that are common to all policy types within NRS core, and * passes any unknown opcodes to the policy-specific control function. * - * \param[in] nrs the NRS head this policy belongs to. - * \param[in] name the human-readable policy name; should be the same as - * ptlrpc_nrs_pol_desc::pd_name. - * \param[in] opc the opcode of the operation being carried out. 
- * \param[in,out] arg can be used to pass information in and out between when - * carrying an operation; usually data that is private to - * the policy at some level, or generic policy status - * information. - * - * \retval -ve error condition - * \retval 0 operation was carried out successfully + * @nrs: the NRS head this policy belongs to. + * @name: the human-readable policy name; should be the same as + * ptlrpc_nrs_pol_desc::pd_name. + * @opc: the opcode of the operation being carried out. + * @arg: can be used to pass information in and out between when + * carrying an operation; usually data that is private to + * the policy at some level, or generic policy status + * information. + * + * Return: -ve error condition + * 0 operation was carried out successfully */ static int nrs_policy_ctl(struct ptlrpc_nrs *nrs, char *name, enum ptlrpc_nrs_ctl opc, void *arg) @@ -647,12 +647,12 @@ static int nrs_policy_ctl(struct ptlrpc_nrs *nrs, char *name, /** * Unregisters a policy by name. * - * \param[in] nrs the NRS head this policy belongs to. - * \param[in] name the human-readable policy name; should be the same as - * ptlrpc_nrs_pol_desc::pd_name + * @nrs: the NRS head this policy belongs to. + * @name: the human-readable policy name; should be the same as + * ptlrpc_nrs_pol_desc::pd_name * - * \retval -ve error - * \retval 0 success + * Return: -ve error + * 0 success */ static int nrs_policy_unregister(struct ptlrpc_nrs *nrs, char *name) { @@ -701,14 +701,14 @@ static int nrs_policy_unregister(struct ptlrpc_nrs *nrs, char *name) } /** - * Register a policy from \policy descriptor \a desc with NRS head \a nrs. + * Register a policy from policy descriptor @desc with NRS head @nrs. * - * \param[in] nrs the NRS head on which the policy will be registered. - * \param[in] desc the policy descriptor from which the information will be - * obtained to register the policy. + * @nrs: the NRS head on which the policy will be registered. 
+ * @desc: the policy descriptor from which the information will be + * obtained to register the policy. * - * \retval -ve error - * \retval 0 success + * Return: -ve error + * 0 success */ static int nrs_policy_register(struct ptlrpc_nrs *nrs, struct ptlrpc_nrs_pol_desc *desc) @@ -775,10 +775,10 @@ static int nrs_policy_register(struct ptlrpc_nrs *nrs, } /** - * Enqueue request \a req using one of the policies its resources are referring + * Enqueue request @req using one of the policies its resources are referring * to. * - * \param[in] req the request to enqueue. + * @req: the request to enqueue. */ static void ptlrpc_nrs_req_add_nolock(struct ptlrpc_request *req) { @@ -803,7 +803,7 @@ static void ptlrpc_nrs_req_add_nolock(struct ptlrpc_request *req) /** * Enqueue a request on the high priority NRS head. * - * \param req the request to enqueue. + * @req: the request to enqueue. */ static void ptlrpc_nrs_hpreq_add_nolock(struct ptlrpc_request *req) { @@ -819,13 +819,13 @@ static void ptlrpc_nrs_hpreq_add_nolock(struct ptlrpc_request *req) /** * Returns a boolean predicate indicating whether the policy described by - * \a desc is adequate for use with service \a svc. + * @desc is adequate for use with service @svc. * - * \param[in] svc the service - * \param[in] desc the policy descriptor + * @svc: the service + * @desc: the policy descriptor * - * \retval false the policy is not compatible with the service - * \retval true the policy is compatible with the service + * Return: false the policy is not compatible with the service + * true the policy is compatible with the service */ static inline bool nrs_policy_compatible(const struct ptlrpc_service *svc, const struct ptlrpc_nrs_pol_desc *desc) @@ -835,12 +835,12 @@ static inline bool nrs_policy_compatible(const struct ptlrpc_service *svc, /** * Registers all compatible policies in nrs_core.nrs_policies, for NRS head - * \a nrs. + * @nrs. 
* - * \param[in] nrs the NRS head + * @nrs: the NRS head * - * \retval -ve error - * \retval 0 success + * Return: -ve error + * 0 success * * \pre mutex_is_locked(&nrs_core.nrs_mutex) * * @@ -876,14 +876,14 @@ static int nrs_register_policies_locked(struct ptlrpc_nrs *nrs) } /** - * Initializes NRS head \a nrs of service partition \a svcpt, and registers all + * Initializes NRS head @nrs of service partition @svcpt, and registers all * compatible policies in NRS core, with the NRS head. * - * \param[in] nrs the NRS head - * \param[in] svcpt the PTLRPC service partition to setup + * @nrs: the NRS head + * @svcpt: the PTLRPC service partition to setup * - * \retval -ve error - * \retval 0 success + * Return: -ve error + * 0 success * * \pre mutex_is_locked(&nrs_core.nrs_mutex) */ @@ -915,7 +915,7 @@ static int __nrs_svcpt_setup_locked(struct ptlrpc_nrs *nrs, * handles high-priority RPCs), and then registers all available compatible * policies on those NRS heads. * - * \param[in,out] svcpt the PTLRPC service partition to setup + * @svcpt: the PTLRPC service partition to setup * * \pre mutex_is_locked(&nrs_core.nrs_mutex) */ @@ -960,7 +960,7 @@ static int nrs_svcpt_setup_locked(struct ptlrpc_service_part *svcpt) * Unregisters all policies on all available NRS heads in a service partition; * called at PTLRPC service unregistration time. * - * \param[in] svcpt the PTLRPC service partition + * @svcpt: the PTLRPC service partition * * \pre mutex_is_locked(&nrs_core.nrs_mutex) */ @@ -1000,12 +1000,12 @@ static void nrs_svcpt_cleanup_locked(struct ptlrpc_service_part *svcpt) } /** - * Returns the descriptor for a policy as identified by by \a name. + * Returns the descriptor for a policy as identified by @name.
* - * \param[in] name the policy name + * @name: the policy name * - * \retval the policy descriptor - * \retval NULL + * Return: the policy descriptor + * NULL if not found */ static struct ptlrpc_nrs_pol_desc *nrs_policy_find_desc_locked(const char *name) { @@ -1022,10 +1022,10 @@ static struct ptlrpc_nrs_pol_desc *nrs_policy_find_desc_locked(const char *name) * Removes the policy from all supported NRS heads of all partitions of all * PTLRPC services. * - * \param[in] desc the policy descriptor to unregister + * @desc: the policy descriptor to unregister * - * \retval -ve error - * \retval 0 successfully unregistered policy on all supported NRS heads + * Return: -ve error + * 0 successfully unregistered policy on all supported NRS heads * * \pre mutex_is_locked(&nrs_core.nrs_mutex) * \pre mutex_is_locked(&ptlrpc_all_services_mutex) @@ -1088,10 +1088,10 @@ static int nrs_policy_unregister_locked(struct ptlrpc_nrs_pol_desc *desc) * time when registering a policy that ships with NRS core, or in a * module's init() function for policies registering from other modules. * - * \param[in] conf configuration information for the new policy to register + * @conf: configuration information for the new policy to register * - * \retval -ve error - * \retval 0 success + * Return: -ve error + * 0 success */ static int ptlrpc_nrs_policy_register(struct ptlrpc_nrs_pol_conf *conf) { @@ -1236,15 +1236,16 @@ static int ptlrpc_nrs_policy_register(struct ptlrpc_nrs_pol_conf *conf) } /** - * Setup NRS heads on all service partitions of service \a svc, and register + * Setup NRS heads on all service partitions of service @svc, and register * all compatible policies on those NRS heads. * * To be called from within ptl - * \param[in] svc the service to setup * - * \retval -ve error, the calling logic should eventually call - * ptlrpc_service_nrs_cleanup() to undo any work performed - * by this function. 
+ * @svc: the service to setup + * + * Return: -ve error, the calling logic should eventually call + * ptlrpc_service_nrs_cleanup() to undo any work performed + * by this function. * * \see ptlrpc_register_service() * \see ptlrpc_service_nrs_cleanup() @@ -1290,9 +1291,9 @@ int ptlrpc_service_nrs_setup(struct ptlrpc_service *svc) } /** - * Unregisters all policies on all service partitions of service \a svc. + * Unregisters all policies on all service partitions of service @svc. * - * \param[in] svc the PTLRPC service to unregister + * @svc: the PTLRPC service to unregister */ void ptlrpc_service_nrs_cleanup(struct ptlrpc_service *svc) { @@ -1324,15 +1325,15 @@ void ptlrpc_service_nrs_cleanup(struct ptlrpc_service *svc) } /** - * Obtains NRS head resources for request \a req. + * Obtains NRS head resources for request @req. * - * These could be either on the regular or HP NRS head of \a svcpt; resources + * These could be either on the regular or HP NRS head of @svcpt; resources * taken on the regular head can later be swapped for HP head resources by * ldlm_lock_reorder_req(). * - * \param[in] svcpt the service partition - * \param[in] req the request - * \param[in] hp which NRS head of \a svcpt to use + * @svcpt: the service partition + * @req: the request + * @hp: which NRS head of @svcpt to use */ void ptlrpc_nrs_req_initialize(struct ptlrpc_service_part *svcpt, struct ptlrpc_request *req, bool hp) @@ -1354,7 +1355,7 @@ void ptlrpc_nrs_req_initialize(struct ptlrpc_service_part *svcpt, * Releases resources for a request; is called after the request has been * handled. * - * \param[in] req the request + * @req: the request * * \see ptlrpc_server_finish_request() */ @@ -1376,13 +1377,13 @@ void ptlrpc_nrs_req_stop_nolock(struct ptlrpc_request *req) } /** - * Enqueues request \a req on either the regular or high-priority NRS head - * of service partition \a svcpt. 
+ * Enqueues request @req on either the regular or high-priority NRS head + * of service partition @svcpt. * - * \param[in] svcpt the service partition - * \param[in] req the request to be enqueued - * \param[in] hp whether to enqueue the request on the regular or - * high-priority NRS head. + * @svcpt: the service partition + * @req: the request to be enqueued + * @hp: whether to enqueue the request on the regular or + * high-priority NRS head. */ void ptlrpc_nrs_req_add(struct ptlrpc_service_part *svcpt, struct ptlrpc_request *req, bool hp) @@ -1428,19 +1429,19 @@ static void nrs_request_removed(struct ptlrpc_nrs_policy *policy) /** * Obtains a request for handling from an NRS head of service partition - * \a svcpt. - * - * \param[in] svcpt the service partition - * \param[in] hp whether to obtain a request from the regular or - * high-priority NRS head. - * \param[in] peek when set, signifies that we just want to examine the - * request, and not handle it, so the request is not removed - * from the policy. - * \param[in] force when set, it will force a policy to return a request if it - * has one pending - * - * \retval the request to be handled - * \retval NULL the head has no requests to serve + * @svcpt. + * + * @svcpt: the service partition + * @hp: whether to obtain a request from the regular or + * high-priority NRS head. + * @peek: when set, signifies that we just want to examine the + * request, and not handle it, so the request is not removed + * from the policy. + * @force: when set, it will force a policy to return a request if it + * has one pending + * + * Return: the request to be handled + * NULL the head has no requests to serve */ struct ptlrpc_request * __ptlrpc_nrs_req_get_nolock(struct ptlrpc_service_part *svcpt, bool hp, @@ -1475,16 +1476,16 @@ struct ptlrpc_request * /** * Returns whether there are any requests currently enqueued on any of the - * policies of service partition's \a svcpt NRS head specified by \a hp. 
Should + * policies of service partition's @svcpt NRS head specified by @hp. Should * be called while holding ptlrpc_service_part::scp_req_lock to get a reliable * result. * - * \param[in] svcpt the service partition to enquire. - * \param[in] hp whether the regular or high-priority NRS head is to be - * enquired. + * @svcpt: the service partition to enquire. + * @hp: whether the regular or high-priority NRS head is to be + * enquired. * - * \retval false the indicated NRS head has no enqueued requests. - * \retval true the indicated NRS head has some enqueued requests. + * Return: false the indicated NRS head has no enqueued requests. + * true the indicated NRS head has some enqueued requests. */ bool ptlrpc_nrs_req_pending_nolock(struct ptlrpc_service_part *svcpt, bool hp) { @@ -1494,30 +1495,30 @@ bool ptlrpc_nrs_req_pending_nolock(struct ptlrpc_service_part *svcpt, bool hp) }; /** - * Carries out a control operation \a opc on the policy identified by the - * human-readable \a name, on either all partitions, or only on the first - * partition of service \a svc. - * - * \param[in] svc the service the policy belongs to. - * \param[in] queue whether to carry out the command on the policy which - * belongs to the regular, high-priority, or both NRS - * heads of service partitions of \a svc. - * \param[in] name the policy to act upon, by human-readable name - * \param[in] opc the opcode of the operation to carry out - * \param[in] single when set, the operation will only be carried out on the - * NRS heads of the first service partition of \a svc. - * This is useful for some policies which e.g. share - * identical values on the same parameters of different - * service partitions; when reading these parameters via - * lprocfs, these policies may just want to obtain and - * print out the values from the first service partition. - * Storing these values centrally elsewhere then could be - * another solution for this. 
- * \param[in,out] arg can be used as a generic in/out buffer between control - * operations and the user environment. - * - *\retval -ve error condition - *\retval 0 operation was carried out successfully + * Carries out a control operation @opc on the policy identified by the + * human-readable @name, on either all partitions, or only on the first + * partition of service @svc. + * + * @svc: the service the policy belongs to. + * @queue: whether to carry out the command on the policy which + * belongs to the regular, high-priority, or both NRS + * heads of service partitions of @svc. + * @name: the policy to act upon, by human-readable name + * @opc: the opcode of the operation to carry out + * @single: when set, the operation will only be carried out on the + * NRS heads of the first service partition of @svc. + * This is useful for some policies which e.g. share + * identical values on the same parameters of different + * service partitions; when reading these parameters via + * lprocfs, these policies may just want to obtain and + * print out the values from the first service partition. + * Storing these values centrally elsewhere then could be + * another solution for this. + * @arg: can be used as a generic in/out buffer between control + * operations and the user environment. + * + * Return: -ve error condition + * 0 operation was carried out successfully */ int ptlrpc_nrs_policy_control(const struct ptlrpc_service *svc, enum ptlrpc_nrs_queue_type queue, char *name, @@ -1564,8 +1565,8 @@ int ptlrpc_nrs_policy_control(const struct ptlrpc_service *svc, * Adds all policies that ship with the ptlrpc module, to NRS core's list of * policies \e nrs_core.nrs_policies. 
* - * \retval 0 all policies have been registered successfully - * \retval -ve error + * Return: 0 all policies have been registered successfully + * -ve error */ int ptlrpc_nrs_init(void) { diff --git a/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c b/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c index ab186d8..d0eaebc 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c +++ b/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c @@ -66,10 +66,10 @@ * ptlrpc_nrs_pol_state::NRS_POL_STATE_STARTED; allocates and initializes a * policy-specific private data structure. * - * \param[in] policy The policy to start + * @policy: The policy to start * - * \retval -ENOMEM OOM error - * \retval 0 success + * Return: -ENOMEM OOM error + * 0 for success * * \see nrs_policy_register() * \see nrs_policy_ctl() @@ -94,7 +94,7 @@ static int nrs_fifo_start(struct ptlrpc_nrs_policy *policy) * ptlrpc_nrs_pol_state::NRS_POL_STATE_STOPPED; deallocates the policy-specific * private data structure. * - * \param[in] policy The policy to stop + * @policy: The policy to stop * * \see __nrs_policy_stop() */ @@ -111,18 +111,18 @@ static void nrs_fifo_stop(struct ptlrpc_nrs_policy *policy) /** * Is called for obtaining a FIFO policy resource.
* - * \param[in] policy The policy on which the request is being asked for - * \param[in] nrq The request for which resources are being taken - * \param[in] parent Parent resource, unused in this policy - * \param[out] resp Resources references are placed in this array - * \param[in] moving_req Signifies limited caller context; unused in this - * policy + * @policy: The policy on which the request is being asked for + * @nrq: The request for which resources are being taken + * @parent: Parent resource, unused in this policy + * @resp: Resource references are placed in this array + * @moving_req: Signifies limited caller context; unused in this + * policy * - * \retval 1 The FIFO policy only has a one-level resource hierarchy, as since - * it implements a simple scheduling algorithm in which request - * priority is determined on the request arrival order, it does not - * need to maintain a set of resources that would otherwise be used - * to calculate a request's priority. + * Return: 1 The FIFO policy only has a one-level resource hierarchy, as + * since it implements a simple scheduling algorithm in which + * request priority is determined on the request arrival order, + * it does not need to maintain a set of resources that would + * otherwise be used to calculate a request's priority. * * \see nrs_resource_get_safe() */ @@ -143,15 +143,15 @@ static int nrs_fifo_res_get(struct ptlrpc_nrs_policy *policy, * Called when getting a request from the FIFO policy for handling, or just * peeking; removes the request from the policy when it is to be handled. * - * \param[in] policy The policy - * \param[in] peek When set, signifies that we just want to examine the - * request, and not handle it, so the request is not removed - * from the policy.
- * \param[in] force Force the policy to return a request; unused in this - * policy + * @policy: The policy + * @peek: When set, signifies that we just want to examine the + * request, and not handle it, so the request is not removed + * from the policy. + * @force: Force the policy to return a request; unused in this + * policy * - * \retval The request to be handled; this is the next request in the FIFO - * queue + * Return: The request to be handled; this is the next request in the + * FIFO queue * * \see ptlrpc_nrs_req_get_nolock() * \see nrs_request_get() @@ -183,13 +183,13 @@ struct ptlrpc_nrs_request *nrs_fifo_req_get(struct ptlrpc_nrs_policy *policy, } /** - * Adds request \a nrq to \a policy's list of queued requests + * Adds request @nrq to @policy's list of queued requests * - * \param[in] policy The policy - * \param[in] nrq The request to add + * @policy: The policy + * @nrq: The request to add * - * \retval 0 success; nrs_request_enqueue() assumes this function will always - * succeed + * Return: 0 success; nrs_request_enqueue() assumes this function will + * always succeed */ static int nrs_fifo_req_add(struct ptlrpc_nrs_policy *policy, struct ptlrpc_nrs_request *nrq) @@ -208,10 +208,10 @@ static int nrs_fifo_req_add(struct ptlrpc_nrs_policy *policy, } /** - * Removes request \a nrq from \a policy's list of queued requests. + * Removes request @nrq from @policy's list of queued requests. * - * \param[in] policy The policy - * \param[in] nrq The request to remove + * @policy: The policy + * @nrq: The request to remove */ static void nrs_fifo_req_del(struct ptlrpc_nrs_policy *policy, struct ptlrpc_nrs_request *nrq) @@ -221,11 +221,11 @@ static void nrs_fifo_req_del(struct ptlrpc_nrs_policy *policy, struct ptlrpc_nrs_request *nrq) } /** - * Prints a debug statement right before the request \a nrq stops being + * Prints a debug statement right before the request @nrq stops being * handled.
* - * \param[in] policy The policy handling the request - * \param[in] nrq The request being handled + * @policy: The policy handling the request + * @nrq: The request being handled * * \see ptlrpc_server_finish_request() * \see ptlrpc_nrs_req_stop_nolock() diff --git a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c index c7cc86c..75be2d7 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c +++ b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c @@ -648,11 +648,11 @@ static inline u32 lustre_msg_buflen_v2(struct lustre_msg_v2 *m, u32 n) } /** - * lustre_msg_buflen - return the length of buffer \a n in message \a m - * \param m lustre_msg (request or reply) to look at - * \param n message index (base 0) + * lustre_msg_buflen - return the length of buffer @n in message @m + * @m: lustre_msg (request or reply) to look at + * @n: message index (base 0) * - * returns zero for non-existent message indices + * Return: zero for non-existent message indices */ u32 lustre_msg_buflen(struct lustre_msg *m, u32 n) { diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec.c b/drivers/staging/lustre/lustre/ptlrpc/sec.c index 6dc7731..1cf0e9b 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/sec.c +++ b/drivers/staging/lustre/lustre/ptlrpc/sec.c @@ -316,16 +316,16 @@ static int import_sec_check_expire(struct obd_import *imp) /** * Get and validate the client side ptlrpc security facilities from - * \a imp. There is a race condition on client reconnect when the import is + * @imp. There is a race condition on client reconnect when the import is * being destroyed while there are outstanding client bound requests. In * this case do not output any error messages if import secuity is not * found.
* - * \param[in] imp obd import associated with client - * \param[out] sec client side ptlrpc security + * @imp: obd import associated with client + * @sec: client side ptlrpc security * - * \retval 0 if security retrieved successfully - * \retval -ve errno if there was a problem + * Return: 0 if security retrieved successfully + * -ve errno if there was a problem */ static int import_sec_validate_get(struct obd_import *imp, struct ptlrpc_sec **sec) @@ -355,11 +355,11 @@ static int import_sec_validate_get(struct obd_import *imp, } /** - * Given a \a req, find or allocate a appropriate context for it. + * Given a @req, find or allocate an appropriate context for it. * \pre req->rq_cli_ctx == NULL. * - * \retval 0 succeed, and req->rq_cli_ctx is set. - * \retval -ev error number, and req->rq_cli_ctx == NULL. + * Return: 0 succeed, and req->rq_cli_ctx is set. + * -ve error number, and req->rq_cli_ctx == NULL. */ int sptlrpc_req_get_ctx(struct ptlrpc_request *req) { @@ -387,11 +387,11 @@ int sptlrpc_req_get_ctx(struct ptlrpc_request *req) } /** - * Drop the context for \a req. + * Drop the context for @req. * \pre req->rq_cli_ctx != NULL. * \post req->rq_cli_ctx == NULL. * - * If \a sync == 0, this function should return quickly without sleep; + * If @sync == 0, this function should return quickly without sleep; * otherwise it might trigger and wait for the whole process of sending * an context-destroying rpc to server. */ @@ -475,9 +475,9 @@ int sptlrpc_req_ctx_switch(struct ptlrpc_request *req, } /** - * If current context of \a req is dead somehow, e.g. we just switched flavor + * If current context of @req is dead somehow, e.g. we just switched flavor * thus marked original contexts dead, we'll find a new context for it. if - * no switch is needed, \a req will end up with the same context. + * no switch is needed, @req will end up with the same context. * * \note a request must have a context, to keep other parts of code happy.
* In any case of failure during the switching, we must restore the old one. @@ -589,17 +589,17 @@ void req_off_ctx_list(struct ptlrpc_request *req, struct ptlrpc_cli_ctx *ctx) /** * To refresh the context of \req, if it's not up-to-date. - * \param timeout - * - < 0: don't wait - * - = 0: wait until success or fatal error occur - * - > 0: timeout value (in seconds) + * @timeout: + * - < 0: don't wait + * - = 0: wait until success or fatal error occur + * - > 0: timeout value (in seconds) * * The status of the context could be subject to be changed by other threads * at any time. We allow this race, but once we return with 0, the caller will * suppose it's uptodated and keep using it until the owning rpc is done. * - * \retval 0 only if the context is uptodated. - * \retval -ev error number. + * Return: 0 only if the context is up to date. + * -ve error number. */ int sptlrpc_req_refresh_ctx(struct ptlrpc_request *req, long timeout) { @@ -781,7 +781,7 @@ int sptlrpc_req_refresh_ctx(struct ptlrpc_request *req, long timeout) } /** - * Initialize flavor settings for \a req, according to \a opcode. + * Initialize flavor settings for @req, according to @opcode. * * \note this could be called in two situations: * - new request from ptlrpc_pre_req(), with proper @opcode @@ -865,7 +865,7 @@ void sptlrpc_request_out_callback(struct ptlrpc_request *req) } /** - * Given an import \a imp, check whether current user has a valid context + * Given an import @imp, check whether current user has a valid context * or not. We may create a new context and try to refresh it, and try * repeatedly try in case of non-fatal errors. Return 0 means success. */ @@ -917,7 +917,7 @@ int sptlrpc_import_check_ctx(struct obd_import *imp) /** * Used by ptlrpc client, to perform the pre-defined security transformation - * upon the request message of \a req. After this function called, + * upon the request message of @req. After this function is called, * req->rq_reqmsg is still accessible as clear text.
*/ int sptlrpc_cli_wrap_request(struct ptlrpc_request *req) @@ -1024,7 +1024,7 @@ static int do_cli_unwrap_reply(struct ptlrpc_request *req) /** * Used by ptlrpc client, to perform security transformation upon the reply - * message of \a req. After return successfully, req->rq_repmsg points to + * message of @req. After returning successfully, req->rq_repmsg points to * the reply message in clear text. * * \pre the reply buffer should have been un-posted from LNet, so nothing is @@ -1057,7 +1057,7 @@ int sptlrpc_cli_unwrap_reply(struct ptlrpc_request *req) /** * Used by ptlrpc client, to perform security transformation upon the early - * reply message of \a req. We expect the rq_reply_off is 0, and + * reply message of @req. We expect the rq_reply_off is 0, and * rq_nob_received is the early reply size. * * Because the receive buffer might be still posted, the reply data might be @@ -1065,10 +1065,11 @@ int sptlrpc_cli_unwrap_reply(struct ptlrpc_request *req) * we allocate a separate ptlrpc_request and reply buffer for early reply * processing. * - * \retval 0 success, \a req_ret is filled with a duplicated ptlrpc_request. - * Later the caller must call sptlrpc_cli_finish_early_reply() on the returned - * \a *req_ret to release it. - * \retval -ev error number, and \a req_ret will not be set. + * Return: 0 success, @req_ret is filled with a duplicated ptlrpc_request. + * Later the caller must call sptlrpc_cli_finish_early_reply() + * on the returned @*req_ret to release it. + * + * -ve error number, and @req_ret will not be set. */ int sptlrpc_cli_unwrap_early_reply(struct ptlrpc_request *req, struct ptlrpc_request **req_ret) @@ -1162,9 +1163,9 @@ int sptlrpc_cli_unwrap_early_reply(struct ptlrpc_request *req, } /** - * Used by ptlrpc client, to release a processed early reply \a early_req. + * Used by ptlrpc client, to release a processed early reply @early_req. * - * \pre \a early_req was obtained from calling sptlrpc_cli_unwrap_early_reply().
+ * \pre @early_req was obtained from calling sptlrpc_cli_unwrap_early_reply(). */ void sptlrpc_cli_finish_early_reply(struct ptlrpc_request *early_req) { @@ -1369,11 +1370,11 @@ static void sptlrpc_import_sec_adapt_inplace(struct obd_import *imp, } /** - * To get an appropriate ptlrpc_sec for the \a imp, according to the current + * To get an appropriate ptlrpc_sec for the @imp, according to the current * configuration. Upon called, imp->imp_sec may or may not be NULL. * - * - regular import: \a svc_ctx should be NULL and \a flvr is ignored; - * - reverse import: \a svc_ctx and \a flvr are obtained from incoming request. + * - regular import: @svc_ctx should be NULL and @flvr is ignored; + * - reverse import: @svc_ctx and @flvr are obtained from incoming request. */ int sptlrpc_import_sec_adapt(struct obd_import *imp, struct ptlrpc_svc_ctx *svc_ctx, @@ -1506,8 +1507,8 @@ void sptlrpc_import_flush_all_ctx(struct obd_import *imp) EXPORT_SYMBOL(sptlrpc_import_flush_all_ctx); /** - * Used by ptlrpc client to allocate request buffer of \a req. Upon return - * successfully, req->rq_reqmsg points to a buffer with size \a msgsize. + * Used by ptlrpc client to allocate request buffer of @req. Upon + * successful return, req->rq_reqmsg points to a buffer with size @msgsize. */ int sptlrpc_cli_alloc_reqbuf(struct ptlrpc_request *req, int msgsize) { @@ -1536,7 +1537,7 @@ int sptlrpc_cli_alloc_reqbuf(struct ptlrpc_request *req, int msgsize) } /** - * Used by ptlrpc client to free request buffer of \a req. After this + * Used by ptlrpc client to free request buffer of @req. After this * req->rq_reqmsg is set to NULL and should not be accessed anymore.
*/ void sptlrpc_cli_free_reqbuf(struct ptlrpc_request *req) @@ -1602,8 +1603,8 @@ void _sptlrpc_enlarge_msg_inplace(struct lustre_msg *msg, EXPORT_SYMBOL(_sptlrpc_enlarge_msg_inplace); /** - * Used by ptlrpc client to enlarge the \a segment of request message pointed - * by req->rq_reqmsg to size \a newsize, all previously filled-in data will be + * Used by ptlrpc client to enlarge the @segment of request message pointed + * by req->rq_reqmsg to size @newsize, all previously filled-in data will be * preserved after the enlargement. this must be called after original request * buffer being allocated. * @@ -1635,7 +1636,7 @@ int sptlrpc_cli_enlarge_reqbuf(struct ptlrpc_request *req, EXPORT_SYMBOL(sptlrpc_cli_enlarge_reqbuf); /** - * Used by ptlrpc client to allocate reply buffer of \a req. + * Used by ptlrpc client to allocate reply buffer of @req. * * \note After this, req->rq_repmsg is still not accessible. */ @@ -1656,7 +1657,7 @@ int sptlrpc_cli_alloc_repbuf(struct ptlrpc_request *req, int msgsize) } /** - * Used by ptlrpc client to free reply buffer of \a req. After this + * Used by ptlrpc client to free reply buffer of @req. After this * req->rq_repmsg is set to NULL and should not be accessed anymore. */ void sptlrpc_cli_free_repbuf(struct ptlrpc_request *req) @@ -1712,8 +1713,8 @@ static int flavor_allowed(struct sptlrpc_flavor *exp, #define EXP_FLVR_UPDATE_EXPIRE (OBD_TIMEOUT_DEFAULT + 10) /** - * Given an export \a exp, check whether the flavor of incoming \a req - * is allowed by the export \a exp. Main logic is about taking care of + * Given an export @exp, check whether the flavor of incoming @req + * is allowed by the export @exp. Main logic is about taking care of * changing configurations. Return 0 means success. 
*/ int sptlrpc_target_export_check(struct obd_export *exp, @@ -1943,14 +1944,17 @@ static int sptlrpc_svc_check_from(struct ptlrpc_request *req, int svc_rc) /** * Used by ptlrpc server, to perform transformation upon request message of - * incoming \a req. This must be the first thing to do with a incoming + * incoming @req. This must be the first thing to do with an incoming * request in ptlrpc layer. * - * \retval SECSVC_OK success, and req->rq_reqmsg point to request message in - * clear text, size is req->rq_reqlen; also req->rq_svc_ctx is set. - * \retval SECSVC_COMPLETE success, the request has been fully processed, and - * reply message has been prepared. - * \retval SECSVC_DROP failed, this request should be dropped. + * Return: SECSVC_OK success, and req->rq_reqmsg points to request message + * in clear text, size is req->rq_reqlen; also req->rq_svc_ctx is + * set. + * + * SECSVC_COMPLETE success, the request has been fully processed, + * and reply message has been prepared. + * + * SECSVC_DROP failed, this request should be dropped. */ int sptlrpc_svc_unwrap_request(struct ptlrpc_request *req) { @@ -2007,9 +2011,9 @@ int sptlrpc_svc_unwrap_request(struct ptlrpc_request *req) } /** - * Used by ptlrpc server, to allocate reply buffer for \a req. If succeed, + * Used by ptlrpc server, to allocate reply buffer for @req. If successful, * req->rq_reply_state is set, and req->rq_reply_state->rs_msg point to - * a buffer of \a msglen size. + * a buffer of @msglen size. */ int sptlrpc_svc_alloc_rs(struct ptlrpc_request *req, int msglen) { @@ -2127,7 +2131,7 @@ void sptlrpc_svc_ctx_decref(struct ptlrpc_request *req) ****************************************/ /** - * Perform transformation upon bulk data pointed by \a desc. This is called + * Perform transformation upon bulk data pointed by @desc. This is called * before transforming the request message.
*/ int sptlrpc_cli_wrap_bulk(struct ptlrpc_request *req, diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c index 54130ae..35ebd56 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c +++ b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c @@ -568,7 +568,7 @@ static int sptlrpc_conf_merge_rule(struct sptlrpc_conf *conf, } /** - * process one LCFG_SPTLRPC_CONF record. if \a conf is NULL, we + * Process one LCFG_SPTLRPC_CONF record. If @conf is NULL, we * find one through the target name in the record inside conf_lock; * otherwise means caller already hold conf_lock. */ diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c index eda921c..5a7e9fa 100644 --- a/drivers/staging/lustre/lustre/ptlrpc/service.c +++ b/drivers/staging/lustre/lustre/ptlrpc/service.c @@ -877,7 +877,7 @@ static void ptlrpc_server_finish_active_request( } /** - * Sanity check request \a req. + * Sanity check request @req. * Return 0 if all is ok, error code otherwise.
*/ static int ptlrpc_check_req(struct ptlrpc_request *req) @@ -2375,7 +2375,7 @@ static void ptlrpc_svcpt_stop_threads(struct ptlrpc_service_part *svcpt) } /** - * Stops all threads of a particular service \a svc + * Stops all threads of a particular service @svc */ static void ptlrpc_stop_all_threads(struct ptlrpc_service *svc) { From patchwork Sat Mar 2 19:12:21 2019 X-Patchwork-Submitter: James Simmons X-Patchwork-Id: 10836725 
From: James Simmons To: Andreas Dilger, Oleg Drokin, NeilBrown Cc: Lustre Development List Date: Sat, 2 Mar 2019 14:12:21 -0500 Message-Id: <1551553944-6419-5-git-send-email-jsimmons@infradead.org> In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> References: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> Subject: [lustre-devel] [PATCH 4/7] ldlm: move comments to sphinix format Lustre comments were written for DocBook, which is no longer used by the Linux kernel. Move all the DocBook handling to Sphinx. 
Signed-off-by: James Simmons --- drivers/staging/lustre/lustre/ldlm/ldlm_flock.c | 14 +-- drivers/staging/lustre/lustre/ldlm/ldlm_lib.c | 8 +- drivers/staging/lustre/lustre/ldlm/ldlm_lock.c | 90 ++++++++++---------- drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c | 6 +- drivers/staging/lustre/lustre/ldlm/ldlm_pool.c | 22 ++--- drivers/staging/lustre/lustre/ldlm/ldlm_request.c | 99 +++++++++++----------- drivers/staging/lustre/lustre/ldlm/ldlm_resource.c | 18 ++-- 7 files changed, 127 insertions(+), 130 deletions(-) diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c index 4fc380d2..4316b2b 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c @@ -100,7 +100,7 @@ * Process a granting attempt for flock lock. * Must be called under ns lock held. * - * This function looks for any conflicts for \a lock in the granted or + * This function looks for any conflicts for @lock in the granted or * waiting queues. The lock is granted if no conflicts are found in * either queue. * @@ -291,7 +291,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req) /* In case we're reprocessing the requested lock we can't destroy * it until after calling ldlm_add_ast_work_item() above so that laawi() - * can bump the reference count on \a req. Otherwise \a req + * can bump the reference count on @req. Otherwise @req * could be freed before the completion AST can be sent. */ if (added) @@ -304,12 +304,12 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req) /** * Flock completion callback function. 
* - * \param lock [in,out]: A lock to be handled - * \param flags [in]: flags - * \param *data [in]: ldlm_work_cp_ast_lock() will use ldlm_cb_set_arg + * @lock A lock to be handled + * @flags flags + * @data ldlm_work_cp_ast_lock() will use ldlm_cb_set_arg * - * \retval 0 : success - * \retval <0 : failure + * Return: 0 success + * <0 failure */ int ldlm_flock_completion_ast(struct ldlm_lock *lock, u64 flags, void *data) diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c index aef83ff..e0d2851 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c @@ -99,7 +99,7 @@ static int import_set_conn(struct obd_import *imp, struct obd_uuid *uuid, goto out_free; } } - /* No existing import connection found for \a uuid. */ + /* No existing import connection found for @uuid. */ if (create) { imp_conn->oic_conn = ptlrpc_conn; imp_conn->oic_uuid = *uuid; @@ -198,8 +198,8 @@ int client_import_del_conn(struct obd_import *imp, struct obd_uuid *uuid) EXPORT_SYMBOL(client_import_del_conn); /** - * Find conn UUID by peer NID. \a peer is a server NID. This function is used - * to find a conn uuid of \a imp which can reach \a peer. + * Find conn UUID by peer NID. @peer is a server NID. This function is used + * to find a conn uuid of @imp which can reach @peer. */ int client_import_find_conn(struct obd_import *imp, lnet_nid_t peer, struct obd_uuid *uuid) @@ -654,7 +654,7 @@ int client_disconnect_export(struct obd_export *exp) EXPORT_SYMBOL(client_disconnect_export); /** - * Packs current SLV and Limit into \a req. + * Packs current SLV and Limit into @req. 
*/ int target_pack_pool_reply(struct ptlrpc_request *req) { diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c index f2433dc..ba28011 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c @@ -192,7 +192,7 @@ void ldlm_lock_put(struct ldlm_lock *lock) EXPORT_SYMBOL(ldlm_lock_put); /** - * Removes LDLM lock \a lock from LRU. Assumes LRU is already locked. + * Removes LDLM lock @lock from LRU. Assumes LRU is already locked. */ int ldlm_lock_remove_from_lru_nolock(struct ldlm_lock *lock) { @@ -211,15 +211,16 @@ int ldlm_lock_remove_from_lru_nolock(struct ldlm_lock *lock) } /** - * Removes LDLM lock \a lock from LRU. Obtains the LRU lock first. + * Removes LDLM lock @lock from LRU. Obtains the LRU lock first. * - * If \a last_use is non-zero, it will remove the lock from LRU only if + * If @last_use is non-zero, it will remove the lock from LRU only if * it matches lock's l_last_used. * - * \retval 0 if \a last_use is set, the lock is not in LRU list or \a last_use - * doesn't match lock's l_last_used; - * otherwise, the lock hasn't been in the LRU list. - * \retval 1 the lock was in LRU list and removed. + * Return: 0 if @last_use is set, the lock is not in LRU list or + * @last_use doesn't match lock's l_last_used; + * otherwise, the lock hasn't been in the LRU list. + * + * 1 the lock was in LRU list and removed. */ int ldlm_lock_remove_from_lru_check(struct ldlm_lock *lock, time_t last_use) { @@ -235,7 +236,7 @@ int ldlm_lock_remove_from_lru_check(struct ldlm_lock *lock, time_t last_use) } /** - * Adds LDLM lock \a lock to namespace LRU. Assumes LRU is already locked. + * Adds LDLM lock @lock to namespace LRU. Assumes LRU is already locked. */ static void ldlm_lock_add_to_lru_nolock(struct ldlm_lock *lock) { @@ -251,7 +252,7 @@ static void ldlm_lock_add_to_lru_nolock(struct ldlm_lock *lock) } /** - * Adds LDLM lock \a lock to namespace LRU. 
Obtains necessary LRU locks + * Adds LDLM lock @lock to namespace LRU. Obtains necessary LRU locks * first. */ static void ldlm_lock_add_to_lru(struct ldlm_lock *lock) @@ -264,7 +265,7 @@ static void ldlm_lock_add_to_lru(struct ldlm_lock *lock) } /** - * Moves LDLM lock \a lock that is already in namespace LRU to the tail of + * Moves LDLM lock @lock that is already in namespace LRU to the tail of * the LRU. Performs necessary LRU locking */ static void ldlm_lock_touch_in_lru(struct ldlm_lock *lock) @@ -323,7 +324,7 @@ static int ldlm_lock_destroy_internal(struct ldlm_lock *lock) } /** - * Destroys a LDLM lock \a lock. Performs necessary locking first. + * Destroys a LDLM lock @lock. Performs necessary locking first. */ static void ldlm_lock_destroy(struct ldlm_lock *lock) { @@ -341,7 +342,7 @@ static void ldlm_lock_destroy(struct ldlm_lock *lock) } /** - * Destroys a LDLM lock \a lock that is already locked. + * Destroys a LDLM lock @lock that is already locked. */ void ldlm_lock_destroy_nolock(struct ldlm_lock *lock) { @@ -426,7 +427,7 @@ static struct ldlm_lock *ldlm_lock_new(struct ldlm_resource *resource) } /** - * Moves LDLM lock \a lock to another resource. + * Moves LDLM lock @lock to another resource. * This is used on client when server returns some other lock than requested * (typically as a result of intent operation) */ @@ -492,7 +493,7 @@ int ldlm_lock_change_resource(struct ldlm_namespace *ns, struct ldlm_lock *lock, */ /** - * Fills in handle for LDLM lock \a lock into supplied \a lockh + * Fills in handle for LDLM lock @lock into supplied @lockh * Does not take any references. */ void ldlm_lock2handle(const struct ldlm_lock *lock, struct lustre_handle *lockh) @@ -504,7 +505,7 @@ void ldlm_lock2handle(const struct ldlm_lock *lock, struct lustre_handle *lockh) /** * Obtain a lock reference by handle. * - * if \a flags: atomically get the lock and set the flags. + * if @flags: atomically get the lock and set the flags. 
* Return NULL if flag already set */ struct ldlm_lock *__ldlm_handle2lock(const struct lustre_handle *handle, @@ -563,7 +564,7 @@ struct ldlm_lock *__ldlm_handle2lock(const struct lustre_handle *handle, /** * Fill in "on the wire" representation for given LDLM lock into supplied - * lock descriptor \a desc structure. + * lock descriptor @desc structure. */ void ldlm_lock2desc(struct ldlm_lock *lock, struct ldlm_lock_desc *desc) { @@ -632,8 +633,8 @@ static void ldlm_add_ast_work_item(struct ldlm_lock *lock, } /** - * Add specified reader/writer reference to LDLM lock with handle \a lockh. - * r/w reference type is determined by \a mode + * Add specified reader/writer reference to LDLM lock with handle @lockh. + * r/w reference type is determined by @mode * Calls ldlm_lock_addref_internal. */ void ldlm_lock_addref(const struct lustre_handle *lockh, enum ldlm_mode mode) @@ -649,8 +650,8 @@ void ldlm_lock_addref(const struct lustre_handle *lockh, enum ldlm_mode mode) /** * Helper function. - * Add specified reader/writer reference to LDLM lock \a lock. - * r/w reference type is determined by \a mode + * Add specified reader/writer reference to LDLM lock @lock. + * r/w reference type is determined by @mode * Removes lock from LRU if it is there. * Assumes the LDLM lock is already locked. */ @@ -672,12 +673,11 @@ void ldlm_lock_addref_internal_nolock(struct ldlm_lock *lock, } /** - * Attempts to add reader/writer reference to a lock with handle \a lockh, and + * Attempts to add reader/writer reference to a lock with handle @lockh, and * fails if lock is already LDLM_FL_CBPENDING or destroyed. * - * \retval 0 success, lock was addref-ed - * - * \retval -EAGAIN lock is being canceled. + * Return: 0 success, lock was addref-ed + * -EAGAIN lock is being canceled. 
*/ int ldlm_lock_addref_try(const struct lustre_handle *lockh, enum ldlm_mode mode) { @@ -701,7 +701,7 @@ int ldlm_lock_addref_try(const struct lustre_handle *lockh, enum ldlm_mode mode) EXPORT_SYMBOL(ldlm_lock_addref_try); /** - * Add specified reader/writer reference to LDLM lock \a lock. + * Add specified reader/writer reference to LDLM lock @lock. * Locks LDLM lock and calls ldlm_lock_addref_internal_nolock to do the work. * Only called for local locks. */ @@ -713,7 +713,7 @@ void ldlm_lock_addref_internal(struct ldlm_lock *lock, enum ldlm_mode mode) } /** - * Removes reader/writer reference for LDLM lock \a lock. + * Removes reader/writer reference for LDLM lock @lock. * Assumes LDLM lock is already locked. * only called in ldlm_flock_destroy and for local locks. * Does NOT add lock to LRU if no r/w references left to accommodate flock locks @@ -739,7 +739,7 @@ void ldlm_lock_decref_internal_nolock(struct ldlm_lock *lock, } /** - * Removes reader/writer reference for LDLM lock \a lock. + * Removes reader/writer reference for LDLM lock @lock. * Locks LDLM lock first. * If the lock is determined to be client lock on a client and r/w refcount * drops to zero and the lock is not blocked, the lock is added to LRU lock @@ -814,7 +814,7 @@ void ldlm_lock_decref_internal(struct ldlm_lock *lock, enum ldlm_mode mode) } /** - * Decrease reader/writer refcount for LDLM lock with handle \a lockh + * Decrease reader/writer refcount for LDLM lock with handle @lockh */ void ldlm_lock_decref(const struct lustre_handle *lockh, enum ldlm_mode mode) { @@ -828,7 +828,7 @@ void ldlm_lock_decref(const struct lustre_handle *lockh, enum ldlm_mode mode) /** * Decrease reader/writer refcount for LDLM lock with handle - * \a lockh and mark it for subsequent cancellation once r/w refcount + * @lockh and mark it for subsequent cancellation once r/w refcount * drops to zero instead of putting into LRU. 
*/ void ldlm_lock_decref_and_cancel(const struct lustre_handle *lockh, @@ -942,7 +942,7 @@ static void search_granted_lock(struct list_head *queue, /** * Add a lock into resource granted list after a position described by - * \a prev. + * @prev. */ static void ldlm_granted_list_add_lock(struct ldlm_lock *lock, struct sl_insert_point *prev) @@ -1051,8 +1051,8 @@ struct lock_match_data { * Check if the given @lock meets the criteria for a match. * A reference on the lock is taken if matched. * - * \param lock test-against this lock - * \param data parameters + * @lock test-against this lock + * @data parameters */ static bool lock_matches(struct ldlm_lock *lock, void *vdata) { @@ -1140,10 +1140,10 @@ static bool lock_matches(struct ldlm_lock *lock, void *vdata) /** * Search for a lock with given parameters in interval trees. * - * \param res search for a lock in this resource - * \param data parameters + * @res search for a lock in this resource + * @data parameters * - * \retval a referenced lock or NULL. + * Return: a referenced lock or NULL. */ static struct ldlm_lock *search_itree(struct ldlm_resource *res, struct lock_match_data *data) @@ -1170,10 +1170,10 @@ static struct ldlm_lock *search_itree(struct ldlm_resource *res, /** * Search for a lock with given properties in a queue. * - * \param queue search for a lock in this queue - * \param data parameters + * @queue search for a lock in this queue + * @data parameters * - * \retval a referenced lock or NULL. + * Return: a referenced lock or NULL. */ static struct ldlm_lock *search_queue(struct list_head *queue, struct lock_match_data *data) @@ -1224,7 +1224,7 @@ void ldlm_lock_allow_match(struct ldlm_lock *lock) * Attempt to find a lock with specified properties. 
* * Typically returns a reference to matched lock unless LDLM_FL_TEST_LOCK is - * set in \a flags + * set in @flags * * Can be called in two ways: * @@ -1243,8 +1243,8 @@ void ldlm_lock_allow_match(struct ldlm_lock *lock) * If 'flags' contains LDLM_FL_TEST_LOCK, then don't actually reference a lock, * just tell us if we would have matched. * - * \retval 1 if it finds an already-existing lock that is compatible; in this - * case, lockh is filled in with a addref()ed lock + * Return: 1 if it finds an already-existing lock that is compatible; + * in this case, lockh is filled in with an addref()ed lock * * We also check security context, and if that fails we simply return 0 (to * keep caller code unchanged), the context failure will be discovered by @@ -1831,7 +1831,7 @@ int ldlm_run_ast_work(struct ldlm_namespace *ns, struct list_head *rpc_list, } /** - * Helper function to call blocking AST for LDLM lock \a lock in a + * Helper function to call blocking AST for LDLM lock @lock in a * "cancelling" mode. */ void ldlm_cancel_callback(struct ldlm_lock *lock) @@ -1862,7 +1862,7 @@ } /** - * Remove skiplist-enabled LDLM lock \a req from granted list + * Remove skiplist-enabled LDLM lock @req from granted list */ void ldlm_unlink_lock_skiplist(struct ldlm_lock *req) { @@ -1875,7 +1875,7 @@ void ldlm_unlink_lock_skiplist(struct ldlm_lock *req) } /** - * Attempts to cancel LDLM lock \a lock that has no reader/writer references. + * Attempts to cancel LDLM lock @lock that has no reader/writer references. */ void ldlm_lock_cancel(struct ldlm_lock *lock) { @@ -1937,7 +1937,7 @@ struct export_cl_data { }; /** - * Print lock with lock handle \a lockh description into debug log. + * Print lock with lock handle @lockh description into debug log. * * Used when printing all locks on a resource for debug purposes. 
*/ diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c index bae67ac..589b89d 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c @@ -382,9 +382,9 @@ static inline void init_blwi(struct ldlm_bl_work_item *blwi, } /** - * Queues a list of locks \a cancels containing \a count locks - * for later processing by a blocking thread. If \a count is zero, - * then the lock referenced as \a lock is queued instead. + * Queues a list of locks @cancels containing @count locks + * for later processing by a blocking thread. If @count is zero, + * then the lock referenced as @lock is queued instead. * * The blocking thread would then call ->l_blocking_ast callback in the lock. * If list addition fails an error is returned and caller is supposed to diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c index 5b23767f..1f81795 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c @@ -175,7 +175,7 @@ enum { /** * Calculates suggested grant_step in % of available locks for passed - * \a period. This is later used in grant_plan calculations. + * @period. This is later used in grant_plan calculations. */ static inline int ldlm_pool_t2gsp(unsigned int t) { @@ -205,7 +205,7 @@ static inline int ldlm_pool_t2gsp(unsigned int t) } /** - * Recalculates next stats on passed \a pl. + * Recalculates next stats on passed @pl. * * \pre ->pl_lock is locked. */ @@ -231,7 +231,7 @@ static void ldlm_pool_recalc_stats(struct ldlm_pool *pl) /** * Sets SLV and Limit from container_of(pl, struct ldlm_namespace, - * ns_pool)->ns_obd tp passed \a pl. + * ns_pool)->ns_obd to passed @pl. 
*/ static void ldlm_cli_pool_pop_slv(struct ldlm_pool *pl) { @@ -250,7 +250,7 @@ static void ldlm_cli_pool_pop_slv(struct ldlm_pool *pl) } /** - * Recalculates client size pool \a pl according to current SLV and Limit. + * Recalculates client size pool @pl according to current SLV and Limit. */ static int ldlm_cli_pool_recalc(struct ldlm_pool *pl) { @@ -312,7 +312,7 @@ static int ldlm_cli_pool_recalc(struct ldlm_pool *pl) /** * This function is main entry point for memory pressure handling on client * side. Main goal of this function is to cancel some number of locks on - * passed \a pl according to \a nr and \a gfp_mask. + * passed @pl according to @nr and @gfp_mask. */ static int ldlm_cli_pool_shrink(struct ldlm_pool *pl, int nr, gfp_t gfp_mask) @@ -350,7 +350,7 @@ static int ldlm_cli_pool_shrink(struct ldlm_pool *pl, /** * Pool recalc wrapper. Will call either client or server pool recalc callback - * depending what pool \a pl is used. + * depending what pool @pl is used. */ static int ldlm_pool_recalc(struct ldlm_pool *pl) { @@ -691,7 +691,7 @@ void ldlm_pool_fini(struct ldlm_pool *pl) } /** - * Add new taken ldlm lock \a lock into pool \a pl accounting. + * Add new taken ldlm lock @lock into pool @pl accounting. */ void ldlm_pool_add(struct ldlm_pool *pl, struct ldlm_lock *lock) { @@ -716,7 +716,7 @@ void ldlm_pool_add(struct ldlm_pool *pl, struct ldlm_lock *lock) } /** - * Remove ldlm lock \a lock from pool \a pl accounting. + * Remove ldlm lock @lock from pool @pl accounting. */ void ldlm_pool_del(struct ldlm_pool *pl, struct ldlm_lock *lock) { @@ -734,7 +734,7 @@ void ldlm_pool_del(struct ldlm_pool *pl, struct ldlm_lock *lock) } /** - * Returns current \a pl SLV. + * Returns current @pl SLV. * * \pre ->pl_lock is not locked. */ @@ -749,7 +749,7 @@ u64 ldlm_pool_get_slv(struct ldlm_pool *pl) } /** - * Sets passed \a clv to \a pl. + * Sets passed @clv to @pl. * * \pre ->pl_lock is not locked. 
*/ @@ -761,7 +761,7 @@ void ldlm_pool_set_clv(struct ldlm_pool *pl, u64 clv) } /** - * Returns current LVF from \a pl. + * Returns current LVF from @pl. */ u32 ldlm_pool_get_lvf(struct ldlm_pool *pl) { diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c index b819ade..a614d74 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c @@ -143,9 +143,9 @@ static void ldlm_expired_completion_wait(struct ldlm_lock *lock, u32 conn_cnt) * lock cancel, and their replies). Used for lock completion timeout on the * client side. * - * \param[in] lock lock which is waiting the completion callback + * @lock: lock which is waiting the completion callback * - * \retval timeout in seconds to wait for the server reply + * Return: timeout in seconds to wait for the server reply */ /* We use the same basis for both server side and client side functions * from a single node. @@ -555,7 +555,7 @@ static inline int ldlm_format_handles_avail(struct obd_import *imp, /** * Cancel LRU locks and pack them into the enqueue request. Pack there the given - * \a count locks in \a cancels. + * @count locks in @cancels. * * This is to be called by functions preparing their own requests that * might contain lists of locks to cancel in addition to actual operation @@ -660,12 +660,12 @@ static struct ptlrpc_request *ldlm_enqueue_pack(struct obd_export *exp, /** * Client-side lock enqueue. * - * If a request has some specific initialisation it is passed in \a reqp, + * If a request has some specific initialisation it is passed in @reqp, * otherwise it is created in ldlm_cli_enqueue. * - * Supports sync and async requests, pass \a async flag accordingly. If a + * Supports sync and async requests, pass @async flag accordingly. If a * request was created in ldlm_cli_enqueue and it is the async request, - * pass it to the caller in \a reqp. + * pass it to the caller in @reqp. 
*/ int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp, struct ldlm_enqueue_info *einfo, @@ -787,10 +787,11 @@ int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp, /** * Cancel locks locally. - * Returns: - * \retval LDLM_FL_LOCAL_ONLY if there is no need for a CANCEL RPC to the server - * \retval LDLM_FL_CANCELING otherwise; - * \retval LDLM_FL_BL_AST if there is a need for a separate CANCEL RPC. + * + * Returns: LDLM_FL_LOCAL_ONLY if there is no need for a CANCEL RPC + * to the server + * LDLM_FL_CANCELING otherwise; + * LDLM_FL_BL_AST if there is a need for a separate CANCEL RPC. */ static u64 ldlm_cli_cancel_local(struct ldlm_lock *lock) { @@ -824,7 +825,7 @@ static u64 ldlm_cli_cancel_local(struct ldlm_lock *lock) } /** - * Pack \a count locks in \a head into ldlm_request buffer of request \a req. + * Pack @count locks in @head into ldlm_request buffer of request @req. */ static void ldlm_cancel_pack(struct ptlrpc_request *req, struct list_head *head, int count) @@ -860,8 +861,8 @@ static void ldlm_cancel_pack(struct ptlrpc_request *req, } /** - * Prepare and send a batched cancel RPC. It will include \a count lock - * handles of locks given in \a cancels list. + * Prepare and send a batched cancel RPC. It will include @count lock + * handles of locks given in @cancels list. */ static int ldlm_cli_cancel_req(struct obd_export *exp, struct list_head *cancels, @@ -955,7 +956,7 @@ static inline struct ldlm_pool *ldlm_imp2pl(struct obd_import *imp) } /** - * Update client's OBD pool related fields with new SLV and Limit from \a req. + * Update client's OBD pool related fields with new SLV and Limit from @req. */ int ldlm_cli_update_pool(struct ptlrpc_request *req) { @@ -1071,7 +1072,7 @@ int ldlm_cli_cancel(const struct lustre_handle *lockh, EXPORT_SYMBOL(ldlm_cli_cancel); /** - * Locally cancel up to \a count locks in list \a cancels. + * Locally cancel up to @count locks in list @cancels. 
* Return the number of cancelled locks. */ int ldlm_cli_cancel_list_local(struct list_head *cancels, int count, @@ -1155,12 +1156,11 @@ int ldlm_cli_cancel_list_local(struct list_head *cancels, int count, /** * Callback function for LRU-resize policy. Decides whether to keep - * \a lock in LRU for current \a LRU size \a unused, added in current - * scan \a added and number of locks to be preferably canceled \a count. - * - * \retval LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * @lock in LRU for current @LRU size @unused, added in current + * scan @added and number of locks to be preferably canceled @count. * - * \retval LDLM_POLICY_CANCEL_LOCK cancel lock from LRU + * Return: LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * LDLM_POLICY_CANCEL_LOCK cancel lock from LRU */ static enum ldlm_policy_res ldlm_cancel_lrur_policy(struct ldlm_namespace *ns, struct ldlm_lock *lock, @@ -1204,12 +1204,11 @@ static enum ldlm_policy_res ldlm_cancel_lrur_policy(struct ldlm_namespace *ns, /** * Callback function for debugfs used policy. Makes decision whether to keep - * \a lock in LRU for current \a LRU size \a unused, added in current scan \a - * added and number of locks to be preferably canceled \a count. - * - * \retval LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * @lock in LRU for current @LRU size @unused, added in current scan + * @added and number of locks to be preferably canceled @count. * - * \retval LDLM_POLICY_CANCEL_LOCK cancel lock from LRU + * Return: LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * LDLM_POLICY_CANCEL_LOCK cancel lock from LRU */ static enum ldlm_policy_res ldlm_cancel_passed_policy(struct ldlm_namespace *ns, struct ldlm_lock *lock, @@ -1224,13 +1223,12 @@ static enum ldlm_policy_res ldlm_cancel_passed_policy(struct ldlm_namespace *ns, } /** - * Callback function for aged policy. 
Makes decision whether to keep \a lock in - * LRU for current LRU size \a unused, added in current scan \a added and - * number of locks to be preferably canceled \a count. + * Callback function for aged policy. Makes decision whether to keep @lock in + * LRU for current LRU size @unused, added in current scan @added and + * number of locks to be preferably canceled @count. * - * \retval LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning - * - * \retval LDLM_POLICY_CANCEL_LOCK cancel lock from LRU + * Return: LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * LDLM_POLICY_CANCEL_LOCK cancel lock from LRU */ static enum ldlm_policy_res ldlm_cancel_aged_policy(struct ldlm_namespace *ns, struct ldlm_lock *lock, @@ -1274,13 +1272,12 @@ static enum ldlm_policy_res ldlm_cancel_aged_policy(struct ldlm_namespace *ns, } /** - * Callback function for default policy. Makes decision whether to keep \a lock - * in LRU for current LRU size \a unused, added in current scan \a added and - * number of locks to be preferably canceled \a count. - * - * \retval LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * Callback function for default policy. Makes decision whether to keep @lock + * in LRU for current LRU size @unused, added in current scan @added and + * number of locks to be preferably canceled @count. * - * \retval LDLM_POLICY_CANCEL_LOCK cancel lock from LRU + * Return: LDLM_POLICY_KEEP_LOCK keep lock in LRU in stop scanning + * LDLM_POLICY_CANCEL_LOCK cancel lock from LRU */ static enum ldlm_policy_res ldlm_cancel_default_policy(struct ldlm_namespace *ns, struct ldlm_lock *lock, @@ -1329,11 +1326,11 @@ typedef enum ldlm_policy_res (*ldlm_cancel_lru_policy_t)( } /** - * - Free space in LRU for \a count new locks, + * - Free space in LRU for @count new locks, * redundant unused locks are canceled locally; * - also cancel locally unused aged locks; - * - do not cancel more than \a max locks; - * - GET the found locks and add them into the \a cancels list. 
+ * - do not cancel more than @max locks; + * - GET the found locks and add them into the @cancels list. * * A client lock can be added to the l_bl_ast list only when it is * marked LDLM_FL_CANCELING. Otherwise, somebody is already doing @@ -1346,15 +1343,15 @@ typedef enum ldlm_policy_res (*ldlm_cancel_lru_policy_t)( * Calling policies for enabled LRU resize: * ---------------------------------------- * flags & LDLM_LRU_FLAG_LRUR - use LRU resize policy (SLV from server) to - * cancel not more than \a count locks; + * cancel not more than @count locks; * - * flags & LDLM_LRU_FLAG_PASSED - cancel \a count number of old locks (located + * flags & LDLM_LRU_FLAG_PASSED - cancel @count number of old locks (located * at the beginning of LRU list); * - * flags & LDLM_LRU_FLAG_SHRINK - cancel not more than \a count locks according + * flags & LDLM_LRU_FLAG_SHRINK - cancel not more than @count locks according * to memory pressure policy function; * - * flags & LDLM_LRU_FLAG_AGED - cancel \a count locks according to + * flags & LDLM_LRU_FLAG_AGED - cancel @count locks according to * "aged policy". * * flags & LDLM_LRU_FLAG_NO_WAIT - cancel as many unused locks as possible @@ -1529,7 +1526,7 @@ int ldlm_cancel_lru_local(struct ldlm_namespace *ns, } /** - * Cancel at least \a nr locks from given namespace LRU. + * Cancel at least @nr locks from given namespace LRU. * * When called with LCF_ASYNC the blocking callback will be handled * in a thread and this function will return after the thread has been @@ -1556,7 +1553,7 @@ int ldlm_cancel_lru(struct ldlm_namespace *ns, int nr, /** * Find and cancel locally unused locks found on resource, matched to the - * given policy, mode. GET the found locks and add them into the \a cancels + * given policy, mode. GET the found locks and add them into the @cancels * list. 
*/ int ldlm_cancel_resource_local(struct ldlm_resource *res, @@ -1615,12 +1612,12 @@ int ldlm_cancel_resource_local(struct ldlm_resource *res, /** * Cancel client-side locks from a list and send/prepare cancel RPCs to the * server. - * If \a req is NULL, send CANCEL request to server with handles of locks - * in the \a cancels. If EARLY_CANCEL is not supported, send CANCEL requests + * If @req is NULL, send CANCEL request to server with handles of locks + * in the @cancels. If EARLY_CANCEL is not supported, send CANCEL requests * separately per lock. - * If \a req is not NULL, put handles of locks in \a cancels into the request - * buffer at the offset \a off. - * Destroy \a cancels at the end. + * If @req is not NULL, put handles of locks in @cancels into the request + * buffer at the offset @off. + * Destroy @cancels at the end. */ int ldlm_cli_cancel_list(struct list_head *cancels, int count, struct ptlrpc_request *req, diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c index 74c7644..c1f585a 100644 --- a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c @@ -557,7 +557,7 @@ struct ldlm_ns_hash_def { }, }; -/** Register \a ns in the list of namespaces */ +/** Register @ns in the list of namespaces */ static void ldlm_namespace_register(struct ldlm_namespace *ns, enum ldlm_side client) { @@ -859,13 +859,13 @@ static int __ldlm_namespace_free(struct ldlm_namespace *ns, int force) } /** - * Performs various cleanups for passed \a ns to make it drop refc and be + * Performs various cleanups for passed @ns to make it drop refc and be * ready for freeing. Waits for refc == 0. * * The following is done: - * (0) Unregister \a ns from its list to make inaccessible for potential + * (0) Unregister @ns from its list to make inaccessible for potential * users like pools thread and others; - * (1) Clear all locks in \a ns. + * (1) Clear all locks in @ns. 
*/ void ldlm_namespace_free_prior(struct ldlm_namespace *ns, struct obd_import *imp, @@ -899,7 +899,7 @@ void ldlm_namespace_free_prior(struct ldlm_namespace *ns, } } -/** Unregister \a ns from the list of namespaces. */ +/** Unregister @ns from the list of namespaces. */ static void ldlm_namespace_unregister(struct ldlm_namespace *ns, enum ldlm_side client) { @@ -915,9 +915,9 @@ static void ldlm_namespace_unregister(struct ldlm_namespace *ns, } /** - * Performs freeing memory structures related to \a ns. This is only done + * Performs freeing memory structures related to @ns. This is only done * when ldlm_namespce_free_prior() successfully removed all resources - * referencing \a ns and its refc == 0. + * referencing @ns and its refc == 0. */ void ldlm_namespace_free_post(struct ldlm_namespace *ns) { @@ -936,8 +936,8 @@ void ldlm_namespace_free_post(struct ldlm_namespace *ns) ldlm_namespace_sysfs_unregister(ns); cfs_hash_putref(ns->ns_rs_hash); kfree(ns->ns_name); - /* Namespace \a ns should be not on list at this time, otherwise - * this will cause issues related to using freed \a ns in poold + /* Namespace @ns should be not on list at this time, otherwise + * this will cause issues related to using freed @ns in poold * thread. 
*/ LASSERT(list_empty(&ns->ns_list_chain)); From patchwork Sat Mar 2 19:12:22 2019 X-Patchwork-Submitter: James Simmons X-Patchwork-Id: 10836735
From: James Simmons To: Andreas Dilger , Oleg Drokin , NeilBrown Date: Sat, 2 Mar 2019 14:12:22 -0500 Message-Id: <1551553944-6419-6-git-send-email-jsimmons@infradead.org> In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> References: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> Subject: [lustre-devel] [PATCH 5/7] llite: move comments to sphinx format Cc: Lustre Development List Lustre comments were written for DocBook, which is no longer used by the Linux kernel. Move all the DocBook handling to sphinx. Signed-off-by: James Simmons --- drivers/staging/lustre/lustre/llite/dir.c | 14 ++--- drivers/staging/lustre/lustre/llite/file.c | 62 +++++++++----------- drivers/staging/lustre/lustre/llite/glimpse.c | 4 +- drivers/staging/lustre/lustre/llite/lcommon_cl.c | 5 +- drivers/staging/lustre/lustre/llite/llite_lib.c | 48 ++++++++--------- drivers/staging/lustre/lustre/llite/llite_mmap.c | 19 +++---- drivers/staging/lustre/lustre/llite/llite_nfs.c | 4 +- drivers/staging/lustre/lustre/llite/lproc_llite.c | 22 ++++---- drivers/staging/lustre/lustre/llite/range_lock.c | 20 +++---- drivers/staging/lustre/lustre/llite/rw.c | 14 ++--- drivers/staging/lustre/lustre/llite/statahead.c | 59 ++++++++++---------- drivers/staging/lustre/lustre/llite/super25.c | 3 +- drivers/staging/lustre/lustre/llite/vvp_io.c | 2 +- drivers/staging/lustre/lustre/llite/vvp_page.c | 8 +-- drivers/staging/lustre/lustre/llite/xattr_cache.c | 44 +++++++-------- .../staging/lustre/lustre/llite/xattr_security.c | 12 ++--- 16 files changed, 174 insertions(+), 166
deletions(-) diff --git a/drivers/staging/lustre/lustre/llite/dir.c b/drivers/staging/lustre/lustre/llite/dir.c index 17bb618..1fbfc3a 100644 --- a/drivers/staging/lustre/lustre/llite/dir.c +++ b/drivers/staging/lustre/lustre/llite/dir.c @@ -393,13 +393,13 @@ static int ll_send_mgc_param(struct obd_export *mgc, char *string) /** * Create striped directory with specified stripe(@lump) * - * param[in] dparent the parent of the directory. - * param[in] lump the specified stripes. - * param[in] dirname the name of the directory. - * param[in] mode the specified mode of the directory. + * @dparent: the parent of the directory. + * @lump: the specified stripes. + * @dirname: the name of the directory. + * @mode: the specified mode of the directory. * - * retval =0 if striped directory is being created successfully. - * <0 if the creation is failed. + * Returns: =0 if the striped directory was created successfully. + * <0 if the creation failed. */ static int ll_dir_setdirstripe(struct dentry *dparent, struct lmv_user_md *lump, const char *dirname, umode_t mode) @@ -738,7 +738,7 @@ int ll_get_mdt_idx(struct inode *inode) * first information for it that real work has started. * * Moreover, for a ARCHIVE request, it will sample the file data version and - * store it in \a copy. + * store it in @copy. * * \return 0 on success. */ diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c index 6afaa90..4c13a1d 100644 --- a/drivers/staging/lustre/lustre/llite/file.c +++ b/drivers/staging/lustre/lustre/llite/file.c @@ -112,8 +112,8 @@ static void ll_prepare_close(struct inode *inode, struct md_op_data *op_data, * Perform a close, possibly with a bias. * The meaning of "data" depends on the value of "bias". * - * If \a bias is MDS_HSM_RELEASE then \a data is a pointer to the data version.
- * If \a bias is MDS_CLOSE_LAYOUT_SWAP then \a data is a pointer to the inode to + * If @bias is MDS_HSM_RELEASE then @data is a pointer to the data version. + * If @bias is MDS_CLOSE_LAYOUT_SWAP then @data is a pointer to the inode to * swap layouts with. */ static int ll_close_inode_openhandle(struct inode *inode, @@ -923,11 +923,12 @@ static int ll_lease_och_release(struct inode *inode, struct file *file) /** * Check whether a layout swap can be done between two inodes. * - * \param[in] inode1 First inode to check - * \param[in] inode2 Second inode to check + * @inode1: First inode to check + * @inode2: Second inode to check * - * \retval 0 on success, layout swap can be performed between both inodes - * \retval negative error code if requirements are not met + * Return: 0 on success, layout swap can be performed between + * both inodes + * negative error code if requirements are not met */ static int ll_check_swap_layouts_validity(struct inode *inode1, struct inode *inode2) @@ -1268,11 +1269,11 @@ static void ll_io_init(struct cl_io *io, const struct file *file, int write) * doesn't make the situation worse on single node but it may interleave write * results from multiple nodes due to short read handling in ll_file_aio_read(). * - * @env - lu_env - * @iocb - kiocb from kernel - * @iter - user space buffers where the data will be copied + * @env: lu_env + * @iocb: kiocb from kernel + * @iter: user space buffers where the data will be copied * - * RETURN - number of bytes have been read, or error code if error occurred. + * Returns: number of bytes have been read, or error code if error occurred. 
*/ static ssize_t ll_do_fast_read(const struct lu_env *env, struct kiocb *iocb, @@ -1667,11 +1668,11 @@ static int ll_put_grouplock(struct inode *inode, struct file *file, /** * Close inode open handle * - * \param inode [in] inode in question - * \param it [in,out] intent which contains open info and result + * @inode: inode in question + * @it: intent which contains open info and result * - * \retval 0 success - * \retval <0 failure + * Returns: 0 success + * <0 failure */ int ll_release_openhandle(struct inode *inode, struct lookup_intent *it) { @@ -1712,8 +1713,8 @@ int ll_release_openhandle(struct inode *inode, struct lookup_intent *it) * Get size for inode for which FIEMAP mapping is requested. * Make the FIEMAP get_info call and returns the result. * - * \param fiemap kernel buffer to hold extens - * \param num_bytes kernel buffer size + * @fiemap: kernel buffer to hold extents + * @num_bytes: kernel buffer size */ static int ll_do_fiemap(struct inode *inode, struct fiemap *fiemap, size_t num_bytes) @@ -1823,7 +1824,7 @@ int ll_fid2path(struct inode *inode, void __user *arg) * This value is computed using stripe object version on OST. * Version is computed using server side locking.
* - * @param flags if do sync on the OST side; + * @flags: if do sync on the OST side; * 0: no sync * LL_DV_RD_FLUSH: flush dirty pages, LCK_PR on OSTs * LL_DV_WR_FLUSH: drop all caching pages, LCK_PW on OSTs @@ -3174,10 +3175,12 @@ int ll_migrate(struct inode *parent, struct file *file, int mdtidx, * - bits can be in different locks * - if found clear the common lock bits in *bits * - the bits not found, are kept in *bits - * \param inode [IN] - * \param bits [IN] searched lock bits [IN] - * \param l_req_mode [IN] searched lock mode - * \retval boolean, true iff all bits are found + * + * @inode: inode + * @bits: searched lock bits + * @l_req_mode: searched lock mode + * + * Returns: boolean, true iff all bits are found */ int ll_have_md_lock(struct inode *inode, u64 *bits, enum ldlm_mode l_req_mode) @@ -3828,9 +3831,8 @@ static int ll_layout_lock_set(struct lustre_handle *lockh, enum ldlm_mode mode, * @inode file inode * @intent layout intent * - * RETURNS: - * 0 on success - * retval < 0 error code + * Returns: 0 on success + * < 0 error code */ static int ll_layout_intent(struct inode *inode, struct layout_intent *intent) { @@ -3938,13 +3940,13 @@ int ll_layout_refresh(struct inode *inode, u32 *gen) /** * Issue layout intent RPC indicating where in a file an IO is about to write. * - * \param[in] inode file inode. - * \param[in] start start offset of fille in bytes where an IO is about to - * write. - * \param[in] end exclusive end offset in bytes of the write range. + * @inode: file inode. + * @start: start offset of file in bytes where an IO is about to + * write. + * @end: exclusive end offset in bytes of the write range.
* - * \retval 0 on success - * \retval < 0 error code + * Returns: 0 on success + * < 0 error code */ int ll_layout_write_intent(struct inode *inode, u64 start, u64 end) { diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c index 27c233d..05c267f 100644 --- a/drivers/staging/lustre/lustre/llite/glimpse.c +++ b/drivers/staging/lustre/lustre/llite/glimpse.c @@ -58,8 +58,8 @@ /* * Check whether file has possible unwriten pages. * - * \retval 1 file is mmap-ed or has dirty pages - * 0 otherwise + * Return: 1 if file is mmap-ed or has dirty pages + * 0 otherwise */ blkcnt_t dirty_cnt(struct inode *inode) { diff --git a/drivers/staging/lustre/lustre/llite/lcommon_cl.c b/drivers/staging/lustre/lustre/llite/lcommon_cl.c index afcaa5e..cc8d1b2 100644 --- a/drivers/staging/lustre/lustre/llite/lcommon_cl.c +++ b/drivers/staging/lustre/lustre/llite/lcommon_cl.c @@ -130,8 +130,9 @@ int cl_setattr_ost(struct cl_object *obj, const struct iattr *attr, * Initialize or update CLIO structures for regular files when new * meta-data arrives from the server. * - * \param inode regular file inode - * \param md new file metadata from MDS + * @inode regular file inode + * @md new file metadata from MDS + * * - allocates cl_object if necessary, * - updated layout, if object was already here. 
*/ diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c index e2417cd..43f5fc7 100644 --- a/drivers/staging/lustre/lustre/llite/llite_lib.c +++ b/drivers/staging/lustre/lustre/llite/llite_lib.c @@ -638,11 +638,11 @@ int ll_get_max_mdsize(struct ll_sb_info *sbi, int *lmmsize) * * \see client_obd::cl_default_mds_easize * - * \param[in] sbi superblock info for this filesystem - * \param[out] lmmsize pointer to storage location for value + * @sbi: superblock info for this filesystem + * @lmmsize: pointer to storage location for value * - * \retval 0 on success - * \retval negative negated errno on failure + * Returns: 0 on success + * negated errno on failure */ int ll_get_default_mdsize(struct ll_sb_info *sbi, int *lmmsize) { @@ -662,11 +662,11 @@ int ll_get_default_mdsize(struct ll_sb_info *sbi, int *lmmsize) * * \see client_obd::cl_default_mds_easize * - * \param[in] sbi superblock info for this filesystem - * \param[in] lmmsize the size to set + * @sbi: superblock info for this filesystem + * @lmmsize: the size to set * - * \retval 0 on success - * \retval negative negated errno on failure + * Return: 0 on success + * negated errno on failure */ int ll_set_default_mdsize(struct ll_sb_info *sbi, int lmmsize) { @@ -2181,8 +2181,8 @@ int ll_remount_fs(struct super_block *sb, int *flags, char *data) * holds the reference on such file/object, then it will block the * subsequent threads that want to locate such object via FID. * - * \param[in] sb super block for this file-system - * \param[in] open_req pointer to the original open request + * @sb: super block for this file-system + * @open_req: pointer to the original open request */ void ll_open_cleanup(struct super_block *sb, struct ptlrpc_request *open_req) { @@ -2475,7 +2475,7 @@ int ll_get_obd_name(struct inode *inode, unsigned int cmd, unsigned long arg) } /** - * Get lustre file system name by \a sbi. 
If \a buf is provided(non-NULL), the + * Get lustre file system name by @sbi. If @buf is provided(non-NULL), the * fsname will be returned in this buffer; otherwise, a static buffer will be * used to store the fsname and returned to caller. */ @@ -2612,13 +2612,13 @@ void ll_compute_rootsquash_state(struct ll_sb_info *sbi) /** * Parse linkea content to extract information about a given hardlink * - * \param[in] ldata - Initialized linkea data - * \param[in] linkno - Link identifier - * \param[out] parent_fid - The entry's parent FID - * \param[in] size - Entry name destination buffer + * @ldata: - Initialized linkea data + * @linkno: - Link identifier + * @parent_fid: - The entry's parent FID + * @size: - Entry name destination buffer * - * \retval 0 on success - * \retval Appropriate negative error code on failure + * Returns: 0 on success + * Appropriate negative error code on failure */ static int ll_linkea_decode(struct linkea_data *ldata, unsigned int linkno, struct lu_fid *parent_fid, struct lu_name *ln) @@ -2655,14 +2655,14 @@ static int ll_linkea_decode(struct linkea_data *ldata, unsigned int linkno, * a given link number, letting the caller iterate over linkno to list one or * all links of an entry. * - * \param[in] file - File descriptor against which to perform the operation - * \param[in,out] arg - User-filled structure containing the linkno to operate - * on and the available size. It is eventually filled - * with the requested information or left untouched on - * error + * @file: - File descriptor against which to perform the operation + * @arg: - User-filled structure containing the linkno to operate + * on and the available size. 
It is eventually filled + * with the requested information or left untouched on + * error * - * \retval - 0 on success - * \retval - Appropriate negative error code on failure + * Returns: - 0 on success + * - Appropriate negative error code on failure */ int ll_getparent(struct file *file, struct getparent __user *arg) { diff --git a/drivers/staging/lustre/lustre/llite/llite_mmap.c b/drivers/staging/lustre/lustre/llite/llite_mmap.c index f5aaaf7..1865db1 100644 --- a/drivers/staging/lustre/lustre/llite/llite_mmap.c +++ b/drivers/staging/lustre/lustre/llite/llite_mmap.c @@ -79,10 +79,11 @@ struct vm_area_struct *our_vma(struct mm_struct *mm, unsigned long addr, /** * API independent part for page fault initialization. - * \param vma - virtual memory area addressed to page fault - * \param env - corespondent lu_env to processing - * \param index - page index corespondent to fault. - * \parm ra_flags - vma readahead flags. + * + * @vma: virtual memory area addressed to page fault + * @env: corresponding lu_env for processing + * @index: page index corresponding to the fault. + * @ra_flags: vma readahead flags. * * \return error codes from cl_io_init. */ @@ -254,12 +255,12 @@ static inline vm_fault_t to_fault_error(int result) * Lustre implementation of a vm_operations_struct::fault() method, called by * VM to server page fault (both in kernel and user space).
* - * \param vma - is virtual area struct related to page fault - * \param vmf - structure which describe type and address where hit fault + * @vma: virtual area struct related to the page fault + * @vmf: structure describing the fault type and address * - * \return allocated and filled _locked_ page for address - * \retval VM_FAULT_ERROR on general error - * \retval NOPAGE_OOM not have memory for allocate new page + * Return: allocated and filled _locked_ page for address + * VM_FAULT_ERROR on general error + * NOPAGE_OOM when out of memory for a new page */ static vm_fault_t __ll_fault(struct vm_area_struct *vma, struct vm_fault *vmf) { diff --git a/drivers/staging/lustre/lustre/llite/llite_nfs.c b/drivers/staging/lustre/lustre/llite/llite_nfs.c index 3f34073..9129f47 100644 --- a/drivers/staging/lustre/lustre/llite/llite_nfs.c +++ b/drivers/staging/lustre/lustre/llite/llite_nfs.c @@ -178,8 +178,8 @@ struct lustre_nfs_fid { } /** - * \a connectable - is nfsd will connect himself or this should be done - * at lustre + * @connectable: whether nfsd will connect itself, or this should be done + * at lustre * * The return value is file handle type: * 1 -- contains child file handle; diff --git a/drivers/staging/lustre/lustre/llite/lproc_llite.c b/drivers/staging/lustre/lustre/llite/lproc_llite.c index 8215296..dc6494a 100644 --- a/drivers/staging/lustre/lustre/llite/lproc_llite.c +++ b/drivers/staging/lustre/lustre/llite/lproc_llite.c @@ -893,12 +893,12 @@ static ssize_t max_easize_show(struct kobject *kobj, * * \see client_obd::cl_default_mds_easize * - * \param[in] kobj kernel object for sysfs tree - * \param[in] attr attribute of this kernel object - * \param[in] buf buffer to write data into + * @kobj: kernel object for sysfs tree + * @attr: attribute of this kernel object + * @buf: buffer to write data into * - * \retval positive \a count on success - * \retval negative negated errno on failure + * Returns: positive @count on success + * negated errno on
failure */ static ssize_t default_easize_show(struct kobject *kobj, struct attribute *attr, @@ -924,13 +924,13 @@ static ssize_t default_easize_show(struct kobject *kobj, * * \see client_obd::cl_default_mds_easize * - * \param[in] kobj kernel object for sysfs tree - * \param[in] attr attribute of this kernel object - * \param[in] buffer string passed from user space - * \param[in] count \a buffer length + * @kobj: kernel object for sysfs tree + * @attr: attribute of this kernel object + * @buffer: string passed from user space + * @count: @buffer length * - * \retval positive \a count on success - * \retval negative negated errno on failure + * Returns: positive @count on success + * negated errno on failure */ static ssize_t default_easize_store(struct kobject *kobj, struct attribute *attr, diff --git a/drivers/staging/lustre/lustre/llite/range_lock.c b/drivers/staging/lustre/lustre/llite/range_lock.c index c1f0e1e..4cd21b8 100644 --- a/drivers/staging/lustre/lustre/llite/range_lock.c +++ b/drivers/staging/lustre/lustre/llite/range_lock.c @@ -47,7 +47,7 @@ /** * Initialize a range lock tree * - * \param tree [in] an empty range lock tree + * @tree an empty range lock tree * * Pre: Caller should have allocated the range lock tree. * Post: The range lock tree is ready to function. @@ -62,9 +62,9 @@ void range_lock_tree_init(struct range_lock_tree *tree) /** * Initialize a range lock node * - * \param lock [in] an empty range lock node - * \param start [in] start of the covering region - * \param end [in] end of the covering region + * @lock an empty range lock node + * @start start of the covering region + * @end end of the covering region * * Pre: Caller should have allocated the range lock node. * Post: The range lock node is meant to cover [start, end] region @@ -89,8 +89,8 @@ int range_lock_init(struct range_lock *lock, u64 start, u64 end) /** * Unlock a range lock, wake up locks blocked by this lock. 
* - * \param tree [in] range lock tree - * \param lock [in] range lock to be deleted + * @tree range lock tree + * @lock range lock to be deleted * * If this lock has been granted, relase it; if not, just delete it from * the tree or the same region lock list. Wake up those locks only blocked @@ -120,11 +120,11 @@ void range_unlock(struct range_lock_tree *tree, struct range_lock *lock) /** * Lock a region * - * \param tree [in] range lock tree - * \param lock [in] range lock node containing the region span + * @tree range lock tree + * @lock range lock node containing the region span * - * \retval 0 get the range lock - * \retval <0 error code while not getting the range lock + * Return: 0 get the range lock + * <0 error code while not getting the range lock * * If there exists overlapping range lock, the new lock will wait and * retry, if later it find that it is not the chosen one to wake up, diff --git a/drivers/staging/lustre/lustre/llite/rw.c b/drivers/staging/lustre/lustre/llite/rw.c index af983ee..e66aa67 100644 --- a/drivers/staging/lustre/lustre/llite/rw.c +++ b/drivers/staging/lustre/lustre/llite/rw.c @@ -62,9 +62,9 @@ * Get readahead pages from the filesystem readahead pool of the client for a * thread. * - * /param sbi superblock for filesystem readahead state ll_ra_info - * /param ria per-thread readahead state - * /param pages number of pages requested for readahead for the thread. + * @sbi: superblock for filesystem readahead state ll_ra_info + * @ria: per-thread readahead state + * @pages: number of pages requested for readahead for the thread. * * WARNING: This algorithm is used to reduce contention on sbi->ll_lock. * It should work well if the ra_max_pages is much greater than the single @@ -73,7 +73,7 @@ * * TODO: There may be a 'global sync problem' if many threads are trying * to get an ra budget that is larger than the remaining readahead pages - * and reach here at exactly the same time. 
They will compute /a ret to + * and reach here at exactly the same time. They will compute @ret to * consume the remaining pages, but will fail at atomic_add_return() and * get a zero ra window, although there is still ra space remaining. - Jay */ @@ -168,10 +168,10 @@ void ll_ras_enter(struct file *f) /** * Initiates read-ahead of a page with given index. * - * \retval +ve: page was already uptodate so it will be skipped + * Return: +ve if page was already uptodate so it will be skipped * from being added; - * \retval -ve: page wasn't added to \a queue for error; - * \retval 0: page was added into \a queue for read ahead. + * -ve if page wasn't added to @queue for error; + * 0 if page was added into @queue for read ahead. */ static int ll_read_ahead_page(const struct lu_env *env, struct cl_io *io, struct cl_page_list *queue, pgoff_t index) diff --git a/drivers/staging/lustre/lustre/llite/statahead.c b/drivers/staging/lustre/lustre/llite/statahead.c index de7586d..53bab47 100644 --- a/drivers/staging/lustre/lustre/llite/statahead.c +++ b/drivers/staging/lustre/lustre/llite/statahead.c @@ -792,9 +792,9 @@ static int sa_lookup(struct inode *dir, struct sa_entry *entry) /** * async stat for file found in dcache, similar to .revalidate * - * \retval 1 dentry valid, no RPC sent - * \retval 0 dentry invalid, will send async stat RPC - * \retval negative number upon error + * Return: 1 dentry valid, no RPC sent + * 0 dentry invalid, will send async stat RPC + * negative number upon error */ static int sa_revalidate(struct inode *dir, struct sa_entry *entry, struct dentry *dentry) @@ -1342,14 +1342,15 @@ static int is_first_dirent(struct inode *dir, struct dentry *dentry) /** * revalidate @dentryp from statahead cache * - * \param[in] dir parent directory - * \param[in] sai sai structure - * \param[out] dentryp pointer to dentry which will be revalidated - * \param[in] unplug unplug statahead window only (normally for negative - * dentry) - * \retval 1 on success, dentry 
is saved in @dentryp - * \retval 0 if revalidation failed (no proper lock on client) - * \retval negative number upon error + * @dir: parent directory + * @sai: sai structure + * @dentryp: pointer to dentry which will be revalidated + * @unplug: unplug statahead window only (normally for negative + * dentry) + * + * Return: 1 on success, dentry is saved in @dentryp + * 0 if revalidation failed (no proper lock on client) + * negative number upon error */ static int revalidate_statahead_dentry(struct inode *dir, struct ll_statahead_info *sai, @@ -1487,14 +1488,16 @@ static int revalidate_statahead_dentry(struct inode *dir, /** * start statahead thread * - * \param[in] dir parent directory - * \param[in] dentry dentry that triggers statahead, normally the first - * dirent under @dir - * \retval -EAGAIN on success, because when this function is - * called, it's already in lookup call, so client should - * do it itself instead of waiting for statahead thread - * to do it asynchronously. - * \retval negative number upon error + * @dir: parent directory + * @dentry: dentry that triggers statahead, normally the first + * dirent under @dir + * + * Returns: -EAGAIN on success, because when this function is + * called, it's already in lookup call, so client should + * do it itself instead of waiting for statahead thread + * to do it asynchronously. + * + * negative number upon error */ static int start_statahead_thread(struct inode *dir, struct dentry *dentry) { @@ -1594,15 +1597,15 @@ static int start_statahead_thread(struct inode *dir, struct dentry *dentry) * will start statahead thread if this is the first dir entry, else revalidate * dentry from statahead cache. 
* - * \param[in] dir parent directory - * \param[out] dentryp dentry to getattr - * \param[in] unplug unplug statahead window only (normally for negative - * dentry) - * \retval 1 on success - * \retval 0 revalidation from statahead cache failed, caller needs - * to getattr from server directly - * \retval negative number on error, caller often ignores this and - * then getattr from server + * @dir: parent directory + * @dentryp: dentry to getattr + * @unplug: unplug statahead window only (normally for negative + * dentry) + * Returns: 1 on success + * 0 revalidation from statahead cache failed, caller needs + * to getattr from server directly + * negative number on error, caller often ignores this and + * then getattr from server */ int ll_statahead(struct inode *dir, struct dentry **dentryp, bool unplug) { diff --git a/drivers/staging/lustre/lustre/llite/super25.c b/drivers/staging/lustre/lustre/llite/super25.c index c2b1668..a25d03c 100644 --- a/drivers/staging/lustre/lustre/llite/super25.c +++ b/drivers/staging/lustre/lustre/llite/super25.c @@ -86,7 +86,8 @@ struct super_operations lustre_super_operations = { /** This is the entry point for the mount call into Lustre. * This is called when a server or client is mounted, * and this is where we start setting things up. - * @param data Mount options (e.g. -o flock,abort_recov) + * + * @data: Mount options (e.g. -o flock,abort_recov) */ static int lustre_fill_super(struct super_block *sb, void *lmd2_data, int silent) { diff --git a/drivers/staging/lustre/lustre/llite/vvp_io.c b/drivers/staging/lustre/lustre/llite/vvp_io.c index 593b10c..225a858 100644 --- a/drivers/staging/lustre/lustre/llite/vvp_io.c +++ b/drivers/staging/lustre/lustre/llite/vvp_io.c @@ -107,7 +107,7 @@ static void vvp_object_size_unlock(struct cl_object *obj) /** * Helper function that if necessary adjusts file size (inode->i_size), when - * position at the offset \a pos is accessed. 
File size can be arbitrary stale + * position at the offset @pos is accessed. File size can be arbitrarily stale * on a Lustre client, but client at least knows KMS. If accessed area is * inside [0, KMS], set file size to KMS, otherwise glimpse file size. * diff --git a/drivers/staging/lustre/lustre/llite/vvp_page.c b/drivers/staging/lustre/lustre/llite/vvp_page.c index ec0d933..590e5f5 100644 --- a/drivers/staging/lustre/lustre/llite/vvp_page.c +++ b/drivers/staging/lustre/lustre/llite/vvp_page.c @@ -227,7 +227,7 @@ static int vvp_page_prep_write(const struct lu_env *env, * Handles page transfer errors at VM level. * * This takes inode as a separate argument, because inode on which error is to - * be set can be different from \a vmpage inode in case of direct-io. + * be set can be different from @vmpage inode in case of direct-io. */ static void vvp_vmpage_error(struct inode *inode, struct page *vmpage, int ioret) @@ -309,10 +309,10 @@ static void vvp_page_completion_write(const struct lu_env *env, * but hopefully rare situation, as it usually results in transfer being * shorter than possible). * - * \retval 0 success, page can be placed into transfer + * Return: 0 success, page can be placed into transfer * - * \retval -EAGAIN page is either used by concurrent IO has been - * truncated. Skip it. + * -EAGAIN page is either used by concurrent IO or has been + * truncated. Skip it. */ static int vvp_page_make_ready(const struct lu_env *env, const struct cl_page_slice *slice) diff --git a/drivers/staging/lustre/lustre/llite/xattr_cache.c b/drivers/staging/lustre/lustre/llite/xattr_cache.c index bb235e0..001bdba 100644 --- a/drivers/staging/lustre/lustre/llite/xattr_cache.c +++ b/drivers/staging/lustre/lustre/llite/xattr_cache.c @@ -69,8 +69,8 @@ static void ll_xattr_cache_init(struct ll_inode_info *lli) * Find in @cache and return @xattr_name attribute in @xattr, * for the NULL @xattr_name return the first cached @xattr.
* - * \retval 0 success - * \retval -ENODATA if not found + * Return: 0 success + * -ENODATA if not found */ static int ll_xattr_cache_find(struct list_head *cache, const char *xattr_name, @@ -97,9 +97,9 @@ static int ll_xattr_cache_find(struct list_head *cache, * * Add @xattr_name attr with @xattr_val value and @xattr_val_len length, * - * \retval 0 success - * \retval -ENOMEM if no memory could be allocated for the cached attr - * \retval -EPROTO if duplicate xattr is being added + * Return: 0 success + * -ENOMEM if no memory could be allocated for the cached attr + * -EPROTO if duplicate xattr is being added */ static int ll_xattr_cache_add(struct list_head *cache, const char *xattr_name, @@ -151,8 +151,8 @@ static int ll_xattr_cache_add(struct list_head *cache, * * Remove @xattr_name attribute from @cache. * - * \retval 0 success - * \retval -ENODATA if @xattr_name is not cached + * Return: 0 success + * -ENODATA if @xattr_name is not cached */ static int ll_xattr_cache_del(struct list_head *cache, const char *xattr_name) @@ -180,8 +180,8 @@ static int ll_xattr_cache_del(struct list_head *cache, * fill in @xld_buffer or only calculate buffer * size if @xld_buffer is NULL. * - * \retval >= 0 buffer list size - * \retval -ENODATA if the list cannot fit @xld_size buffer + * Return: >= 0 buffer list size + * -ENODATA if the list cannot fit @xld_size buffer */ static int ll_xattr_cache_list(struct list_head *cache, char *xld_buffer, @@ -213,8 +213,8 @@ static int ll_xattr_cache_list(struct list_head *cache, /** * Check if the xattr cache is initialized (filled). * - * \retval 0 @cache is not initialized - * \retval 1 @cache is initialized + * Return: 0 @cache is not initialized + * 1 @cache is initialized */ static int ll_xattr_cache_valid(struct ll_inode_info *lli) { @@ -226,7 +226,7 @@ static int ll_xattr_cache_valid(struct ll_inode_info *lli) * * Free all xattr memory. @lli is the inode info pointer. 
* - * \retval 0 no error occurred + * Return: 0 no error occurred */ static int ll_xattr_cache_destroy_locked(struct ll_inode_info *lli) { @@ -261,8 +261,8 @@ int ll_xattr_cache_destroy(struct inode *inode) * the function handles it with a separate enq lock. * If successful, the function exits with the list lock held. * - * \retval 0 no error occurred - * \retval -ENOMEM not enough memory + * Return: 0 no error occurred + * -ENOMEM not enough memory */ static int ll_xattr_find_get_lock(struct inode *inode, struct lookup_intent *oit, @@ -326,9 +326,9 @@ static int ll_xattr_find_get_lock(struct inode *inode, * * Fetch and cache the whole of xattrs for @inode, acquiring a read lock. * - * \retval 0 no error occurred - * \retval -EPROTO network protocol error - * \retval -ENOMEM not enough memory for the cache + * Return: 0 no error occurred + * -EPROTO network protocol error + * -ENOMEM not enough memory for the cache */ static int ll_xattr_cache_refill(struct inode *inode) { @@ -451,11 +451,11 @@ static int ll_xattr_cache_refill(struct inode *inode) * The resulting value/list is stored in @buffer if the former * is not larger than @size. 
* - * \retval 0 no error occurred - * \retval -EPROTO network protocol error - * \retval -ENOMEM not enough memory for the cache - * \retval -ERANGE the buffer is not large enough - * \retval -ENODATA no such attr or the list is empty + * Return: 0 no error occurred + * -EPROTO network protocol error + * -ENOMEM not enough memory for the cache + * -ERANGE the buffer is not large enough + * -ENODATA no such attr or the list is empty */ int ll_xattr_cache_get(struct inode *inode, const char *name, char *buffer, size_t size, u64 valid) diff --git a/drivers/staging/lustre/lustre/llite/xattr_security.c b/drivers/staging/lustre/lustre/llite/xattr_security.c index b419d8f..f1c011e 100644 --- a/drivers/staging/lustre/lustre/llite/xattr_security.c +++ b/drivers/staging/lustre/lustre/llite/xattr_security.c @@ -79,9 +79,9 @@ int ll_dentry_init_security(struct dentry *dentry, int mode, struct qstr *name, * and put it in 'security.xxx' xattr of dentry * stored in @fs_info. * - * \retval 0 success - * \retval -ENOMEM if no memory could be allocated for xattr name - * \retval < 0 failure to set xattr + * Return: 0 success + * -ENOMEM if no memory could be allocated for xattr name + * < 0 failure to set xattr */ static int ll_initxattrs(struct inode *inode, const struct xattr *xattr_array, @@ -116,9 +116,9 @@ int ll_dentry_init_security(struct dentry *dentry, int mode, struct qstr *name, * Get security context of @inode in @dir, * and put it in 'security.xxx' xattr of @dentry. 
* - * \retval 0 success, or SELinux is disabled - * \retval -ENOMEM if no memory could be allocated for xattr name - * \retval < 0 failure to get security context or set xattr + * Return: 0 success, or SELinux is disabled + * -ENOMEM if no memory could be allocated for xattr name + * < 0 failure to get security context or set xattr */ int ll_inode_init_security(struct dentry *dentry, struct inode *inode,
From patchwork Sat Mar 2 19:12:23 2019 X-Patchwork-Submitter: James Simmons X-Patchwork-Id: 10836733 From: James Simmons To: Andreas Dilger , Oleg Drokin , NeilBrown Date: Sat, 2 Mar 2019 14:12:23 -0500 Message-Id: <1551553944-6419-7-git-send-email-jsimmons@infradead.org> In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> References: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> Subject: [lustre-devel] [PATCH 6/7] obdclass: move comments to sphinix format
Lustre comments were written for DocBook, which is no longer used by the Linux kernel. Move all the DocBook markup to Sphinx format.
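The conversion applied throughout this series is mechanical: DocBook markers (\a, \param, \retval) become the kernel-doc notation that the kernel's Sphinx toolchain parses (@name inline, @name: for parameter descriptions, and a single Return: block). As a sketch of the pattern — obj_put() here is a hypothetical function for illustration, not one touched by this patch:

```diff
 /**
- * Release a reference on \a obj.
+ * Release a reference on @obj.
  *
- * \param env [in] execution environment
- * \param obj [in] object to release
+ * @env: execution environment
+ * @obj: object to release
  *
- * \retval 0 success
- * \retval -EINVAL object was already freed
+ * Return: 0 success
+ * -EINVAL object was already freed
  */
 static int obj_put(const struct lu_env *env, struct obj *obj);
```

The hunks in this series are variations on these three substitutions.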
Signed-off-by: James Simmons --- drivers/staging/lustre/lustre/obdclass/cl_io.c | 12 +++--- drivers/staging/lustre/lustre/obdclass/cl_lock.c | 11 ++--- drivers/staging/lustre/lustre/obdclass/cl_object.c | 44 ++++++++++---------- drivers/staging/lustre/lustre/obdclass/cl_page.c | 30 +++++++------- drivers/staging/lustre/lustre/obdclass/genops.c | 22 +++++----- .../staging/lustre/lustre/obdclass/kernelcomm.c | 23 ++++++----- drivers/staging/lustre/lustre/obdclass/linkea.c | 15 +++---- .../lustre/lustre/obdclass/lprocfs_status.c | 30 +++++++------- drivers/staging/lustre/lustre/obdclass/lu_object.c | 44 ++++++++++---------- drivers/staging/lustre/lustre/obdclass/obd_mount.c | 47 ++++++++++++---------- 10 files changed, 143 insertions(+), 135 deletions(-) diff --git a/drivers/staging/lustre/lustre/obdclass/cl_io.c b/drivers/staging/lustre/lustre/obdclass/cl_io.c index 3b4aca4..eef0dd8 100644 --- a/drivers/staging/lustre/lustre/obdclass/cl_io.c +++ b/drivers/staging/lustre/lustre/obdclass/cl_io.c @@ -83,7 +83,7 @@ static int cl_io_invariant(const struct cl_io *io) } /** - * Finalize \a io, by calling cl_io_operations::cio_fini() bottom-to-top. + * Finalize @io, by calling cl_io_operations::cio_fini() bottom-to-top. */ void cl_io_fini(const struct lu_env *env, struct cl_io *io) { @@ -175,7 +175,7 @@ int cl_io_sub_init(const struct lu_env *env, struct cl_io *io, EXPORT_SYMBOL(cl_io_sub_init); /** - * Initialize \a io, by calling cl_io_operations::cio_init() top-to-bottom. + * Initialize @io, by calling cl_io_operations::cio_init() top-to-bottom. * * Caller has to call cl_io_fini() after a call to cl_io_init(), no matter * what the latter returned. @@ -413,7 +413,7 @@ void cl_io_iter_fini(const struct lu_env *env, struct cl_io *io) EXPORT_SYMBOL(cl_io_iter_fini); /** - * Records that read or write io progressed \a nob bytes forward. + * Records that read or write io progressed @nob bytes forward. 
*/ static void cl_io_rw_advance(const struct lu_env *env, struct cl_io *io, size_t nob) @@ -618,7 +618,7 @@ static void cl_page_list_assume(const struct lu_env *env, /** * Submit a sync_io and wait for the IO to be finished, or error happens. - * If \a timeout is zero, it means to wait for the IO unconditionally. + * If @timeout is zero, it means to wait for the IO unconditionally. */ int cl_io_submit_sync(const struct lu_env *env, struct cl_io *io, enum cl_req_type iot, struct cl_2queue *queue, @@ -962,7 +962,7 @@ void cl_2queue_fini(const struct lu_env *env, struct cl_2queue *queue) EXPORT_SYMBOL(cl_2queue_fini); /** - * Initialize a 2-queue to contain \a page in its incoming page list. + * Initialize a 2-queue to contain @page in its incoming page list. */ void cl_2queue_init_page(struct cl_2queue *queue, struct cl_page *page) { @@ -989,7 +989,7 @@ struct cl_io *cl_io_top(struct cl_io *io) /** * Fills in attributes that are passed to server together with transfer. Only - * attributes from \a flags may be touched. This can be called multiple times + * attributes from @flags may be touched. This can be called multiple times * for the same request. */ void cl_req_attr_set(const struct lu_env *env, struct cl_object *obj, diff --git a/drivers/staging/lustre/lustre/obdclass/cl_lock.c b/drivers/staging/lustre/lustre/obdclass/cl_lock.c index fc5976d..797302c 100644 --- a/drivers/staging/lustre/lustre/obdclass/cl_lock.c +++ b/drivers/staging/lustre/lustre/obdclass/cl_lock.c @@ -148,10 +148,11 @@ void cl_lock_cancel(const struct lu_env *env, struct cl_lock *lock) /** * Enqueue a lock. - * \param anchor: if we need to wait for resources before getting the lock, - * use @anchor for the purpose. - * \retval 0 enqueue successfully - * \retval <0 error code + * @anchor if we need to wait for resources before getting the lock, + * use @anchor for the purpose. 
+ * + * Return: 0 enqueue successfully + * <0 error code */ int cl_lock_enqueue(const struct lu_env *env, struct cl_io *io, struct cl_lock *lock, struct cl_sync_io *anchor) @@ -250,7 +251,7 @@ void cl_lock_descr_print(const struct lu_env *env, void *cookie, EXPORT_SYMBOL(cl_lock_descr_print); /** - * Prints human readable representation of \a lock to the \a f. + * Prints human readable representation of @lock to the @f. */ void cl_lock_print(const struct lu_env *env, void *cookie, lu_printer_t printer, const struct cl_lock *lock) diff --git a/drivers/staging/lustre/lustre/obdclass/cl_object.c b/drivers/staging/lustre/lustre/obdclass/cl_object.c index b09621f..6c084bc 100644 --- a/drivers/staging/lustre/lustre/obdclass/cl_object.c +++ b/drivers/staging/lustre/lustre/obdclass/cl_object.c @@ -79,7 +79,7 @@ int cl_object_header_init(struct cl_object_header *h) EXPORT_SYMBOL(cl_object_header_init); /** - * Returns a cl_object with a given \a fid. + * Returns a cl_object with a given @fid. * * Returns either cached or newly created object. Additional reference on the * returned object is acquired. @@ -96,7 +96,7 @@ struct cl_object *cl_object_find(const struct lu_env *env, EXPORT_SYMBOL(cl_object_find); /** - * Releases a reference on \a o. + * Releases a reference on @o. * * When last reference is released object is returned to the cache, unless * lu_object_header_flags::LU_OBJECT_HEARD_BANSHEE bit is set in its header. @@ -110,10 +110,10 @@ void cl_object_put(const struct lu_env *env, struct cl_object *o) EXPORT_SYMBOL(cl_object_put); /** - * Acquire an additional reference to the object \a o. + * Acquire an additional reference to the object @o. * * This can only be used to acquire _additional_ reference, i.e., caller - * already has to possess at least one reference to \a o before calling this. + * already has to possess at least one reference to @o before calling this. * * \see cl_page_get(), cl_lock_get(). 
*/ @@ -124,7 +124,7 @@ void cl_object_get(struct cl_object *o) EXPORT_SYMBOL(cl_object_get); /** - * Returns the top-object for a given \a o. + * Returns the top-object for a given @o. * * \see cl_io_top() */ @@ -144,7 +144,7 @@ struct cl_object *cl_object_top(struct cl_object *o) /** * Returns pointer to the lock protecting data-attributes for the given object - * \a o. + * @o. * * Data-attributes are protected by the cl_object_header::coh_attr_guard * spin-lock in the top-object. @@ -181,10 +181,10 @@ void cl_object_attr_unlock(struct cl_object *o) EXPORT_SYMBOL(cl_object_attr_unlock); /** - * Returns data-attributes of an object \a obj. + * Returns data-attributes of an object @obj. * * Every layer is asked (by calling cl_object_operations::coo_attr_get()) - * top-to-bottom to fill in parts of \a attr that this layer is responsible + * top-to-bottom to fill in parts of @attr that this layer is responsible * for. */ int cl_object_attr_get(const struct lu_env *env, struct cl_object *obj, @@ -210,9 +210,9 @@ int cl_object_attr_get(const struct lu_env *env, struct cl_object *obj, EXPORT_SYMBOL(cl_object_attr_get); /** - * Updates data-attributes of an object \a obj. + * Updates data-attributes of an object @obj. * - * Only attributes, mentioned in a validness bit-mask \a v are + * Only attributes, mentioned in a validness bit-mask @v are * updated. Calls cl_object_operations::coo_attr_update() on every layer, * bottom to top. */ @@ -242,7 +242,7 @@ int cl_object_attr_update(const struct lu_env *env, struct cl_object *obj, /** * Notifies layers (bottom-to-top) that glimpse AST was received. * - * Layers have to fill \a lvb fields with information that will be shipped + * Layers have to fill @lvb fields with information that will be shipped * back to glimpse issuer. 
* * \see cl_lock_operations::clo_glimpse() @@ -269,7 +269,7 @@ int cl_object_glimpse(const struct lu_env *env, struct cl_object *obj, EXPORT_SYMBOL(cl_object_glimpse); /** - * Updates a configuration of an object \a obj. + * Updates a configuration of an object @obj. */ int cl_conf_set(const struct lu_env *env, struct cl_object *obj, const struct cl_object_conf *conf) @@ -332,14 +332,14 @@ int cl_object_getstripe(const struct lu_env *env, struct cl_object *obj, /** * Get fiemap extents from file object. * - * \param env [in] lustre environment - * \param obj [in] file object - * \param key [in] fiemap request argument - * \param fiemap [out] fiemap extents mapping retrived - * \param buflen [in] max buffer length of @fiemap + * @env lustre environment + * @obj file object + * @key fiemap request argument + * @fiemap fiemap extents mapping retrieved + * @buflen max buffer length of @fiemap * - * \retval 0 success - * \retval < 0 error + * Return: 0 success + * < 0 error */ int cl_object_fiemap(const struct lu_env *env, struct cl_object *obj, struct ll_fiemap_info_key *key, @@ -660,9 +660,9 @@ static inline struct cl_env *cl_env_container(struct lu_env *env) * * Allocations are amortized through the global cache of environments. * - * \param refcheck pointer to a counter used to detect environment leaks. In + * @refcheck pointer to a counter used to detect environment leaks. In * the usual case cl_env_get() and cl_env_put() are called in the same lexical - * scope and pointer to the same integer is passed as \a refcheck. This is + * scope and pointer to the same integer is passed as @refcheck. This is * used to detect missed cl_env_put(). * * \see cl_env_put() @@ -747,7 +747,7 @@ unsigned int cl_env_cache_purge(unsigned int nr) /** * Release an environment. * - * Decrement \a env reference counter. When counter drops to 0, nothing in + * Decrement @env reference counter.
When counter drops to 0, nothing in * this thread is using environment and it is returned to the allocation * cache, or freed straight away, if cache is large enough. */ diff --git a/drivers/staging/lustre/lustre/obdclass/cl_page.c b/drivers/staging/lustre/lustre/obdclass/cl_page.c index 7dcd3af..349f19e 100644 --- a/drivers/staging/lustre/lustre/obdclass/cl_page.c +++ b/drivers/staging/lustre/lustre/obdclass/cl_page.c @@ -61,7 +61,7 @@ * This function can be used to obtain initial reference to previously * unreferenced cached object. It can be called only if concurrent page * reclamation is somehow prevented, e.g., by keeping a lock on a VM page, - * associated with \a page. + * associated with @page. * * Use with care! Not exported. */ @@ -165,8 +165,8 @@ struct cl_page *cl_page_alloc(const struct lu_env *env, } /** - * Returns a cl_page with index \a idx at the object \a o, and associated with - * the VM page \a vmpage. + * Returns a cl_page with index @idx at the object @o, and associated with + * the VM page @vmpage. * * This is the main entry point into the cl_page caching interface. First, a * cache (implemented as a per-object radix tree) is consulted. If page is @@ -287,8 +287,8 @@ static void cl_page_state_set(const struct lu_env *env, /** * Acquires an additional reference to a page. * - * This can be called only by caller already possessing a reference to \a - * page. + * This can be called only by caller already possessing a reference to + * @page. * * \see cl_object_get(), cl_lock_get(). */ @@ -415,11 +415,11 @@ int cl_page_is_owned(const struct cl_page *pg, const struct cl_io *io) * \pre !cl_page_is_owned(pg, io) * \post result == 0 iff cl_page_is_owned(pg, io) * - * \retval 0 success + * Return: 0 success * - * \retval -ve failure, e.g., page was destroyed (and landed in - * cl_page_state::CPS_FREEING instead of cl_page_state::CPS_CACHED). - * or, page was owned by another thread, or in IO. 
+ * -ve failure, e.g., page was destroyed (and landed in + * cl_page_state::CPS_FREEING instead of cl_page_state::CPS_CACHED). + * or, page was owned by another thread, or in IO. * * \see cl_page_disown() * \see cl_page_operations::cpo_own() @@ -642,7 +642,7 @@ void cl_page_delete(const struct lu_env *env, struct cl_page *pg) * * Call cl_page_operations::cpo_export() through all layers top-to-bottom. The * layer responsible for VM interaction has to mark/clear page as up-to-date - * by the \a uptodate argument. + * by the @uptodate argument. * * \see cl_page_operations::cpo_export() */ @@ -658,7 +658,7 @@ void cl_page_export(const struct lu_env *env, struct cl_page *pg, int uptodate) EXPORT_SYMBOL(cl_page_export); /** - * Returns true, iff \a pg is VM locked in a suitable sense by the calling + * Returns true, if @pg is VM locked in a suitable sense by the calling * thread. */ int cl_page_is_vmlocked(const struct lu_env *env, const struct cl_page *pg) @@ -862,7 +862,7 @@ void cl_page_clip(const struct lu_env *env, struct cl_page *pg, EXPORT_SYMBOL(cl_page_clip); /** - * Prints human readable representation of \a pg to the \a f. + * Prints human readable representation of @pg to the @f. */ void cl_page_header_print(const struct lu_env *env, void *cookie, lu_printer_t printer, const struct cl_page *pg) @@ -876,7 +876,7 @@ void cl_page_header_print(const struct lu_env *env, void *cookie, EXPORT_SYMBOL(cl_page_header_print); /** - * Prints human readable representation of \a pg to the \a f. + * Prints human readable representation of @pg to the @f. */ void cl_page_print(const struct lu_env *env, void *cookie, lu_printer_t printer, const struct cl_page *pg) @@ -898,7 +898,7 @@ void cl_page_print(const struct lu_env *env, void *cookie, EXPORT_SYMBOL(cl_page_print); /** - * Converts a byte offset within object \a obj into a page index. + * Converts a byte offset within object @obj into a page index. 
*/ loff_t cl_offset(const struct cl_object *obj, pgoff_t idx) { @@ -910,7 +910,7 @@ loff_t cl_offset(const struct cl_object *obj, pgoff_t idx) EXPORT_SYMBOL(cl_offset); /** - * Converts a page index into a byte offset within object \a obj. + * Converts a page index into a byte offset within object @obj. */ pgoff_t cl_index(const struct cl_object *obj, loff_t offset) { diff --git a/drivers/staging/lustre/lustre/obdclass/genops.c b/drivers/staging/lustre/lustre/obdclass/genops.c index 39919f1..80cb7b9 100644 --- a/drivers/staging/lustre/lustre/obdclass/genops.c +++ b/drivers/staging/lustre/lustre/obdclass/genops.c @@ -291,12 +291,12 @@ int class_unregister_type(const char *name) * * Allocate the new obd_device and initialize it. * - * \param[in] type_name obd device type string. - * \param[in] name obd device name. - * @uuid obd device UUID. + * @type_name: obd device type string. + * @name: obd device name. + * @uuid: obd device UUID. * - * RETURN newdev pointer to created obd_device - * RETURN ERR_PTR(errno) on error + * Returns: newdev pointer to created obd_device + * ERR_PTR(errno) on error */ struct obd_device *class_newdev(const char *type_name, const char *name, const char *uuid) @@ -407,7 +407,7 @@ void class_free_dev(struct obd_device *obd) /** * Unregister obd device. * - * Free slot in obd_dev[] used by \a obd. + * Free slot in obd_dev[] used by @obd. * * @new_obd obd_device to be unregistered * @@ -427,7 +427,7 @@ void class_unregister_device(struct obd_device *obd) /** * Register obd device. * - * Find free slot in obd_devs[], fills it with \a new_obd. + * Find free slot in obd_devs[], fills it with @new_obd. * * @new_obd obd_device to be registered * @@ -538,10 +538,10 @@ int class_uuid2dev(struct obd_uuid *uuid) /** * Get obd device from ::obd_devs[] * - * \param num [in] array index + * @num array index * - * \retval NULL if ::obd_devs[\a num] does not contains an obd device - * otherwise return the obd device there. 
+ * Return: NULL if ::obd_devs[@num] does not contain an obd device + * otherwise return the obd device there. */ struct obd_device *class_num2obd(int num) { @@ -632,7 +632,7 @@ struct obd_device *class_devices_in_group(struct obd_uuid *grp_uuid, int *next) EXPORT_SYMBOL(class_devices_in_group); /** - * to notify sptlrpc log for \a fsname has changed, let every relevant OBD + * to notify sptlrpc log for @fsname has changed, let every relevant OBD * adjust sptlrpc settings accordingly. */ int class_notify_sptlrpc_conf(const char *fsname, int namelen) diff --git a/drivers/staging/lustre/lustre/obdclass/kernelcomm.c b/drivers/staging/lustre/lustre/obdclass/kernelcomm.c index 925ba52..49d4717 100644 --- a/drivers/staging/lustre/lustre/obdclass/kernelcomm.c +++ b/drivers/staging/lustre/lustre/obdclass/kernelcomm.c @@ -45,9 +45,10 @@ /** * libcfs_kkuc_msg_put - send an message from kernel to userspace - * @param fp to send the message to - * @param payload Payload data. First field of payload is always - * struct kuc_hdr + * + * @fp: to send the message to + * @payload: Payload data. First field of payload is always + * struct kuc_hdr */ int libcfs_kkuc_msg_put(struct file *filp, void *payload) { @@ -113,10 +114,11 @@ void libcfs_kkuc_init(void) } /** Add a receiver to a broadcast group - * @param filp pipe to write into - * @param uid identifier for this receiver - * @param group group number - * @param data user data + * + * @filp: pipe to write into + * @uid: identifier for this receiver + * @group: group number + * @data: user data */ int libcfs_kkuc_group_add(struct file *filp, int uid, unsigned int group, void *data, size_t data_len) @@ -234,9 +236,10 @@ int libcfs_kkuc_group_put(unsigned int group, void *payload) /** * Calls a callback function for each link of the given kuc group. - * @param group the group to call the function on. - * @param cb_func the function to be called. - * @param cb_arg extra argument to be passed to the callback function.
+ * + * @group: the group to call the function on. + * @cb_func: the function to be called. + * @cb_arg: extra argument to be passed to the callback function. */ int libcfs_kkuc_group_foreach(unsigned int group, libcfs_kkuc_cb_t cb_func, void *cb_arg) diff --git a/drivers/staging/lustre/lustre/obdclass/linkea.c b/drivers/staging/lustre/lustre/obdclass/linkea.c index 33594bd..7e42e3a 100644 --- a/drivers/staging/lustre/lustre/obdclass/linkea.c +++ b/drivers/staging/lustre/lustre/obdclass/linkea.c @@ -90,7 +90,8 @@ int linkea_init_with_rec(struct linkea_data *ldata) * Pack a link_ea_entry. * All elements are stored as chars to avoid alignment issues. * Numbers are always big-endian - * \retval record length + * + * Return: record length */ int linkea_entry_pack(struct link_ea_entry *lee, const struct lu_name *lname, const struct lu_fid *pfid) @@ -204,13 +205,13 @@ void linkea_del_buf(struct linkea_data *ldata, const struct lu_name *lname) /** * Check if such a link exists in linkEA. * - * \param ldata link data the search to be done on - * \param lname name in the parent's directory entry pointing to this object - * \param pfid parent fid the link to be found for + * @ldata link data the search to be done on + * @lname name in the parent's directory entry pointing to this object + * @pfid parent fid the link to be found for * - * \retval 0 success - * \retval -ENOENT link does not exist - * \retval -ve on error + * Return: 0 success + * -ENOENT link does not exist + * -ve on error */ int linkea_links_find(struct linkea_data *ldata, const struct lu_name *lname, const struct lu_fid *pfid) diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c index e1ac610..72d504c 100644 --- a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c +++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c @@ -563,16 +563,16 @@ int lprocfs_rd_conn_uuid(struct seq_file *m, void *data) * * For global statistics, 
lock the stats structure to prevent concurrent update. * - * \param[in] stats statistics structure to lock - * \param[in] opc type of operation: - * LPROCFS_GET_SMP_ID: "lock" and return current CPU index - * for incrementing statistics for that CPU - * LPROCFS_GET_NUM_CPU: "lock" and return number of used - * CPU indices to iterate over all indices - * \param[out] flags CPU interrupt saved state for IRQ-safe locking + * @stats: statistics structure to lock + * @opc: type of operation: + * LPROCFS_GET_SMP_ID: "lock" and return current CPU index + * for incrementing statistics for that CPU + * LPROCFS_GET_NUM_CPU: "lock" and return number of used + * CPU indices to iterate over all indices + * @flags: CPU interrupt saved state for IRQ-safe locking * - * \retval cpuid of current thread or number of allocated structs - * \retval negative on error (only for opc LPROCFS_GET_SMP_ID + per-CPU stats) + * Returns: cpuid of current thread or number of allocated structs + * negative on error (only for opc LPROCFS_GET_SMP_ID + per-CPU stats) */ int lprocfs_stats_lock(struct lprocfs_stats *stats, enum lprocfs_stats_lock_ops opc, @@ -616,9 +616,9 @@ int lprocfs_stats_lock(struct lprocfs_stats *stats, * This function must be called using the same arguments as used when calling * lprocfs_stats_lock() so that the correct operation can be performed. 
* - * \param[in] stats statistics structure to unlock - * \param[in] opc type of operation (current cpuid or number of structs) - * \param[in] flags CPU interrupt saved state for IRQ-safe locking + * @stats: statistics structure to unlock + * @opc: type of operation (current cpuid or number of structs) + * @flags: CPU interrupt saved state for IRQ-safe locking */ void lprocfs_stats_unlock(struct lprocfs_stats *stats, enum lprocfs_stats_lock_ops opc, @@ -1614,9 +1614,9 @@ static char *lprocfs_strnstr(const char *s1, const char *s2, size_t len) } /** - * Find the string \a name in the input \a buffer, and return a pointer to the - * value immediately following \a name, reducing \a count appropriately. - * If \a name is not found the original \a buffer is returned. + * Find the string @name in the input @buffer, and return a pointer to the + * value immediately following @name, reducing @count appropriately. + * If @name is not found the original @buffer is returned. */ char *lprocfs_find_named_value(const char *buffer, const char *name, size_t *count) diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c index 639c298..8a78470 100644 --- a/drivers/staging/lustre/lustre/obdclass/lu_object.c +++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c @@ -359,7 +359,7 @@ static void lu_object_free(const struct lu_env *env, struct lu_object *o) } /** - * Free \a nr objects from the cold end of the site LRU list. + * Free @nr objects from the cold end of the site LRU list. * if canblock is false, then don't block awaiting for another * instance of lu_site_purge() to complete */ @@ -552,7 +552,7 @@ void lu_object_header_print(const struct lu_env *env, void *cookie, EXPORT_SYMBOL(lu_object_header_print); /** - * Print human readable representation of the \a o to the \a printer. + * Print human readable representation of the @o to the @printer. 
*/ void lu_object_print(const struct lu_env *env, void *cookie, lu_printer_t printer, const struct lu_object *o) @@ -567,7 +567,7 @@ void lu_object_print(const struct lu_env *env, void *cookie, list_for_each_entry(o, &top->loh_layers, lo_linkage) { /* - * print `.' \a depth times followed by type name and address + * print `.' @depth times followed by type name and address */ (*printer)(env, cookie, "%*.*s%s@%p", depth, depth, ruler, o->lo_dev->ld_type->ldt_name, o); @@ -621,7 +621,7 @@ static struct lu_object *htable_lookup(struct lu_site *s, } /** - * Search cache for an object with the fid \a f. If such object is found, + * Search cache for an object with the fid @f. If such object is found, * return it. Otherwise, create new object, insert it into cache and return * it. In any case, additional reference is acquired on the returned object. */ @@ -661,7 +661,7 @@ static void lu_object_limit(const struct lu_env *env, struct lu_device *dev) * Core logic of lu_object_find*() functions. * * Much like lu_object_find(), but top level device of object is specifically - * \a dev rather than top level device of the site. This interface allows + * @dev rather than top level device of the site. This interface allows * objects of different "stacking" to be created within the same site. */ struct lu_object *lu_object_find_at(const struct lu_env *env, @@ -821,7 +821,7 @@ struct lu_site_print_arg { } /** - * Print all objects in \a s. + * Print all objects in @s. */ void lu_site_print(const struct lu_env *env, struct lu_site *s, void *cookie, lu_printer_t printer) @@ -950,7 +950,7 @@ static void lu_dev_add_linkage(struct lu_site *s, struct lu_device *d) } /** - * Initialize site \a s, with \a d as the top level device. + * Initialize site @s, with @d as the top level device. 
*/ int lu_site_init(struct lu_site *s, struct lu_device *top) { @@ -1030,7 +1030,7 @@ int lu_site_init(struct lu_site *s, struct lu_device *top) EXPORT_SYMBOL(lu_site_init); /** - * Finalize \a s and release its resources. + * Finalize @s and release its resources. */ void lu_site_fini(struct lu_site *s) { @@ -1074,7 +1074,7 @@ int lu_site_init_finish(struct lu_site *s) EXPORT_SYMBOL(lu_site_init_finish); /** - * Acquire additional reference on device \a d + * Acquire additional reference on device @d */ void lu_device_get(struct lu_device *d) { @@ -1083,7 +1083,7 @@ void lu_device_get(struct lu_device *d) EXPORT_SYMBOL(lu_device_get); /** - * Release reference on device \a d. + * Release reference on device @d. */ void lu_device_put(struct lu_device *d) { @@ -1093,7 +1093,7 @@ void lu_device_put(struct lu_device *d) EXPORT_SYMBOL(lu_device_put); /** - * Initialize device \a d of type \a t. + * Initialize device @d of type @t. */ int lu_device_init(struct lu_device *d, struct lu_device_type *t) { @@ -1111,7 +1111,7 @@ int lu_device_init(struct lu_device *d, struct lu_device_type *t) EXPORT_SYMBOL(lu_device_init); /** - * Finalize device \a d. + * Finalize device @d. */ void lu_device_fini(struct lu_device *d) { @@ -1134,8 +1134,8 @@ void lu_device_fini(struct lu_device *d) EXPORT_SYMBOL(lu_device_fini); /** - * Initialize object \a o that is part of compound object \a h and was created - * by device \a d. + * Initialize object @o that is part of compound object @h and was created + * by device @d. */ int lu_object_init(struct lu_object *o, struct lu_object_header *h, struct lu_device *d) @@ -1170,7 +1170,7 @@ void lu_object_fini(struct lu_object *o) EXPORT_SYMBOL(lu_object_fini); /** - * Add object \a o as first layer of compound object \a h + * Add object @o as first layer of compound object @h * * This is typically called by the ->ldo_object_alloc() method of top-level * device. 
@@ -1182,10 +1182,10 @@ void lu_object_add_top(struct lu_object_header *h, struct lu_object *o) EXPORT_SYMBOL(lu_object_add_top); /** - * Add object \a o as a layer of compound object, going after \a before. + * Add object @o as a layer of compound object, going after @before. * - * This is typically called by the ->ldo_object_alloc() method of \a - * before->lo_dev. + * This is typically called by the ->ldo_object_alloc() method of + * @before->lo_dev. */ void lu_object_add(struct lu_object *before, struct lu_object *o) { @@ -1222,7 +1222,7 @@ void lu_object_header_fini(struct lu_object_header *h) /** * Given a compound object, find its slice, corresponding to the device type - * \a dtype. + * @dtype. */ struct lu_object *lu_object_locate(struct lu_object_header *h, const struct lu_device_type *dtype) @@ -1452,7 +1452,7 @@ void lu_context_key_quiesce_many(struct lu_context_key *k, ...) EXPORT_SYMBOL(lu_context_key_quiesce_many); /** - * Return value associated with key \a key in context \a ctx. + * Return value associated with key @key in context @ctx. */ void *lu_context_key_get(const struct lu_context *ctx, const struct lu_context_key *key) @@ -1471,7 +1471,7 @@ void *lu_context_key_get(const struct lu_context *ctx, static DEFINE_SPINLOCK(lu_context_remembered_guard); /** - * Destroy \a key in all remembered contexts. This is used to destroy key + * Destroy @key in all remembered contexts. This is used to destroy key * values in "shared" contexts (like service threads), when a module owning * the key is about to be unloaded. 
*/ @@ -1646,7 +1646,7 @@ void lu_context_enter(struct lu_context *ctx) EXPORT_SYMBOL(lu_context_enter); /** - * Called after exiting from \a ctx + * Called after exiting from @ctx */ void lu_context_exit(struct lu_context *ctx) { diff --git a/drivers/staging/lustre/lustre/obdclass/obd_mount.c b/drivers/staging/lustre/lustre/obdclass/obd_mount.c index 33aa790..104e64b 100644 --- a/drivers/staging/lustre/lustre/obdclass/obd_mount.c +++ b/drivers/staging/lustre/lustre/obdclass/obd_mount.c @@ -56,12 +56,13 @@ * Continue to process new statements appended to the logs * (whenever the config lock is revoked) until lustre_end_log * is called. - * @param sb The superblock is used by the MGC to write to the local copy of - * the config log - * @param logname The name of the llog to replicate from the MGS - * @param cfg Since the same mgc may be used to follow multiple config logs - * (e.g. ost1, ost2, client), the config_llog_instance keeps the state for - * this log, and is added to the mgc's list of logs to follow. + * @sb: The superblock is used by the MGC to write to the local copy of + * the config log + * @logname: The name of the llog to replicate from the MGS + * @cfg: Since the same mgc may be used to follow multiple config logs + * (e.g. ost1, ost2, client), the config_llog_instance keeps the + * state for this log, and is added to the mgc's list of logs to + * follow. */ int lustre_process_log(struct super_block *sb, char *logname, struct config_llog_instance *cfg) @@ -204,9 +205,9 @@ static int lustre_start_simple(char *obdname, char *type, char *uuid, /** Set up a mgc obd to process startup logs * - * \param sb [in] super block of the mgc obd + * @sb: super block of the mgc obd * - * \retval 0 success, otherwise error code + * Returns: 0 success, otherwise error code */ int lustre_start_mgc(struct super_block *sb) { @@ -588,11 +589,13 @@ int lustre_put_lsi(struct super_block *sb) */ /** Get the fsname ("lustre") from the server name ("lustre-OST003F"). 
- * @param [in] svname server name including type and index - * @param [out] fsname Buffer to copy filesystem name prefix into. - * Must have at least 'strlen(fsname) + 1' chars. - * @param [out] endptr if endptr isn't NULL it is set to end of fsname - * rc < 0 on error + * + * @svname: server name including type and index + * @fsname: Buffer to copy filesystem name prefix into. + * Must have at least 'strlen(fsname) + 1' chars. + * @endptr: if endptr isn't NULL it is set to end of fsname + * + * Returns: rc < 0 on error */ static int server_name2fsname(const char *svname, char *fsname, const char **endptr) @@ -910,15 +913,15 @@ static int lmd_parse_mgs(struct lustre_mount_data *lmd, char **ptr) } /** - * Find the first delimiter (comma or colon) from the specified \a buf and - * make \a *endh point to the string starting with the delimiter. The commas + * Find the first delimiter (comma or colon) from the specified @buf and + * make @*endh point to the string starting with the delimiter. The commas * in expression list [...] will be skipped. * * @buf a delimiter-separated string * @endh a pointer to a pointer that will point to the string * starting with the delimiter * - * RETURNS true if delimiter is found, false if delimiter is not found + * Returns: true if delimiter is found, false if delimiter is not found */ static bool lmd_find_delimiter(char *buf, char **endh) { @@ -964,15 +967,15 @@ static bool lmd_find_delimiter(char *buf, char **endh) /** * Find the first valid string delimited by comma or colon from the specified - * \a buf and parse it to see whether it's a valid nid list. If yes, \a *endh + * @buf and parse it to see whether it's a valid nid list. If yes, @*endh * will point to the next string starting with the delimiter. 
 * - * \param[in] buf a delimiter-separated string - * \param[in] endh a pointer to a pointer that will point to the string - * starting with the delimiter + * @buf: a delimiter-separated string + * @endh: a pointer to a pointer that will point to the string + * starting with the delimiter * - * \retval 0 if the string is a valid nid list - * \retval 1 if the string is not a valid nid list + * Returns: 0 if the string is a valid nid list + * 1 if the string is not a valid nid list */ static int lmd_parse_nidlist(char *buf, char **endh) { From patchwork Sat Mar 2 19:12:24 2019 X-Patchwork-Submitter: James Simmons X-Patchwork-Id: 10836729 From: James Simmons To: Andreas Dilger , Oleg Drokin , NeilBrown Date: Sat, 2 Mar 2019 14:12:24 -0500 Message-Id: <1551553944-6419-8-git-send-email-jsimmons@infradead.org> In-Reply-To: <1551553944-6419-1-git-send-email-jsimmons@infradead.org> Subject: [lustre-devel] [PATCH 7/7] lustre: move remaining comments to sphinix format Cc: Lustre Development List The Lustre comments were written for DocBook, which is no longer used by the Linux kernel. Move all the remaining Lustre DocBook handling to Sphinx.
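The substitutions applied throughout this series are mechanical: `\a foo` becomes `@foo`, `\param[in] foo desc` becomes `@foo: desc`, and `\retval`/`\return` lines become a `Returns:` section. A rough sketch of that tag mapping (a hypothetical helper for illustration, not part of the patch; `docbook_to_kerneldoc` is an assumed name):

```python
import re

def docbook_to_kerneldoc(comment):
    """Illustrative sketch of the DocBook -> kernel-doc tag conversion."""
    # Inline argument references: '\a foo' -> '@foo'
    comment = re.sub(r'\\a\s+(\w+)', r'@\1', comment)
    # Parameter tags: '\param[in] foo desc' or '\param foo desc' -> '@foo: desc'
    comment = re.sub(r'\\param(?:\[[\w,|]+\])?\s+(\w+)\s+', r'@\1: ', comment)
    # Return-value tags: '\retval ...' / '\return ...' -> 'Returns: ...'
    comment = re.sub(r'\\ret(?:val|urn)\s+', 'Returns: ', comment)
    return comment
```

In practice the patches also merge runs of several `\retval` lines into a single `Returns:` block and reflow the continuation indentation, which a per-tag substitution like this does not capture; those parts are done by hand.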
Signed-off-by: James Simmons --- drivers/staging/lustre/lustre/fld/fld_cache.c | 4 +- drivers/staging/lustre/lustre/fld/fld_internal.h | 2 +- drivers/staging/lustre/lustre/lmv/lmv_obd.c | 119 +++++++++++----------- drivers/staging/lustre/lustre/lov/lov_io.c | 2 +- drivers/staging/lustre/lustre/lov/lov_object.c | 47 ++++----- drivers/staging/lustre/lustre/lov/lov_pack.c | 4 +- drivers/staging/lustre/lustre/mdc/mdc_changelog.c | 91 +++++++++-------- drivers/staging/lustre/lustre/mdc/mdc_lib.c | 12 +-- drivers/staging/lustre/lustre/mdc/mdc_request.c | 34 ++++--- drivers/staging/lustre/lustre/mgc/mgc_request.c | 8 +- drivers/staging/lustre/lustre/osc/osc_cache.c | 12 +-- drivers/staging/lustre/lustre/osc/osc_lock.c | 10 +- drivers/staging/lustre/lustre/osc/osc_request.c | 4 +- 13 files changed, 182 insertions(+), 167 deletions(-) diff --git a/drivers/staging/lustre/lustre/fld/fld_cache.c b/drivers/staging/lustre/lustre/fld/fld_cache.c index b4baa53..d289c29 100644 --- a/drivers/staging/lustre/lustre/fld/fld_cache.c +++ b/drivers/staging/lustre/lustre/fld/fld_cache.c @@ -468,7 +468,7 @@ struct fld_cache_entry } /** - * lookup \a seq sequence for range in fld cache. + * lookup @seq sequence for range in fld cache. */ struct fld_cache_entry *fld_cache_entry_lookup(struct fld_cache *cache, struct lu_seq_range *range) @@ -482,7 +482,7 @@ struct fld_cache_entry } /** - * lookup \a seq sequence for range in fld cache. + * lookup @seq sequence for range in fld cache. */ int fld_cache_lookup(struct fld_cache *cache, const u64 seq, struct lu_seq_range *range) diff --git a/drivers/staging/lustre/lustre/fld/fld_internal.h b/drivers/staging/lustre/lustre/fld/fld_internal.h index 76666a4..e2eda59 100644 --- a/drivers/staging/lustre/lustre/fld/fld_internal.h +++ b/drivers/staging/lustre/lustre/fld/fld_internal.h @@ -94,7 +94,7 @@ struct fld_cache { /** Preferred number of cached entries */ int fci_cache_size; - /** Current number of cached entries. 
Protected by \a fci_lock */ + /** Current number of cached entries. Protected by @fci_lock */ int fci_cache_count; /** LRU list fld entries. */ diff --git a/drivers/staging/lustre/lustre/lmv/lmv_obd.c b/drivers/staging/lustre/lustre/lmv/lmv_obd.c index 1c7379b..fe1c14c 100644 --- a/drivers/staging/lustre/lustre/lmv/lmv_obd.c +++ b/drivers/staging/lustre/lustre/lmv/lmv_obd.c @@ -1520,13 +1520,14 @@ static int lmv_close(struct obd_export *exp, struct md_op_data *op_data, * walk through all of stripes to locate the entry. * * For normal direcotry, it will locate MDS by FID directly. - * \param[in] lmv LMV device - * \param[in] op_data client MD stack parameters, name, namelen - * mds_num etc. - * \param[in] fid object FID used to locate MDS. * - * retval pointer to the lmv_tgt_desc if succeed. - * ERR_PTR(errno) if failed. + * @lmv: LMV device + * @op_data: client MD stack parameters, name, namelen + * mds_num etc. + * @fid: object FID used to locate MDS. + * + * Returns: pointer to the lmv_tgt_desc on success. + * ERR_PTR(errno) on failure. */ struct lmv_tgt_desc* lmv_locate_mds(struct lmv_obd *lmv, struct md_op_data *op_data, @@ -2005,26 +2006,26 @@ static int lmv_fsync(struct obd_export *exp, const struct lu_fid *fid, * closest(>=) to @hash_offset, from all of sub-stripes, and it is * only being called for striped directory. * - * \param[in] exp export of LMV - * \param[in] op_data parameters transferred beween client MD stack - * stripe_information will be included in this - * parameter - * \param[in] cb_op ldlm callback being used in enqueue in - * mdc_read_page - * \param[in] hash_offset the hash value, which is used to locate - * minum(closet) dir entry - * \param[in|out] stripe_offset the caller use this to indicate the stripe - * index of last entry, so to avoid hash conflict - * between stripes. It will also be used to - * return the stripe index of current dir entry.
- * \param[in|out] entp the minum entry and it also is being used - * to input the last dir entry to resolve the - * hash conflict + * @exp: export of LMV + * @op_data: parameters transferred between client MD stack + * stripe_information will be included in this + * parameter + * @cb_op: ldlm callback being used in enqueue in + * mdc_read_page + * @hash_offset: the hash value, which is used to locate + * minimum (closest) dir entry + * @stripe_offset: the caller uses this to indicate the stripe + * index of last entry, so to avoid hash conflict + * between stripes. It will also be used to + * return the stripe index of current dir entry. + * @entp: the minimum entry and it also is being used + * to input the last dir entry to resolve the + * hash conflict * - * \param[out] ppage the page which holds the minum entry + * @ppage: the page which holds the minimum entry * - * \retval = 0 get the entry successfully - * negative errno (< 0) does not get the entry + * Return: = 0 get the entry successfully + * negative errno (< 0) does not get the entry */ static int lmv_get_min_striped_entry(struct obd_export *exp, struct md_op_data *op_data, @@ -2152,17 +2153,17 @@ static int lmv_get_min_striped_entry, * one, so need to restore before reusing. * 3. release the entry page if that is not being chosen. * - * \param[in] exp obd export refer to LMV - * \param[in] op_data hold those MD parameters of read_entry - * \param[in] cb_op ldlm callback being used in enqueue in mdc_read_entry - * \param[out] ldp the entry being read - * \param[out] ppage the page holding the entry. Note: because the entry - * will be accessed in upper layer, so we need hold the - * page until the usages of entry is finished, see - * ll_dir_entry_next. + * @exp: obd export refer to LMV + * @op_data: hold those MD parameters of read_entry + * @cb_op: ldlm callback being used in enqueue in mdc_read_entry + * @ldp: the entry being read + * @ppage: the page holding the entry.
Note: because the entry + * will be accessed in upper layer, so we need hold the + * page until the usages of entry is finished, see + * ll_dir_entry_next. * - * retval =0 if get entry successfully - * <0 cannot get entry + * Returns: =0 if get entry successfully + * <0 cannot get entry */ static int lmv_read_striped_page(struct obd_export *exp, struct md_op_data *op_data, @@ -2327,15 +2328,15 @@ static int lmv_read_page(struct obd_export *exp, struct md_op_data *op_data, * it will walk through all of sub-stripes until the child is being * unlinked finally. * - * \param[in] exp export refer to LMV - * \param[in] op_data different parameters transferred beween client - * MD stacks, name, namelen, FIDs etc. - * op_fid1 is the parent FID, op_fid2 is the child - * FID. - * \param[out] request point to the request of unlink. + * @exp: export refer to LMV + * @op_data: different parameters transferred between client + * MD stacks, name, namelen, FIDs etc. + * op_fid1 is the parent FID, op_fid2 is the child + * FID. + * @request: point to the request of unlink. * - * retval 0 if succeed - * negative errno if failed. + * Return: 0 on success + * negative errno on failure. */ static int lmv_unlink(struct obd_export *exp, struct md_op_data *op_data, struct ptlrpc_request **request) @@ -2506,15 +2507,15 @@ static int lmv_precleanup(struct obd_device *obd) * * Dispatch request to lower-layer devices as needed.
* - * \param[in] env execution environment for this thread - * \param[in] exp export for the LMV device - * \param[in] keylen length of key identifier - * \param[in] key identifier of key to get value for - * \param[in] vallen size of \a val - * \param[out] val pointer to storage location for value + * @env: execution environment for this thread + * @exp: export for the LMV device + * @keylen: length of key identifier + * @key: identifier of key to get value for + * @vallen: size of @val + * @val: pointer to storage location for value * - * \retval 0 on success - * \retval negative negated errno on failure + * Return: 0 on success + * negated errno on failure */ static int lmv_get_info(const struct lu_env *env, struct obd_export *exp, u32 keylen, void *key, u32 *vallen, void *val) @@ -2575,16 +2576,16 @@ static int lmv_get_info(const struct lu_env *env, struct obd_export *exp, * * Dispatch request to lower-layer devices as needed. * - * \param[in] env execution environment for this thread - * \param[in] exp export for the LMV device - * \param[in] keylen length of key identifier - * \param[in] key identifier of key to store value for - * \param[in] vallen size of value to store - * \param[in] val pointer to data to be stored - * \param[in] set optional list of related ptlrpc requests + * @env: execution environment for this thread + * @exp: export for the LMV device + * @keylen: length of key identifier + * @key: identifier of key to store value for + * @vallen: size of value to store + * @val: pointer to data to be stored + * @set: optional list of related ptlrpc requests * - * \retval 0 on success - * \retval negative negated errno on failure + * Returns: 0 on success + * negated errno on failure */ static int lmv_set_info_async(const struct lu_env *env, struct obd_export *exp, u32 keylen, void *key, u32 vallen, diff --git a/drivers/staging/lustre/lustre/lov/lov_io.c b/drivers/staging/lustre/lustre/lov/lov_io.c index 77efb86..02bc4e6 100644 ---
a/drivers/staging/lustre/lustre/lov/lov_io.c +++ b/drivers/staging/lustre/lustre/lov/lov_io.c @@ -722,7 +722,7 @@ static int lov_io_read_ahead(const struct lu_env *env, /** * lov implementation of cl_operations::cio_submit() method. It takes a list - * of pages in \a queue, splits it into per-stripe sub-lists, invokes + * of pages in @queue, splits it into per-stripe sub-lists, invokes * cl_io_submit() on underlying devices to submit sub-lists, and then splices * everything back. * diff --git a/drivers/staging/lustre/lustre/lov/lov_object.c b/drivers/staging/lustre/lustre/lov/lov_object.c index 397ecc1..2058275 100644 --- a/drivers/staging/lustre/lustre/lov/lov_object.c +++ b/drivers/staging/lustre/lustre/lov/lov_object.c @@ -1085,14 +1085,14 @@ int lov_lock_init(const struct lu_env *env, struct cl_object *obj, * This function returns the last_stripe and also sets the stripe_count * over which the mapping is spread * - * \param lsm [in] striping information for the file + * @lsm striping information for the file * @index stripe component index * @ext logical extent of mapping - * \param start_stripe [in] starting stripe of the mapping - * \param stripe_count [out] the number of stripes across which to map is + * @start_stripe starting stripe of the mapping + * @stripe_count the number of stripes across which to map is * returned * - * \retval last_stripe return the last stripe of the mapping + * Return: return the last stripe of the mapping */ static int fiemap_calc_last_stripe(struct lov_stripe_md *lsm, int index, struct lu_extent *ext, @@ -1126,12 +1126,12 @@ static int fiemap_calc_last_stripe(struct lov_stripe_md *lsm, int index, /** * Set fe_device and copy extents from local buffer into main return buffer. 
* - * \param fiemap [out] fiemap to hold all extents - * \param lcl_fm_ext [in] array of fiemap extents get from OSC layer - * \param ost_index [in] OST index to be written into the fm_device - * field for each extent - * \param ext_count [in] number of extents to be copied - * \param current_extent [in] where to start copying in the extent array + * @fiemap fiemap to hold all extents + * @lcl_fm_ext array of fiemap extents get from OSC layer + * @ost_index OST index to be written into the fm_device + * field for each extent + * @ext_count number of extents to be copied + * @current_extent where to start copying in the extent array */ static void fiemap_prepare_and_copy_exts(struct fiemap *fiemap, struct fiemap_extent *lcl_fm_ext, @@ -1164,11 +1164,11 @@ static void fiemap_prepare_and_copy_exts(struct fiemap *fiemap, * will re-calculate proper offset in next stripe. * Note that the first extent is passed to lov_get_info via the value field. * - * \param fiemap [in] fiemap request header - * \param lsm [in] striping information for the file - * @index stripe component index - * @ext logical extent of mapping - * \param start_stripe [out] starting stripe will be returned in this + * @fiemap fiemap request header + * @lsm striping information for the file + * @index stripe component index + * @ext logical extent of mapping + * @start_stripe starting stripe will be returned in this */ static u64 fiemap_calc_fm_end_offset(struct fiemap *fiemap, struct lov_stripe_md *lsm, @@ -1411,14 +1411,15 @@ static int fiemap_for_stripe(const struct lu_env *env, struct cl_object *obj, * This also handles the restarting of FIEMAP calls in case mapping overflows * the available number of extents in single call. 
* - * \param env [in] lustre environment - * \param obj [in] file object - * \param fmkey [in] fiemap request header and other info - * \param fiemap [out] fiemap buffer holding retrived map extents - * \param buflen [in/out] max buffer length of @fiemap, when iterate - * each OST, it is used to limit max map needed - * \retval 0 success - * \retval < 0 error + * @env lustre environment + * @obj file object + * @fmkey fiemap request header and other info + * @fiemap fiemap buffer holding retrieved map extents + * @buflen max buffer length of @fiemap, when iterate + * each OST, it is used to limit max map needed + * + * Return: 0 success + * < 0 error */ static int lov_object_fiemap(const struct lu_env *env, struct cl_object *obj, struct ll_fiemap_info_key *fmkey, diff --git a/drivers/staging/lustre/lustre/lov/lov_pack.c b/drivers/staging/lustre/lustre/lov/lov_pack.c index 18ce9f9..269e61c 100644 --- a/drivers/staging/lustre/lustre/lov/lov_pack.c +++ b/drivers/staging/lustre/lustre/lov/lov_pack.c @@ -103,8 +103,8 @@ void lov_dump_lmm_v3(int level, struct lov_mds_md_v3 *lmm) * Pack LOV striping metadata for disk storage format (in little * endian byte order). * - * This follows the getxattr() conventions. If \a buf_size is zero - * then return the size needed. If \a buf_size is too small then + * This follows the getxattr() conventions. If @buf_size is zero + * then return the size needed. If @buf_size is too small then * return -ERANGE. Otherwise return the size of the result. */ ssize_t lov_lsm_pack_v1v3(const struct lov_stripe_md *lsm, void *buf, diff --git a/drivers/staging/lustre/lustre/mdc/mdc_changelog.c b/drivers/staging/lustre/lustre/mdc/mdc_changelog.c index ea6dda7..45aef9c 100644 --- a/drivers/staging/lustre/lustre/mdc/mdc_changelog.c +++ b/drivers/staging/lustre/lustre/mdc/mdc_changelog.c @@ -107,12 +107,12 @@ enum { * If the current record is eligible to userland delivery, push * it into the crs_rec_queue where the consumer code will fetch it.
* - * @param[in] env (unused) - * @param[in] llh Client-side handle used to identify the llog - * @param[in] hdr Header of the current llog record - * @param[in,out] data chlg_reader_state passed from caller + * @env: (unused) + * @llh: Client-side handle used to identify the llog + * @hdr: Header of the current llog record + * @data: chlg_reader_state passed from caller * - * @return 0 or LLOG_PROC_* control code on success, negated error on failure. + * Returns: 0 or LLOG_PROC_* control code on success, negated error on failure. */ static int chlg_read_cat_process_cb(const struct lu_env *env, struct llog_handle *llh, @@ -198,8 +198,9 @@ static inline struct obd_device *chlg_obd_get(struct chlg_registered_dev *dev) * Record prefetch thread entry point. Opens the changelog catalog and starts * reading records. * - * @param[in,out] args chlg_reader_state passed from caller. - * @return 0 on success, negated error code on failure. + * @args: chlg_reader_state passed from caller. + * + * Returns: 0 on success, negated error code on failure. */ static int chlg_load(void *args) { @@ -269,12 +270,13 @@ static int chlg_load(void *args) * No partial records are copied to userland so this function can return less * data than required (short read). * - * @param[in] file File pointer to the character device. - * @param[out] buff Userland buffer where to copy the records. - * @param[in] count Userland buffer size. - * @param[out] ppos File position, updated with the index number of the next - * record to read. - * @return number of copied bytes on success, negated error code on failure. + * @file: File pointer to the character device. + * @buff: Userland buffer where to copy the records. + * @count: Userland buffer size. + * @ppos: File position, updated with the index number of the next + * record to read. + * + * Returns: number of copied bytes on success, negated error code on failure. 
*/ static ssize_t chlg_read(struct file *file, char __user *buff, size_t count, loff_t *ppos) @@ -336,9 +338,10 @@ static ssize_t chlg_read(struct file *file, char __user *buff, size_t count, /** * Jump to a given record index. Helper for chlg_llseek(). * - * @param[in,out] crs Internal reader state. - * @param[in] offset Desired offset (index record). - * @return 0 on success, negated error code on failure. + * @crs: Internal reader state. + * @offset: Desired offset (index record). + * + * Returns: 0 on success, negated error code on failure. */ static int chlg_set_start_offset(struct chlg_reader_state *crs, u64 offset) { @@ -370,10 +373,11 @@ static int chlg_set_start_offset(struct chlg_reader_state *crs, u64 offset) /** * Move read pointer to a certain record index, encoded as an offset. * - * @param[in,out] file File pointer to the changelog character device - * @param[in] off Offset to skip, actually a record index, not byte count - * @param[in] whence Relative/Absolute interpretation of the offset - * @return the resulting position on success or negated error code on failure. + * @file: File pointer to the changelog character device + * @off: Offset to skip, actually a record index, not byte count + * @whence: Relative/Absolute interpretation of the offset + * + * Returns: the resulting position on success or negated error code on failure. */ static loff_t chlg_llseek(struct file *file, loff_t off, int whence) { @@ -408,10 +412,11 @@ static loff_t chlg_llseek(struct file *file, loff_t off, int whence) /** * Clear record range for a given changelog reader. * - * @param[in] crs Current internal state. - * @param[in] reader Changelog reader ID (cl1, cl2...) - * @param[in] record Record index up which to clear - * @return 0 on success, negated error code on failure. + * @crs: Current internal state. + * @reader: Changelog reader ID (cl1, cl2...) + * @record: Record index up which to clear + * + * Returns: 0 on success, negated error code on failure. 
*/ static int chlg_clear(struct chlg_reader_state *crs, u32 reader, u64 record) { @@ -441,11 +446,12 @@ static int chlg_clear(struct chlg_reader_state *crs, u32 reader, u64 record) * Handle writes() into the changelog character device. Write() can be used * to request special control operations. * - * @param[in] file File pointer to the changelog character device - * @param[in] buff User supplied data (written data) - * @param[in] count Number of written bytes - * @param[in] off (unused) - * @return number of written bytes on success, negated error code on failure. + * @file: File pointer to the changelog character device + * @buff: User supplied data (written data) + * @count: Number of written bytes + * @off: (unused) + * + * Returns: number of written bytes on success, negated error code on failure. */ static ssize_t chlg_write(struct file *file, const char __user *buff, size_t count, loff_t *off) @@ -483,9 +489,11 @@ static ssize_t chlg_write(struct file *file, const char __user *buff, /** * Open handler, initialize internal CRS state and spawn prefetch thread if * needed. - * @param[in] inode Inode struct for the open character device. - * @param[in] file Corresponding file pointer. - * @return 0 on success, negated error code on failure. + * + * @inode: Inode struct for the open character device. + * @file: Corresponding file pointer. + * + * Returns: 0 on success, negated error code on failure. */ static int chlg_open(struct inode *inode, struct file *file) { @@ -532,9 +540,10 @@ static int chlg_open(struct inode *inode, struct file *file) /** * Close handler, release resources. * - * @param[in] inode Inode struct for the open character device. - * @param[in] file Corresponding file pointer. - * @return 0 on success, negated error code on failure. + * @inode: Inode struct for the open character device. + * @file: Corresponding file pointer. + * + * Returns: 0 on success, negated error code on failure. 
*/ static int chlg_release(struct inode *inode, struct file *file) { @@ -557,9 +566,10 @@ static int chlg_release(struct inode *inode, struct file *file) * Poll handler, indicates whether the device is readable (new records) and * writable (always). * - * @param[in] file Device file pointer. - * @param[in] wait (opaque) - * @return combination of the poll status flags. + * @file: Device file pointer. + * @wait: (opaque) + * + * Returns: combination of the poll status flags. */ static unsigned int chlg_poll(struct file *file, poll_table *wait) { @@ -649,8 +659,9 @@ static void get_chlg_name(char *name, size_t name_len, struct obd_device *obd) * Register a misc character device with a dynamic minor number, under a name * of the form: 'changelog-fsname-MDTxxxx'. Reference this OBD device with it. * - * @param[in] obd This MDC obd_device. - * @return 0 on success, negated error code on failure. + * @obd: This MDC obd_device. + * + * Returns: 0 on success, negated error code on failure. */ int mdc_changelog_cdev_init(struct obd_device *obd) { diff --git a/drivers/staging/lustre/lustre/mdc/mdc_lib.c b/drivers/staging/lustre/lustre/mdc/mdc_lib.c index 55d2ea1..7680346 100644 --- a/drivers/staging/lustre/lustre/mdc/mdc_lib.c +++ b/drivers/staging/lustre/lustre/mdc/mdc_lib.c @@ -82,14 +82,14 @@ void mdc_pack_body(struct ptlrpc_request *req, const struct lu_fid *fid, /** * Pack a name (path component) into a request * - * \param[in] req request - * \param[in] field request field (usually RMF_NAME) - * \param[in] name path component - * \param[in] name_len length of path component + * @req: request + * @field: request field (usually RMF_NAME) + * @name: path component + * @name_len: length of path component * - * \a field must be present in \a req and of size \a name_len + 1. + * @field must be present in @req and of size @name_len + 1. 
* - * \a name must be '\0' terminated of length \a name_len and represent + * @name must be '\0' terminated of length @name_len and represent * a single path component (not contain '/'). */ static void mdc_pack_name(struct ptlrpc_request *req, diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c index 3eb89ec..bc764f9 100644 --- a/drivers/staging/lustre/lustre/mdc/mdc_request.c +++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c @@ -81,16 +81,16 @@ static inline int mdc_queue_wait(struct ptlrpc_request *req) /* * Send MDS_GET_ROOT RPC to fetch root FID. * - * If \a fileset is not NULL it should contain a subdirectory off + * If @fileset is not NULL it should contain a subdirectory off * the ROOT/ directory to be mounted on the client. Return the FID * of the subdirectory to the client to mount onto its mountpoint. * - * \param[in] imp MDC import - * \param[in] fileset fileset name, which could be NULL - * \param[out] rootfid root FID of this mountpoint - * \param[out] pc root capa will be unpacked and saved in this pointer + * @imp: MDC import + * @fileset: fileset name, which could be NULL + * @rootfid: root FID of this mountpoint + * @pc: root capa will be unpacked and saved in this pointer * - * \retval 0 on success, negative errno on failure + * Returns: 0 on success, negative errno on failure */ static int mdc_get_root(struct obd_export *exp, const char *fileset, struct lu_fid *rootfid) @@ -1273,15 +1273,15 @@ static int mdc_read_page_remote(void *data, struct page *page0) * Read dir page from cache first, if it can not find it, read it from * server and add into the cache. * - * \param[in] exp MDC export - * \param[in] op_data client MD stack parameters, transferring parameters + * @exp: MDC export + * @op_data: client MD stack parameters, transferring parameters * between different layers on client MD stack. 
- * \param[in] cb_op    callback required for ldlm lock enqueue during
  *           read page
- * \param[in] hash_offset the hash offset of the page to be read
- * \param[in] ppage    the page to be read
+ * @cb_op:   callback required for ldlm lock enqueue during
+ *           read page
+ * @hash_offset: the hash offset of the page to be read
+ * @ppage:   the page to be read
  *
- * retval = 0 get the page successfully
+ * Return: = 0 get the page successfully
  *         errno(<0) get the page failed
  */
 static int mdc_read_page(struct obd_export *exp, struct md_op_data *op_data,
@@ -2151,8 +2151,9 @@ static int mdc_ioc_hsm_ct_start(struct obd_export *exp,
 /**
  * Send a message to any listening copytools
- * @param val  KUC message (kuc_hdr + hsm_action_list)
- * @param len  total length of message
+ *
+ * @val: KUC message (kuc_hdr + hsm_action_list)
+ * @len: total length of message
  */
 static int mdc_hsm_copytool_send(size_t len, void *val)
 {
@@ -2184,8 +2185,9 @@ static int mdc_hsm_copytool_send(size_t len, void *val)
 /**
  * callback function passed to kuc for re-registering each HSM copytool
  * running on MDC, after MDT shutdown/recovery.
- * @param data    copytool registration data
- * @param cb_arg  callback argument (obd_import)
+ *
+ * @data:   copytool registration data
+ * @cb_arg: callback argument (obd_import)
  */
 static int mdc_hsm_ct_reregister(void *data, void *cb_arg)
 {
diff --git a/drivers/staging/lustre/lustre/mgc/mgc_request.c b/drivers/staging/lustre/lustre/mgc/mgc_request.c
index a4dfdc0..bb837ef 100644
--- a/drivers/staging/lustre/lustre/mgc/mgc_request.c
+++ b/drivers/staging/lustre/lustre/mgc/mgc_request.c
@@ -1589,11 +1589,11 @@ static bool mgc_import_in_recovery(struct obd_import *imp)
  * trying to update from the same log simultaneously, in which case we
  * should use a per-log semaphore instead of cld_lock.
  *
- * \param[in] mgc  MGC device by which to fetch the configuration log
- * \param[in] cld  log processing state (stored in lock callback data)
+ * @mgc: MGC device by which to fetch the configuration log
+ * @cld: log processing state (stored in lock callback data)
  *
- * \retval 0 on success
- * \retval negative errno on failure
+ * Returns: 0 on success
+ *          negative errno on failure
  */
 int mgc_process_log(struct obd_device *mgc, struct config_llog_data *cld)
 {
diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index 4359a93..fa554dd 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -2105,12 +2105,12 @@ static unsigned int get_write_extents(struct osc_object *obj,
 /**
  * prepare pages for ASYNC io and put pages in send queue.
  *
- * \param cmd  OBD_BRW_* macroses
- * \param lop  pending pages
+ * @cmd: OBD_BRW_* macros
+ * @lop: pending pages
  *
- * \return zero if no page added to send queue.
- * \return 1 if pages successfully added to send queue.
- * \return negative on errors.
+ * Return: zero if no page added to send queue.
+ *         1 if pages successfully added to send queue.
+ *         negative on errors.
  */
 static int
 osc_send_read_rpc(const struct lu_env *env, struct client_obd *cli,
@@ -3021,7 +3021,7 @@ int osc_cache_writeback_range(const struct lu_env *env, struct osc_object *obj,
 }

 /**
- * Returns a list of pages by a given [start, end] of \a obj.
+ * Returns a list of pages by a given [start, end] of @obj.
  *
  * Gang tree lookup (radix_tree_gang_lookup()) optimization is absolutely
  * crucial in the face of [offset, EOF] locks.
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index bfc1abb..612305a 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -485,14 +485,14 @@ static int __osc_dlm_blocking_ast(const struct lu_env *env,
  * Control flow is tricky, because ldlm uses the same call-back
  * (ldlm_lock::l_blocking_ast()) for both blocking and cancellation ast's.
  *
- * \param dlmlock  lock for which ast occurred.
+ * @dlmlock: lock for which ast occurred.
  *
- * \param new      description of a conflicting lock in case of blocking ast.
+ * @new:     description of a conflicting lock in case of blocking ast.
  *
- * \param data     value of dlmlock->l_ast_data
+ * @data:    value of dlmlock->l_ast_data
  *
- * \param flag     LDLM_CB_BLOCKING or LDLM_CB_CANCELING. Used to distinguish
- *                 cancellation and blocking ast's.
+ * @flag:    LDLM_CB_BLOCKING or LDLM_CB_CANCELING. Used to distinguish
+ *           cancellation and blocking ast's.
  *
  * Possible use cases:
  *
diff --git a/drivers/staging/lustre/lustre/osc/osc_request.c b/drivers/staging/lustre/lustre/osc/osc_request.c
index 3fedfaf..1a82e85 100644
--- a/drivers/staging/lustre/lustre/osc/osc_request.c
+++ b/drivers/staging/lustre/lustre/osc/osc_request.c
@@ -2807,8 +2807,8 @@ static int osc_import_event(struct obd_device *obd,
  * Determine whether the lock can be canceled before replaying the lock
  * during recovery, see bug16774 for detailed information.
  *
- * \retval zero  the lock can't be canceled
- * \retval other ok to cancel
+ * Return: zero  the lock can't be canceled
+ *         other ok to cancel
  */
 static int osc_cancel_weight(struct ldlm_lock *lock)
 {