From patchwork Sat Feb 24 10:36:28 2018
X-Patchwork-Submitter: Chengguang Xu
X-Patchwork-Id: 10240351
From: Chengguang Xu
To: zyan@redhat.com, idryomov@gmail.com
Cc: ceph-devel@vger.kernel.org, Chengguang Xu
Subject: [PATCH] ceph: replace hard-coded function names with __func__
Date: Sat, 24 Feb 2018 18:36:28 +0800
Message-id: <1519468588-68272-1-git-send-email-cgxu519@icloud.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: ceph-devel@vger.kernel.org

Function names may change over time, but a hard-coded function name in
an error message is easily left stale. Meanwhile, the patch check
script (checkpatch) warns whenever it detects a hard-coded function
name in an output message.

Signed-off-by: Chengguang Xu
---
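For reference, every hunk below applies the same before/after
transformation. A minimal sketch of the pattern follows; example_func()
and its message are made up for illustration, while pr_err() (from
<linux/printk.h>) and the C99 predefined identifier __func__ are the
only real pieces:

    #include <linux/printk.h>

    static void example_func(void)
    {
            /* Before: the name is hard-coded in the format string and
             * silently goes stale if example_func() is ever renamed
             * (this is also the pattern checkpatch warns about). */
            pr_err("example_func: something went wrong\n");

            /* After: __func__ evaluates to the name of the enclosing
             * function, so the message tracks any future rename. */
            pr_err("%s: something went wrong\n", __func__);
    }
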
 fs/ceph/cache.c      | 11 ++++++----
 fs/ceph/caps.c       | 35 ++++++++++++++++-------------
 fs/ceph/inode.c      | 43 ++++++++++++++++++++----------------
 fs/ceph/mds_client.c | 62 ++++++++++++++++++++++++++++------------------------
 fs/ceph/mdsmap.c     |  2 +-
 fs/ceph/snap.c       |  9 ++++----
 fs/ceph/super.c      |  8 +++----
 7 files changed, 95 insertions(+), 75 deletions(-)

diff --git a/fs/ceph/cache.c b/fs/ceph/cache.c
index a3ab265..6657e83 100644
--- a/fs/ceph/cache.c
+++ b/fs/ceph/cache.c
@@ -98,8 +98,11 @@ int ceph_fscache_register_fs(struct ceph_fs_client* fsc)
 		if (uniq_len && memcmp(ent->uniquifier, fscache_uniq, uniq_len))
 			continue;
 
-		pr_err("fscache cookie already registered for fsid %pU\n", fsid);
-		pr_err(" use fsc=%%s mount option to specify a uniquifier\n");
+		pr_err("%s: fscache cookie already registered for fsid %pU\n",
+			__func__, fsid);
+		pr_err("%s: use fsc=%%s mount option to specify a uniquifier\n",
+			__func__);
+
 		err = -EBUSY;
 		goto out_unlock;
 	}
@@ -124,8 +127,8 @@ int ceph_fscache_register_fs(struct ceph_fs_client* fsc)
 		list_add_tail(&ent->list, &ceph_fscache_list);
 	} else {
 		kfree(ent);
-		pr_err("unable to register fscache cookie for fsid %pU\n",
-		       fsid);
+		pr_err("%s: unable to register fscache cookie for fsid %pU\n",
+			__func__, fsid);
 		/* all other fs ignore this error */
 	}
 out_unlock:
diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 6582c45..728bbb7 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -647,8 +647,8 @@ void ceph_add_cap(struct inode *inode,
 			if (oldrealm)
 				ceph_put_snap_realm(mdsc, oldrealm);
 		} else {
-			pr_err("ceph_add_cap: couldn't find snap realm %llx\n",
-			       realmino);
+			pr_err("%s: couldn't find snap realm %llx\n",
+				__func__, realmino);
 			WARN_ON(!realm);
 		}
 	}
@@ -1469,8 +1469,9 @@ static void __ceph_flush_snaps(struct ceph_inode_info *ci,
 			ret = __send_flush_snap(inode, session, capsnap,
 						cap->mseq, oldest_flush_tid);
 			if (ret < 0) {
-				pr_err("__flush_snaps: error sending cap flushsnap, "
+				pr_err("%s: error sending cap flushsnap, "
 				       "ino (%llx.%llx) tid %llu follows %llu\n",
+				       __func__,
 				       ceph_vinop(inode), cf->tid, capsnap->follows);
 			}
@@ -2246,8 +2247,8 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
 
 		cap = ci->i_auth_cap;
 		if (!(cap && cap->session == session)) {
-			pr_err("%p auth cap %p not mds%d ???\n",
-			       inode, cap, session->s_mds);
+			pr_err("%s: %p auth cap %p not mds%d ???\n",
+				__func__, inode, cap, session->s_mds);
 			break;
 		}
@@ -2263,9 +2264,10 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
 					 cap->issued | cap->implemented,
 					 cf->caps, cf->tid, oldest_flush_tid);
 			if (ret) {
-				pr_err("kick_flushing_caps: error sending "
+				pr_err("%s: error sending "
 				       "cap flush, ino (%llx.%llx) "
 				       "tid %llu flushing %s\n",
+				       __func__,
 				       ceph_vinop(inode), cf->tid,
 				       ceph_cap_string(cf->caps));
 			}
@@ -2283,9 +2285,10 @@ static void __kick_flushing_caps(struct ceph_mds_client *mdsc,
 			ret = __send_flush_snap(inode, session, capsnap,
 						cap->mseq, oldest_flush_tid);
 			if (ret < 0) {
-				pr_err("kick_flushing_caps: error sending "
+				pr_err("%s: error sending "
 				       "cap flushsnap, ino (%llx.%llx) "
 				       "tid %llu follows %llu\n",
+				       __func__,
 				       ceph_vinop(inode), cf->tid,
 				       capsnap->follows);
 			}
@@ -2314,8 +2317,8 @@ void ceph_early_kick_flushing_caps(struct ceph_mds_client *mdsc,
 		spin_lock(&ci->i_ceph_lock);
 		cap = ci->i_auth_cap;
 		if (!(cap && cap->session == session)) {
-			pr_err("%p auth cap %p not mds%d ???\n",
-			       &ci->vfs_inode, cap, session->s_mds);
+			pr_err("%s: %p auth cap %p not mds%d ???\n",
+				__func__, &ci->vfs_inode, cap, session->s_mds);
 			spin_unlock(&ci->i_ceph_lock);
 			continue;
 		}
@@ -2357,8 +2360,8 @@ void ceph_kick_flushing_caps(struct ceph_mds_client *mdsc,
 		spin_lock(&ci->i_ceph_lock);
 		cap = ci->i_auth_cap;
 		if (!(cap && cap->session == session)) {
-			pr_err("%p auth cap %p not mds%d ???\n",
-			       &ci->vfs_inode, cap, session->s_mds);
+			pr_err("%s: %p auth cap %p not mds%d ???\n",
+				__func__, &ci->vfs_inode, cap, session->s_mds);
 			spin_unlock(&ci->i_ceph_lock);
 			continue;
 		}
@@ -3474,9 +3477,10 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
 	issued = cap->issued;
 	if (issued != cap->implemented)
-		pr_err_ratelimited("handle_cap_export: issued != implemented: "
+		pr_err_ratelimited("%s: issued != implemented: "
 				   "ino (%llx.%llx) mds%d seq %d mseq %d "
 				   "issued %s implemented %s\n",
+				   __func__,
 				   ceph_vinop(inode), mds, cap->seq, cap->mseq,
 				   ceph_cap_string(issued),
 				   ceph_cap_string(cap->implemented));
@@ -3626,10 +3630,11 @@ static void handle_cap_import(struct ceph_mds_client *mdsc,
 		if ((ph->flags & CEPH_CAP_FLAG_AUTH) &&
 		    (ocap->seq != le32_to_cpu(ph->seq) ||
 		     ocap->mseq != le32_to_cpu(ph->mseq))) {
-			pr_err_ratelimited("handle_cap_import: "
+			pr_err_ratelimited("%s: "
 					"mismatched seq/mseq: ino (%llx.%llx) "
 					"mds%d seq %d mseq %d importer mds%d "
 					"has peer seq %d mseq %d\n",
+					__func__,
 					ceph_vinop(inode), peer, ocap->seq,
 					ocap->mseq, mds, le32_to_cpu(ph->seq),
 					le32_to_cpu(ph->mseq));
@@ -3841,7 +3846,7 @@ void ceph_handle_caps(struct ceph_mds_session *session,
 
 	default:
 		spin_unlock(&ci->i_ceph_lock);
-		pr_err("ceph_handle_caps: unknown cap op %d %s\n", op,
+		pr_err("%s: unknown cap op %d %s\n", __func__, op,
 		       ceph_cap_op_name(op));
 	}
 
@@ -3863,7 +3868,7 @@ void ceph_handle_caps(struct ceph_mds_session *session,
 	return;
 
 bad:
-	pr_err("ceph_handle_caps: corrupt message\n");
+	pr_err("%s: corrupt message\n", __func__);
 	ceph_msg_dump(msg);
 	return;
 }
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index c6ec5aa..b88d413 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -281,8 +281,10 @@ static int ceph_fill_dirfrag(struct inode *inode,
 	if (IS_ERR(frag)) {
 		/* this is not the end of the world; we can continue
 		   with bad/inaccurate delegation info */
-		pr_err("fill_dirfrag ENOMEM on mds ref %llx.%llx fg %x\n",
-		       ceph_vinop(inode), le32_to_cpu(dirinfo->frag));
+		pr_err("%s: ENOMEM on mds ref %llx.%llx fg %x\n",
+			__func__,
+			ceph_vinop(inode), le32_to_cpu(dirinfo->frag));
+
 		err = -ENOMEM;
 		goto out;
 	}
@@ -361,9 +363,11 @@ static int ceph_fill_fragtree(struct inode *inode,
 		id = le32_to_cpu(fragtree->splits[i].frag);
 		split_by = le32_to_cpu(fragtree->splits[i].by);
 		if (split_by == 0 || ceph_frag_bits(id) + split_by > 24) {
-			pr_err("fill_fragtree %llx.%llx invalid split %d/%u, "
-			       "frag %x split by %d\n", ceph_vinop(inode),
-			       i, nsplits, id, split_by);
+			pr_err("%s: %llx.%llx invalid split %d/%u, "
+				"frag %x split by %d\n",
+				__func__, ceph_vinop(inode),
+				i, nsplits, id, split_by);
+
 			continue;
 		}
 		frag = NULL;
@@ -604,7 +608,7 @@ int ceph_fill_file_size(struct inode *inode, int issued,
 	    (truncate_seq == ci->i_truncate_seq && size > inode->i_size)) {
 		dout("size %lld -> %llu\n", inode->i_size, size);
 		if (size > 0 && S_ISDIR(inode->i_mode)) {
-			pr_err("fill_file_size non-zero size for directory\n");
+			pr_err("%s: non-zero size for directory\n", __func__);
 			size = 0;
 		}
 		i_size_write(inode, size);
@@ -755,8 +759,8 @@ static int fill_inode(struct inode *inode, struct page *locked_page,
 	if (iinfo->xattr_len > 4) {
 		xattr_blob = ceph_buffer_new(iinfo->xattr_len, GFP_NOFS);
 		if (!xattr_blob)
-			pr_err("fill_inode ENOMEM xattr blob %d bytes\n",
-			       iinfo->xattr_len);
+			pr_err("%s: ENOMEM xattr blob %d bytes\n",
+				__func__, iinfo->xattr_len);
 	}
 
 	if (iinfo->pool_ns_len > 0)
@@ -880,8 +884,9 @@ static int fill_inode(struct inode *inode, struct page *locked_page,
 		spin_unlock(&ci->i_ceph_lock);
 
 		if (symlen != i_size_read(inode)) {
-			pr_err("fill_inode %llx.%llx BAD symlink "
-			       "size %lld\n", ceph_vinop(inode),
+			pr_err("%s: %llx.%llx BAD symlink "
+				"size %lld\n",
+				__func__, ceph_vinop(inode),
 			       i_size_read(inode));
 			i_size_write(inode, symlen);
 			inode->i_blocks = calc_inode_blocks(symlen);
@@ -914,8 +919,8 @@ static int fill_inode(struct inode *inode, struct page *locked_page,
 		ceph_decode_timespec(&ci->i_rctime, &info->rctime);
 		break;
 	default:
-		pr_err("fill_inode %llx.%llx BAD mode 0%o\n",
-		       ceph_vinop(inode), inode->i_mode);
+		pr_err("%s: %llx.%llx BAD mode 0%o\n",
+			__func__, ceph_vinop(inode), inode->i_mode);
 	}
 
 	/* were we issued a capability? */
@@ -1106,8 +1111,8 @@ static struct dentry *splice_dentry(struct dentry *dn, struct inode *in)
 	d_drop(dn);
 	realdn = d_splice_alias(in, dn);
 	if (IS_ERR(realdn)) {
-		pr_err("splice_dentry error %ld %p inode %p ino %llx.%llx\n",
-		       PTR_ERR(realdn), dn, in, ceph_vinop(in));
+		pr_err("%s: error %ld %p inode %p ino %llx.%llx\n",
+			__func__, PTR_ERR(realdn), dn, in, ceph_vinop(in));
 		dn = realdn; /* note realdn contains the error */
 		goto out;
 	} else if (realdn) {
@@ -1234,8 +1239,8 @@ int ceph_fill_trace(struct super_block *sb, struct ceph_mds_request *req)
 				rinfo->head->result == 0) ?  req->r_fmode : -1,
 				&req->r_caps_reservation);
 		if (err < 0) {
-			pr_err("fill_inode badness %p %llx.%llx\n",
-			       in, ceph_vinop(in));
+			pr_err("%s: badness %p %llx.%llx\n",
+				__func__, in, ceph_vinop(in));
 			goto done;
 		}
 	}
@@ -1429,7 +1434,7 @@ static int readdir_prepopulate_inodes_only(struct ceph_mds_request *req,
 			      req->r_request_started, -1,
 			      &req->r_caps_reservation);
 		if (rc < 0) {
-			pr_err("fill_inode badness on %p got %d\n", in, rc);
+			pr_err("%s: badness on %p got %d\n", __func__, in, rc);
 			err = rc;
 		}
 		iput(in);
@@ -1629,7 +1634,7 @@ int ceph_readdir_prepopulate(struct ceph_mds_request *req,
 				 req->r_request_started, -1,
 				 &req->r_caps_reservation);
 			if (ret < 0) {
-				pr_err("fill_inode badness on %p\n", in);
+				pr_err("%s: badness on %p\n", __func__, in);
 				if (d_really_is_negative(dn))
 					iput(in);
 				d_drop(dn);
@@ -1780,7 +1785,7 @@ static void ceph_invalidate_work(struct work_struct *work)
 	spin_unlock(&ci->i_ceph_lock);
 
 	if (invalidate_inode_pages2(inode->i_mapping) < 0) {
-		pr_err("invalidate_pages %p fails\n", inode);
+		pr_err("%s: %p fails\n", __func__, inode);
 	}
 
 	spin_lock(&ci->i_ceph_lock);
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 2e8f90f..f9175fa 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -160,7 +160,7 @@ static int parse_reply_info_trace(void **p, void *end,
 bad:
 	err = -EIO;
 out_bad:
-	pr_err("problem parsing mds trace %d\n", err);
+	pr_err("%s: problem parsing mds trace %d\n", __func__, err);
 	return err;
 }
@@ -197,7 +197,7 @@ static int parse_reply_info_dir(void **p, void *end,
 	BUG_ON(!info->dir_entries);
 	if ((unsigned long)(info->dir_entries + num) >
 	    (unsigned long)info->dir_entries + info->dir_buf_size) {
-		pr_err("dir contents are larger than expected\n");
+		pr_err("%s: dir contents are larger than expected\n", __func__);
 		WARN_ON(1);
 		goto bad;
 	}
@@ -233,7 +233,7 @@ static int parse_reply_info_dir(void **p, void *end,
 bad:
 	err = -EIO;
 out_bad:
-	pr_err("problem parsing dir contents %d\n", err);
+	pr_err("%s: problem parsing dir contents %d\n", __func__, err);
 	return err;
 }
@@ -347,7 +347,7 @@ static int parse_reply_info(struct ceph_msg *msg,
 bad:
 	err = -EIO;
 out_bad:
-	pr_err("mds parse_reply err %d\n", err);
+	pr_err("%s: mds parse_reply err %d\n", __func__, err);
 	return err;
 }
@@ -611,8 +611,9 @@ static void __register_request(struct ceph_mds_client *mdsc,
 		ret = ceph_reserve_caps(mdsc, &req->r_caps_reservation,
 					req->r_num_caps);
 		if (ret < 0) {
-			pr_err("__register_request %p "
-			       "failed to reserve caps: %d\n", req, ret);
+			pr_err("%s: %p "
+				"failed to reserve caps: %d\n",
+				__func__, req, ret);
 			/* set req->r_err to fail early from __do_request */
 			req->r_err = ret;
 			return;
@@ -869,7 +870,7 @@ static struct ceph_msg *create_session_msg(u32 op, u64 seq)
 	msg = ceph_msg_new(CEPH_MSG_CLIENT_SESSION, sizeof(*h), GFP_NOFS,
 			   false);
 	if (!msg) {
-		pr_err("create_session_msg ENOMEM creating msg\n");
+		pr_err("%s: ENOMEM creating msg\n", __func__);
 		return NULL;
 	}
 	h = msg->front.iov_base;
@@ -914,7 +915,7 @@ static struct ceph_msg *create_session_open_msg(struct ceph_mds_client *mdsc, u6
 	msg = ceph_msg_new(CEPH_MSG_CLIENT_SESSION, sizeof(*h) + metadata_bytes,
 			   GFP_NOFS, false);
 	if (!msg) {
-		pr_err("create_session_msg ENOMEM creating msg\n");
+		pr_err("%s: ENOMEM creating msg\n", __func__);
 		return NULL;
 	}
 	h = msg->front.iov_base;
@@ -1700,8 +1701,8 @@ void ceph_send_cap_releases(struct ceph_mds_client *mdsc,
 	}
 	return;
 out_err:
-	pr_err("send_cap_releases mds%d, failed to allocate message\n",
-	       session->s_mds);
+	pr_err("%s mds%d, failed to allocate message\n",
+		__func__, session->s_mds);
 	spin_lock(&session->s_cap_lock);
 	list_splice(&tmp_list, &session->s_cap_releases);
 	session->s_num_cap_releases += num_cap_releases;
@@ -1873,8 +1874,9 @@ char *ceph_mdsc_build_path(struct dentry *dentry, int *plen, u64 *base,
 	}
 	rcu_read_unlock();
 	if (pos != 0 || read_seqretry(&rename_lock, seq)) {
-		pr_err("build_path did not end path lookup where "
-		       "expected, namelen is %d, pos is %d\n", len, pos);
+		pr_err("%s: did not end path lookup where "
+			"expected, namelen is %d, pos is %d\n",
+			__func__, len, pos);
 		/* presumably this is only possible if racing with a
 		   rename of one of the parent directories (we can not
 		   lock the dentries above us to prevent this, but
@@ -2481,7 +2483,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
 	int mds = session->s_mds;
 
 	if (msg->front.iov_len < sizeof(*head)) {
-		pr_err("mdsc_handle_reply got corrupt (short) reply\n");
+		pr_err("%s: got corrupt (short) reply\n", __func__);
 		ceph_msg_dump(msg);
 		return;
 	}
@@ -2499,9 +2501,10 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
 	/* correct session? */
 	if (req->r_session != session) {
-		pr_err("mdsc_handle_reply got %llu on session mds%d"
-		       " not mds%d\n", tid, session->s_mds,
-		       req->r_session ? req->r_session->s_mds : -1);
+		pr_err("%s: got %llu on session mds%d"
+			" not mds%d\n",
+			__func__, tid, session->s_mds,
+			req->r_session ? req->r_session->s_mds : -1);
 		mutex_unlock(&mdsc->mutex);
 		goto out;
 	}
@@ -2592,7 +2595,8 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
 	mutex_lock(&session->s_mutex);
 	if (err < 0) {
-		pr_err("mdsc_handle_reply got corrupt reply mds%d(tid:%lld)\n", mds, tid);
+		pr_err("%s: got corrupt reply mds%d(tid:%lld)\n",
+			__func__, mds, tid);
 		ceph_msg_dump(msg);
 		goto out_err;
 	}
@@ -2708,7 +2712,7 @@ static void handle_forward(struct ceph_mds_client *mdsc,
 	return;
 
 bad:
-	pr_err("mdsc_handle_forward decode error err=%d\n", err);
+	pr_err("%s: decode error err=%d\n", __func__, err);
 }
 
 /*
@@ -2811,7 +2815,7 @@ static void handle_session(struct ceph_mds_session *session,
 		break;
 
 	default:
-		pr_err("mdsc_handle_session bad op %d mds%d\n", op, mds);
+		pr_err("%s: bad op %d mds%d\n", __func__, op, mds);
 		WARN_ON(1);
 	}
@@ -2828,7 +2832,7 @@ static void handle_session(struct ceph_mds_session *session,
 	return;
 
 bad:
-	pr_err("mdsc_handle_session corrupt message mds%d len %d\n", mds,
+	pr_err("%s: corrupt message mds%d len %d\n", __func__, mds,
 	       (int)msg->front.iov_len);
 	ceph_msg_dump(msg);
 	return;
@@ -3195,7 +3199,8 @@ static void send_mds_reconnect(struct ceph_mds_client *mdsc,
 fail_nomsg:
 	ceph_pagelist_release(pagelist);
 fail_nopagelist:
-	pr_err("error %d preparing reconnect for mds%d\n", err, mds);
+	pr_err("%s: error %d preparing reconnect for mds%d\n",
+		__func__, err, mds);
 	return;
 }
@@ -3429,7 +3434,7 @@ static void handle_lease(struct ceph_mds_client *mdsc,
 	return;
 
 bad:
-	pr_err("corrupt lease message\n");
+	pr_err("%s: corrupt lease message\n", __func__);
 	ceph_msg_dump(msg);
 }
@@ -3936,7 +3941,7 @@ void ceph_mdsc_handle_fsmap(struct ceph_mds_client *mdsc, struct ceph_msg *msg)
 	return;
 
 bad:
-	pr_err("error decoding fsmap\n");
+	pr_err("%s: error decoding fsmap\n", __func__);
 err_out:
 	mutex_lock(&mdsc->mutex);
 	mdsc->mdsmap_err = err;
@@ -4002,7 +4007,7 @@ void ceph_mdsc_handle_mdsmap(struct ceph_mds_client *mdsc, struct ceph_msg *msg)
 bad_unlock:
 	mutex_unlock(&mdsc->mutex);
 bad:
-	pr_err("error decoding mdsmap %d\n", err);
+	pr_err("%s: error decoding mdsmap %d\n", __func__, err);
 	return;
 }
@@ -4079,8 +4084,9 @@ static void dispatch(struct ceph_connection *con, struct ceph_msg *msg)
 		break;
 
 	default:
-		pr_err("received unknown message type %d %s\n", type,
-		       ceph_msg_type_name(type));
+		pr_err("%s: received unknown message type %d %s\n",
+			__func__, type,
+			ceph_msg_type_name(type));
 	}
 out:
 	ceph_msg_put(msg);
@@ -4156,8 +4162,8 @@ static struct ceph_msg *mds_alloc_msg(struct ceph_connection *con,
 	*skip = 0;
 	msg = ceph_msg_new(type, front_len, GFP_NOFS, false);
 	if (!msg) {
-		pr_err("unable to allocate msg type %d len %d\n",
-		       type, front_len);
+		pr_err("%s: unable to allocate msg type %d len %d\n",
+			__func__, type, front_len);
 		return NULL;
 	}
diff --git a/fs/ceph/mdsmap.c b/fs/ceph/mdsmap.c
index 44e53ab..270bb71 100644
--- a/fs/ceph/mdsmap.c
+++ b/fs/ceph/mdsmap.c
@@ -359,7 +359,7 @@ struct ceph_mdsmap *ceph_mdsmap_decode(void **p, void *end)
 	err = -ENOMEM;
 	goto out_err;
 bad:
-	pr_err("corrupt mdsmap\n");
+	pr_err("%s: corrupt mdsmap\n", __func__);
 	print_hex_dump(KERN_DEBUG, "mdsmap: ", DUMP_PREFIX_OFFSET, 16, 1,
 		       start, end - start, true);
diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
index 07cf95e..35ed296 100644
--- a/fs/ceph/snap.c
+++ b/fs/ceph/snap.c
@@ -390,7 +390,7 @@ static int build_snap_context(struct ceph_snap_realm *realm,
 		ceph_put_snap_context(realm->cached_context);
 		realm->cached_context = NULL;
 	}
-	pr_err("build_snap_context %llx %p fail %d\n", realm->ino,
pr_err("%s: %llx %p fail %d\n", __func__, realm->ino, realm, err); return err; } @@ -464,7 +464,8 @@ void ceph_queue_cap_snap(struct ceph_inode_info *ci) capsnap = kzalloc(sizeof(*capsnap), GFP_NOFS); if (!capsnap) { - pr_err("ENOMEM allocating ceph_cap_snap on %p\n", inode); + pr_err("%s: ENOMEM allocating ceph_cap_snap on %p\n", + __func__, inode); return; } @@ -770,7 +771,7 @@ int ceph_update_snap_trace(struct ceph_mds_client *mdsc, ceph_put_snap_realm(mdsc, realm); if (first_realm) ceph_put_snap_realm(mdsc, first_realm); - pr_err("update_snap_trace error %d\n", err); + pr_err("%s: error %d\n", __func__, err); return err; } @@ -976,7 +977,7 @@ void ceph_handle_snap(struct ceph_mds_client *mdsc, return; bad: - pr_err("corrupt snap message from mds%d\n", mds); + pr_err("%s: corrupt snap message from mds%d\n", __func__, mds); ceph_msg_dump(msg); out: if (locked_rwsem) diff --git a/fs/ceph/super.c b/fs/ceph/super.c index a62d2a9..3d73bc0 100644 --- a/fs/ceph/super.c +++ b/fs/ceph/super.c @@ -203,8 +203,8 @@ static int parse_fsopt_token(char *c, void *private) if (token < Opt_last_int) { ret = match_int(&argstr[0], &intval); if (ret < 0) { - pr_err("bad mount option arg (not int) " - "at '%s'\n", c); + pr_err("%s: bad mount option arg (not int) " + "at '%s'\n", __func__, c); return ret; } dout("got int token %d val %d\n", token, intval); @@ -454,8 +454,8 @@ static int parse_mount_options(struct ceph_mount_options **pfsopt, err = -EINVAL; dev_name_end--; /* back up to ':' separator */ if (dev_name_end < dev_name || *dev_name_end != ':') { - pr_err("device name is missing path (no : separator in %s)\n", - dev_name); + pr_err("%s: device name is missing path (no : separator in %s)\n", + __func__, dev_name); goto out; } dout("device name '%.*s'\n", (int)(dev_name_end - dev_name), dev_name);