From patchwork Mon Jun 29 23:43:04 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11632833
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, dm-devel@redhat.com
Cc: jack@suse.czi, rdunlap@infradead.org, sagi@grimberg.me, mingo@redhat.com,
    rostedt@goodmis.org, snitzer@redhat.com, agk@redhat.com, axboe@kernel.dk,
    paolo.valente@linaro.org, ming.lei@redhat.com, bvanassche@acm.org,
    fangguoju@gmail.com, colyli@suse.de, hch@lst.de, Chaitanya Kulkarni
Subject: [PATCH 01/11] block: blktrace framework cleanup
Date: Mon, 29 Jun 2020 16:43:04 -0700
Message-Id: <20200629234314.10509-2-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
References: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

This patch removes the extra variables from the trace events and from the
overall kernel blktrace framework. The removed members can easily be
extracted from the remaining argument, which reduces the code significantly
and allows several tracepoints to be optimized, as the next patch in the
series does.
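The idea behind the cleanup can be illustrated outside the kernel. The sketch below is a hypothetical userspace analogy, not kernel code: `struct queue`, `struct request`, and the two `trace_insert_*` helpers are invented for illustration. Because a request already carries a pointer to the queue it belongs to, passing the queue as a separate argument is redundant; the callee can derive it, just as the patch replaces `blk_trace_request_get_cgid(q, rq)` with `blk_trace_request_get_cgid(rq->q, rq)`.

```c
#include <assert.h>

/* Hypothetical userspace analogy of the kernel change: a request
 * already points at the queue it belongs to. */
struct queue { int id; };
struct request { struct queue *q; long nr_bytes; };

/* Before: two arguments; the caller must keep them consistent. */
static int trace_insert_old(struct queue *q, struct request *rq)
{
	(void)rq;		/* rq unused here; q is the redundant copy */
	return q->id;
}

/* After: the queue is derived from the remaining argument. */
static int trace_insert_new(struct request *rq)
{
	return rq->q->id;	/* equivalent information, one fewer parameter */
}
```

Dropping the redundant parameter also removes a class of bugs where a caller could pass a queue that does not match the request.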
Signed-off-by: Chaitanya Kulkarni
---
 block/blk-core.c             |  6 +--
 block/blk-merge.c            |  4 +-
 block/blk-mq-sched.c         |  2 +-
 block/blk-mq.c               | 10 ++--
 block/bounce.c               |  2 +-
 drivers/md/dm.c              |  4 +-
 include/trace/events/block.h | 87 +++++++++++++-------------------
 kernel/trace/blktrace.c      | 98 ++++++++++++++++++------------------
 8 files changed, 98 insertions(+), 115 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 72b102a389a5..6d4c57ef4533 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -674,7 +674,7 @@ bool bio_attempt_back_merge(struct request *req, struct bio *bio,
 	if (!ll_back_merge_fn(req, bio, nr_segs))
 		return false;
 
-	trace_block_bio_backmerge(req->q, req, bio);
+	trace_block_bio_backmerge(bio);
 	rq_qos_merge(req->q, req, bio);
 
 	if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
@@ -698,7 +698,7 @@ bool bio_attempt_front_merge(struct request *req, struct bio *bio,
 	if (!ll_front_merge_fn(req, bio, nr_segs))
 		return false;
 
-	trace_block_bio_frontmerge(req->q, req, bio);
+	trace_block_bio_frontmerge(bio);
 	rq_qos_merge(req->q, req, bio);
 
 	if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
@@ -1082,7 +1082,7 @@ generic_make_request_checks(struct bio *bio)
 		return false;
 
 	if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
-		trace_block_bio_queue(q, bio);
+		trace_block_bio_queue(bio);
 		/* Now that enqueuing has been traced, we need to trace
 		 * completion as well.
 		 */
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 9c9fb21584b6..8333ccd60ee1 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -337,7 +337,7 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 		split->bi_opf |= REQ_NOMERGE;
 
 		bio_chain(split, *bio);
-		trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
+		trace_block_split(split, (*bio)->bi_iter.bi_sector);
 		generic_make_request(*bio);
 		*bio = split;
 	}
@@ -793,7 +793,7 @@ static struct request *attempt_merge(struct request_queue *q,
 	 */
 	blk_account_io_merge_request(next);
 
-	trace_block_rq_merge(q, next);
+	trace_block_rq_merge(next);
 
 	/*
 	 * ownership of bio passed from next to req, return 'next' for
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index fdcc2c1dd178..a3cade16ef80 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -409,7 +409,7 @@ EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);
 
 void blk_mq_sched_request_inserted(struct request *rq)
 {
-	trace_block_rq_insert(rq->q, rq);
+	trace_block_rq_insert(rq);
 }
 EXPORT_SYMBOL_GPL(blk_mq_sched_request_inserted);
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b8738b3c6d06..dbb98b2bc868 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -742,7 +742,7 @@ void blk_mq_start_request(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 
-	trace_block_rq_issue(q, rq);
+	trace_block_rq_issue(rq);
 
 	if (test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
 		rq->io_start_time_ns = ktime_get_ns();
@@ -769,7 +769,7 @@ static void __blk_mq_requeue_request(struct request *rq)
 
 	blk_mq_put_driver_tag(rq);
 
-	trace_block_rq_requeue(q, rq);
+	trace_block_rq_requeue(rq);
 	rq_qos_requeue(q, rq);
 
 	if (blk_mq_request_started(rq)) {
@@ -1758,7 +1758,7 @@ static inline void __blk_mq_insert_req_list(struct blk_mq_hw_ctx *hctx,
 
 	lockdep_assert_held(&ctx->lock);
 
-	trace_block_rq_insert(hctx->queue, rq);
+	trace_block_rq_insert(rq);
 
 	if (at_head)
 		list_add(&rq->queuelist, &ctx->rq_lists[type]);
@@ -1814,7 +1814,7 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 	 */
 	list_for_each_entry(rq, list, queuelist) {
 		BUG_ON(rq->mq_ctx != ctx);
-		trace_block_rq_insert(hctx->queue, rq);
+		trace_block_rq_insert(rq);
 	}
 
 	spin_lock(&ctx->lock);
@@ -2111,7 +2111,7 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		goto queue_exit;
 	}
 
-	trace_block_getrq(q, bio, bio->bi_opf);
+	trace_block_getrq(bio, bio->bi_opf);
 
 	rq_qos_track(q, rq, bio);
diff --git a/block/bounce.c b/block/bounce.c
index c3aaed070124..9550640b1f86 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -341,7 +341,7 @@ static void __blk_queue_bounce(struct request_queue *q, struct bio **bio_orig,
 		}
 	}
 
-	trace_block_bio_bounce(q, *bio_orig);
+	trace_block_bio_bounce(*bio_orig);
 
 	bio->bi_flags |= (1 << BIO_BOUNCED);
 
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 109e81f33edb..4b9ff226904d 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1678,7 +1678,7 @@ static blk_qc_t __split_and_process_bio(struct mapped_device *md,
 				part_stat_unlock();
 
 				bio_chain(b, bio);
-				trace_block_split(md->queue, b, bio->bi_iter.bi_sector);
+				trace_block_split(b, bio->bi_iter.bi_sector);
 				ret = generic_make_request(bio);
 				break;
 			}
@@ -1745,7 +1745,7 @@ static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struc
 		struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
 
 		bio_chain(split, *bio);
-		trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
+		trace_block_split(split, (*bio)->bi_iter.bi_sector);
 		generic_make_request(*bio);
 		*bio = split;
 	}
diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 34d64ca306b1..237d40a48429 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -64,7 +64,6 @@ DEFINE_EVENT(block_buffer, block_dirty_buffer,
 
 /**
  * block_rq_requeue - place block IO request back on a queue
- * @q: queue holding operation
  * @rq: block IO operation request
  *
  * The block operation request @rq is being placed back into queue
@@ -73,9 +72,9 @@ DEFINE_EVENT(block_buffer, block_dirty_buffer,
  */
 TRACE_EVENT(block_rq_requeue,
 
-	TP_PROTO(struct request_queue *q, struct request *rq),
+	TP_PROTO(struct request *rq),
 
-	TP_ARGS(q, rq),
+	TP_ARGS(rq),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
@@ -103,7 +102,6 @@ TRACE_EVENT(block_rq_requeue,
 
 /**
  * block_rq_complete - block IO operation completed by device driver
- * @rq: block operations request
  * @error: status code
  * @nr_bytes: number of completed bytes
  *
@@ -147,9 +145,9 @@ TRACE_EVENT(block_rq_complete,
 
 DECLARE_EVENT_CLASS(block_rq,
 
-	TP_PROTO(struct request_queue *q, struct request *rq),
+	TP_PROTO(struct request *rq),
 
-	TP_ARGS(q, rq),
+	TP_ARGS(rq),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
@@ -181,24 +179,22 @@ DECLARE_EVENT_CLASS(block_rq,
 
 /**
  * block_rq_insert - insert block operation request into queue
- * @q: target queue
 * @rq: block IO operation request
 *
 * Called immediately before block operation request @rq is inserted
- * into queue @q. The fields in the operation request @rq struct can
+ * into queue. The fields in the operation request @rq struct can
 * be examined to determine which device and sectors the pending
 * operation would access.
 */
 DEFINE_EVENT(block_rq, block_rq_insert,
 
-	TP_PROTO(struct request_queue *q, struct request *rq),
+	TP_PROTO(struct request *rq),
 
-	TP_ARGS(q, rq)
+	TP_ARGS(rq)
 );
 
 /**
  * block_rq_issue - issue pending block IO request operation to device driver
- * @q: queue holding operation
 * @rq: block IO operation operation request
 *
 * Called when block operation request @rq from queue @q is sent to a
@@ -206,32 +202,30 @@ DEFINE_EVENT(block_rq, block_rq_insert,
 */
 DEFINE_EVENT(block_rq, block_rq_issue,
 
-	TP_PROTO(struct request_queue *q, struct request *rq),
+	TP_PROTO(struct request *rq),
 
-	TP_ARGS(q, rq)
+	TP_ARGS(rq)
 );
 
 /**
  * block_rq_merge - merge request with another one in the elevator
- * @q: queue holding operation
 * @rq: block IO operation operation request
 *
- * Called when block operation request @rq from queue @q is merged to another
+ * Called when block operation request @rq from queue is merged to another
 * request queued in the elevator.
 */
 DEFINE_EVENT(block_rq, block_rq_merge,
 
-	TP_PROTO(struct request_queue *q, struct request *rq),
+	TP_PROTO(struct request *rq),
 
-	TP_ARGS(q, rq)
+	TP_ARGS(rq)
 );
 
 /**
  * block_bio_bounce - used bounce buffer when processing block operation
- * @q: queue holding the block operation
 * @bio: block operation
 *
- * A bounce buffer was used to handle the block operation @bio in @q.
+ * A bounce buffer was used to handle the block operation @bio in queue.
 * This occurs when hardware limitations prevent a direct transfer of
 * data between the @bio data memory area and the IO device. Use of a
 * bounce buffer requires extra copying of data and decreases
@@ -239,9 +233,9 @@ DEFINE_EVENT(block_rq, block_rq_merge,
 */
 TRACE_EVENT(block_bio_bounce,
 
-	TP_PROTO(struct request_queue *q, struct bio *bio),
+	TP_PROTO(struct bio *bio),
 
-	TP_ARGS(q, bio),
+	TP_ARGS(bio),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
@@ -303,9 +297,9 @@ TRACE_EVENT(block_bio_complete,
 
 DECLARE_EVENT_CLASS(block_bio_merge,
 
-	TP_PROTO(struct request_queue *q, struct request *rq, struct bio *bio),
+	TP_PROTO(struct bio *bio),
 
-	TP_ARGS(q, rq, bio),
+	TP_ARGS(bio),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
@@ -331,48 +325,43 @@ DECLARE_EVENT_CLASS(block_bio_merge,
 
 /**
  * block_bio_backmerge - merging block operation to the end of an existing operation
- * @q: queue holding operation
- * @rq: request bio is being merged into
 * @bio: new block operation to merge
 *
 * Merging block request @bio to the end of an existing block request
- * in queue @q.
+ * in queue.
 */
 DEFINE_EVENT(block_bio_merge, block_bio_backmerge,
 
-	TP_PROTO(struct request_queue *q, struct request *rq, struct bio *bio),
+	TP_PROTO(struct bio *bio),
 
-	TP_ARGS(q, rq, bio)
+	TP_ARGS(bio)
 );
 
 /**
  * block_bio_frontmerge - merging block operation to the beginning of an existing operation
- * @q: queue holding operation
- * @rq: request bio is being merged into
 * @bio: new block operation to merge
 *
 * Merging block IO operation @bio to the beginning of an existing block
- * operation in queue @q.
+ * operation in queue.
 */
 DEFINE_EVENT(block_bio_merge, block_bio_frontmerge,
 
-	TP_PROTO(struct request_queue *q, struct request *rq, struct bio *bio),
+	TP_PROTO(struct bio *bio),
 
-	TP_ARGS(q, rq, bio)
+	TP_ARGS(bio)
 );
 
 /**
  * block_bio_queue - putting new block IO operation in queue
- * @q: queue holding operation
 * @bio: new block operation
 *
- * About to place the block IO operation @bio into queue @q.
+ * About to place the block IO operation @bio into queue.
 */
 TRACE_EVENT(block_bio_queue,
 
-	TP_PROTO(struct request_queue *q, struct bio *bio),
+	TP_PROTO(struct bio *bio),
 
-	TP_ARGS(q, bio),
+	TP_ARGS(bio),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
@@ -398,9 +387,9 @@ TRACE_EVENT(block_bio_queue,
 
 DECLARE_EVENT_CLASS(block_get_rq,
 
-	TP_PROTO(struct request_queue *q, struct bio *bio, int rw),
+	TP_PROTO(struct bio *bio, int rw),
 
-	TP_ARGS(q, bio, rw),
+	TP_ARGS(bio, rw),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
@@ -427,36 +416,34 @@ DECLARE_EVENT_CLASS(block_get_rq,
 
 /**
  * block_getrq - get a free request entry in queue for block IO operations
- * @q: queue for operations
 * @bio: pending block IO operation (can be %NULL)
 * @rw: low bit indicates a read (%0) or a write (%1)
 *
- * A request struct for queue @q has been allocated to handle the
+ * A request struct for queue has been allocated to handle the
 * block IO operation @bio.
 */
 DEFINE_EVENT(block_get_rq, block_getrq,
 
-	TP_PROTO(struct request_queue *q, struct bio *bio, int rw),
+	TP_PROTO(struct bio *bio, int rw),
 
-	TP_ARGS(q, bio, rw)
+	TP_ARGS(bio, rw)
 );
 
 /**
  * block_sleeprq - waiting to get a free request entry in queue for block IO operation
- * @q: queue for operation
 * @bio: pending block IO operation (can be %NULL)
 * @rw: low bit indicates a read (%0) or a write (%1)
 *
- * In the case where a request struct cannot be provided for queue @q
+ * In the case where a request struct cannot be provided for queue
 * the process needs to wait for an request struct to become
 * available. This tracepoint event is generated each time the
 * process goes to sleep waiting for request struct become available.
 */
 DEFINE_EVENT(block_get_rq, block_sleeprq,
 
-	TP_PROTO(struct request_queue *q, struct bio *bio, int rw),
+	TP_PROTO(struct bio *bio, int rw),
 
-	TP_ARGS(q, bio, rw)
+	TP_ARGS(bio, rw)
 );
 
 /**
@@ -521,7 +508,6 @@ DEFINE_EVENT(block_unplug, block_unplug,
 
 /**
  * block_split - split a single bio struct into two bio structs
- * @q: queue containing the bio
 * @bio: block operation being split
 * @new_sector: The starting sector for the new bio
 *
@@ -532,10 +518,9 @@ DEFINE_EVENT(block_unplug, block_unplug,
 */
 TRACE_EVENT(block_split,
 
-	TP_PROTO(struct request_queue *q, struct bio *bio,
-		 unsigned int new_sector),
+	TP_PROTO(struct bio *bio, unsigned int new_sector),
 
-	TP_ARGS(q, bio, new_sector),
+	TP_ARGS(bio, new_sector),
 
 	TP_STRUCT__entry(
 		__field( dev_t,		dev			)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 7ba62d68885a..7b72781a591d 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -28,6 +28,11 @@
 
 #ifdef CONFIG_BLK_DEV_IO_TRACE
 
+static inline struct request_queue *bio_q(struct bio *bio)
+{
+	return bio->bi_disk->queue;
+}
+
 static unsigned int blktrace_seq __read_mostly = 1;
 
 static struct trace_array *blk_tr;
@@ -846,33 +851,28 @@ static void blk_add_trace_rq(struct request *rq, int error,
 	rcu_read_unlock();
 }
 
-static void blk_add_trace_rq_insert(void *ignore,
-				    struct request_queue *q, struct request *rq)
+static void blk_add_trace_rq_insert(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_INSERT,
-			 blk_trace_request_get_cgid(q, rq));
+			 blk_trace_request_get_cgid(rq->q, rq));
 }
 
-static void blk_add_trace_rq_issue(void *ignore,
-				   struct request_queue *q, struct request *rq)
+static void blk_add_trace_rq_issue(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_ISSUE,
-			 blk_trace_request_get_cgid(q, rq));
+			 blk_trace_request_get_cgid(rq->q, rq));
 }
 
-static void blk_add_trace_rq_merge(void *ignore,
-				   struct request_queue *q, struct request *rq)
+static void blk_add_trace_rq_merge(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_BACKMERGE,
-			 blk_trace_request_get_cgid(q, rq));
+			 blk_trace_request_get_cgid(rq->q, rq));
 }
 
-static void blk_add_trace_rq_requeue(void *ignore,
-				     struct request_queue *q,
-				     struct request *rq)
+static void blk_add_trace_rq_requeue(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_REQUEUE,
-			 blk_trace_request_get_cgid(q, rq));
+			 blk_trace_request_get_cgid(rq->q, rq));
 }
 
 static void blk_add_trace_rq_complete(void *ignore, struct request *rq,
@@ -893,13 +893,12 @@ static void blk_add_trace_rq_complete(void *ignore, struct request *rq,
 * Records an action against a bio. Will log the bio offset + size.
 *
 **/
-static void blk_add_trace_bio(struct request_queue *q, struct bio *bio,
-			      u32 what, int error)
+static void blk_add_trace_bio(struct bio *bio, u32 what, int error)
 {
 	struct blk_trace *bt;
 
 	rcu_read_lock();
-	bt = rcu_dereference(q->blk_trace);
+	bt = rcu_dereference(bio_q(bio)->blk_trace);
 	if (likely(!bt)) {
 		rcu_read_unlock();
 		return;
@@ -907,56 +906,59 @@ static void blk_add_trace_bio(struct request_queue *q, struct bio *bio,
 
 	__blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
 			bio_op(bio), bio->bi_opf, what, error, 0, NULL,
-			blk_trace_bio_get_cgid(q, bio));
+			blk_trace_bio_get_cgid(bio_q(bio), bio));
 	rcu_read_unlock();
 }
 
-static void blk_add_trace_bio_bounce(void *ignore,
-				     struct request_queue *q, struct bio *bio)
+static void blk_add_trace_bio_bounce(void *ignore, struct bio *bio)
 {
-	blk_add_trace_bio(q, bio, BLK_TA_BOUNCE, 0);
+	blk_add_trace_bio(bio, BLK_TA_BOUNCE, 0);
 }
 
-static void blk_add_trace_bio_complete(void *ignore,
-				       struct request_queue *q, struct bio *bio)
+static void blk_add_trace_bio_complete(void *ignore, struct request_queue *q,
+				       struct bio *bio)
 {
-	blk_add_trace_bio(q, bio, BLK_TA_COMPLETE,
-			  blk_status_to_errno(bio->bi_status));
+	struct blk_trace *bt;
+
+	rcu_read_lock();
+	bt = rcu_dereference(q->blk_trace);
+	if (likely(!bt)) {
+		rcu_read_unlock();
+		return;
+	}
+
+	__blk_add_trace(bt, bio->bi_iter.bi_sector, bio->bi_iter.bi_size,
+			bio_op(bio), bio->bi_opf, BLK_TA_COMPLETE,
+			blk_status_to_errno(bio->bi_status), 0, NULL,
+			blk_trace_bio_get_cgid(q, bio));
+	rcu_read_unlock();
 }
 
-static void blk_add_trace_bio_backmerge(void *ignore,
-					struct request_queue *q,
-					struct request *rq,
-					struct bio *bio)
+static void blk_add_trace_bio_backmerge(void *ignore, struct bio *bio)
 {
-	blk_add_trace_bio(q, bio, BLK_TA_BACKMERGE, 0);
+	blk_add_trace_bio(bio, BLK_TA_BACKMERGE, 0);
 }
 
-static void blk_add_trace_bio_frontmerge(void *ignore,
-					 struct request_queue *q,
-					 struct request *rq,
-					 struct bio *bio)
+static void blk_add_trace_bio_frontmerge(void *ignore, struct bio *bio)
 {
-	blk_add_trace_bio(q, bio, BLK_TA_FRONTMERGE, 0);
+	blk_add_trace_bio(bio, BLK_TA_FRONTMERGE, 0);
 }
 
-static void blk_add_trace_bio_queue(void *ignore,
-				    struct request_queue *q, struct bio *bio)
+static void blk_add_trace_bio_queue(void *ignore, struct bio *bio)
 {
-	blk_add_trace_bio(q, bio, BLK_TA_QUEUE, 0);
+	blk_add_trace_bio(bio, BLK_TA_QUEUE, 0);
 }
 
 static void blk_add_trace_getrq(void *ignore,
-				struct request_queue *q,
 				struct bio *bio, int rw)
 {
 	if (bio)
-		blk_add_trace_bio(q, bio, BLK_TA_GETRQ, 0);
+		blk_add_trace_bio(bio, BLK_TA_GETRQ, 0);
 	else {
 		struct blk_trace *bt;
 
 		rcu_read_lock();
-		bt = rcu_dereference(q->blk_trace);
+		bt = rcu_dereference(bio_q(bio)->blk_trace);
 		if (bt)
 			__blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_GETRQ, 0, 0,
 					NULL, 0);
@@ -965,17 +967,15 @@ static void blk_add_trace_getrq(void *ignore,
 }
 
-static void blk_add_trace_sleeprq(void *ignore,
-				  struct request_queue *q,
-				  struct bio *bio, int rw)
+static void blk_add_trace_sleeprq(void *ignore, struct bio *bio, int rw)
 {
 	if (bio)
-		blk_add_trace_bio(q, bio, BLK_TA_SLEEPRQ, 0);
+		blk_add_trace_bio(bio, BLK_TA_SLEEPRQ, 0);
 	else {
 		struct blk_trace *bt;
 
 		rcu_read_lock();
-		bt = rcu_dereference(q->blk_trace);
+		bt = rcu_dereference(bio_q(bio)->blk_trace);
 		if (bt)
 			__blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_SLEEPRQ,
 					0, 0, NULL, 0);
@@ -1015,14 +1015,12 @@ static void blk_add_trace_unplug(void *ignore, struct request_queue *q,
 	rcu_read_unlock();
 }
 
-static void blk_add_trace_split(void *ignore,
-				struct request_queue *q, struct bio *bio,
-				unsigned int pdu)
+static void blk_add_trace_split(void *ignore, struct bio *bio, unsigned int pdu)
 {
 	struct blk_trace *bt;
 
 	rcu_read_lock();
-	bt = rcu_dereference(q->blk_trace);
+	bt = rcu_dereference(bio_q(bio)->blk_trace);
 	if (bt) {
 		__be64 rpdu = cpu_to_be64(pdu);
 
@@ -1031,7 +1029,7 @@ static void blk_add_trace_split(void *ignore,
 				BLK_TA_SPLIT,
 				blk_status_to_errno(bio->bi_status),
 				sizeof(rpdu), &rpdu,
-				blk_trace_bio_get_cgid(q, bio));
+				blk_trace_bio_get_cgid(bio_q(bio), bio));
 	}
 	rcu_read_unlock();
 }

From patchwork Mon Jun 29 23:43:05 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11632835
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, dm-devel@redhat.com
Cc: jack@suse.czi, rdunlap@infradead.org, sagi@grimberg.me, mingo@redhat.com,
    rostedt@goodmis.org, snitzer@redhat.com, agk@redhat.com, axboe@kernel.dk,
    paolo.valente@linaro.org, ming.lei@redhat.com, bvanassche@acm.org,
    fangguoju@gmail.com, colyli@suse.de, hch@lst.de,
    Chaitanya Kulkarni
Subject: [PATCH 02/11] block: rename block_bio_merge class to block_bio
Date: Mon, 29 Jun 2020 16:43:05 -0700
Message-Id: <20200629234314.10509-3-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
References: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

There are identical TRACE_EVENTs present which can now take advantage of
the block_bio_merge trace event class. This is a prep patch that renames
block_bio_merge to block_bio so that the next patches in this series can
reuse it.

Signed-off-by: Chaitanya Kulkarni
---
 include/trace/events/block.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 237d40a48429..b5be387c4115 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -295,7 +295,7 @@ TRACE_EVENT(block_bio_complete,
 		  __entry->nr_sector, __entry->error)
 );
 
-DECLARE_EVENT_CLASS(block_bio_merge,
+DECLARE_EVENT_CLASS(block_bio,
 
 	TP_PROTO(struct bio *bio),
 
@@ -330,7 +330,7 @@ DECLARE_EVENT_CLASS(block_bio_merge,
 * Merging block request @bio to the end of an existing block request
 * in queue.
 */
-DEFINE_EVENT(block_bio_merge, block_bio_backmerge,
+DEFINE_EVENT(block_bio, block_bio_backmerge,
 
 	TP_PROTO(struct bio *bio),
 
@@ -344,7 +344,7 @@ DEFINE_EVENT(block_bio_merge, block_bio_backmerge,
 * Merging block IO operation @bio to the beginning of an existing block
 * operation in queue.
 */
-DEFINE_EVENT(block_bio_merge, block_bio_frontmerge,
+DEFINE_EVENT(block_bio, block_bio_frontmerge,
 
 	TP_PROTO(struct bio *bio),

From patchwork Mon Jun 29 23:43:06 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11632837
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, dm-devel@redhat.com
Cc: jack@suse.czi, rdunlap@infradead.org, sagi@grimberg.me, mingo@redhat.com,
    rostedt@goodmis.org, snitzer@redhat.com, agk@redhat.com, axboe@kernel.dk,
    paolo.valente@linaro.org, ming.lei@redhat.com, bvanassche@acm.org,
    fangguoju@gmail.com, colyli@suse.de, hch@lst.de, Chaitanya Kulkarni
Subject: [PATCH 03/11] block: use block_bio event class for bio_queue
Date: Mon, 29 Jun 2020 16:43:06 -0700
Message-Id: <20200629234314.10509-4-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
References: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
X-Mailing-List: linux-block@vger.kernel.org

Remove the open-coded block_bio_queue trace event and define it with the
block_bio event class, with which it shares its code.
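The deduplication these patches rely on can be sketched outside the kernel. The block below is a hypothetical userspace analogy of the `DECLARE_EVENT_CLASS()`/`DEFINE_EVENT()` pattern, not the real tracing macros: `DECLARE_CLASS`, `DEFINE_EVT`, and the generated `trace_*` helpers are invented here. The class defines the record format once; each named event is a thin instantiation, which is why identical `TRACE_EVENT`s can collapse into one class.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Userspace analogy of DECLARE_EVENT_CLASS(): define the formatting
 * logic for a family of events exactly once. */
#define DECLARE_CLASS(cls)						\
	static int cls##_emit(const char *event, long sector,		\
			      char *out, size_t len)			\
	{								\
		return snprintf(out, len, "%s: sector=%ld",		\
				event, sector);				\
	}

/* Analogy of DEFINE_EVENT(): a named event is just an instantiation
 * of the class; no format or assignment logic is repeated. */
#define DEFINE_EVT(cls, name)						\
	static int trace_##name(long sector, char *out, size_t len)	\
	{								\
		return cls##_emit(#name, sector, out, len);		\
	}

DECLARE_CLASS(block_bio)
DEFINE_EVT(block_bio, block_bio_backmerge)
DEFINE_EVT(block_bio, block_bio_frontmerge)
DEFINE_EVT(block_bio, block_bio_queue)
```

Adding a fourth event costs one `DEFINE_EVT` line, mirroring how the series converts `block_bio_queue` and `block_bio_bounce` to the shared class.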
Signed-off-by: Chaitanya Kulkarni --- include/trace/events/block.h | 26 +++----------------------- 1 file changed, 3 insertions(+), 23 deletions(-) diff --git a/include/trace/events/block.h b/include/trace/events/block.h index b5be387c4115..5e9f46469f50 100644 --- a/include/trace/events/block.h +++ b/include/trace/events/block.h @@ -357,32 +357,12 @@ DEFINE_EVENT(block_bio, block_bio_frontmerge, * * About to place the block IO operation @bio into queue. */ -TRACE_EVENT(block_bio_queue, - TP_PROTO(struct bio *bio), - - TP_ARGS(bio), - - TP_STRUCT__entry( - __field( dev_t, dev ) - __field( sector_t, sector ) - __field( unsigned int, nr_sector ) - __array( char, rwbs, RWBS_LEN ) - __array( char, comm, TASK_COMM_LEN ) - ), +DEFINE_EVENT(block_bio, block_bio_queue, - TP_fast_assign( - __entry->dev = bio_dev(bio); - __entry->sector = bio->bi_iter.bi_sector; - __entry->nr_sector = bio_sectors(bio); - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); - memcpy(__entry->comm, current->comm, TASK_COMM_LEN); - ), + TP_PROTO(struct bio *bio), - TP_printk("%d,%d %s %llu + %u [%s]", - MAJOR(__entry->dev), MINOR(__entry->dev), __entry->rwbs, - (unsigned long long)__entry->sector, - __entry->nr_sector, __entry->comm) + TP_ARGS(bio) ); DECLARE_EVENT_CLASS(block_get_rq, From patchwork Mon Jun 29 23:43:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaitanya Kulkarni X-Patchwork-Id: 11632839 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 787CD913 for ; Mon, 29 Jun 2020 23:43:58 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6170820781 for ; Mon, 29 Jun 2020 23:43:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=wdc.com 
From: Chaitanya Kulkarni
Subject: [PATCH 04/11] block: use block_bio event class for bio_bounce
Date: Mon, 29 Jun 2020 16:43:07 -0700
Message-Id: <20200629234314.10509-5-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

Remove the block_bio_bounce trace event, which shares its code with the block_bio event class.

Signed-off-by: Chaitanya Kulkarni
---
 include/trace/events/block.h | 56 ++++++++++++------------------------
 1 file changed, 18 insertions(+), 38 deletions(-)

diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 5e9f46469f50..d7289576f1fd 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -221,44 +221,6 @@ DEFINE_EVENT(block_rq, block_rq_merge,
 	TP_ARGS(rq)
 );

-/**
- * block_bio_bounce - used bounce buffer when processing block operation
- * @bio: block operation
- *
- * A bounce buffer was used to handle the block operation @bio in queue.
- * This occurs when hardware limitations prevent a direct transfer of
- * data between the @bio data memory area and the IO device. Use of a
- * bounce buffer requires extra copying of data and decreases
- * performance.
- */
-TRACE_EVENT(block_bio_bounce,
-
-	TP_PROTO(struct bio *bio),
-
-	TP_ARGS(bio),
-
-	TP_STRUCT__entry(
-		__field( dev_t, dev )
-		__field( sector_t, sector )
-		__field( unsigned int, nr_sector )
-		__array( char, rwbs, RWBS_LEN )
-		__array( char, comm, TASK_COMM_LEN )
-	),
-
-	TP_fast_assign(
-		__entry->dev = bio_dev(bio);
-		__entry->sector = bio->bi_iter.bi_sector;
-		__entry->nr_sector = bio_sectors(bio);
-		blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size);
-		memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
-	),
-
-	TP_printk("%d,%d %s %llu + %u [%s]",
-		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->rwbs,
-		  (unsigned long long)__entry->sector,
-		  __entry->nr_sector, __entry->comm)
-);
-
 /**
  * block_bio_complete - completed all work on the block operation
  * @q: queue holding the block operation
@@ -351,6 +313,24 @@ DEFINE_EVENT(block_bio, block_bio_frontmerge,
 	TP_ARGS(bio)
 );

+/**
+ * block_bio_bounce - used bounce buffer when processing block operation
+ * @bio: block operation
+ *
+ * A bounce buffer was used to handle the block operation @bio in queue.
+ * This occurs when hardware limitations prevent a direct transfer of
+ * data between the @bio data memory area and the IO device. Use of a
+ * bounce buffer requires extra copying of data and decreases
+ * performance.
+ */
+
+DEFINE_EVENT(block_bio, block_bio_bounce,
+
+	TP_PROTO(struct bio *bio),
+
+	TP_ARGS(bio)
+);
+
 /**
  * block_bio_queue - putting new block IO operation in queue
  * @bio: new block operation

From patchwork Mon Jun 29 23:43:08 2020
X-Patchwork-Id: 11632841
From: Chaitanya Kulkarni
Subject: [PATCH 05/11] block: get rid of the trace rq insert wrapper
Date: Mon, 29 Jun 2020 16:43:08 -0700
Message-Id: <20200629234314.10509-6-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

Get rid of the wrapper around trace_block_rq_insert() and call the tracepoint directly.
Signed-off-by: Chaitanya Kulkarni
---
 block/bfq-iosched.c   | 4 +++-
 block/blk-mq-sched.c  | 6 ------
 block/blk-mq-sched.h  | 1 -
 block/kyber-iosched.c | 4 +++-
 block/mq-deadline.c   | 4 +++-
 5 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 50c8f034c01c..e2b9b700ed34 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -125,6 +125,8 @@
 #include
 #include
+#include
+
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-tag.h"
@@ -5507,7 +5509,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,

 	spin_unlock_irq(&bfqd->lock);

-	blk_mq_sched_request_inserted(rq);
+	trace_block_rq_insert(rq);

 	spin_lock_irq(&bfqd->lock);
 	bfqq = bfq_init_rq(rq);

diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index a3cade16ef80..20b6a59fbd5a 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -407,12 +407,6 @@ bool blk_mq_sched_try_insert_merge(struct request_queue *q, struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_mq_sched_try_insert_merge);

-void blk_mq_sched_request_inserted(struct request *rq)
-{
-	trace_block_rq_insert(rq);
-}
-EXPORT_SYMBOL_GPL(blk_mq_sched_request_inserted);
-
 static bool blk_mq_sched_bypass_insert(struct blk_mq_hw_ctx *hctx,
 				       bool has_sched, struct request *rq)

diff --git a/block/blk-mq-sched.h b/block/blk-mq-sched.h
index 126021fc3a11..04c40c695bf0 100644
--- a/block/blk-mq-sched.h
+++ b/block/blk-mq-sched.h
@@ -10,7 +10,6 @@ void blk_mq_sched_free_hctx_data(struct request_queue *q,

 void blk_mq_sched_assign_ioc(struct request *rq);

-void blk_mq_sched_request_inserted(struct request *rq);
 bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs, struct request **merged_request);
 bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,

diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index a38c5ab103d1..e42d78deee90 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -13,6 +13,8 @@
 #include
 #include
+#include
+
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-debugfs.h"
@@ -602,7 +604,7 @@ static void kyber_insert_requests(struct blk_mq_hw_ctx *hctx,
 			list_move_tail(&rq->queuelist, head);
 			sbitmap_set_bit(&khd->kcq_map[sched_domain],
 					rq->mq_ctx->index_hw[hctx->type]);
-			blk_mq_sched_request_inserted(rq);
+			trace_block_rq_insert(rq);
 			spin_unlock(&kcq->lock);
 		}
 }

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index b57470e154c8..f3631a287466 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -18,6 +18,8 @@
 #include
 #include
+#include
+
 #include "blk.h"
 #include "blk-mq.h"
 #include "blk-mq-debugfs.h"
@@ -496,7 +498,7 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,

 	if (blk_mq_sched_try_insert_merge(q, rq))
 		return;

-	blk_mq_sched_request_inserted(rq);
+	trace_block_rq_insert(rq);

 	if (at_head || blk_rq_is_passthrough(rq)) {
 		if (at_head)

From patchwork Mon Jun 29 23:43:09 2020
X-Patchwork-Id: 11632843
From: Chaitanya Kulkarni
Subject: [PATCH 06/11] block: remove extra param for trace_block_getrq()
Date: Mon, 29 Jun 2020 16:43:09 -0700
Message-Id: <20200629234314.10509-7-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

Remove the extra parameter for trace_block_getrq(), since the I/O direction can be derived from bio->bi_opf.

Signed-off-by: Chaitanya Kulkarni
---
 block/blk-mq.c               |  2 +-
 include/trace/events/block.h | 14 ++++++--------
 kernel/trace/blktrace.c      | 13 ++++++-------
 3 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index dbb98b2bc868..d66bb299d4ae 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2111,7 +2111,7 @@ blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		goto queue_exit;
 	}

-	trace_block_getrq(bio, bio->bi_opf);
+	trace_block_getrq(bio);

 	rq_qos_track(q, rq, bio);

diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index d7289576f1fd..3d8923062fc4 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -347,9 +347,9 @@ DEFINE_EVENT(block_bio, block_bio_queue,

 DECLARE_EVENT_CLASS(block_get_rq,

-	TP_PROTO(struct bio *bio, int rw),
+	TP_PROTO(struct bio *bio),

-	TP_ARGS(bio, rw),
+	TP_ARGS(bio),

 	TP_STRUCT__entry(
 		__field( dev_t, dev )
@@ -377,22 +377,20 @@ DECLARE_EVENT_CLASS(block_get_rq,
 /**
  * block_getrq - get a free request entry in queue for block IO operations
  * @bio: pending block IO operation (can be %NULL)
- * @rw: low bit indicates a read (%0) or a write (%1)
  *
  * A request struct for queue has been allocated to handle the
  * block IO operation @bio.
  */
 DEFINE_EVENT(block_get_rq, block_getrq,

-	TP_PROTO(struct bio *bio, int rw),
+	TP_PROTO(struct bio *bio),

-	TP_ARGS(bio, rw)
+	TP_ARGS(bio)
 );

 /**
  * block_sleeprq - waiting to get a free request entry in queue for block IO operation
  * @bio: pending block IO operation (can be %NULL)
- * @rw: low bit indicates a read (%0) or a write (%1)
  *
  * In the case where a request struct cannot be provided for queue
  * the process needs to wait for an request struct to become
@@ -401,9 +399,9 @@ DEFINE_EVENT(block_get_rq, block_getrq,
  */
 DEFINE_EVENT(block_get_rq, block_sleeprq,

-	TP_PROTO(struct bio *bio, int rw),
+	TP_PROTO(struct bio *bio),

-	TP_ARGS(bio, rw)
+	TP_ARGS(bio)
 );

 /**

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 7b72781a591d..1d36e6153ab8 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -949,8 +949,7 @@ static void blk_add_trace_bio_queue(void *ignore, struct bio *bio)
 	blk_add_trace_bio(bio, BLK_TA_QUEUE, 0);
 }

-static void blk_add_trace_getrq(void *ignore,
-				struct bio *bio, int rw)
+static void blk_add_trace_getrq(void *ignore, struct bio *bio)
 {
 	if (bio)
 		blk_add_trace_bio(bio, BLK_TA_GETRQ, 0);
@@ -960,14 +959,14 @@ static void blk_add_trace_getrq(void *ignore,
 		rcu_read_lock();
 		bt = rcu_dereference(bio_q(bio)->blk_trace);
 		if (bt)
-			__blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_GETRQ, 0, 0,
-					NULL, 0);
+			__blk_add_trace(bt, 0, 0, bio->bi_opf, 0,
+					BLK_TA_GETRQ, 0, 0, NULL, 0);
 		rcu_read_unlock();
 	}
 }

-static void blk_add_trace_sleeprq(void *ignore, struct bio *bio, int rw)
+static void blk_add_trace_sleeprq(void *ignore, struct bio *bio)
 {
 	if (bio)
 		blk_add_trace_bio(bio, BLK_TA_SLEEPRQ, 0);
@@ -977,8 +976,8 @@ static void blk_add_trace_sleeprq(void *ignore, struct bio *bio, int rw)
 		rcu_read_lock();
 		bt = rcu_dereference(bio_q(bio)->blk_trace);
 		if (bt)
-			__blk_add_trace(bt, 0, 0, rw, 0, BLK_TA_SLEEPRQ,
-					0, 0, NULL, 0);
+			__blk_add_trace(bt, 0, 0, bio->bi_opf, 0,
+					BLK_TA_SLEEPRQ, 0, 0, NULL, 0);
 		rcu_read_unlock();
 	}
 }

From patchwork Mon Jun 29 23:43:10 2020
X-Patchwork-Id: 11632845
From: Chaitanya Kulkarni
Subject: [PATCH 07/11] block: get rid of blk_trace_request_get_cgid()
Date: Mon, 29 Jun 2020 16:43:10 -0700
Message-Id: <20200629234314.10509-8-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

Now that the earlier cleanups are in place, we can safely get rid of blk_trace_request_get_cgid() and open-code it with blk_trace_bio_get_cgid().
Signed-off-by: Chaitanya Kulkarni
---
 kernel/trace/blktrace.c | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 1d36e6153ab8..bb864a50307f 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -804,15 +804,6 @@ u64 blk_trace_bio_get_cgid(struct request_queue *q, struct bio *bio)
 }
 #endif

-static u64
-blk_trace_request_get_cgid(struct request_queue *q, struct request *rq)
-{
-	if (!rq->bio)
-		return 0;
-	/* Use the first bio */
-	return blk_trace_bio_get_cgid(q, rq->bio);
-}
-
 /*
  * blktrace probes
  */
@@ -854,32 +845,32 @@ static void blk_add_trace_rq(struct request *rq, int error,
 static void blk_add_trace_rq_insert(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_INSERT,
-			 blk_trace_request_get_cgid(rq->q, rq));
+			 rq->bio ? blk_trace_bio_get_cgid(rq->q, rq->bio) : 0);
 }

 static void blk_add_trace_rq_issue(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_ISSUE,
-			 blk_trace_request_get_cgid(rq->q, rq));
+			 rq->bio ? blk_trace_bio_get_cgid(rq->q, rq->bio) : 0);
 }

 static void blk_add_trace_rq_merge(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_BACKMERGE,
-			 blk_trace_request_get_cgid(rq->q, rq));
+			 rq->bio ? blk_trace_bio_get_cgid(rq->q, rq->bio) : 0);
 }

 static void blk_add_trace_rq_requeue(void *ignore, struct request *rq)
 {
 	blk_add_trace_rq(rq, 0, blk_rq_bytes(rq), BLK_TA_REQUEUE,
-			 blk_trace_request_get_cgid(rq->q, rq));
+			 rq->bio ? blk_trace_bio_get_cgid(rq->q, rq->bio) : 0);
 }

 static void blk_add_trace_rq_complete(void *ignore, struct request *rq,
 				      int error, unsigned int nr_bytes)
 {
 	blk_add_trace_rq(rq, error, nr_bytes, BLK_TA_COMPLETE,
-			 blk_trace_request_get_cgid(rq->q, rq));
+			 rq->bio ? blk_trace_bio_get_cgid(rq->q, rq->bio) : 0);
 }

 /**
@@ -1105,7 +1096,8 @@ static void blk_add_trace_rq_remap(void *ignore,

 	__blk_add_trace(bt, blk_rq_pos(rq), blk_rq_bytes(rq),
 			rq_data_dir(rq), 0, BLK_TA_REMAP, 0,
-			sizeof(r), &r, blk_trace_request_get_cgid(q, rq));
+			sizeof(r), &r,
+			rq->bio ? blk_trace_bio_get_cgid(q, rq->bio) : 0);
 	rcu_read_unlock();
 }

@@ -1134,8 +1126,8 @@ void blk_add_driver_data(struct request_queue *q,
 	}

 	__blk_add_trace(bt, blk_rq_trace_sector(rq), blk_rq_bytes(rq), 0, 0,
-			BLK_TA_DRV_DATA, 0, len, data,
-			blk_trace_request_get_cgid(q, rq));
+			BLK_TA_DRV_DATA, 0, len, data,
+			rq->bio ? blk_trace_bio_get_cgid(q, rq->bio) : 0);
 	rcu_read_unlock();
 }
 EXPORT_SYMBOL_GPL(blk_add_driver_data);

From patchwork Mon Jun 29 23:43:11 2020
X-Patchwork-Id: 11632847
From: Chaitanya Kulkarni
Subject: [PATCH 08/11] block: fix the comments in the trace event block.h
Date: Mon, 29 Jun 2020 16:43:11 -0700
Message-Id: <20200629234314.10509-9-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

This is a pure cleanup patch that fixes the comments in the trace event header for the block_rq_issue() and block_rq_merge() events.

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Christoph Hellwig
---
 include/trace/events/block.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 3d8923062fc4..6a74fb9f4dba 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -195,7 +195,7 @@ DEFINE_EVENT(block_rq, block_rq_insert,

 /**
  * block_rq_issue - issue pending block IO request operation to device driver
- * @rq: block IO operation operation request
+ * @rq: block IO operation request
  *
  * Called when block operation request @rq from queue @q is sent to a
  * device driver for processing.
@@ -209,7 +209,7 @@ DEFINE_EVENT(block_rq, block_rq_issue,

 /**
  * block_rq_merge - merge request with another one in the elevator
- * @rq: block IO operation operation request
+ * @rq: block IO operation request
  *
  * Called when block operation request @rq from queue is merged to another
  * request queued in the elevator.
From patchwork Mon Jun 29 23:43:12 2020
X-Patchwork-Id: 11632849
eONqJ3wNtfRnGb/weDcfWkA+K9qAlE+rDVHcJxFQb+Q2P43qNFSiRFlktVhyU9MyZl/1oiOESn u4E= X-IronPort-AV: E=Sophos;i="5.75,296,1589212800"; d="scan'208";a="141220125" Received: from uls-op-cesaip01.wdc.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 30 Jun 2020 07:44:45 +0800 IronPort-SDR: wSjYGKqdieAUkB6FW14pV/7PzIO1rpK5rPdzjBIYUp7jD51WdLgGCCqBEf9qepwqNrHF5ttLQQ MdKMCay1rgwG8ciIxzqM/feG2ZNT8DGhU= Received: from uls-op-cesaip01.wdc.com ([10.248.3.36]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Jun 2020 16:33:38 -0700 IronPort-SDR: eJVPUTuujTN8U32pyHeiRa+NyV6u0EjMR7qOqnZBGn5achSWGJo2gq/hQVknIr/jytb05lKxT/ bIdg4XHqyFag== WDCIronportException: Internal Received: from iouring.labspan.wdc.com (HELO iouring.sc.wdc.com) ([10.6.138.107]) by uls-op-cesaip01.wdc.com with ESMTP; 29 Jun 2020 16:44:45 -0700 From: Chaitanya Kulkarni To: linux-block@vger.kernel.org, dm-devel@redhat.com Cc: jack@suse.czi, rdunlap@infradead.org, sagi@grimberg.me, mingo@redhat.com, rostedt@goodmis.org, snitzer@redhat.com, agk@redhat.com, axboe@kernel.dk, paolo.valente@linaro.org, ming.lei@redhat.com, bvanassche@acm.org, fangguoju@gmail.com, colyli@suse.de, hch@lst.de, Chaitanya Kulkarni Subject: [PATCH 09/11] block: remove unsed param in blk_fill_rwbs() Date: Mon, 29 Jun 2020 16:43:12 -0700 Message-Id: <20200629234314.10509-10-chaitanya.kulkarni@wdc.com> X-Mailer: git-send-email 2.26.0 In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com> References: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com> MIME-Version: 1.0 Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org The last parameter for the function blk_fill_rwbs() was added in 5782138e47 ("tracing/events: convert block trace points to TRACE_EVENT()") in order to signal read request and use of that parameter was replaced with using switch case REQ_OP_READ with 1b9a9ab78b0 ("blktrace: use op accessors"), but the 
parameter was never removed. This patch removes unused parameter which also allows us to merge existing trace points in the following patch. Signed-off-by: Chaitanya Kulkarni --- include/linux/blktrace_api.h | 2 +- include/trace/events/bcache.h | 10 +++++----- include/trace/events/block.h | 19 +++++++++---------- kernel/trace/blktrace.c | 2 +- 4 files changed, 16 insertions(+), 17 deletions(-) diff --git a/include/linux/blktrace_api.h b/include/linux/blktrace_api.h index 3b6ff5902edc..80123eebf005 100644 --- a/include/linux/blktrace_api.h +++ b/include/linux/blktrace_api.h @@ -120,7 +120,7 @@ struct compat_blk_user_trace_setup { #endif -extern void blk_fill_rwbs(char *rwbs, unsigned int op, int bytes); +extern void blk_fill_rwbs(char *rwbs, unsigned int op); static inline sector_t blk_rq_trace_sector(struct request *rq) { diff --git a/include/trace/events/bcache.h b/include/trace/events/bcache.h index 0bddea663b3b..41415637e92c 100644 --- a/include/trace/events/bcache.h +++ b/include/trace/events/bcache.h @@ -28,7 +28,7 @@ DECLARE_EVENT_CLASS(bcache_request, __entry->sector = bio->bi_iter.bi_sector; __entry->orig_sector = bio->bi_iter.bi_sector - 16; __entry->nr_sector = bio->bi_iter.bi_size >> 9; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); ), TP_printk("%d,%d %s %llu + %u (from %d,%d @ %llu)", @@ -102,7 +102,7 @@ DECLARE_EVENT_CLASS(bcache_bio, __entry->dev = bio_dev(bio); __entry->sector = bio->bi_iter.bi_sector; __entry->nr_sector = bio->bi_iter.bi_size >> 9; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); ), TP_printk("%d,%d %s %llu + %u", @@ -137,7 +137,7 @@ TRACE_EVENT(bcache_read, __entry->dev = bio_dev(bio); __entry->sector = bio->bi_iter.bi_sector; __entry->nr_sector = bio->bi_iter.bi_size >> 9; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); __entry->cache_hit = 
hit; __entry->bypass = bypass; ), @@ -168,7 +168,7 @@ TRACE_EVENT(bcache_write, __entry->inode = inode; __entry->sector = bio->bi_iter.bi_sector; __entry->nr_sector = bio->bi_iter.bi_size >> 9; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); __entry->writeback = writeback; __entry->bypass = bypass; ), @@ -238,7 +238,7 @@ TRACE_EVENT(bcache_journal_write, __entry->sector = bio->bi_iter.bi_sector; __entry->nr_sector = bio->bi_iter.bi_size >> 9; __entry->nr_keys = keys; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); ), TP_printk("%d,%d %s %llu + %u keys %u", diff --git a/include/trace/events/block.h b/include/trace/events/block.h index 6a74fb9f4dba..d191d2cd1070 100644 --- a/include/trace/events/block.h +++ b/include/trace/events/block.h @@ -89,7 +89,7 @@ TRACE_EVENT(block_rq_requeue, __entry->sector = blk_rq_trace_sector(rq); __entry->nr_sector = blk_rq_trace_nr_sectors(rq); - blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq)); + blk_fill_rwbs(__entry->rwbs, rq->cmd_flags); __get_str(cmd)[0] = '\0'; ), @@ -132,7 +132,7 @@ TRACE_EVENT(block_rq_complete, __entry->nr_sector = nr_bytes >> 9; __entry->error = error; - blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, nr_bytes); + blk_fill_rwbs(__entry->rwbs, rq->cmd_flags); __get_str(cmd)[0] = '\0'; ), @@ -165,7 +165,7 @@ DECLARE_EVENT_CLASS(block_rq, __entry->nr_sector = blk_rq_trace_nr_sectors(rq); __entry->bytes = blk_rq_bytes(rq); - blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq)); + blk_fill_rwbs(__entry->rwbs, rq->cmd_flags); __get_str(cmd)[0] = '\0'; memcpy(__entry->comm, current->comm, TASK_COMM_LEN); ), @@ -248,7 +248,7 @@ TRACE_EVENT(block_bio_complete, __entry->sector = bio->bi_iter.bi_sector; __entry->nr_sector = bio_sectors(bio); __entry->error = blk_status_to_errno(bio->bi_status); - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + 
blk_fill_rwbs(__entry->rwbs, bio->bi_opf); ), TP_printk("%d,%d %s %llu + %u [%d]", @@ -275,7 +275,7 @@ DECLARE_EVENT_CLASS(block_bio, __entry->dev = bio_dev(bio); __entry->sector = bio->bi_iter.bi_sector; __entry->nr_sector = bio_sectors(bio); - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); memcpy(__entry->comm, current->comm, TASK_COMM_LEN); ), @@ -363,8 +363,7 @@ DECLARE_EVENT_CLASS(block_get_rq, __entry->dev = bio ? bio_dev(bio) : 0; __entry->sector = bio ? bio->bi_iter.bi_sector : 0; __entry->nr_sector = bio ? bio_sectors(bio) : 0; - blk_fill_rwbs(__entry->rwbs, - bio ? bio->bi_opf : 0, __entry->nr_sector); + blk_fill_rwbs(__entry->rwbs, bio ? bio->bi_opf : 0); memcpy(__entry->comm, current->comm, TASK_COMM_LEN); ), @@ -492,7 +491,7 @@ TRACE_EVENT(block_split, __entry->dev = bio_dev(bio); __entry->sector = bio->bi_iter.bi_sector; __entry->new_sector = new_sector; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); memcpy(__entry->comm, current->comm, TASK_COMM_LEN); ), @@ -535,7 +534,7 @@ TRACE_EVENT(block_bio_remap, __entry->nr_sector = bio_sectors(bio); __entry->old_dev = dev; __entry->old_sector = from; - blk_fill_rwbs(__entry->rwbs, bio->bi_opf, bio->bi_iter.bi_size); + blk_fill_rwbs(__entry->rwbs, bio->bi_opf); ), TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu", @@ -581,7 +580,7 @@ TRACE_EVENT(block_rq_remap, __entry->old_dev = dev; __entry->old_sector = from; __entry->nr_bios = blk_rq_count_bios(rq); - blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq)); + blk_fill_rwbs(__entry->rwbs, rq->cmd_flags); ), TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu %u", diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c index bb864a50307f..62cb6197ce1f 100644 --- a/kernel/trace/blktrace.c +++ b/kernel/trace/blktrace.c @@ -1950,7 +1950,7 @@ void blk_trace_remove_sysfs(struct device *dev) #ifdef CONFIG_EVENT_TRACING -void 
blk_fill_rwbs(char *rwbs, unsigned int op, int bytes) +void blk_fill_rwbs(char *rwbs, unsigned int op) { int i = 0; From patchwork Mon Jun 29 23:43:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaitanya Kulkarni X-Patchwork-Id: 11632851 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 984E3913 for ; Mon, 29 Jun 2020 23:44:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7E8DD20781 for ; Mon, 29 Jun 2020 23:44:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=wdc.com header.i=@wdc.com header.b="VaF30aVy" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727078AbgF2Xoz (ORCPT ); Mon, 29 Jun 2020 19:44:55 -0400 Received: from esa4.hgst.iphmx.com ([216.71.154.42]:41344 "EHLO esa4.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726854AbgF2Xoy (ORCPT ); Mon, 29 Jun 2020 19:44:54 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1593474293; x=1625010293; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=G8cNsqQKCATReJdtqjxr6XXjJG7W1w3O1NvqRQNG84Y=; b=VaF30aVyYrsUcxO+lHhvByfO+SeFN3VMnJddHDpuLhsX/hp2GOkrQFAY Zgdu3IfLypdQlkbD0qZts/sSavUb34Bek0KHoGwVugI8NMwc4tKlL6Zd7 D7fTjsbv96/iyH7PapZyEtZMLahhCgTKlz86Fu+lahWXX3YNBtMZrPQy8 XObIvL70P0GoOC2CKsITLuPjHqX6/N4l0rnN7YjZlVgGM3t1i0Edrt26O Bem3E7MPbtVTRiQ1SSPFd6+k5ARXlRSzjLyAAur3vHWhFuY8jjdh63ilH yowvrloa5a35nABWhFz5ABjXvKs7bX189np9AfWOGVC9KAo+/tNjZh0Ii g==; IronPort-SDR: TjA3YxFW1Z67Ch4L91EJ5LtAgVFHWvgT62pszMujQEdAG4n8xVjNjrMzDnJcdVvNaXhCcXHGeP iJhZjtDvE06pVT/Ex7X4LgpfVnLCF3q5d2/2ofIAvSGtkyFYXbH1lxN1jVKLlD6wXBb9RKL/yf 
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11632851
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, dm-devel@redhat.com
Cc: jack@suse.cz, rdunlap@infradead.org, sagi@grimberg.me, mingo@redhat.com,
    rostedt@goodmis.org, snitzer@redhat.com, agk@redhat.com, axboe@kernel.dk,
    paolo.valente@linaro.org, ming.lei@redhat.com, bvanassche@acm.org,
    fangguoju@gmail.com, colyli@suse.de, hch@lst.de, Chaitanya Kulkarni
Subject: [PATCH 10/11] block: use block_bio class for getrq and sleeprq
Date: Mon, 29 Jun 2020 16:43:13 -0700
Message-Id: <20200629234314.10509-11-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
References: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

The only difference between the block_get_rq and block_bio event classes was
the last parameter passed to blk_fill_rwbs(): __entry->nr_sector and
bio->bi_iter.bi_size respectively.
Since that is no longer the case, replace the block_get_rq class with
block_bio for the block_getrq and block_sleeprq events, and adjust the
block_bio class to handle the NULL bio case.

Signed-off-by: Chaitanya Kulkarni
---
 include/trace/events/block.h | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index d191d2cd1070..21f1daaf012b 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -272,11 +272,19 @@ DECLARE_EVENT_CLASS(block_bio,
 	),

 	TP_fast_assign(
-		__entry->dev		= bio_dev(bio);
-		__entry->sector		= bio->bi_iter.bi_sector;
-		__entry->nr_sector	= bio_sectors(bio);
-		blk_fill_rwbs(__entry->rwbs, bio->bi_opf);
-		memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
+		if (bio) {
+			__entry->dev		= bio_dev(bio);
+			__entry->sector		= bio->bi_iter.bi_sector;
+			__entry->nr_sector	= bio_sectors(bio);
+			blk_fill_rwbs(__entry->rwbs, bio->bi_opf);
+			memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
+		} else {
+			__entry->dev		= 0;
+			__entry->sector		= 0;
+			__entry->nr_sector	= 0;
+			blk_fill_rwbs(__entry->rwbs, 0);
+			memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
+		}
 	),

 	TP_printk("%d,%d %s %llu + %u [%s]",
@@ -380,7 +388,7 @@ DECLARE_EVENT_CLASS(block_get_rq,
 * A request struct for queue has been allocated to handle the
 * block IO operation @bio.
 */
-DEFINE_EVENT(block_get_rq, block_getrq,
+DEFINE_EVENT(block_bio, block_getrq,

 	TP_PROTO(struct bio *bio),

@@ -396,7 +404,7 @@ DEFINE_EVENT(block_get_rq, block_getrq,
 * available. This tracepoint event is generated each time the
 * process goes to sleep waiting for request struct become available.
 */
-DEFINE_EVENT(block_get_rq, block_sleeprq,
+DEFINE_EVENT(block_bio, block_sleeprq,

 	TP_PROTO(struct bio *bio),

From patchwork Mon Jun 29 23:43:14 2020
X-Patchwork-Submitter: Chaitanya Kulkarni
X-Patchwork-Id: 11632853
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, dm-devel@redhat.com
Cc: jack@suse.cz, rdunlap@infradead.org, sagi@grimberg.me, mingo@redhat.com,
    rostedt@goodmis.org, snitzer@redhat.com, agk@redhat.com, axboe@kernel.dk,
    paolo.valente@linaro.org, ming.lei@redhat.com, bvanassche@acm.org,
    fangguoju@gmail.com, colyli@suse.de, hch@lst.de, Chaitanya Kulkarni
Subject: [PATCH 11/11] block: remove block_get_rq event class
Date: Mon, 29 Jun 2020 16:43:14 -0700
Message-Id: <20200629234314.10509-12-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>
References: <20200629234314.10509-1-chaitanya.kulkarni@wdc.com>

Now that the block_get_rq event class has no users, remove it.
Signed-off-by: Chaitanya Kulkarni
---
 include/trace/events/block.h | 28 ----------------------------
 1 file changed, 28 deletions(-)

diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 21f1daaf012b..dc1834250baa 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -353,34 +353,6 @@ DEFINE_EVENT(block_bio, block_bio_queue,
 	TP_ARGS(bio)
 );

-DECLARE_EVENT_CLASS(block_get_rq,
-
-	TP_PROTO(struct bio *bio),
-
-	TP_ARGS(bio),
-
-	TP_STRUCT__entry(
-		__field( dev_t,		dev			)
-		__field( sector_t,	sector			)
-		__field( unsigned int,	nr_sector		)
-		__array( char,		rwbs,	RWBS_LEN	)
-		__array( char,		comm,	TASK_COMM_LEN	)
-	),
-
-	TP_fast_assign(
-		__entry->dev		= bio ? bio_dev(bio) : 0;
-		__entry->sector		= bio ? bio->bi_iter.bi_sector : 0;
-		__entry->nr_sector	= bio ? bio_sectors(bio) : 0;
-		blk_fill_rwbs(__entry->rwbs, bio ? bio->bi_opf : 0);
-		memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
-	),
-
-	TP_printk("%d,%d %s %llu + %u [%s]",
-		  MAJOR(__entry->dev), MINOR(__entry->dev), __entry->rwbs,
-		  (unsigned long long)__entry->sector,
-		  __entry->nr_sector, __entry->comm)
-);
-
 /**
  * block_getrq - get a free request entry in queue for block IO operations
  * @bio: pending block IO operation (can be %NULL)