Message ID | 11ebb046bf422facf6e438672799306b80038173.1569830385.git.asml.silence@gmail.com
---|---
State | New, archived
Series | [1/1] blk-mq: reuse code in blk_mq_check_inflight*()
On Mon, Sep 30, 2019 at 11:27:32AM +0300, Pavel Begunkov (Silence) wrote:
> From: Pavel Begunkov <asml.silence@gmail.com>
>
> 1. Reuse the same walker callback for both blk_mq_in_flight() and
> blk_mq_in_flight_rw().
>
> 2. Store inflight counters immediately in struct mq_inflight.
> It's type-safer and removes extra indirection.

You really want to split this into two patches. Part 2 looks very
sensible to me, but I don't really see how 1 is an equivalent
transformation right now. Splitting it out and writing a non-trivial
changelog might help understanding it if you think it really is
equivalent as-is.
On 9/30/2019 11:33 AM, Christoph Hellwig wrote:
> On Mon, Sep 30, 2019 at 11:27:32AM +0300, Pavel Begunkov (Silence) wrote:
>> From: Pavel Begunkov <asml.silence@gmail.com>
>>
>> 1. Reuse the same walker callback for both blk_mq_in_flight() and
>> blk_mq_in_flight_rw().
>>
>> 2. Store inflight counters immediately in struct mq_inflight.
>> It's type-safer and removes extra indirection.
>
> You really want to split this into two patches.

Good point, the diff is peculiarly aligned indeed. I will resend it.

> Part 2 looks very sensible to me, but I don't really see how 1 is an
> equivalent transformation right now. Splitting it out and writing a
> non-trivial changelog might help understanding it if you think it
> really is equivalent as-is.

blk_mq_check_inflight() increments only inflight[0].
blk_mq_check_inflight_rw() increments inflight[0] or inflight[1]
depending on the request's data direction, so summing the two counters
gives exactly what the first function returns.
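To make that equivalence argument concrete, here is a minimal userspace model (illustrative only, not kernel code; fake_request, count_slot_zero and count_by_dir are hypothetical names): counting every matching request in slot 0 gives the same total as counting per data direction and then summing both slots.

```c
#include <assert.h>
#include <stddef.h>

enum req_dir { DIR_READ = 0, DIR_WRITE = 1 };

/* Hypothetical stand-in for a request: partition plus data direction. */
struct fake_request {
	int part;           /* models rq->part        */
	enum req_dir dir;   /* models rq_data_dir(rq) */
};

/* Models the old blk_mq_check_inflight(): bump slot 0 for every match. */
static unsigned int count_slot_zero(const struct fake_request *rqs, size_t n, int part)
{
	unsigned int inflight[2] = { 0, 0 };

	for (size_t i = 0; i < n; i++)
		if (rqs[i].part == part)
			inflight[0]++;
	return inflight[0];
}

/* Models the unified callback: split the count by direction, then sum. */
static unsigned int count_by_dir(const struct fake_request *rqs, size_t n, int part)
{
	unsigned int inflight[2] = { 0, 0 };

	for (size_t i = 0; i < n; i++)
		if (rqs[i].part == part)
			inflight[rqs[i].dir]++;
	return inflight[0] + inflight[1];
}

int main(void)
{
	const struct fake_request rqs[] = {
		{ .part = 1, .dir = DIR_READ  },
		{ .part = 1, .dir = DIR_WRITE },
		{ .part = 2, .dir = DIR_WRITE },
	};

	/* Same total either way for the requested partition. */
	assert(count_slot_zero(rqs, 3, 1) == count_by_dir(rqs, 3, 1));
	return 0;
}
```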
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c4e5070c2ecd..d97181d9a3ec 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -93,7 +93,7 @@ static void blk_mq_hctx_clear_pending(struct blk_mq_hw_ctx *hctx,
 
 struct mq_inflight {
 	struct hd_struct *part;
-	unsigned int *inflight;
+	unsigned int inflight[2];
 };
 
 static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
@@ -102,45 +102,29 @@ static bool blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
 {
 	struct mq_inflight *mi = priv;
 
-	/*
-	 * index[0] counts the specific partition that was asked for.
-	 */
 	if (rq->part == mi->part)
-		mi->inflight[0]++;
+		mi->inflight[rq_data_dir(rq)]++;
 
 	return true;
 }
 
 unsigned int blk_mq_in_flight(struct request_queue *q, struct hd_struct *part)
 {
-	unsigned inflight[2];
-	struct mq_inflight mi = { .part = part, .inflight = inflight, };
+	struct mq_inflight mi = { .part = part };
 
-	inflight[0] = inflight[1] = 0;
 	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
 
-	return inflight[0];
-}
-
-static bool blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
-				     struct request *rq, void *priv,
-				     bool reserved)
-{
-	struct mq_inflight *mi = priv;
-
-	if (rq->part == mi->part)
-		mi->inflight[rq_data_dir(rq)]++;
-
-	return true;
+	return mi.inflight[0] + mi.inflight[1];
 }
 
 void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
 			 unsigned int inflight[2])
 {
-	struct mq_inflight mi = { .part = part, .inflight = inflight, };
+	struct mq_inflight mi = { .part = part };
 
-	inflight[0] = inflight[1] = 0;
-	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight_rw, &mi);
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
+	inflight[0] = mi.inflight[0];
+	inflight[1] = mi.inflight[1];
 }
 
 void blk_freeze_queue_start(struct request_queue *q)
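The other half of the patch embeds the counters in struct mq_inflight itself rather than pointing at a caller-supplied array. A small standalone sketch of that before/after pattern (ctx_old and ctx_new are hypothetical names, not the kernel structs) shows the indirection that goes away and how the array size becomes part of the type:

```c
#include <stdio.h>

struct ctx_old {
	unsigned int *inflight;     /* caller must supply and zero a 2-element array */
};

struct ctx_new {
	unsigned int inflight[2];   /* counters live in the struct; size is in the type */
};

int main(void)
{
	unsigned int scratch[2] = { 0, 0 };
	struct ctx_old o = { .inflight = scratch };   /* old: extra pointer to wire up */
	struct ctx_new n = { .inflight = { 0, 0 } };  /* new: storage comes with the struct */

	o.inflight[1]++;   /* every update goes through the pointer */
	n.inflight[1]++;   /* direct member access */

	printf("old write count: %u, new write count: %u\n", o.inflight[1], n.inflight[1]);
	return 0;
}
```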