Message ID | 20220117085455.2269760-4-yukuai3@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | blk-mq: allow hardware queue to get more tag while sharing a tag set |
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 948791ea2a3e..4b059221b265 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -352,6 +352,10 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
 	if (bt->sb.depth == 1)
 		return true;
 
+	/* Don't use fair share until some hctx failed to get a driver tag */
+	if (!atomic_read(&hctx->tags->pending_queues))
+		return true;
+
 	if (blk_mq_is_shared_tags(hctx->flags)) {
 		struct request_queue *q = hctx->queue;
If there are multiple active queues while sharing a tag set, the available
driver tags are currently divided among them as a fair share. However, we
found that this causes a performance degradation in our environment: a
virtual machine has 12 SCSI disks on the same SCSI host, and each disk is
backed by a network disk on the host machine. In the virtual machine, each
disk issues a sg io roughly every 15 seconds, which raises the number of
active queues to 12 until the disks go idle again (when blk_mq_tag_idle()
is called), and I/O performance is poor during that time because of the
shortage of driver tags.

Thus, if no hctx has ever failed to get a driver tag, do not limit the
available driver tags to the fair share. If some hctx does fail to get a
driver tag, fall back to fair share.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-mq.h | 4 ++++
 1 file changed, 4 insertions(+)
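To make the fallback behaviour concrete, here is a small user-space sketch
of the idea (not kernel code): fair-share limiting is skipped until some
queue has actually failed to get a driver tag. The struct layout, the
pending_queues/active_queues counters and the helper names below are
illustrative assumptions modelled on this series, not the real blk-mq API.

/*
 * Sketch only: fair share is enforced only after some queue has
 * failed to get a driver tag (pending_queues > 0).
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct tag_set {
	unsigned int depth;            /* total driver tags in the set */
	atomic_uint  active_queues;    /* queues that issued I/O recently */
	atomic_uint  pending_queues;   /* queues that failed to get a tag */
};

/* Rough analogue of hctx_may_queue(): may this queue take one more tag? */
static bool may_queue(struct tag_set *tags, unsigned int queue_in_flight)
{
	unsigned int users;

	/* New behaviour: no queue has ever starved, so skip fair share. */
	if (atomic_load(&tags->pending_queues) == 0)
		return true;

	users = atomic_load(&tags->active_queues);
	if (users == 0)
		return true;

	/* Fair share: each active queue gets roughly depth / users tags. */
	return queue_in_flight < (tags->depth + users - 1) / users;
}

/* Called when a queue fails to get a driver tag: enable fair share. */
static void mark_pending(struct tag_set *tags)
{
	atomic_fetch_add(&tags->pending_queues, 1);
}

int main(void)
{
	struct tag_set tags = { .depth = 32 };

	atomic_store(&tags.active_queues, 12);

	/* Before any allocation failure, a busy queue may exceed 32/12. */
	printf("no starvation yet: %d\n", may_queue(&tags, 20));   /* 1 */

	/* After some queue starves, fall back to the fair-share limit. */
	mark_pending(&tags);
	printf("after starvation:  %d\n", may_queue(&tags, 20));   /* 0 */

	return 0;
}

In this sketch, 12 queues sharing 32 tags would each be clamped to about 3
tags under fair share; the early return lets one busy queue keep all of them
until the first allocation failure, matching the intent described above.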