| Message ID | 20190919094547.67194-2-hare@suse.de |
|---|---|
| State | New, archived |
| Series | blk-mq I/O scheduling fixes |
On Thu, Sep 19, 2019 at 11:45:46AM +0200, Hannes Reinecke wrote:
> From: Hannes Reinecke <hare@suse.com>
>
> When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
> need to requeue the I/O, but adding it to the global request list
> will mess up with the passed-in request list. So re-add the request

We always add the request to hctx->dispatch_list after .queue_rq() returns
BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE, so what is being messed up?

> to the original list and leave it to the caller to handle situations
> where the list wasn't completely emptied.
>
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
>  block/blk-mq.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b038ec680e84..44ff3c1442a4 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>  		if (ret != BLK_STS_OK) {
>  			if (ret == BLK_STS_RESOURCE ||
>  					ret == BLK_STS_DEV_RESOURCE) {
> -				blk_mq_request_bypass_insert(rq,
> -							list_empty(list));
> +				list_add(list, &rq->queuelist);

This way the request (which has DONTPREP set) may be merged with another
rq or bio, and potential data corruption may result; please see commit:

    c616cbee97ae blk-mq: punt failed direct issue to dispatch list

Thanks,
Ming
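For context, the path Ming is pointing at, blk_mq_request_bypass_insert(), parks the request on hctx->dispatch, which the I/O scheduler never considers for merging. The sketch below is a rough paraphrase from memory of roughly what that helper does in the v5.4-era code, not a copy from the tree under discussion; details may differ:

	/*
	 * Approximate sketch of blk_mq_request_bypass_insert(): the request
	 * goes straight onto hctx->dispatch, bypassing the scheduler and
	 * therefore any bio/request merging.
	 */
	void blk_mq_request_bypass_insert(struct request *rq, bool run_queue)
	{
		struct blk_mq_hw_ctx *hctx = rq->mq_hctx;

		spin_lock(&hctx->lock);
		list_add_tail(&rq->queuelist, &hctx->dispatch);
		spin_unlock(&hctx->lock);

		if (run_queue)
			blk_mq_run_hw_queue(hctx, false);
	}

A request put back on the caller's list, by contrast, can flow back through the scheduler insert/merge paths, which is the hazard commit c616cbee97ae was addressing for already-prepared (DONTPREP) requests.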
On 9/19/19 11:45 AM, Hannes Reinecke wrote:
> From: Hannes Reinecke <hare@suse.com>
>
> When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
> need to requeue the I/O, but adding it to the global request list
> will mess up with the passed-in request list. So re-add the request
> to the original list and leave it to the caller to handle situations
> where the list wasn't completely emptied.
>
> Signed-off-by: Hannes Reinecke <hare@suse.com>
> ---
>  block/blk-mq.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index b038ec680e84..44ff3c1442a4 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>  		if (ret != BLK_STS_OK) {
>  			if (ret == BLK_STS_RESOURCE ||
>  					ret == BLK_STS_DEV_RESOURCE) {
> -				blk_mq_request_bypass_insert(rq,
> -							list_empty(list));
> +				list_add(list, &rq->queuelist);

Just curious, maybe the above should be "list_add(&rq->queuelist, list)".

Thanks,
Guoqing
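Guoqing's point follows from the list_add() prototype in include/linux/list.h, which takes the new entry first and the list head second. A minimal illustration, with the surrounding variables assumed from the patch context:

	/* include/linux/list.h: insert 'new' right after 'head'. */
	static inline void list_add(struct list_head *new, struct list_head *head);

	/* Put the failed request back at the front of the caller's list: */
	list_add(&rq->queuelist, list);

	/*
	 * The patch as posted, list_add(list, &rq->queuelist), would instead
	 * insert the list head itself after the request's queuelist node,
	 * which is presumably not what was intended.
	 */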
On 9/19/19 4:19 PM, Ming Lei wrote:
> On Thu, Sep 19, 2019 at 11:45:46AM +0200, Hannes Reinecke wrote:
>> From: Hannes Reinecke <hare@suse.com>
>>
>> When blk_mq_request_issue_directly() returns BLK_STS_RESOURCE we
>> need to requeue the I/O, but adding it to the global request list
>> will mess up with the passed-in request list. So re-add the request
>
> We always add the request to hctx->dispatch_list after .queue_rq() returns
> BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE, so what is being messed up?
>
>> to the original list and leave it to the caller to handle situations
>> where the list wasn't completely emptied.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.com>
>> ---
>>  block/blk-mq.c | 3 +--
>>  1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index b038ec680e84..44ff3c1442a4 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
>>  		if (ret != BLK_STS_OK) {
>>  			if (ret == BLK_STS_RESOURCE ||
>>  					ret == BLK_STS_DEV_RESOURCE) {
>> -				blk_mq_request_bypass_insert(rq,
>> -							list_empty(list));
>> +				list_add(list, &rq->queuelist);
>
> This way the request (which has DONTPREP set) may be merged with another
> rq or bio, and potential data corruption may result; please see commit:
>
>     c616cbee97ae blk-mq: punt failed direct issue to dispatch list
>
Ok.

What triggered this patch is this code:

insert:
	if (bypass_insert)
		return BLK_STS_RESOURCE;

	blk_mq_request_bypass_insert(rq, run_queue);
	return BLK_STS_OK;
}

static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
		struct request *rq, blk_qc_t *cookie)
{
	blk_status_t ret;
	int srcu_idx;

	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);

	hctx_lock(hctx, &srcu_idx);

	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
		blk_mq_request_bypass_insert(rq, true);

I.e. blk_mq_request_bypass_insert() will always be called once we hit the
'insert' label; the only difference is the second parameter of that
function.

I'd rather have the sequence consolidated, preferably by calling
blk_mq_request_bypass_insert() in one place only, and not scatter the
calls all over the code.

Cheers,

Hannes
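The consolidation Hannes asks for could look something like the sketch below. This is purely hypothetical and not part of the posted series; the helper name and its placement are invented for illustration, and only blk_mq_request_bypass_insert() and blk_mq_end_request() are existing calls taken from the quoted code:

	/*
	 * Hypothetical helper, not from the posted series: handle the result
	 * of a direct-issue attempt in one place, so that the punt to
	 * hctx->dispatch via blk_mq_request_bypass_insert() is not scattered
	 * across several call sites.
	 */
	static void blk_mq_handle_direct_issue_result(struct request *rq,
						      blk_status_t ret,
						      bool run_queue)
	{
		switch (ret) {
		case BLK_STS_OK:
			break;
		case BLK_STS_RESOURCE:
		case BLK_STS_DEV_RESOURCE:
			/* Punt to hctx->dispatch; never back onto a merge path. */
			blk_mq_request_bypass_insert(rq, run_queue);
			break;
		default:
			blk_mq_end_request(rq, ret);
			break;
		}
	}

Callers such as blk_mq_try_issue_directly() and blk_mq_try_issue_list_directly() would then pass their own run_queue policy (true vs. list_empty(list)) instead of open-coding the insert, which seems to be the direction Hannes is arguing for.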
diff --git a/block/blk-mq.c b/block/blk-mq.c
index b038ec680e84..44ff3c1442a4 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1899,8 +1899,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
 		if (ret != BLK_STS_OK) {
 			if (ret == BLK_STS_RESOURCE ||
 					ret == BLK_STS_DEV_RESOURCE) {
-				blk_mq_request_bypass_insert(rq,
-							list_empty(list));
+				list_add(list, &rq->queuelist);
 				break;
 			}
 			blk_mq_end_request(rq, ret);