Message ID | 20210915092010.2087371-7-yukuai3@huawei.com
---|---
State | New, archived
Series | handle unexpected message from server
On Wed, Sep 15, 2021 at 05:20:10PM +0800, Yu Kuai wrote:
> There is a problem that nbd_handle_reply() might access a freed request:
>
> 1) At first, a normal io is submitted and completed with a scheduler:
>
>  internal_tag = blk_mq_get_tag -> get tag from sched_tags
>  blk_mq_rq_ctx_init
>   sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
>  ...
>  blk_mq_get_driver_tag
>   __blk_mq_get_driver_tag -> get tag from tags
>   tags->rq[tag] = sched_tags->static_rq[internal_tag]
>
>  So, both tags->rq[tag] and sched_tags->rq[internal_tag] point to the
>  request sched_tags->static_rq[internal_tag], even after the io is
>  finished.
>
> 2) the nbd server sends a reply with a random tag directly:
>
>  recv_work
>   nbd_handle_reply
>    blk_mq_tag_to_rq(tags, tag)
>     rq = tags->rq[tag]
>
> 3) if sched_tags->static_rq is freed:
>
>  blk_mq_sched_free_requests
>   blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
>    -> step 2) accesses rq before the rq mapping is cleared
>    blk_mq_clear_rq_mapping(set, tags, hctx_idx);
>    __free_pages() -> rq is freed here
>
> 4) Then, nbd continues to use the freed request in nbd_handle_reply()
>
> Fix the problem by getting 'q_usage_counter' before blk_mq_tag_to_rq();
> the request is then guaranteed not to be freed because 'q_usage_counter'
> is not zero.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  drivers/block/nbd.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
>
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index 9a7bbf8ebe74..3e8b70b5d4f9 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -824,6 +824,7 @@ static void recv_work(struct work_struct *work)
>  						     work);
>  	struct nbd_device *nbd = args->nbd;
>  	struct nbd_config *config = nbd->config;
> +	struct request_queue *q = nbd->disk->queue;
>  	struct nbd_sock *nsock;
>  	struct nbd_cmd *cmd;
>  	struct request *rq;
> @@ -834,7 +835,24 @@ static void recv_work(struct work_struct *work)
>  		if (nbd_read_reply(nbd, args->index, &reply))
>  			break;
>
> +		/*
> +		 * Grab ref of q_usage_counter can prevent request being freed
> +		 * during nbd_handle_reply(). If q_usage_counter is zero, then
> +		 * no request is inflight, which means something is wrong since
> +		 * we expect to find a request to complete here.
> +		 */

The above comment is wrong; the purpose is simply to avoid the request
pool being freed: an elevator switch, for example, can't happen once
->q_usage_counter is grabbed, so no request UAF can be triggered when
calling into nbd_handle_reply().

> +		if (!percpu_ref_tryget(&q->q_usage_counter)) {
> +			dev_err(disk_to_dev(nbd->disk), "%s: no io inflight\n",
> +					__func__);
> +			break;
> +		}
> +
>  		cmd = nbd_handle_reply(nbd, args->index, &reply);
> +		/*
> +		 * It's safe to drop ref before request completion, inflight
> +		 * request will ensure q_usage_counter won't be zero.
> +		 */

The above comment is useless actually.
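The guarantee Ming is pointing at comes from the queue-freeze path: blk_mq_freeze_queue() kills q_usage_counter and waits for it to drain before paths such as an elevator switch may call blk_mq_sched_free_requests() and release the static_rq pages. A minimal sketch of the guard pattern, for readers following along; it is illustrative only, and 'tags'/'tag' stand in for whatever nbd_handle_reply() actually looks up:

	struct request_queue *q = nbd->disk->queue;
	struct request *rq;

	/*
	 * While this reference is held, blk_mq_freeze_queue() cannot
	 * complete, so the paths that free the scheduler's request pool
	 * (e.g. elevator switch -> blk_mq_sched_free_requests()) are held
	 * off and the memory behind tags->rq[tag] stays valid.
	 */
	if (!percpu_ref_tryget(&q->q_usage_counter))
		return;		/* queue frozen or dying: nothing inflight */

	rq = blk_mq_tag_to_rq(tags, tag);
	if (rq) {
		/* validate that the reply really belongs to rq, then complete it */
	}

	percpu_ref_put(&q->q_usage_counter);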
On 2021/09/16 16:04, Ming Lei wrote:
> On Wed, Sep 15, 2021 at 05:20:10PM +0800, Yu Kuai wrote:
>> There is a problem that nbd_handle_reply() might access a freed request:
>>
>> 1) At first, a normal io is submitted and completed with a scheduler:
>>
>>  internal_tag = blk_mq_get_tag -> get tag from sched_tags
>>  blk_mq_rq_ctx_init
>>   sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
>>  ...
>>  blk_mq_get_driver_tag
>>   __blk_mq_get_driver_tag -> get tag from tags
>>   tags->rq[tag] = sched_tags->static_rq[internal_tag]
>>
>>  So, both tags->rq[tag] and sched_tags->rq[internal_tag] point to the
>>  request sched_tags->static_rq[internal_tag], even after the io is
>>  finished.
>>
>> 2) the nbd server sends a reply with a random tag directly:
>>
>>  recv_work
>>   nbd_handle_reply
>>    blk_mq_tag_to_rq(tags, tag)
>>     rq = tags->rq[tag]
>>
>> 3) if sched_tags->static_rq is freed:
>>
>>  blk_mq_sched_free_requests
>>   blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
>>    -> step 2) accesses rq before the rq mapping is cleared
>>    blk_mq_clear_rq_mapping(set, tags, hctx_idx);
>>    __free_pages() -> rq is freed here
>>
>> 4) Then, nbd continues to use the freed request in nbd_handle_reply()
>>
>> Fix the problem by getting 'q_usage_counter' before blk_mq_tag_to_rq();
>> the request is then guaranteed not to be freed because 'q_usage_counter'
>> is not zero.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>>  drivers/block/nbd.c | 18 ++++++++++++++++++
>>  1 file changed, 18 insertions(+)
>>
>> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
>> index 9a7bbf8ebe74..3e8b70b5d4f9 100644
>> --- a/drivers/block/nbd.c
>> +++ b/drivers/block/nbd.c
>> @@ -824,6 +824,7 @@ static void recv_work(struct work_struct *work)
>>  						     work);
>>  	struct nbd_device *nbd = args->nbd;
>>  	struct nbd_config *config = nbd->config;
>> +	struct request_queue *q = nbd->disk->queue;
>>  	struct nbd_sock *nsock;
>>  	struct nbd_cmd *cmd;
>>  	struct request *rq;
>> @@ -834,7 +835,24 @@ static void recv_work(struct work_struct *work)
>>  		if (nbd_read_reply(nbd, args->index, &reply))
>>  			break;
>>
>> +		/*
>> +		 * Grab ref of q_usage_counter can prevent request being freed
>> +		 * during nbd_handle_reply(). If q_usage_counter is zero, then
>> +		 * no request is inflight, which means something is wrong since
>> +		 * we expect to find a request to complete here.
>> +		 */
>
> The above comment is wrong; the purpose is simply to avoid the request
> pool being freed: an elevator switch, for example, can't happen once
> ->q_usage_counter is grabbed, so no request UAF can be triggered when
> calling into nbd_handle_reply().

Do you mean the part of the comment about q_usage_counter being zero is
what's wrong?

>
>> +		if (!percpu_ref_tryget(&q->q_usage_counter)) {
>> +			dev_err(disk_to_dev(nbd->disk), "%s: no io inflight\n",
>> +					__func__);
>> +			break;
>> +		}
>> +
>>  		cmd = nbd_handle_reply(nbd, args->index, &reply);
>> +		/*
>> +		 * It's safe to drop ref before request completion, inflight
>> +		 * request will ensure q_usage_counter won't be zero.
>> +		 */
>
> The above comment is useless actually.

Will remove the comments.

Thanks,
Kuai
>
On Thu, Sep 16, 2021 at 04:47:08PM +0800, yukuai (C) wrote:
> On 2021/09/16 16:04, Ming Lei wrote:
> > On Wed, Sep 15, 2021 at 05:20:10PM +0800, Yu Kuai wrote:
> > > There is a problem that nbd_handle_reply() might access a freed request:
> > >
> > > 1) At first, a normal io is submitted and completed with a scheduler:
> > >
> > >  internal_tag = blk_mq_get_tag -> get tag from sched_tags
> > >  blk_mq_rq_ctx_init
> > >   sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
> > >  ...
> > >  blk_mq_get_driver_tag
> > >   __blk_mq_get_driver_tag -> get tag from tags
> > >   tags->rq[tag] = sched_tags->static_rq[internal_tag]
> > >
> > >  So, both tags->rq[tag] and sched_tags->rq[internal_tag] point to the
> > >  request sched_tags->static_rq[internal_tag], even after the io is
> > >  finished.
> > >
> > > 2) the nbd server sends a reply with a random tag directly:
> > >
> > >  recv_work
> > >   nbd_handle_reply
> > >    blk_mq_tag_to_rq(tags, tag)
> > >     rq = tags->rq[tag]
> > >
> > > 3) if sched_tags->static_rq is freed:
> > >
> > >  blk_mq_sched_free_requests
> > >   blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
> > >    -> step 2) accesses rq before the rq mapping is cleared
> > >    blk_mq_clear_rq_mapping(set, tags, hctx_idx);
> > >    __free_pages() -> rq is freed here
> > >
> > > 4) Then, nbd continues to use the freed request in nbd_handle_reply()
> > >
> > > Fix the problem by getting 'q_usage_counter' before blk_mq_tag_to_rq();
> > > the request is then guaranteed not to be freed because 'q_usage_counter'
> > > is not zero.
> > >
> > > Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> > > ---
> > >  drivers/block/nbd.c | 18 ++++++++++++++++++
> > >  1 file changed, 18 insertions(+)
> > >
> > > diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> > > index 9a7bbf8ebe74..3e8b70b5d4f9 100644
> > > --- a/drivers/block/nbd.c
> > > +++ b/drivers/block/nbd.c
> > > @@ -824,6 +824,7 @@ static void recv_work(struct work_struct *work)
> > >  						     work);
> > >  	struct nbd_device *nbd = args->nbd;
> > >  	struct nbd_config *config = nbd->config;
> > > +	struct request_queue *q = nbd->disk->queue;
> > >  	struct nbd_sock *nsock;
> > >  	struct nbd_cmd *cmd;
> > >  	struct request *rq;
> > > @@ -834,7 +835,24 @@ static void recv_work(struct work_struct *work)
> > >  		if (nbd_read_reply(nbd, args->index, &reply))
> > >  			break;
> > >
> > > +		/*
> > > +		 * Grab ref of q_usage_counter can prevent request being freed
> > > +		 * during nbd_handle_reply(). If q_usage_counter is zero, then
> > > +		 * no request is inflight, which means something is wrong since
> > > +		 * we expect to find a request to complete here.
> > > +		 */
> >
> > The above comment is wrong; the purpose is simply to avoid the request
> > pool being freed: an elevator switch, for example, can't happen once
> > ->q_usage_counter is grabbed, so no request UAF can be triggered when
> > calling into nbd_handle_reply().
>
> Do you mean the part of the comment about q_usage_counter being zero is
> what's wrong?

How about the following words?

/*
 * Grab .q_usage_counter so the request pool won't go away, then no request
 * use-after-free is possible during nbd_handle_reply(). If the queue is
 * frozen, there won't be any inflight requests: we needn't handle the
 * incoming garbage message.
 */

Thanks,
Ming
On 2021/09/16 17:06, Ming Lei wrote:
> On Thu, Sep 16, 2021 at 04:47:08PM +0800, yukuai (C) wrote:
>> On 2021/09/16 16:04, Ming Lei wrote:
>>> On Wed, Sep 15, 2021 at 05:20:10PM +0800, Yu Kuai wrote:
>>>> There is a problem that nbd_handle_reply() might access a freed request:
>>>>
>>>> 1) At first, a normal io is submitted and completed with a scheduler:
>>>>
>>>>  internal_tag = blk_mq_get_tag -> get tag from sched_tags
>>>>  blk_mq_rq_ctx_init
>>>>   sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
>>>>  ...
>>>>  blk_mq_get_driver_tag
>>>>   __blk_mq_get_driver_tag -> get tag from tags
>>>>   tags->rq[tag] = sched_tags->static_rq[internal_tag]
>>>>
>>>>  So, both tags->rq[tag] and sched_tags->rq[internal_tag] point to the
>>>>  request sched_tags->static_rq[internal_tag], even after the io is
>>>>  finished.
>>>>
>>>> 2) the nbd server sends a reply with a random tag directly:
>>>>
>>>>  recv_work
>>>>   nbd_handle_reply
>>>>    blk_mq_tag_to_rq(tags, tag)
>>>>     rq = tags->rq[tag]
>>>>
>>>> 3) if sched_tags->static_rq is freed:
>>>>
>>>>  blk_mq_sched_free_requests
>>>>   blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
>>>>    -> step 2) accesses rq before the rq mapping is cleared
>>>>    blk_mq_clear_rq_mapping(set, tags, hctx_idx);
>>>>    __free_pages() -> rq is freed here
>>>>
>>>> 4) Then, nbd continues to use the freed request in nbd_handle_reply()
>>>>
>>>> Fix the problem by getting 'q_usage_counter' before blk_mq_tag_to_rq();
>>>> the request is then guaranteed not to be freed because 'q_usage_counter'
>>>> is not zero.
>>>>
>>>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>>> ---
>>>>  drivers/block/nbd.c | 18 ++++++++++++++++++
>>>>  1 file changed, 18 insertions(+)
>>>>
>>>> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
>>>> index 9a7bbf8ebe74..3e8b70b5d4f9 100644
>>>> --- a/drivers/block/nbd.c
>>>> +++ b/drivers/block/nbd.c
>>>> @@ -824,6 +824,7 @@ static void recv_work(struct work_struct *work)
>>>>  						     work);
>>>>  	struct nbd_device *nbd = args->nbd;
>>>>  	struct nbd_config *config = nbd->config;
>>>> +	struct request_queue *q = nbd->disk->queue;
>>>>  	struct nbd_sock *nsock;
>>>>  	struct nbd_cmd *cmd;
>>>>  	struct request *rq;
>>>> @@ -834,7 +835,24 @@ static void recv_work(struct work_struct *work)
>>>>  		if (nbd_read_reply(nbd, args->index, &reply))
>>>>  			break;
>>>>
>>>> +		/*
>>>> +		 * Grab ref of q_usage_counter can prevent request being freed
>>>> +		 * during nbd_handle_reply(). If q_usage_counter is zero, then
>>>> +		 * no request is inflight, which means something is wrong since
>>>> +		 * we expect to find a request to complete here.
>>>> +		 */
>>>
>>> The above comment is wrong; the purpose is simply to avoid the request
>>> pool being freed: an elevator switch, for example, can't happen once
>>> ->q_usage_counter is grabbed, so no request UAF can be triggered when
>>> calling into nbd_handle_reply().
>>
>> Do you mean the part of the comment about q_usage_counter being zero is
>> what's wrong?
>
> How about the following words?
>
> /*
>  * Grab .q_usage_counter so the request pool won't go away, then no request
>  * use-after-free is possible during nbd_handle_reply(). If the queue is
>  * frozen, there won't be any inflight requests: we needn't handle the
>  * incoming garbage message.
>  */

Will use these words.

Thanks,
Kuai
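With the agreed wording in place, the hunk in a respin would plausibly read as below; the comment text is Ming's suggestion from this exchange, the surrounding lines come from the posted diff, and the second comment is dropped as discussed. This is a sketch of the likely follow-up, not the final patch:

		if (nbd_read_reply(nbd, args->index, &reply))
			break;

		/*
		 * Grab .q_usage_counter so the request pool won't go away,
		 * then no request use-after-free is possible during
		 * nbd_handle_reply(). If the queue is frozen, there won't be
		 * any inflight requests: we needn't handle the incoming
		 * garbage message.
		 */
		if (!percpu_ref_tryget(&q->q_usage_counter)) {
			dev_err(disk_to_dev(nbd->disk), "%s: no io inflight\n",
					__func__);
			break;
		}

		cmd = nbd_handle_reply(nbd, args->index, &reply);
		percpu_ref_put(&q->q_usage_counter);
		if (IS_ERR(cmd))
			break;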
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 9a7bbf8ebe74..3e8b70b5d4f9 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -824,6 +824,7 @@ static void recv_work(struct work_struct *work)
 						     work);
 	struct nbd_device *nbd = args->nbd;
 	struct nbd_config *config = nbd->config;
+	struct request_queue *q = nbd->disk->queue;
 	struct nbd_sock *nsock;
 	struct nbd_cmd *cmd;
 	struct request *rq;
@@ -834,7 +835,24 @@ static void recv_work(struct work_struct *work)
 		if (nbd_read_reply(nbd, args->index, &reply))
 			break;
 
+		/*
+		 * Grab ref of q_usage_counter can prevent request being freed
+		 * during nbd_handle_reply(). If q_usage_counter is zero, then
+		 * no request is inflight, which means something is wrong since
+		 * we expect to find a request to complete here.
+		 */
+		if (!percpu_ref_tryget(&q->q_usage_counter)) {
+			dev_err(disk_to_dev(nbd->disk), "%s: no io inflight\n",
+					__func__);
+			break;
+		}
+
 		cmd = nbd_handle_reply(nbd, args->index, &reply);
+		/*
+		 * It's safe to drop ref before request completion, inflight
+		 * request will ensure q_usage_counter won't be zero.
+		 */
+		percpu_ref_put(&q->q_usage_counter);
 		if (IS_ERR(cmd))
 			break;
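For reference, the ordering that makes the tryget sufficient sits on the teardown side: the scheduler's static requests are freed only after the queue has been frozen, and freezing waits for q_usage_counter to drain. A rough sketch of the two sides, simplified from the block-layer call chains named in the commit message:

	/* elevator switch / teardown side */
	blk_mq_freeze_queue(q);		/* waits for q_usage_counter to drain */
	blk_mq_sched_free_requests(q);	/* only now can static_rq pages be freed */
	blk_mq_unfreeze_queue(q);

	/* nbd recv_work side */
	if (percpu_ref_tryget(&q->q_usage_counter)) {
		/* request pool is pinned for the duration of the lookup */
		cmd = nbd_handle_reply(nbd, args->index, &reply);
		percpu_ref_put(&q->q_usage_counter);
	}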
There is a problem that nbd_handle_reply() might access a freed request:

1) At first, a normal io is submitted and completed with a scheduler:

 internal_tag = blk_mq_get_tag -> get tag from sched_tags
 blk_mq_rq_ctx_init
  sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
 ...
 blk_mq_get_driver_tag
  __blk_mq_get_driver_tag -> get tag from tags
  tags->rq[tag] = sched_tags->static_rq[internal_tag]

 So, both tags->rq[tag] and sched_tags->rq[internal_tag] point to the
 request sched_tags->static_rq[internal_tag], even after the io is
 finished.

2) the nbd server sends a reply with a random tag directly:

 recv_work
  nbd_handle_reply
   blk_mq_tag_to_rq(tags, tag)
    rq = tags->rq[tag]

3) if sched_tags->static_rq is freed:

 blk_mq_sched_free_requests
  blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
   -> step 2) accesses rq before the rq mapping is cleared
   blk_mq_clear_rq_mapping(set, tags, hctx_idx);
   __free_pages() -> rq is freed here

4) Then, nbd continues to use the freed request in nbd_handle_reply()

Fix the problem by getting 'q_usage_counter' before blk_mq_tag_to_rq();
the request is then guaranteed not to be freed because 'q_usage_counter'
is not zero.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/block/nbd.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)