[1/3] io_uring: add hash_index and its logic to track req in cancel_hash

Message ID: 20220606065716.270879-2-haoxu.linux@icloud.com (mailing list archive)
State: New
Series: cancel_hash per entry lock

Commit Message

Hao Xu June 6, 2022, 6:57 a.m. UTC
From: Hao Xu <howeyxu@tencent.com>

Add a new member, hash_index, to struct io_kiocb to track the request's
index in the cancel_hash array. It is needed by later patches in this
series.

Signed-off-by: Hao Xu <howeyxu@tencent.com>
---
 io_uring/io_uring_types.h | 1 +
 io_uring/poll.c           | 4 +++-
 2 files changed, 4 insertions(+), 1 deletion(-)

Comments

Pavel Begunkov June 6, 2022, 11:59 a.m. UTC | #1
On 6/6/22 07:57, Hao Xu wrote:
> From: Hao Xu <howeyxu@tencent.com>
> 
> Add a new member, hash_index, to struct io_kiocb to track the request's
> index in the cancel_hash array. It is needed by later patches in this
> series.
> 
> Signed-off-by: Hao Xu <howeyxu@tencent.com>
> ---
>   io_uring/io_uring_types.h | 1 +
>   io_uring/poll.c           | 4 +++-
>   2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
> index 7c22cf35a7e2..2041ee83467d 100644
> --- a/io_uring/io_uring_types.h
> +++ b/io_uring/io_uring_types.h
> @@ -474,6 +474,7 @@ struct io_kiocb {
>   			u64		extra2;
>   		};
>   	};
> +	unsigned int			hash_index;

Didn't take a closer look, but can we get rid of it?
E.g. compute it again when ejecting a request from
the hash, or keep it in struct io_poll? (A sketch of
the recompute option follows after the quoted patch.)

>   	/* internal polling, see IORING_FEAT_FAST_POLL */
>   	struct async_poll		*apoll;
>   	/* opcode allocated if it needs to store data for async defer */
> diff --git a/io_uring/poll.c b/io_uring/poll.c
> index 0df5eca93b16..95e28f32b49c 100644
> --- a/io_uring/poll.c
> +++ b/io_uring/poll.c
> @@ -74,8 +74,10 @@ static void io_poll_req_insert(struct io_kiocb *req)
>   {
>   	struct io_ring_ctx *ctx = req->ctx;
>   	struct hlist_head *list;
> +	u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);
>   
> -	list = &ctx->cancel_hash[hash_long(req->cqe.user_data, ctx->cancel_hash_bits)];
> +	req->hash_index = index;
> +	list = &ctx->cancel_hash[index];
>   	hlist_add_head(&req->hash_node, list);
>   }
>
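
The recompute option Pavel mentions, as a rough sketch: the bucket is a pure function of user_data, so it can be re-derived at ejection time instead of being stored in io_kiocb. The helper name io_poll_req_eject() and the per-bucket lock array cancel_hash_locks[] are assumptions for illustration (per-bucket locks are what the rest of this series is about); hash_long(), hash_del() and the ctx fields are the ones the patch already uses.

    /* Hypothetical ejection path: re-derive the bucket from user_data,
     * so no index has to be carried in io_kiocb. */
    static void io_poll_req_eject(struct io_kiocb *req)
    {
            struct io_ring_ctx *ctx = req->ctx;
            u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);

            spin_lock(&ctx->cancel_hash_locks[index]);      /* assumed per-bucket lock */
            hash_del(&req->hash_node);
            spin_unlock(&ctx->cancel_hash_locks[index]);
    }

The cost is an extra hash_long() on every removal, which is what Hao's reply below weighs against storing the index.
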
Hao Xu June 6, 2022, 1:47 p.m. UTC | #2
On 6/6/22 19:59, Pavel Begunkov wrote:
> On 6/6/22 07:57, Hao Xu wrote:
>> From: Hao Xu <howeyxu@tencent.com>
>>
>> Add a new member, hash_index, to struct io_kiocb to track the request's
>> index in the cancel_hash array. It is needed by later patches in this
>> series.
>>
>> Signed-off-by: Hao Xu <howeyxu@tencent.com>
>> ---
>>   io_uring/io_uring_types.h | 1 +
>>   io_uring/poll.c           | 4 +++-
>>   2 files changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
>> index 7c22cf35a7e2..2041ee83467d 100644
>> --- a/io_uring/io_uring_types.h
>> +++ b/io_uring/io_uring_types.h
>> @@ -474,6 +474,7 @@ struct io_kiocb {
>>               u64        extra2;
>>           };
>>       };
>> +    unsigned int            hash_index;
> 
> Didn't take a closer look, but can we get rid of it?
> E.g. compute it again when ejecting a request from
> the hash, or keep it in struct io_poll?

Good point. I prefer moving it to io_poll over computing it again,
since the point of this patchset is to make things faster.
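
A minimal sketch of the option Hao prefers, storing the bucket in struct io_poll instead of struct io_kiocb. The existing io_poll members shown here are approximate, and the insert helper takes the io_poll explicitly only to sidestep how it is looked up from the request; treat this as an illustration of the idea, not of what was eventually merged.

    struct io_poll {
            struct file                     *file;
            struct wait_queue_head          *head;
            __poll_t                        events;
            unsigned int                    hash_index;     /* bucket in ctx->cancel_hash */
            struct wait_queue_entry         wait;
    };

    static void io_poll_req_insert(struct io_kiocb *req, struct io_poll *poll)
    {
            struct io_ring_ctx *ctx = req->ctx;
            u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);

            poll->hash_index = index;       /* lives in io_poll, not io_kiocb */
            hlist_add_head(&req->hash_node, &ctx->cancel_hash[index]);
    }

Either way the lookup key remains user_data; the only question is where the cached bucket lives.
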

Patch

diff --git a/io_uring/io_uring_types.h b/io_uring/io_uring_types.h
index 7c22cf35a7e2..2041ee83467d 100644
--- a/io_uring/io_uring_types.h
+++ b/io_uring/io_uring_types.h
@@ -474,6 +474,7 @@ struct io_kiocb {
 			u64		extra2;
 		};
 	};
+	unsigned int			hash_index;
 	/* internal polling, see IORING_FEAT_FAST_POLL */
 	struct async_poll		*apoll;
 	/* opcode allocated if it needs to store data for async defer */
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 0df5eca93b16..95e28f32b49c 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -74,8 +74,10 @@ static void io_poll_req_insert(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
 	struct hlist_head *list;
+	u32 index = hash_long(req->cqe.user_data, ctx->cancel_hash_bits);
 
-	list = &ctx->cancel_hash[hash_long(req->cqe.user_data, ctx->cancel_hash_bits)];
+	req->hash_index = index;
+	list = &ctx->cancel_hash[index];
 	hlist_add_head(&req->hash_node, list);
 }
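
As a counterpart to the recompute sketch in the discussion above, a loose sketch of what the stored index is for in the "later patches" the commit message mentions: with one lock per cancel_hash bucket (the "per entry lock" of the series title), the removal path can take exactly its own bucket's lock without rehashing. The cancel_hash_locks[] array and io_poll_req_remove() are assumed names, not code from this series.

    static void io_poll_req_remove(struct io_kiocb *req)
    {
            struct io_ring_ctx *ctx = req->ctx;
            u32 index = req->hash_index;    /* cached by io_poll_req_insert() */

            spin_lock(&ctx->cancel_hash_locks[index]);      /* assumed per-bucket lock */
            hash_del(&req->hash_node);
            spin_unlock(&ctx->cancel_hash_locks[index]);
    }

Compared with recomputing, this trades one unsigned int per request for skipping a hash_long() on every removal.
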