diff mbox series

[3/3] block: use on-stack page vec for <= UIO_FASTIOV

Message ID 20220806152004.382170-4-axboe@kernel.dk
State New, archived
Series passthru block optimizations

Commit Message

Jens Axboe Aug. 6, 2022, 3:20 p.m. UTC
Avoid a kmalloc+kfree for each page array, if we only have a few pages
that are mapped. An alloc+free for each IO is quite expensive, and
it's pretty pointless if we're only dealing with 1 or a few vecs.

Use UIO_FASTIOV like we do in other spots to set a sane limit for how
big of an IO we want to avoid allocations for.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 block/blk-map.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

Comments

Chaitanya Kulkarni Aug. 7, 2022, 9:30 a.m. UTC | #1
On 8/6/22 08:20, Jens Axboe wrote:
> Avoid a kmalloc+kfree for each page array, if we only have a few pages
> that are mapped. An alloc+free for each IO is quite expensive, and
> it's pretty pointless if we're only dealing with 1 or a few vecs.
> 
> Use UIO_FASTIOV like we do in other spots to set a sane limit for how
> big of an IO we want to avoid allocations for.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>   block/blk-map.c | 14 +++++++++++---
>   1 file changed, 11 insertions(+), 3 deletions(-)
> 


Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck

Patch

diff --git a/block/blk-map.c b/block/blk-map.c
index 5da03f2614eb..d0ff80a9902e 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -268,12 +268,19 @@  static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 	}
 
 	while (iov_iter_count(iter)) {
-		struct page **pages;
+		struct page **pages, *stack_pages[UIO_FASTIOV];
 		ssize_t bytes;
 		size_t offs, added = 0;
 		int npages;
 
-		bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs);
+		if (nr_vecs <= ARRAY_SIZE(stack_pages)) {
+			pages = stack_pages;
+			bytes = iov_iter_get_pages(iter, pages, LONG_MAX,
+							nr_vecs, &offs);
+		} else {
+			bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX,
+							&offs);
+		}
 		if (unlikely(bytes <= 0)) {
 			ret = bytes ? bytes : -EFAULT;
 			goto out_unmap;
@@ -310,7 +317,8 @@  static int bio_map_user_iov(struct request *rq, struct iov_iter *iter,
 		 */
 		while (j < npages)
 			put_page(pages[j++]);
-		kvfree(pages);
+		if (pages != stack_pages)
+			kvfree(pages);
 		/* couldn't stuff something into bio? */
 		if (bytes)
 			break;