
[1/2] block/loop: set hw_sectors

Message ID 3742ed80f48b4ca55ad4ed7dcee6a667f0e82024.1503531709.git.shli@fb.com (mailing list archive)
State New, archived

Commit Message

Shaohua Li Aug. 23, 2017, 11:49 p.m. UTC
From: Shaohua Li <shli@fb.com>

Loop can handle requests of any size. Limiting them to 255 sectors just
burns CPU on bio splits and request merges for the underlying disk, and
also causes bad fs block allocation in direct I/O mode.

Signed-off-by: Shaohua Li <shli@fb.com>
---
 drivers/block/loop.c | 1 +
 1 file changed, 1 insertion(+)
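
For readers unfamiliar with block-layer queue limits, below is a minimal
sketch of the pattern the patch applies, not the actual loop.c change.
blk_queue_max_hw_sectors(), BLK_SAFE_MAX_SECTORS and BLK_DEF_MAX_SECTORS
are real block-layer symbols of that era; the wrapper function name is
illustrative only.

/*
 * Sketch: a block driver whose backing store can take arbitrarily
 * large requests raises the per-request cap at queue setup time.
 * Otherwise blk_set_default_limits() leaves max_hw_sectors at
 * BLK_SAFE_MAX_SECTORS (255 sectors, ~127KB) and larger bios are
 * split before they ever reach the driver.
 */
#include <linux/blkdev.h>

static void example_relax_queue_limits(struct request_queue *q)
{
	/* Allow requests up to BLK_DEF_MAX_SECTORS (2560 sectors,
	 * 1280KB) instead of the conservative 255-sector default. */
	blk_queue_max_hw_sectors(q, BLK_DEF_MAX_SECTORS);
}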

Comments

Omar Sandoval Aug. 24, 2017, 5:36 p.m. UTC | #1
On Wed, Aug 23, 2017 at 04:49:23PM -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@fb.com>
> 
> Loop can handle requests of any size. Limiting them to 255 sectors just
> burns CPU on bio splits and request merges for the underlying disk, and
> also causes bad fs block allocation in direct I/O mode.

Reviewed-by: Omar Sandoval <osandov@fb.com>

Note that this will conflict with my loop blocksize series, we can fix
up whichever series goes in second.

> Signed-off-by: Shaohua Li <shli@fb.com>
> ---
>  drivers/block/loop.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index b55a1f8..428da07 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1799,6 +1799,7 @@ static int loop_add(struct loop_device **l, int i)
>  	}
>  	lo->lo_queue->queuedata = lo;
>  
> +	blk_queue_max_hw_sectors(lo->lo_queue, BLK_DEF_MAX_SECTORS);
>  	/*
>  	 * It doesn't make sense to enable merge because the I/O
>  	 * submitted to backing file is handled page by page.
> -- 
> 2.9.5
>

Patch

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index b55a1f8..428da07 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1799,6 +1799,7 @@ static int loop_add(struct loop_device **l, int i)
 	}
 	lo->lo_queue->queuedata = lo;
 
+	blk_queue_max_hw_sectors(lo->lo_queue, BLK_DEF_MAX_SECTORS);
 	/*
 	 * It doesn't make sense to enable merge because the I/O
 	 * submitted to backing file is handled page by page.
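
A quick way to observe the effect of the change is to read the queue's
max_hw_sectors_kb sysfs attribute for a loop device before and after the
patch. The userspace sketch below is illustrative only; the device name
loop0 is an assumption.

#include <stdio.h>

int main(void)
{
	/* /sys/block/<dev>/queue/max_hw_sectors_kb reports the cap in
	 * KB: with the 255-sector default it reads 127, with
	 * BLK_DEF_MAX_SECTORS (2560 sectors) it reads 1280. */
	FILE *f = fopen("/sys/block/loop0/queue/max_hw_sectors_kb", "r");
	char buf[32];

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("max_hw_sectors_kb: %s", buf);
	fclose(f);
	return 0;
}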