Message ID | 20210411162658.251456-1-mgurtovoy@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | [1/1] null_blk: add option for managing virtual boundary
On 4/11/21 09:30, Max Gurtovoy wrote:
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index d6c821d48090..9ca80e38f7e5 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -84,6 +84,10 @@ enum {
>  	NULL_Q_MQ	= 2,
>  };
>  
> +static bool g_virt_boundary = false;
> +module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
> +MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
> +

The patch looks useful to me. But this will unconditionally enable the
virt boundary for all the devices created from configfs, based on the
module parameter. Enforcing it this way is inflexible when a user wants
to create different devices for testing with different configurations
(e.g. virt boundary on/off) from a single modprobe.
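For reference, a per-device configfs knob along the lines Chaitanya suggests could look like the minimal sketch below, following the NULLB_DEVICE_ATTR pattern null_blk already uses for its other configfs attributes. The field name, its placement, and the seeding in null_alloc_dev() are assumptions for illustration, not the eventual v2:

/* Sketch only -- names and placement are assumptions, not the actual v2 patch. */

/* null_blk.h: add a per-device flag to struct nullb_device */
	bool virt_boundary;	/* require a virtual boundary for this device */

/* main.c: expose the flag through configfs, like the other attributes */
NULLB_DEVICE_ATTR(virt_boundary, bool, NULL);

/* main.c: register it in the nullb_device_attrs[] array */
	&nullb_device_attr_virt_boundary,

/* main.c, null_alloc_dev(): seed the per-device default from the module param,
 * so "modprobe null_blk virt_boundary=1" still works while each configfs
 * device can override it individually.
 */
	dev->virt_boundary = g_virt_boundary;

/* main.c, null_add_dev(): test the per-device flag instead of the global */
	if (dev->virt_boundary)
		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);

With something like this, a user can create several nullb devices from one modprobe and toggle the boundary per device via /sys/kernel/config/nullb/<name>/virt_boundary before powering it on.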
On 2021/04/12 1:30, Max Gurtovoy wrote:
> This enables changing the virtual boundary of null_blk devices. Until
> now, null_blk devices had no restriction on the scatter/gather elements
> received from the block layer. Add a module parameter that controls the
> virtual boundary. This enables testing the efficiency of the block
> layer bounce buffer when a suitable application sends discontiguous IO
> to the given device.
>
> Initial testing with patched FIO showed the following results (64 jobs,
> 128 iodepth):
>
> IO size   READ (virt=false)   READ (virt=true)   Write (virt=false)   Write (virt=true)
> -------   -----------------   ----------------   ------------------   ------------------
> 1k        10.7M               8482k              10.8M                8471k
> 2k        10.4M               8266k              10.4M                8271k
> 4k        10.4M               8274k              10.3M                8226k
> 8k        10.2M               8131k              9800k                7933k
> 16k       9567k               7764k              8081k                6828k
> 32k       8865k               7309k              5570k                5153k
> 64k       7695k               6586k              2682k                2617k
> 128k      5346k               5489k              1320k                1296k
>
> Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> ---
>  drivers/block/null_blk/main.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index d6c821d48090..9ca80e38f7e5 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -84,6 +84,10 @@ enum {
>  	NULL_Q_MQ	= 2,
>  };
>  
> +static bool g_virt_boundary = false;
> +module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
> +MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
> +
>  static int g_no_sched;
>  module_param_named(no_sched, g_no_sched, int, 0444);
>  MODULE_PARM_DESC(no_sched, "No io scheduler");
> @@ -1880,6 +1884,9 @@ static int null_add_dev(struct nullb_device *dev)
>  					BLK_DEF_MAX_SECTORS);
>  	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
>  
> +	if (g_virt_boundary)
> +		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
> +
>  	null_config_discard(nullb);
>  
>  	sprintf(nullb->disk_name, "nullb%d", nullb->index);

Looks good to me, but could you also add the equivalent configfs setting?
On 2021/04/12 8:34, Damien Le Moal wrote:
> On 2021/04/12 1:30, Max Gurtovoy wrote:
>> [...]
>
> Looks good to me, but could you also add the equivalent configfs setting?

Oops. Chaitanya had already pointed this out... Sorry about the noise :)
On 4/12/2021 2:36 AM, Damien Le Moal wrote:
> On 2021/04/12 8:34, Damien Le Moal wrote:
>> On 2021/04/12 1:30, Max Gurtovoy wrote:
>>> [...]
>>
>> Looks good to me, but could you also add the equivalent configfs setting?
> Oops. Chaitanya had already pointed this out... Sorry about the noise :)

Sure, I'll add it in v2. Thanks.
This enables changing the virtual boundary of null_blk devices. Until now,
null_blk devices had no restriction on the scatter/gather elements received
from the block layer. Add a module parameter that controls the virtual
boundary. This enables testing the efficiency of the block layer bounce
buffer when a suitable application sends discontiguous IO to the given
device.

Initial testing with patched FIO showed the following results (64 jobs,
128 iodepth):

IO size   READ (virt=false)   READ (virt=true)   Write (virt=false)   Write (virt=true)
-------   -----------------   ----------------   ------------------   ------------------
1k        10.7M               8482k              10.8M                8471k
2k        10.4M               8266k              10.4M                8271k
4k        10.4M               8274k              10.3M                8226k
8k        10.2M               8131k              9800k                7933k
16k       9567k               7764k              8081k                6828k
32k       8865k               7309k              5570k                5153k
64k       7695k               6586k              2682k                2617k
128k      5346k               5489k              1320k                1296k

Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com>
---
 drivers/block/null_blk/main.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index d6c821d48090..9ca80e38f7e5 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -84,6 +84,10 @@ enum {
 	NULL_Q_MQ	= 2,
 };
 
+static bool g_virt_boundary = false;
+module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
+MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
+
 static int g_no_sched;
 module_param_named(no_sched, g_no_sched, int, 0444);
 MODULE_PARM_DESC(no_sched, "No io scheduler");
@@ -1880,6 +1884,9 @@ static int null_add_dev(struct nullb_device *dev)
 					BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
 
+	if (g_virt_boundary)
+		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
+
 	null_config_discard(nullb);
 
 	sprintf(nullb->disk_name, "nullb%d", nullb->index);
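For anyone who wants to try the parameter, a quick smoke test (a hypothetical session; the read-only virt_boundary_mask queue attribute is only present on kernels that expose it in sysfs):

# load null_blk with the boundary enforced and check the resulting queue limit
modprobe null_blk virt_boundary=1
cat /sys/block/nullb0/queue/virt_boundary_mask
# expected: 4095 (PAGE_SIZE - 1) on 4K-page systems; 0 when loaded with virt_boundary=0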