Message ID | 20190529080836.13031-1-xiubli@redhat.com (mailing list archive) |
---|---
State | New, archived |
Series | [RFC] nbd: set the default nbds_max to 0
On Wed, May 29, 2019 at 04:08:36PM +0800, xiubli@redhat.com wrote:
> From: Xiubo Li <xiubli@redhat.com>
>
> There is a problem: when checking an nbd device with NBD_CMD_STATUS
> while the nbd.ko module is being inserted, some of the 16
> /dev/nbd{0~15} devices can randomly appear to be connected when they
> are not. This is because the udev service in user space opens the
> /dev/nbd{0~15} devices to do some sanity checks when they are added
> in "__init nbd_init()" and then closes them asynchronously.
>
> Signed-off-by: Xiubo Li <xiubli@redhat.com>
> ---
>
> I am not sure whether this patch makes sense here, since this issue
> can be avoided by setting "nbds_max=0" when inserting the nbd.ko
> module.
>

Yeah, I'd rather not make this the default. As of right now most people
still probably use the old method of configuration, and it may surprise
them to suddenly have to pass nbds_max=16 to make their stuff work.

Thanks,

Josef
On 2019/5/29 21:48, Josef Bacik wrote:
> On Wed, May 29, 2019 at 04:08:36PM +0800, xiubli@redhat.com wrote:
>> From: Xiubo Li <xiubli@redhat.com>
>>
>> There is a problem: when checking an nbd device with NBD_CMD_STATUS
>> while the nbd.ko module is being inserted, some of the 16
>> /dev/nbd{0~15} devices can randomly appear to be connected when they
>> are not. This is because the udev service in user space opens the
>> /dev/nbd{0~15} devices to do some sanity checks when they are added
>> in "__init nbd_init()" and then closes them asynchronously.
>>
>> Signed-off-by: Xiubo Li <xiubli@redhat.com>
>> ---
>>
>> I am not sure whether this patch makes sense here, since this issue
>> can be avoided by setting "nbds_max=0" when inserting the nbd.ko
>> module.
>>
> Yeah, I'd rather not make this the default. As of right now most people
> still probably use the old method of configuration, and it may surprise
> them to suddenly have to pass nbds_max=16 to make their stuff work.

Sure, that makes sense to me :-)

Then this patch can stay here in the mailing list as a note and a
reminder for others who may hit the same issue in the future.

Thanks.

BRs
Xiubo

> Thanks,
>
> Josef
>
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 4c1de1c..98be6ca 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -137,7 +137,7 @@ struct nbd_cmd {

 #define NBD_DEF_BLKSIZE 1024

-static unsigned int nbds_max = 16;
+static unsigned int nbds_max;
 static int max_part = 16;
 static struct workqueue_struct *recv_workqueue;
 static int part_shift;
@@ -2310,6 +2310,6 @@ static void __exit nbd_cleanup(void)

 MODULE_LICENSE("GPL");
 module_param(nbds_max, int, 0444);
-MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 16)");
+MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 0)");
 module_param(max_part, int, 0444);
 MODULE_PARM_DESC(max_part, "number of partitions per device (default: 16)");
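For readers who hit the same race, the workaround discussed in the thread can be sketched as the following configuration fragment. This is illustrative only, not part of the patch: it assumes a system with the nbd userspace tools installed, and the server hostname and export name are placeholders.

```shell
# Load nbd.ko without pre-creating /dev/nbd{0..15}. With nbds_max=0 there
# are no freshly added device nodes for udev to open and close
# asynchronously, so an NBD_CMD_STATUS check cannot race with udev.
modprobe nbd nbds_max=0

# Devices are then allocated on demand through the driver's netlink
# interface, e.g. by a netlink-aware nbd-client (flags may vary by
# nbd-client version; check nbd-client(8) on your system):
nbd-client -N myexport server.example.com /dev/nbd0
```

To make the setting persistent across reboots, the same parameter can be placed in a modprobe configuration file, e.g. a line `options nbd nbds_max=0` in /etc/modprobe.d/nbd.conf.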