| Message ID | 1571259116-102015-7-git-send-email-longli@linuxonhyperv.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | cifs: smbd: Improve reliability on transport reconnect |
I cleaned up a minor cosmetic nit spotted by checkpatch:

$ scripts/checkpatch.pl 0001-cifs-smbd-Only-queue-work-for-error-recovery-on-memo.patch
WARNING: Possible unwrapped commit description (prefer a maximum 75 chars per line)
#7:
It's not necessary to queue invalidated memory registration to work queue, as

WARNING: Block comments use a trailing */ on a separate line
#58: FILE: fs/cifs/smbdirect.c:2614:
+	 * current I/O */

total: 0 errors, 2 warnings, 38 lines checked

On Wed, Oct 16, 2019 at 4:11 PM longli--- via samba-technical
<samba-technical@lists.samba.org> wrote:
>
> From: Long Li <longli@microsoft.com>
>
> It's not necessary to queue invalidated memory registration to work queue, as
> all we need to do is to unmap the SG and make it usable again. This can save
> CPU cycles in normal data paths as memory registration errors are rare and
> normally only happens during reconnection.
>
> Signed-off-by: Long Li <longli@microsoft.com>
> Cc: stable@vger.kernel.org
> ---
>  fs/cifs/smbdirect.c | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
> index cf001f10d555..c00629a41d81 100644
> --- a/fs/cifs/smbdirect.c
> +++ b/fs/cifs/smbdirect.c
> @@ -2269,12 +2269,7 @@ static void smbd_mr_recovery_work(struct work_struct *work)
>  	int rc;
>
>  	list_for_each_entry(smbdirect_mr, &info->mr_list, list) {
> -		if (smbdirect_mr->state == MR_INVALIDATED)
> -			ib_dma_unmap_sg(
> -				info->id->device, smbdirect_mr->sgl,
> -				smbdirect_mr->sgl_count,
> -				smbdirect_mr->dir);
> -		else if (smbdirect_mr->state == MR_ERROR) {
> +		if (smbdirect_mr->state == MR_ERROR) {
>
>  			/* recover this MR entry */
>  			rc = ib_dereg_mr(smbdirect_mr->mr);
> @@ -2602,11 +2597,20 @@ int smbd_deregister_mr(struct smbd_mr *smbdirect_mr)
>  	 */
>  	smbdirect_mr->state = MR_INVALIDATED;
>
> -	/*
> -	 * Schedule the work to do MR recovery for future I/Os
> -	 * MR recovery is slow and we don't want it to block the current I/O
> -	 */
> -	queue_work(info->workqueue, &info->mr_recovery_work);
> +	if (smbdirect_mr->state == MR_INVALIDATED) {
> +		ib_dma_unmap_sg(
> +			info->id->device, smbdirect_mr->sgl,
> +			smbdirect_mr->sgl_count,
> +			smbdirect_mr->dir);
> +		smbdirect_mr->state = MR_READY;
> +		if (atomic_inc_return(&info->mr_ready_count) == 1)
> +			wake_up_interruptible(&info->wait_mr);
> +	} else
> +		/*
> +		 * Schedule the work to do MR recovery for future I/Os
> +		 * MR recovery is slow and we don't want it to block the
> +		 * current I/O */
> +		queue_work(info->workqueue, &info->mr_recovery_work);
>
>  done:
>  	if (atomic_dec_and_test(&info->mr_used_count))
> --
> 2.17.1
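For reference, the second checkpatch warning above points at the kernel coding-style rule that multi-line block comments close with the trailing */ on its own line. A minimal sketch of the flagged comment rewritten the way checkpatch expects is shown below; this is only an illustration of the style fix, not part of the posted patch:

```c
	/*
	 * Schedule the work to do MR recovery for future I/Os.
	 * MR recovery is slow and we don't want it to block the
	 * current I/O.
	 */
	queue_work(info->workqueue, &info->mr_recovery_work);
```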
```diff
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index cf001f10d555..c00629a41d81 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -2269,12 +2269,7 @@ static void smbd_mr_recovery_work(struct work_struct *work)
 	int rc;

 	list_for_each_entry(smbdirect_mr, &info->mr_list, list) {
-		if (smbdirect_mr->state == MR_INVALIDATED)
-			ib_dma_unmap_sg(
-				info->id->device, smbdirect_mr->sgl,
-				smbdirect_mr->sgl_count,
-				smbdirect_mr->dir);
-		else if (smbdirect_mr->state == MR_ERROR) {
+		if (smbdirect_mr->state == MR_ERROR) {

 			/* recover this MR entry */
 			rc = ib_dereg_mr(smbdirect_mr->mr);
@@ -2602,11 +2597,20 @@ int smbd_deregister_mr(struct smbd_mr *smbdirect_mr)
 	 */
 	smbdirect_mr->state = MR_INVALIDATED;

-	/*
-	 * Schedule the work to do MR recovery for future I/Os
-	 * MR recovery is slow and we don't want it to block the current I/O
-	 */
-	queue_work(info->workqueue, &info->mr_recovery_work);
+	if (smbdirect_mr->state == MR_INVALIDATED) {
+		ib_dma_unmap_sg(
+			info->id->device, smbdirect_mr->sgl,
+			smbdirect_mr->sgl_count,
+			smbdirect_mr->dir);
+		smbdirect_mr->state = MR_READY;
+		if (atomic_inc_return(&info->mr_ready_count) == 1)
+			wake_up_interruptible(&info->wait_mr);
+	} else
+		/*
+		 * Schedule the work to do MR recovery for future I/Os
+		 * MR recovery is slow and we don't want it to block the
+		 * current I/O */
+		queue_work(info->workqueue, &info->mr_recovery_work);

 done:
 	if (atomic_dec_and_test(&info->mr_used_count))
```
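The shape of the change, handling the common MR_INVALIDATED case inline and leaving only the rare MR_ERROR case to the recovery work, can be read as the sketch below. This is a simplified, annotated restatement for illustration, not the driver's exact code: the function name is invented, only the fields visible in the hunks above are assumed, and the surrounding context of smbd_deregister_mr() is omitted.

```c
/* Illustrative sketch of the deregister tail after this patch. */
static void smbd_mr_release_sketch(struct smbd_connection *info,
				   struct smbd_mr *mr)
{
	if (mr->state == MR_INVALIDATED) {
		/*
		 * Common case: the MR was invalidated cleanly, so all that
		 * is left is unmapping the SG list. Doing it here, on the
		 * I/O path, avoids a round trip through the workqueue.
		 */
		ib_dma_unmap_sg(info->id->device, mr->sgl,
				mr->sgl_count, mr->dir);
		mr->state = MR_READY;

		/*
		 * Make the MR available again; wake a waiter when the ready
		 * count goes from 0 to 1, i.e. when an MR first becomes
		 * available after the pool was empty.
		 */
		if (atomic_inc_return(&info->mr_ready_count) == 1)
			wake_up_interruptible(&info->wait_mr);
	} else {
		/*
		 * Rare case (MR_ERROR): recovery means deregistering and
		 * re-creating the MR (see the smbd_mr_recovery_work() hunk
		 * above), which is slow, so defer it to the workqueue
		 * rather than blocking the current I/O.
		 */
		queue_work(info->workqueue, &info->mr_recovery_work);
	}
}
```

The design choice is the usual fast-path/slow-path split: the per-I/O path stays cheap, and the slow recovery work only runs when something actually went wrong, which the commit message notes is rare and normally only happens during reconnection.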