| Message ID | b8722f14e7ed81452f791764a26d2ed4cfa11478.1698256179.git.leon@kernel.org (mailing list archive) |
|---|---|
| State | Accepted |
| Delegated to: | Jason Gunthorpe |
| Series | [rdma-rc] RDMA/mlx5: Fix mkey cache WQ flush |
On Wed, Oct 25, 2023 at 08:49:59PM +0300, Leon Romanovsky wrote:
> From: Moshe Shemesh <moshe@nvidia.com>
>
> The cited patch tries to ensure there are no pending works on the mkey
> cache workqueue by disabling the queueing of new works and calling
> flush_workqueue(). But this workqueue also has delayed works, which may
> still be waiting out their delay before being queued at all.
>
> Add cancel_delayed_work() for the delayed works that are still waiting
> to be queued; flush_workqueue() then flushes all works that are already
> queued and running.
>
> Fixes: 374012b00457 ("RDMA/mlx5: Fix mkey cache possible deadlock on cleanup")
> Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/infiniband/hw/mlx5/mr.c | 2 ++
>  1 file changed, 2 insertions(+)

Applied to for-next, thanks

Jason
```diff
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 8a3762d9ff58..e0629898c3c0 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1026,11 +1026,13 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
 		return;
 
 	mutex_lock(&dev->cache.rb_lock);
+	cancel_delayed_work(&dev->cache.remove_ent_dwork);
 	for (node = rb_first(root); node; node = rb_next(node)) {
 		ent = rb_entry(node, struct mlx5_cache_ent, node);
 		xa_lock_irq(&ent->mkeys);
 		ent->disabled = true;
 		xa_unlock_irq(&ent->mkeys);
+		cancel_delayed_work(&ent->dwork);
 	}
 	mutex_unlock(&dev->cache.rb_lock);
 
```