Message ID | 20200420223936.6773-1-schatzberg.dan@gmail.com
---|---
Series | Charge loop device i/o to issuing cgroup
On Mon, 20 Apr 2020 18:39:29 -0400 Dan Schatzberg wrote:
>
> @@ -1140,8 +1215,17 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
> 	blk_mq_freeze_queue(lo->lo_queue);
>
> 	spin_lock_irq(&lo->lo_lock);
> +	destroy_workqueue(lo->workqueue);

Destruct it out of atomic context.

> 	lo->lo_backing_file = NULL;
> +	list_for_each_entry_safe(worker, pos, &lo->idle_worker_list,
> +				idle_list) {
> +		list_del(&worker->idle_list);
> +		rb_erase(&worker->rb_node, &lo->worker_tree);
> +		css_put(worker->css);
> +		kfree(worker);
> +	}
> 	spin_unlock_irq(&lo->lo_lock);
On Tue, Apr 21, 2020 at 10:48:45AM +0800, Hillf Danton wrote:
>
> On Mon, 20 Apr 2020 18:39:29 -0400 Dan Schatzberg wrote:
> >
> > @@ -1140,8 +1215,17 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
> > 	blk_mq_freeze_queue(lo->lo_queue);
> >
> > 	spin_lock_irq(&lo->lo_lock);
> > +	destroy_workqueue(lo->workqueue);
>
> Destruct it out of atomic context.

I may as well do this, but it doesn't matter, does it? The
blk_mq_freeze_queue above should drain all I/O so the workqueue will
be idle.
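[Editor's note: for reference, a minimal sketch of the reordering being suggested, not the actual follow-up patch; field names are taken from the quoted hunk. destroy_workqueue() may sleep, so it is moved past the unlock, while the idle-worker teardown stays under lo->lo_lock.]

	spin_lock_irq(&lo->lo_lock);
	lo->lo_backing_file = NULL;
	/* Free idle workers while still holding lo_lock. */
	list_for_each_entry_safe(worker, pos, &lo->idle_worker_list,
				idle_list) {
		list_del(&worker->idle_list);
		rb_erase(&worker->rb_node, &lo->worker_tree);
		css_put(worker->css);
		kfree(worker);
	}
	spin_unlock_irq(&lo->lo_lock);

	/*
	 * Sleeping is fine here: blk_mq_freeze_queue() above has already
	 * drained all I/O, so the workqueue is idle by this point.
	 */
	destroy_workqueue(lo->workqueue);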
On Tue, Apr 21, 2020 at 11:33:37AM +0800, Hillf Danton wrote:
>
> On Mon, 20 Apr 2020 18:39:32 -0400 Dan Schatzberg wrote:
> >
> > @@ -964,13 +960,16 @@ static void loop_queue_work(struct loop_device *lo, struct loop_cmd *cmd)
> > 	worker = kzalloc(sizeof(struct loop_worker), GFP_NOWAIT | __GFP_NOWARN);
> > 	/*
> > 	 * In the event we cannot allocate a worker, just queue on the
> > -	 * rootcg worker
> > +	 * rootcg worker and issue the I/O as the rootcg
> > 	 */
> > -	if (!worker)
> > +	if (!worker) {
> > +		cmd->blkcg_css = NULL;
> > +		cmd->memcg_css = NULL;
>
> Dunno if css_put(cmd->memcg_css);

Good catch. Need to drop the reference here.
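[Editor's note: a sketch of the discussed fix to the fallback path, assuming the field names from the quoted patch; only memcg_css holds a separately taken css reference in this hunk, so only it needs the put before the pointers are cleared.]

	worker = kzalloc(sizeof(struct loop_worker), GFP_NOWAIT | __GFP_NOWARN);
	/*
	 * In the event we cannot allocate a worker, just queue on the
	 * rootcg worker and issue the I/O as the rootcg.
	 */
	if (!worker) {
		/* Drop the memcg reference taken when the cmd was set up. */
		if (cmd->memcg_css)
			css_put(cmd->memcg_css);
		cmd->blkcg_css = NULL;
		cmd->memcg_css = NULL;
	}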