Message ID: 1456886879-28128-1-git-send-email-majun258@huawei.com (mailing list archive)
State: New, archived
On Wed, Mar 02, 2016 at 10:47:59AM +0800, MaJun wrote:
> From: Ma Jun <majun258@huawei.com>
>
> The spin_lock/unlock_irq interface is not safe when this function is
> called in a context that needs irqs disabled.
> For example:
>	spin_lock_irqsave()
>	|
>	request_irq() --> proc_alloc_inum()
>	|
>	spin_unlock_irqrestore()

Do you even read your own patch?

>	if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL))
	                                 ^^^^^^^^^^
This.

It can block.  You *can't* call that under spin_lock_irqsave().  At all.
You also can't do request_irq() under a spinlock, no matter whether you
disable irqs or not - it also blocks.  So does proc_mkdir(), for that
matter, and not only in proc_alloc_inum().

NAKed.  Don't do it.  request_irq() is not to be called under spinlocks,
with or without irqs disabled.
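[Editor's note: a minimal sketch, not from the thread, of the ordering Al
insists on -- finish handler setup first, then call request_irq() with no
locks held and irqs enabled.  All names (struct my_dev, my_irq_handler,
my_probe) are hypothetical:]

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>

	struct my_dev {
		spinlock_t lock;
		int events;
	};

	static irqreturn_t my_irq_handler(int irq, void *data)
	{
		struct my_dev *dev = data;
		unsigned long flags;

		spin_lock_irqsave(&dev->lock, flags);
		dev->events++;		/* shared state, touched under the lock */
		spin_unlock_irqrestore(&dev->lock, flags);
		return IRQ_HANDLED;
	}

	static int my_probe(struct my_dev *dev, int irq)
	{
		spin_lock_init(&dev->lock);
		dev->events = 0;	/* finish all handler setup first ... */

		/*
		 * ... then register, with no spinlock held: request_irq()
		 * can sleep (it allocates memory, and registering the
		 * /proc/irq entries goes through proc_alloc_inum()).
		 */
		return request_irq(irq, my_irq_handler, 0, "my_dev", dev);
	}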
On 2016/3/2 11:09, Al Viro wrote:
> On Wed, Mar 02, 2016 at 10:47:59AM +0800, MaJun wrote:
>> From: Ma Jun <majun258@huawei.com>
>>
>> The spin_lock/unlock_irq interface is not safe when this function is
>> called in a context that needs irqs disabled.
>
>> For example:
>>	spin_lock_irqsave()
>>	|
>>	request_irq() --> proc_alloc_inum()
>>	|
>>	spin_unlock_irqrestore()
>
> Do you even read your own patch?
>
>>	if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL))
>	                                 ^^^^^^^^^^
> This.
>
> It can block.  You *can't* call that under spin_lock_irqsave().  At all.
> You also can't do request_irq() under a spinlock, no matter whether you
> disable irqs or not - it also blocks.  So does proc_mkdir(), for that
> matter, and not only in proc_alloc_inum().
>
> NAKed.  Don't do it.  request_irq() is not to be called under spinlocks,
> with or without irqs disabled.

Sorry, I made a wrong example for this problem.
What I meant to say is that this interface may change the irq status
after the function is called.

Thanks!
MaJun
On Wed, Mar 02, 2016 at 02:32:28PM +0800, majun (F) wrote:
> Sorry, I made a wrong example for this problem.
> What I meant to say is that this interface may change the irq status
> after the function is called.

It can't - either it's called with irqs enabled, in which case it returns
the same way, or it's called with irqs disabled, in which case it's trouble
waiting to happen as soon as the allocation there (or in proc_mkdir(),
etc.) happens to block, and failure to restore irq state is the least of
your concerns, because when you return from schedule() you *will* have
irqs enabled, no matter what.  Take a look at __schedule():

	...
	local_irq_disable();
	rcu_note_context_switch();

	/*
	 * Make sure that signal_pending_state()->signal_pending() below
	 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
	 * done by the caller to avoid the race with signal_wake_up().
	 */
	smp_mb__before_spinlock();
	raw_spin_lock(&rq->lock);
	...
	rq = context_switch(rq, prev, next); /* unlocks the rq */

and in context_switch() (right after switch_to()) we call
finish_task_switch(), which calls finish_lock_switch(), which does
raw_spin_unlock_irq(&rq->lock), which does local_irq_enable().

And no, it doesn't save the irq state anywhere - both disable and enable
are unconditional.  schedule() always returns with irqs enabled.

Don't call blocking things with irqs disabled.  If the design of some of
your drivers depends on being able to do that, sorry, but it'll have to
be changed.
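[Editor's note: a sketch of the hazard being described (hypothetical code,
not from the thread).  The point: once anything sleeps, the "saved" irq
state in flags no longer describes reality:]

	#include <linux/slab.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(some_lock);

	static void broken_pattern(void)
	{
		unsigned long flags;
		void *p;

		spin_lock_irqsave(&some_lock, flags);	/* irqs off */

		/*
		 * BUG: GFP_KERNEL may sleep.  Sleeping means schedule(),
		 * and schedule() reenables irqs unconditionally on the
		 * way back (context_switch() -> finish_task_switch() ->
		 * finish_lock_switch() -> raw_spin_unlock_irq()).  We
		 * would resume with irqs ON inside what was supposed to
		 * be an irq-disabled critical section.
		 */
		p = kmalloc(64, GFP_KERNEL);

		spin_unlock_irqrestore(&some_lock, flags);
		kfree(p);
	}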
On Wed, Mar 02, 2016 at 05:29:54PM +0000, Al Viro wrote:
> And no, it doesn't save the irq state anywhere - both disable and enable
> are unconditional.  schedule() always returns with irqs enabled.

PS: look at it that way: how would you expect a context switch to behave?
Suppose we blocked because we needed to write some dirty pages to disk to
be able to free them; we *can't* keep irqs disabled through all of that,
right?  After all, the disk controller needs to be able to tell us it's
done writing; that's hard to do with interrupts disabled, not to mention
that keeping them disabled for the typical duration of a disk write would
be rather antisocial.

So no matter how schedule() behaves wrt irqs, calling it with irqs
disabled would either invite deadlocks or enable irqs at least for a
while.  Even if it remembered that you used to have them disabled and
re-disabled them when switching back, you would still lose whatever
protection you were getting from having them disabled in the first place.
If e.g. you call request_irq() before being done setting things up for the
interrupt handler and count on finishing that before reenabling irqs, you
would need irqs to _stay_ disabled through all of that.  And with any
blocking allocations there's no way to guarantee that.
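[Editor's note: this is exactly the idiom proc_alloc_inum() itself follows
-- the blocking ida_pre_get(..., GFP_KERNEL) runs before the lock is
taken, and only the non-blocking ida_get_new() runs under it.  A minimal
sketch of the same preallocate-then-lock pattern; all names (install_item,
table_lock, struct item) are hypothetical:]

	#include <linux/list.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct item {
		struct list_head node;
		int value;
	};

	static DEFINE_SPINLOCK(table_lock);
	static LIST_HEAD(table);

	static int install_item(int value)
	{
		struct item *it;
		unsigned long flags;

		/* All blocking work first, with irqs still enabled. */
		it = kmalloc(sizeof(*it), GFP_KERNEL);
		if (!it)
			return -ENOMEM;
		it->value = value;

		/* Only the non-blocking insertion runs with irqs off. */
		spin_lock_irqsave(&table_lock, flags);
		list_add(&it->node, &table);
		spin_unlock_irqrestore(&table_lock, flags);
		return 0;
	}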
diff --git a/fs/proc/generic.c b/fs/proc/generic.c
index ff3ffc7..4fc1502 100644
--- a/fs/proc/generic.c
+++ b/fs/proc/generic.c
@@ -191,23 +191,24 @@ int proc_alloc_inum(unsigned int *inum)
 {
 	unsigned int i;
 	int error;
+	unsigned long flags;
 
 retry:
 	if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL))
 		return -ENOMEM;
 
-	spin_lock_irq(&proc_inum_lock);
+	spin_lock_irqsave(&proc_inum_lock, flags);
 	error = ida_get_new(&proc_inum_ida, &i);
-	spin_unlock_irq(&proc_inum_lock);
+	spin_unlock_irqrestore(&proc_inum_lock, flags);
 	if (error == -EAGAIN)
 		goto retry;
 	else if (error)
 		return error;
 
 	if (i > UINT_MAX - PROC_DYNAMIC_FIRST) {
-		spin_lock_irq(&proc_inum_lock);
+		spin_lock_irqsave(&proc_inum_lock, flags);
 		ida_remove(&proc_inum_ida, i);
-		spin_unlock_irq(&proc_inum_lock);
+		spin_unlock_irqrestore(&proc_inum_lock, flags);
 		return -ENOSPC;
 	}
 	*inum = PROC_DYNAMIC_FIRST + i;
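[Editor's note: the conversion above does not change the thread's
conclusion -- ida_pre_get(..., GFP_KERNEL) at the top of the function can
still sleep, so proc_alloc_inum() remains callable only from process
context with irqs enabled and no spinlocks held.  A minimal sketch of a
legitimate caller (hypothetical module init; "my_driver" is a made-up
entry name):]

	#include <linux/module.h>
	#include <linux/proc_fs.h>

	static int __init my_init(void)
	{
		/* May sleep: allocates memory and an inode number. */
		struct proc_dir_entry *dir = proc_mkdir("my_driver", NULL);

		return dir ? 0 : -ENOMEM;
	}
	module_init(my_init);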