Message ID | CAPm50a+gcug5XOsg_Z=7R+3j+VUxHMrzyGNbps7-okR625KB_w@mail.gmail.com (mailing list archive)
---|---
State | New, archived
Series | kvm: x86: Keep the lock order consistent
On Fri, Oct 07, 2022, Hao Peng wrote:
> From: Peng Hao <flyingpeng@tencent.com>
>
> srcu read side in critical section may sleep, so it should precede
> the read lock,

I agree with the patch, but not necessarily with this statement. The above
implies that it's not safe to acquire SRCU while in a non-sleepable context,
which is incorrect. E.g. at first I thought the above implied there is an
incorrect sleep buried in this code.

> while other paths such as kvm_xen_set_evtchn_fast

Please put parentheses after function names, e.g. kvm_xen_set_evtchn_fast()
and srcu_read_lock().

> execute srcu_read_lock before acquiring the read lock.

How about this for a changelog?

  Acquire SRCU before taking the gpc spinlock in wait_pending_event() so as
  to be consistent with all other functions that acquire both locks. It's
  not illegal to acquire SRCU inside a spinlock, nor is there deadlock
  potential, but in general it's preferable to order locks from least
  restrictive to most restrictive, e.g. if wait_pending_event() needed to
  sleep for whatever reason, it could do so while holding SRCU, but would
  need to drop the spinlock.
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 280cb5dc7341..fa6e54b13afb 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -965,8 +965,8 @@ static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
 	bool ret = true;
 	int idx, i;
 
-	read_lock_irqsave(&gpc->lock, flags);
 	idx = srcu_read_lock(&kvm->srcu);
+	read_lock_irqsave(&gpc->lock, flags);
 
 	if (!kvm_gfn_to_pfn_cache_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
 		goto out_rcu;
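
To make the "least restrictive to most restrictive" rule from the suggested
changelog concrete, below is a minimal sketch of the resulting pattern. It is
illustrative only: example_ordering() is a hypothetical function invented for
the example, not code from this patch, though the lock types and lock/unlock
calls are the real kernel APIs used in the diff above.

#include <linux/kvm_host.h>	/* struct kvm, struct gfn_to_pfn_cache */

/*
 * Illustrative sketch, not code from this patch: acquire the least
 * restrictive lock (SRCU, whose read side may be held across sleeps)
 * before the most restrictive one (an IRQ-disabling rwlock).  Both
 * orderings are deadlock-free; this one simply leaves room to sleep
 * while only SRCU is held.
 */
static bool example_ordering(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
{
	unsigned long flags;
	bool ret = true;
	int idx;

	idx = srcu_read_lock(&kvm->srcu);	/* sleepable context still OK */
	read_lock_irqsave(&gpc->lock, flags);	/* must not sleep from here */

	/* ... non-sleeping work under both locks ... */

	read_unlock_irqrestore(&gpc->lock, flags);

	/* sleeping would be legal here, with only SRCU still held */

	srcu_read_unlock(&kvm->srcu, idx);
	return ret;
}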