| Message ID | 1472813240-11011-3-git-send-email-yu.c.zhang@linux.intel.com (mailing list archive) |
|---|---|
| State | New, archived |
>>> On 02.09.16 at 12:47, <yu.c.zhang@linux.intel.com> wrote:
> Routine hvmemul_do_io() may need to peek the p2m type of a gfn to
> select the ioreq server. For example, operations on gfns with
> p2m_ioreq_server type will be delivered to a corresponding ioreq
> server, and this requires that the p2m type not be switched back
> to p2m_ram_rw during the emulation process. To avoid this race
> condition, we delay the release of p2m lock in hvm_hap_nested_page_fault()
> until mmio is handled.
>
> Note: previously in hvm_hap_nested_page_fault(), put_gfn() was moved
> before the handling of mmio, due to a deadlock risk between the p2m
> lock and the event lock (in commit 77b8dfe). Later, a per-event channel
> lock was introduced in commit de6acb7, to send events. So we do not
> need to worry about the deadlock issue.
>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

However, shouldn't this go _before_ what is now patch 1?

Jan
On 9/9/2016 1:24 PM, Yu Zhang wrote:
>
>
> On 9/2/2016 6:47 PM, Yu Zhang wrote:
>> Routine hvmemul_do_io() may need to peek the p2m type of a gfn to
>> select the ioreq server. For example, operations on gfns with
>> p2m_ioreq_server type will be delivered to a corresponding ioreq
>> server, and this requires that the p2m type not be switched back
>> to p2m_ram_rw during the emulation process. To avoid this race
>> condition, we delay the release of p2m lock in
>> hvm_hap_nested_page_fault()
>> until mmio is handled.
>>
>> Note: previously in hvm_hap_nested_page_fault(), put_gfn() was moved
>> before the handling of mmio, due to a deadlock risk between the p2m
>> lock and the event lock (in commit 77b8dfe). Later, a per-event channel
>> lock was introduced in commit de6acb7, to send events. So we do not
>> need to worry about the deadlock issue.
>>
>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>
>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> However, shouldn't this go _before_ what is now patch 1?
>

Yes. This should be the patch 1/4. Thanks! :)

Yu
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e969735..9b419d2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1865,18 +1865,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
          (npfec.write_access &&
           (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
-        __put_gfn(p2m, gfn);
-        if ( ap2m_active )
-            __put_gfn(hostp2m, gfn);
-
         rc = 0;
         if ( unlikely(is_pvh_domain(currd)) )
-            goto out;
+            goto out_put_gfn;
 
         if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         rc = 1;
-        goto out;
+        goto out_put_gfn;
     }
 
     /* Check if the page has been paged out */
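For context, a simplified sketch of the shape this path takes after the change (illustrative only, not verbatim Xen source): the commit message implies the pre-existing out_put_gfn label in hvm_hap_nested_page_fault() is where the __put_gfn() calls happen, so the p2m lock is now released only after handle_mmio_with_translation() has run.

    if ( (p2mt == p2m_mmio_dm) ||
         (npfec.write_access &&
          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
    {
        rc = 0;
        if ( unlikely(is_pvh_domain(currd)) )
            goto out_put_gfn;

        if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
        rc = 1;
        goto out_put_gfn;
    }

    /* ... other fault handling ... */

 out_put_gfn:
    __put_gfn(p2m, gfn);          /* gfn reference / p2m lock dropped here */
    if ( ap2m_active )
        __put_gfn(hostp2m, gfn);  /* and the host p2m's, if altp2m is active */
 out:
    /* ... */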
Routine hvmemul_do_io() may need to peek the p2m type of a gfn to
select the ioreq server. For example, operations on gfns with
p2m_ioreq_server type will be delivered to a corresponding ioreq
server, and this requires that the p2m type not be switched back
to p2m_ram_rw during the emulation process. To avoid this race
condition, we delay the release of p2m lock in hvm_hap_nested_page_fault()
until mmio is handled.

Note: previously in hvm_hap_nested_page_fault(), put_gfn() was moved
before the handling of mmio, due to a deadlock risk between the p2m
lock and the event lock (in commit 77b8dfe). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.

Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
---
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/hvm.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)
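To make the race the commit message describes easier to see outside the Xen tree, here is a minimal stand-alone C/pthreads sketch of the same locking pattern. It is an analogy only: the names (fault_path, type_changer, handle_write, type_lock) are invented for illustration and do not correspond to Xen functions; the point is simply that the type-dependent handling must run while the lock that guards the type is still held, which is what delaying put_gfn() achieves.

/*
 * Stand-alone illustration (plain C + pthreads, not Xen code).
 * If the lock protecting a page's "type" is dropped before the
 * type-dependent handling runs, another thread can flip the type in
 * between and the handler acts on stale information.  Holding the
 * lock across the handling -- the analogue of delaying put_gfn()
 * until the mmio is handled -- closes that window.
 */
#include <pthread.h>
#include <stdio.h>

enum page_type { RAM_RW, IOREQ_SERVER };

static enum page_type type = IOREQ_SERVER;
static pthread_mutex_t type_lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_write(enum page_type observed)
{
    /* In the racy variant this would be called after unlocking, by
     * which time 'type' may already have been switched back. */
    printf("handling write as %s\n",
           observed == IOREQ_SERVER ? "ioreq-server page" : "plain RAM");
}

static void *fault_path(void *arg)
{
    enum page_type observed;

    (void)arg;
    pthread_mutex_lock(&type_lock);   /* cf. get_gfn(): takes the p2m lock */
    observed = type;
    handle_write(observed);           /* type cannot change while we hold the lock */
    pthread_mutex_unlock(&type_lock); /* cf. put_gfn(), now after the handling */
    return NULL;
}

static void *type_changer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&type_lock);
    type = RAM_RW;                    /* cf. switching p2m_ioreq_server -> p2m_ram_rw */
    pthread_mutex_unlock(&type_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, fault_path, NULL);
    pthread_create(&b, NULL, type_changer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}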