From patchwork Fri Apr 29 08:13:35 2016
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 8978301
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel"
Cc: Tim Deegan, Paul Durrant, Wei Liu
Date: Fri, 29 Apr 2016 02:13:35 -0600
Message-Id: <572333CF02000078000E719C@prv-mh.provo.novell.com>
Subject: [Xen-devel] [PATCH] x86/shadow: account for ioreq server pages
 before complaining about not found mapping
List-Id: Xen developer discussion
prepare_ring_for_helper(), just like share_xen_page_with_guest(), takes
a write reference on the page, and hence should similarly be accounted
for when determining whether to log a complaint.

This requires using recursive locking for the ioreq server lock, as the
offending invocation of sh_remove_all_mappings() is down the call stack
from hvm_set_ioreq_server_state(). (While not strictly needed to be done
in all other instances too, convert all of them for consistency.)

At once improve the usefulness of the shadow error message: Log all
values involved in triggering it as well as the GFN (to aid understanding
which guest page it is that there is a problem with - in cases like the
one here the GFN is invariant across invocations, while the MFN obviously
can [and will] vary).

Signed-off-by: Jan Beulich
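For reference, the accounting change at the heart of the patch can be
modelled by the small standalone C program below. This is not Xen code:
struct page_model, refs_are_expected() and the field names are invented
for illustration, and the shadow_mode_external() part of the real check
is left out. It only mirrors the idea that a page with at most 3 untyped
references may carry exactly one typed reference when it is either a Xen
heap page or an ioreq server ring page, and zero typed references
otherwise.

/* Standalone model (not Xen code) of the relaxed reference check in
 * sh_remove_all_mappings(); all names below are illustrative only. */
#include <stdbool.h>
#include <stdio.h>

struct page_model {
    unsigned long count;        /* stands in for count_info & PGC_count_mask */
    unsigned long type_count;   /* stands in for type_info & PGT_count_mask */
    bool xen_heap;              /* stands in for is_xen_heap_page() */
    bool ioreq_server;          /* stands in for is_ioreq_server_page() */
};

/* Mirrors the updated condition: extra references are tolerated when the
 * untyped count is small and the single typed reference (if any) is
 * explained by the page being a Xen heap or ioreq server ring page. */
static bool refs_are_expected(const struct page_model *pg)
{
    return pg->count <= 3 &&
           pg->type_count == (unsigned long)(pg->xen_heap || pg->ioreq_server);
}

int main(void)
{
    struct page_model ring  = { .count = 2, .type_count = 1, .ioreq_server = true };
    struct page_model stray = { .count = 2, .type_count = 1 };

    printf("ioreq ring page tolerated: %d\n", refs_are_expected(&ring));  /* 1 */
    printf("stray typed ref tolerated: %d\n", refs_are_expected(&stray)); /* 0 */
    return 0;
}

Before the patch the right-hand side of the comparison was only
!!is_xen_heap_page(page), so an ioreq server ring page would trip the
SHADOW_ERROR even though its single typed reference is legitimate.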
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -240,6 +240,30 @@ static int hvm_map_ioreq_page(
     return 0;
 }
 
+bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page)
+{
+    const struct hvm_ioreq_server *s;
+    bool_t found = 0;
+
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server.list,
+                          list_entry )
+    {
+        if ( (s->ioreq.va && s->ioreq.page == page) ||
+             (s->bufioreq.va && s->bufioreq.page == page) )
+        {
+            found = 1;
+            break;
+        }
+    }
+
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+    return found;
+}
+
 static void hvm_remove_ioreq_gmfn(
     struct domain *d, struct hvm_ioreq_page *iorp)
 {
@@ -671,7 +695,7 @@ int hvm_create_ioreq_server(struct domai
         goto fail1;
 
     domain_pause(d);
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     rc = -EEXIST;
     if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
@@ -694,14 +718,14 @@ int hvm_create_ioreq_server(struct domai
     if ( id )
         *id = s->id;
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail3:
  fail2:
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -714,7 +738,7 @@ int hvm_destroy_ioreq_server(struct doma
     struct hvm_ioreq_server *s;
     int rc;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     rc = -ENOENT;
     list_for_each_entry ( s,
@@ -743,7 +767,7 @@ int hvm_destroy_ioreq_server(struct doma
         break;
     }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
@@ -756,7 +780,7 @@ int hvm_get_ioreq_server_info(struct dom
     struct hvm_ioreq_server *s;
     int rc;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     rc = -ENOENT;
     list_for_each_entry ( s,
@@ -781,7 +805,7 @@ int hvm_get_ioreq_server_info(struct dom
         break;
     }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
@@ -793,7 +817,7 @@ int hvm_map_io_range_to_ioreq_server(str
     struct hvm_ioreq_server *s;
     int rc;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     rc = -ENOENT;
     list_for_each_entry ( s,
@@ -833,7 +857,7 @@ int hvm_map_io_range_to_ioreq_server(str
         }
     }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
@@ -845,7 +869,7 @@ int hvm_unmap_io_range_from_ioreq_server
     struct hvm_ioreq_server *s;
    int rc;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     rc = -ENOENT;
     list_for_each_entry ( s,
@@ -885,7 +909,7 @@ int hvm_unmap_io_range_from_ioreq_server
         }
     }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
@@ -896,7 +920,7 @@ int hvm_set_ioreq_server_state(struct do
     struct list_head *entry;
     int rc;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     rc = -ENOENT;
     list_for_each ( entry,
@@ -925,7 +949,7 @@ int hvm_set_ioreq_server_state(struct do
         break;
    }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
@@ -934,7 +958,7 @@ int hvm_all_ioreq_servers_add_vcpu(struc
     struct hvm_ioreq_server *s;
     int rc;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     list_for_each_entry ( s,
                           &d->arch.hvm_domain.ioreq_server.list,
@@ -947,7 +971,7 @@ int hvm_all_ioreq_servers_add_vcpu(struc
             goto fail;
     }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return 0;
 
@@ -957,7 +981,7 @@ int hvm_all_ioreq_servers_add_vcpu(struc
                           list_entry )
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
@@ -966,21 +990,21 @@ void hvm_all_ioreq_servers_remove_vcpu(s
 {
     struct hvm_ioreq_server *s;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     list_for_each_entry ( s,
                           &d->arch.hvm_domain.ioreq_server.list,
                           list_entry )
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
     struct hvm_ioreq_server *s, *next;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1003,7 +1027,7 @@ void hvm_destroy_all_ioreq_servers(struc
         xfree(s);
     }
 
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 }
 
 static int hvm_replace_event_channel(struct vcpu *v, domid_t remote_domid,
@@ -1027,7 +1051,7 @@ int hvm_set_dm_domain(struct domain *d,
     struct hvm_ioreq_server *s;
     int rc = 0;
 
-    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     /*
      * Lack of ioreq server is not a failure. HVM_PARAM_DM_DOMAIN will
@@ -1076,7 +1100,7 @@ int hvm_set_dm_domain(struct domain *d,
     domain_unpause(d);
 
  done:
-    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
 
     return rc;
 }
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "private.h"
 
@@ -2591,7 +2592,8 @@ int sh_remove_write_access_from_sl1p(str
 
 /* Remove all mappings of a guest frame from the shadow tables.
  * Returns non-zero if we need to flush TLBs. */
-static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn)
+static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn,
+                                  unsigned long gfn)
 {
     struct page_info *page = mfn_to_page(gmfn);
 
@@ -2643,19 +2645,24 @@ static int sh_remove_all_mappings(struct
     /* If that didn't catch the mapping, something is very wrong */
     if ( !sh_check_page_has_no_refs(page) )
     {
-        /* Don't complain if we're in HVM and there are some extra mappings:
+        /*
+         * Don't complain if we're in HVM and there are some extra mappings:
          * The qemu helper process has an untyped mapping of this dom's RAM
          * and the HVM restore program takes another.
-         * Also allow one typed refcount for xenheap pages, to match
-         * share_xen_page_with_guest(). */
+         * Also allow one typed refcount for
+         * - Xen heap pages, to match share_xen_page_with_guest(),
+         * - ioreq server pages, to match prepare_ring_for_helper().
+         */
        if ( !(shadow_mode_external(d)
               && (page->count_info & PGC_count_mask) <= 3
               && ((page->u.inuse.type_info & PGT_count_mask)
-                  == !!is_xen_heap_page(page))) )
+                  == (is_xen_heap_page(page) ||
+                      is_ioreq_server_page(d, page)))) )
         {
-            SHADOW_ERROR("can't find all mappings of mfn %lx: "
-                          "c=%08lx t=%08lx\n", mfn_x(gmfn),
-                          page->count_info, page->u.inuse.type_info);
+            SHADOW_ERROR("can't find all mappings of mfn %lx (gfn %lx): "
+                         "c=%lx t=%lx x=%d i=%d\n", mfn_x(gmfn), gfn,
+                         page->count_info, page->u.inuse.type_info,
+                         !!is_xen_heap_page(page), is_ioreq_server_page(d, page));
         }
     }
 
@@ -3515,7 +3522,7 @@ static void sh_unshadow_for_p2m_change(s
     if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(mfn) )
     {
         sh_remove_all_shadows_and_parents(d, mfn);
-        if ( sh_remove_all_mappings(d, mfn) )
+        if ( sh_remove_all_mappings(d, mfn, gfn) )
             flush_tlb_mask(d->domain_dirty_cpumask);
     }
 }
@@ -3550,7 +3557,8 @@ static void sh_unshadow_for_p2m_change(s
         {
             /* This GFN->MFN mapping has gone away */
             sh_remove_all_shadows_and_parents(d, omfn);
-            if ( sh_remove_all_mappings(d, omfn) )
+            if ( sh_remove_all_mappings(d, omfn,
+                                        gfn + (i << PAGE_SHIFT)) )
                 cpumask_or(&flushmask, &flushmask,
                            d->domain_dirty_cpumask);
         }
@@ -3766,7 +3774,8 @@ int shadow_track_dirty_vram(struct domai
                     dirty = 1;
                     /* TODO: Heuristics for finding the single mapping of
                      * this gmfn */
-                    flush_tlb |= sh_remove_all_mappings(d, mfn);
+                    flush_tlb |= sh_remove_all_mappings(d, mfn,
+                                                        begin_pfn + i);
                 }
                 else
                 {
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -21,6 +21,7 @@
 bool_t hvm_io_pending(struct vcpu *v);
 bool_t handle_hvm_io_completion(struct vcpu *v);
+bool_t is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, domid_t domid,
                             bool_t is_default, int bufioreq_handling,

Reviewed-by: Paul Durrant
Reviewed-by: Andrew Cooper, albeit with one
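As an aside on why the lock had to become recursive: is_ioreq_server_page()
takes the ioreq server lock itself, and the shadow path that calls it can be
entered from hvm_set_ioreq_server_state(), which is already holding that
lock. The standalone pthreads program below models just that shape (it is
not Xen code and does not use Xen's spinlocks; set_server_state() and
page_is_ioreq_ring() are invented stand-ins for the real call chain). With a
plain, non-recursive mutex the nested acquisition would deadlock; the
recursive attribute plays the role of spin_lock_recursive() /
spin_unlock_recursive() in the patch.

/* Build with: cc -pthread model.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ioreq_lock;

/* Stand-in for is_ioreq_server_page(): takes the lock to walk the list. */
static int page_is_ioreq_ring(void)
{
    pthread_mutex_lock(&ioreq_lock);   /* nested acquisition */
    int found = 1;                     /* pretend we matched a ring page */
    pthread_mutex_unlock(&ioreq_lock);
    return found;
}

/* Stand-in for hvm_set_ioreq_server_state(): holds the lock while code
 * further down the call stack re-enters it via the check above. */
static void set_server_state(void)
{
    pthread_mutex_lock(&ioreq_lock);
    printf("nested check while lock held: %d\n", page_is_ioreq_ring());
    pthread_mutex_unlock(&ioreq_lock);
}

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* The recursive type is what makes the nested lock above safe. */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&ioreq_lock, &attr);

    set_server_state();
    return 0;
}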