From patchwork Mon Apr 25 10:35:40 2016
X-Patchwork-Submitter: Yu Zhang
X-Patchwork-Id: 8925421
From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: xen-devel@lists.xen.org
Cc: Kevin Tian, Keir Fraser, Jun Nakajima, George Dunlap, Andrew Cooper,
 Tim Deegan, Paul Durrant, zhiyuan.lv@intel.com, Jan Beulich,
 wei.liu2@citrix.com
Date: Mon, 25 Apr 2016 18:35:40 +0800
Message-Id: <1461580540-9314-4-git-send-email-yu.c.zhang@linux.intel.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1461580540-9314-1-git-send-email-yu.c.zhang@linux.intel.com>
References: <1461580540-9314-1-git-send-email-yu.c.zhang@linux.intel.com>
Subject: [Xen-devel] [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.

A new HVMOP, HVMOP_map_mem_type_to_ioreq_server, is added to let an ioreq
server claim or disclaim responsibility for handling guest pages with p2m
type p2m_ioreq_server. Users of this HVMOP specify which kind of access is
to be emulated via a parameter named flags. Currently the HVMOP only
supports the emulation of write accesses; it can easily be extended to
cover read accesses should an ioreq server require that in the future.

For now only one ioreq server is supported for this p2m type, so once an
ioreq server has claimed ownership, subsequent calls to
HVMOP_map_mem_type_to_ioreq_server will fail. Ownership of guest RAM pages
with p2m_ioreq_server can be disclaimed by issuing the same HVMOP with the
ioreq server id set to the current owner's and the flags parameter set
to 0.

Note that both HVMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server are
only supported for HVM domains with HAP enabled.

Also note that a p2m type change to p2m_ioreq_server is only allowed after
an ioreq server has claimed ownership of that type.

changes in v3:
  - Only support write emulation in this patch;
  - Remove the code to handle race condition in hvmemul_do_io();
  - No need to reset the p2m type after an ioreq server has disclaimed
    its ownership of p2m_ioreq_server;
  - Only allow p2m type change to p2m_ioreq_server after an ioreq server
    has claimed its ownership of p2m_ioreq_server;
  - Only allow p2m type change to p2m_ioreq_server from pages with type
    p2m_ram_rw, and vice versa;
  - HVMOP_map_mem_type_to_ioreq_server interface change: use uint16_t,
    instead of an enum, to specify the memory type;
  - Function prototype change to p2m_get_ioreq_server();
  - Coding style changes;
  - Commit message changes;
  - Add Tim's Acked-by.

changes in v2:
  - Only support HAP enabled HVMs;
  - Replace p2m_mem_type_changed() with p2m_change_entry_type_global()
    to reset the p2m type when an ioreq server tries to claim/disclaim
    its ownership of p2m_ioreq_server;
  - Comment changes.
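For illustration, the sketch below shows how a device model might claim and
later disclaim write emulation for p2m_ioreq_server pages through the new
HVMOP. It assumes a libxc-style wrapper named
xc_hvm_map_mem_type_to_ioreq_server(); that wrapper and its prototype are
assumptions (the toolstack side is not part of this patch), while
HVMMEM_ioreq_server and HVMOP_IOREQ_MEM_ACCESS_WRITE come from the public
interface in xen/include/public/hvm/hvm_op.h.

/*
 * Illustrative toolstack-side usage, not part of this patch.  The wrapper
 * below is assumed to exist and to simply issue
 * HVMOP_map_mem_type_to_ioreq_server with the given arguments.
 */
#include <xenctrl.h>
#include <xen/hvm/hvm_op.h>

/* Assumed wrapper around HVMOP_map_mem_type_to_ioreq_server. */
int xc_hvm_map_mem_type_to_ioreq_server(xc_interface *xch, domid_t domid,
                                        ioservid_t id, uint16_t type,
                                        uint32_t flags);

/* Claim: forward guest writes to p2m_ioreq_server pages to server 'id'. */
static int claim_write_emulation(xc_interface *xch, domid_t domid,
                                 ioservid_t id)
{
    return xc_hvm_map_mem_type_to_ioreq_server(xch, domid, id,
                                               HVMMEM_ioreq_server,
                                               HVMOP_IOREQ_MEM_ACCESS_WRITE);
}

/* Disclaim: a zero flags value unmaps the ioreq server from the p2m type. */
static int disclaim_write_emulation(xc_interface *xch, domid_t domid,
                                    ioservid_t id)
{
    return xc_hvm_map_mem_type_to_ioreq_server(xch, domid, id,
                                               HVMMEM_ioreq_server, 0);
}

After a successful claim, HVMOP_set_mem_type may change guest pages to
HVMMEM_ioreq_server and their writes are forwarded to that server; passing
flags == 0 later releases the claim so another ioreq server may take it
over.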
Signed-off-by: Paul Durrant Signed-off-by: Yu Zhang Acked-by: Tim Deegan Cc: Keir Fraser Cc: Jan Beulich Cc: Andrew Cooper Cc: George Dunlap Cc: Jun Nakajima Cc: Kevin Tian Cc: Tim Deegan --- xen/arch/x86/hvm/emulate.c | 32 ++++++++++++++++-- xen/arch/x86/hvm/hvm.c | 63 ++++++++++++++++++++++++++++++++++-- xen/arch/x86/hvm/ioreq.c | 41 +++++++++++++++++++++++ xen/arch/x86/mm/hap/nested_hap.c | 2 +- xen/arch/x86/mm/p2m-ept.c | 7 +++- xen/arch/x86/mm/p2m-pt.c | 23 +++++++++---- xen/arch/x86/mm/p2m.c | 70 ++++++++++++++++++++++++++++++++++++++++ xen/arch/x86/mm/shadow/multi.c | 3 +- xen/include/asm-x86/hvm/ioreq.h | 2 ++ xen/include/asm-x86/p2m.h | 30 +++++++++++++++-- xen/include/public/hvm/hvm_op.h | 31 ++++++++++++++++++ 11 files changed, 286 insertions(+), 18 deletions(-) diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c index cc0b841..fd350bd 100644 --- a/xen/arch/x86/hvm/emulate.c +++ b/xen/arch/x86/hvm/emulate.c @@ -100,6 +100,7 @@ static int hvmemul_do_io( uint8_t dir, bool_t df, bool_t data_is_addr, uintptr_t data) { struct vcpu *curr = current; + struct domain *currd = curr->domain; struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io; ioreq_t p = { .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO, @@ -141,7 +142,7 @@ static int hvmemul_do_io( (p.dir != dir) || (p.df != df) || (p.data_is_ptr != data_is_addr) ) - domain_crash(curr->domain); + domain_crash(currd); if ( data_is_addr ) return X86EMUL_UNHANDLEABLE; @@ -169,8 +170,33 @@ static int hvmemul_do_io( break; case X86EMUL_UNHANDLEABLE: { - struct hvm_ioreq_server *s = - hvm_select_ioreq_server(curr->domain, &p); + struct hvm_ioreq_server *s; + p2m_type_t p2mt; + + if ( is_mmio ) + { + unsigned long gmfn = paddr_to_pfn(addr); + + (void) get_gfn_query_unlocked(currd, gmfn, &p2mt); + + if ( p2mt == p2m_ioreq_server ) + { + unsigned long flags; + + s = p2m_get_ioreq_server(currd, &flags); + + if ( dir == IOREQ_WRITE && + !(flags & P2M_IOREQ_HANDLE_WRITE_ACCESS) ) + s = NULL; + } + else + s = hvm_select_ioreq_server(currd, &p); + } + else + { + p2mt = p2m_invalid; + s = hvm_select_ioreq_server(currd, &p); + } /* If there is no suitable backing DM, just ignore accesses */ if ( !s ) diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c index 607546c..0ba66a2 100644 --- a/xen/arch/x86/hvm/hvm.c +++ b/xen/arch/x86/hvm/hvm.c @@ -4711,6 +4711,40 @@ static int hvmop_unmap_io_range_from_ioreq_server( return rc; } +static int hvmop_map_mem_type_to_ioreq_server( + XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop) +{ + xen_hvm_map_mem_type_to_ioreq_server_t op; + struct domain *d; + int rc; + + if ( copy_from_guest(&op, uop, 1) ) + return -EFAULT; + + rc = rcu_lock_remote_domain_by_id(op.domid, &d); + if ( rc != 0 ) + return rc; + + rc = -EINVAL; + if ( !is_hvm_domain(d) ) + goto out; + + /* Only support for HAP enabled hvm */ + if ( !hap_enabled(d) ) + goto out; + + rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, + HVMOP_map_mem_type_to_ioreq_server); + if ( rc != 0 ) + goto out; + + rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags); + + out: + rcu_unlock_domain(d); + return rc; +} + static int hvmop_set_ioreq_server_state( XEN_GUEST_HANDLE_PARAM(xen_hvm_set_ioreq_server_state_t) uop) { @@ -5344,9 +5378,14 @@ static int hvmop_get_mem_type( static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new) { + if ( new == p2m_ioreq_server ) + return old == p2m_ram_rw; + + if ( old == p2m_ioreq_server ) + return new == p2m_ram_rw; + if ( p2m_is_ram(old) || - (p2m_is_hole(old) && new 
== p2m_mmio_dm) || - (old == p2m_ioreq_server && new == p2m_ram_rw) ) + (p2m_is_hole(old) && new == p2m_mmio_dm) ) return 1; return 0; @@ -5381,6 +5420,21 @@ static int hvmop_set_mem_type( if ( !is_hvm_domain(d) ) goto out; + if ( a.hvmmem_type == HVMMEM_ioreq_server ) + { + unsigned long flags; + struct hvm_ioreq_server *s; + + /* HVMMEM_ioreq_server is only supported for HAP enabled hvm. */ + if ( !hap_enabled(d) ) + goto out; + + /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */ + s = p2m_get_ioreq_server(d, &flags); + if ( s == NULL ) + goto out; + } + rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type); if ( rc ) goto out; @@ -5482,6 +5536,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg) guest_handle_cast(arg, xen_hvm_io_range_t)); break; + case HVMOP_map_mem_type_to_ioreq_server: + rc = hvmop_map_mem_type_to_ioreq_server( + guest_handle_cast(arg, xen_hvm_map_mem_type_to_ioreq_server_t)); + break; + case HVMOP_set_ioreq_server_state: rc = hvmop_set_ioreq_server_state( guest_handle_cast(arg, xen_hvm_set_ioreq_server_state_t)); diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 690e478..2e4dea5 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -729,6 +729,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id) domain_pause(d); + p2m_destroy_ioreq_server(d, s); + hvm_ioreq_server_disable(s, 0); list_del(&s->list_entry); @@ -890,6 +892,45 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, return rc; } +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint16_t type, uint32_t flags) +{ + struct hvm_ioreq_server *s; + int rc; + + /* For now, only HVMMEM_ioreq_server is supported. */ + if ( type != HVMMEM_ioreq_server ) + return -EINVAL; + + /* For now, only write emulation is supported. */ + if ( flags & ~(HVMOP_IOREQ_MEM_ACCESS_WRITE) ) + return -EINVAL; + + spin_lock(&d->arch.hvm_domain.ioreq_server.lock); + + rc = -ENOENT; + list_for_each_entry ( s, + &d->arch.hvm_domain.ioreq_server.list, + list_entry ) + { + if ( s == d->arch.hvm_domain.default_ioreq_server ) + continue; + + if ( s->id == id ) + { + rc = p2m_set_ioreq_server(d, flags, s); + if ( rc == 0 ) + dprintk(XENLOG_DEBUG, "%u %s type HVMMEM_ioreq_server.\n", + s->id, (flags != 0) ? 
"mapped to" : "unmapped from"); + + break; + } + } + + spin_unlock(&d->arch.hvm_domain.ioreq_server.lock); + return rc; +} + int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, bool_t enabled) { diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c index 9cee5a0..bbb6d85 100644 --- a/xen/arch/x86/mm/hap/nested_hap.c +++ b/xen/arch/x86/mm/hap/nested_hap.c @@ -174,7 +174,7 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa, if ( *p2mt == p2m_mmio_direct ) goto direct_mmio_out; rc = NESTEDHVM_PAGEFAULT_MMIO; - if ( *p2mt == p2m_mmio_dm ) + if ( *p2mt == p2m_mmio_dm || *p2mt == p2m_ioreq_server ) goto out; rc = NESTEDHVM_PAGEFAULT_L0_ERROR; diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c index 380ec25..b803694 100644 --- a/xen/arch/x86/mm/p2m-ept.c +++ b/xen/arch/x86/mm/p2m-ept.c @@ -132,6 +132,12 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry, entry->r = entry->w = entry->x = 1; entry->a = entry->d = !!cpu_has_vmx_ept_ad; break; + case p2m_ioreq_server: + entry->r = entry->x = 1; + entry->w = !(p2m->ioreq.flags & P2M_IOREQ_HANDLE_WRITE_ACCESS); + entry->a = !!cpu_has_vmx_ept_ad; + entry->d = entry->w && cpu_has_vmx_ept_ad; + break; case p2m_mmio_direct: entry->r = entry->x = 1; entry->w = !rangeset_contains_singleton(mmio_ro_ranges, @@ -171,7 +177,6 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry, entry->a = entry->d = !!cpu_has_vmx_ept_ad; break; case p2m_grant_map_ro: - case p2m_ioreq_server: entry->r = 1; entry->w = entry->x = 0; entry->a = !!cpu_has_vmx_ept_ad; diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c index eabd2e3..bf75afa 100644 --- a/xen/arch/x86/mm/p2m-pt.c +++ b/xen/arch/x86/mm/p2m-pt.c @@ -72,7 +72,9 @@ static const unsigned long pgt[] = { PGT_l3_page_table }; -static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn, +static unsigned long p2m_type_to_flags(const struct p2m_domain *p2m, + p2m_type_t t, + mfn_t mfn, unsigned int level) { unsigned long flags; @@ -94,8 +96,16 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn, default: return flags | _PAGE_NX_BIT; case p2m_grant_map_ro: - case p2m_ioreq_server: return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT; + case p2m_ioreq_server: + { + flags |= P2M_BASE_FLAGS | _PAGE_RW; + + if ( p2m->ioreq.flags & P2M_IOREQ_HANDLE_WRITE_ACCESS ) + return flags & ~_PAGE_RW; + else + return flags; + } case p2m_ram_ro: case p2m_ram_logdirty: case p2m_ram_shared: @@ -442,7 +452,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn) p2m_type_t p2mt = p2m_is_logdirty_range(p2m, gfn & mask, gfn | ~mask) ? p2m_ram_logdirty : p2m_ram_rw; unsigned long mfn = l1e_get_pfn(e); - unsigned long flags = p2m_type_to_flags(p2mt, _mfn(mfn), level); + unsigned long flags = p2m_type_to_flags(p2m, p2mt, + _mfn(mfn), level); if ( level ) { @@ -579,7 +590,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct); l3e_content = mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) ? 
l3e_from_pfn(mfn_x(mfn), - p2m_type_to_flags(p2mt, mfn, 2) | _PAGE_PSE) + p2m_type_to_flags(p2m, p2mt, mfn, 2) | _PAGE_PSE) : l3e_empty(); entry_content.l1 = l3e_content.l3; @@ -615,7 +626,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) ) entry_content = p2m_l1e_from_pfn(mfn_x(mfn), - p2m_type_to_flags(p2mt, mfn, 0)); + p2m_type_to_flags(p2m, p2mt, mfn, 0)); else entry_content = l1e_empty(); @@ -651,7 +662,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct); if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) ) l2e_content = l2e_from_pfn(mfn_x(mfn), - p2m_type_to_flags(p2mt, mfn, 1) | + p2m_type_to_flags(p2m, p2mt, mfn, 1) | _PAGE_PSE); else l2e_content = l2e_empty(); diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index b3fce1b..231d41d 100644 --- a/xen/arch/x86/mm/p2m.c +++ b/xen/arch/x86/mm/p2m.c @@ -83,6 +83,8 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m) else p2m_pt_init(p2m); + spin_lock_init(&p2m->ioreq.lock); + return ret; } @@ -289,6 +291,74 @@ void p2m_memory_type_changed(struct domain *d) } } +int p2m_set_ioreq_server(struct domain *d, + unsigned long flags, + struct hvm_ioreq_server *s) +{ + struct p2m_domain *p2m = p2m_get_hostp2m(d); + int rc; + + spin_lock(&p2m->ioreq.lock); + + if ( flags == 0 ) + { + rc = -EINVAL; + if ( p2m->ioreq.server != s ) + goto out; + + /* Unmap ioreq server from p2m type by passing flags with 0. */ + p2m->ioreq.server = NULL; + p2m->ioreq.flags = 0; + } + else + { + rc = -EBUSY; + if ( p2m->ioreq.server != NULL ) + goto out; + + p2m->ioreq.server = s; + p2m->ioreq.flags = flags; + } + + rc = 0; + + out: + spin_unlock(&p2m->ioreq.lock); + + return rc; +} + +struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d, + unsigned long *flags) +{ + struct p2m_domain *p2m = p2m_get_hostp2m(d); + struct hvm_ioreq_server *s; + + spin_lock(&p2m->ioreq.lock); + + s = p2m->ioreq.server; + *flags = p2m->ioreq.flags; + + spin_unlock(&p2m->ioreq.lock); + return s; +} + +void p2m_destroy_ioreq_server(struct domain *d, + struct hvm_ioreq_server *s) +{ + struct p2m_domain *p2m = p2m_get_hostp2m(d); + + spin_lock(&p2m->ioreq.lock); + + if ( p2m->ioreq.server == s ) + { + p2m->ioreq.server = NULL; + p2m->ioreq.flags = 0; + } + + spin_unlock(&p2m->ioreq.lock); +} + void p2m_enable_hardware_log_dirty(struct domain *d) { struct p2m_domain *p2m = p2m_get_hostp2m(d); diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c index c81302a..2e0d258 100644 --- a/xen/arch/x86/mm/shadow/multi.c +++ b/xen/arch/x86/mm/shadow/multi.c @@ -3224,8 +3224,7 @@ static int sh_page_fault(struct vcpu *v, } /* Need to hand off device-model MMIO to the device model */ - if ( p2mt == p2m_mmio_dm - || (p2mt == p2m_ioreq_server && ft == ft_demand_write) ) + if ( p2mt == p2m_mmio_dm ) { gpa = guest_walk_to_gpa(&gw); goto mmio; diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h index a1778ee..b03c64d 100644 --- a/xen/include/asm-x86/hvm/ioreq.h +++ b/xen/include/asm-x86/hvm/ioreq.h @@ -36,6 +36,8 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id, int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id, uint32_t type, uint64_t start, uint64_t end); +int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id, + uint16_t type, uint32_t flags); int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id, 
bool_t enabled); diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h index ee2ea9c..358605b 100644 --- a/xen/include/asm-x86/p2m.h +++ b/xen/include/asm-x86/p2m.h @@ -89,7 +89,8 @@ typedef unsigned int p2m_query_t; | p2m_to_mask(p2m_ram_paging_out) \ | p2m_to_mask(p2m_ram_paged) \ | p2m_to_mask(p2m_ram_paging_in) \ - | p2m_to_mask(p2m_ram_shared)) + | p2m_to_mask(p2m_ram_shared) \ + | p2m_to_mask(p2m_ioreq_server)) /* Types that represent a physmap hole that is ok to replace with a shared * entry */ @@ -111,8 +112,7 @@ typedef unsigned int p2m_query_t; #define P2M_RO_TYPES (p2m_to_mask(p2m_ram_logdirty) \ | p2m_to_mask(p2m_ram_ro) \ | p2m_to_mask(p2m_grant_map_ro) \ - | p2m_to_mask(p2m_ram_shared) \ - | p2m_to_mask(p2m_ioreq_server)) + | p2m_to_mask(p2m_ram_shared)) /* Write-discard types, which should discard the write operations */ #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro) \ @@ -320,6 +320,24 @@ struct p2m_domain { struct ept_data ept; /* NPT-equivalent structure could be added here. */ }; + + struct { + spinlock_t lock; + /* + * ioreq server who's responsible for the emulation of + * gfns with specific p2m type(for now, p2m_ioreq_server). + */ + struct hvm_ioreq_server *server; + /* + * flags specifies whether read, write or both operations + * are to be emulated by an ioreq server. + */ + unsigned int flags; + +#define P2M_IOREQ_HANDLE_WRITE_ACCESS HVMOP_IOREQ_MEM_ACCESS_WRITE +#define P2M_IOREQ_HANDLE_READ_ACCESS HVMOP_IOREQ_MEM_ACCESS_READ + + } ioreq; }; /* get host p2m table */ @@ -821,6 +839,12 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt) return flags; } +int p2m_set_ioreq_server(struct domain *d, unsigned long flags, + struct hvm_ioreq_server *s); +struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d, + unsigned long *flags); +void p2m_destroy_ioreq_server(struct domain *d, struct hvm_ioreq_server *s); + #endif /* _XEN_P2M_H */ /* diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h index b3e45cf..6a6de43 100644 --- a/xen/include/public/hvm/hvm_op.h +++ b/xen/include/public/hvm/hvm_op.h @@ -383,6 +383,37 @@ struct xen_hvm_set_ioreq_server_state { typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t; DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t); +/* + * HVMOP_map_mem_type_to_ioreq_server : map or unmap the IOREQ Server + * to specific memroy type + * for specific accesses + * + * For now, flags only accept the value of HVMOP_IOREQ_MEM_ACCESS_WRITE, + * which means only write operations are to be forwarded to an ioreq server. + * Support for the emulation of read operations can be added when an ioreq + * server has such requirement in future. + */ +#define HVMOP_map_mem_type_to_ioreq_server 26 +struct xen_hvm_map_mem_type_to_ioreq_server { + domid_t domid; /* IN - domain to be serviced */ + ioservid_t id; /* IN - ioreq server id */ + uint16_t type; /* IN - memory type */ + uint16_t pad; + uint32_t flags; /* IN - types of accesses to be forwarded to the + ioreq server. 
flags with 0 means to unmap the + ioreq server */ +#define _HVMOP_IOREQ_MEM_ACCESS_READ 0 +#define HVMOP_IOREQ_MEM_ACCESS_READ \ + (1u << _HVMOP_IOREQ_MEM_ACCESS_READ) + +#define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1 +#define HVMOP_IOREQ_MEM_ACCESS_WRITE \ + (1u << _HVMOP_IOREQ_MEM_ACCESS_WRITE) +}; +typedef struct xen_hvm_map_mem_type_to_ioreq_server + xen_hvm_map_mem_type_to_ioreq_server_t; +DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_mem_type_to_ioreq_server_t); + #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */ #if defined(__i386__) || defined(__x86_64__)
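In summary, the selection logic added to hvmemul_do_io() behaves as
sketched below for the p2m_ioreq_server case. This is a simplified,
self-contained restatement for illustration only; the types and helper
names (lookup_p2m_type(), claiming_server(), select_server_by_range()) are
placeholders, not Xen functions.

/*
 * Simplified restatement of the ioreq-server selection added to
 * hvmemul_do_io() by this patch; the real logic lives in
 * xen/arch/x86/hvm/emulate.c.
 */
#include <stddef.h>

typedef enum { p2m_ram_rw, p2m_mmio_dm, p2m_ioreq_server } p2m_type_t;
struct hvm_ioreq_server;

/* Mirrors P2M_IOREQ_HANDLE_WRITE_ACCESS == HVMOP_IOREQ_MEM_ACCESS_WRITE. */
#define P2M_IOREQ_HANDLE_WRITE_ACCESS (1u << 1)

/* Placeholder helpers standing in for the real Xen ones. */
extern p2m_type_t lookup_p2m_type(unsigned long gfn);
extern struct hvm_ioreq_server *claiming_server(unsigned long *flags);
extern struct hvm_ioreq_server *select_server_by_range(void);

static struct hvm_ioreq_server *
select_backing_server(int is_mmio, unsigned long gfn, int is_write)
{
    if ( is_mmio && lookup_p2m_type(gfn) == p2m_ioreq_server )
    {
        /* The page was claimed via HVMOP_map_mem_type_to_ioreq_server. */
        unsigned long flags;
        struct hvm_ioreq_server *s = claiming_server(&flags);

        /* Writes are only forwarded if write emulation was requested. */
        if ( is_write && !(flags & P2M_IOREQ_HANDLE_WRITE_ACCESS) )
            s = NULL;                 /* no backing DM: access is ignored */

        return s;
    }

    /* Port I/O and ordinary device-model MMIO keep the old behaviour. */
    return select_server_by_range();
}

The write faults feeding this path arise because ept_p2m_type_to_flags()
(and its p2m-pt counterpart) clears the write permission on
p2m_ioreq_server entries whenever P2M_IOREQ_HANDLE_WRITE_ACCESS is set, so
guest writes to such pages exit into the emulation code above.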