From patchwork Wed Aug 21 14:58:57 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107269
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:58:57 +0200
Message-ID: <20190821145903.45934-2-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 1/7] ioreq: add fields to allow internal ioreq servers
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are plain function handlers implemented inside the
hypervisor. Note that most fields used by current (external) ioreq servers
are not needed for internal ones, and have therefore been placed inside a
struct and packed in a union together with the only internal-specific
field, a function pointer to a handler.

This is required in order to forward PCI config accesses either to
external ioreq servers or to internal ones (i.e. QEMU emulated devices vs
vPCI passthrough), and is the first step towards allowing unprivileged
domains to use vPCI.
Signed-off-by: Roger Pau Monné
---
 xen/include/asm-x86/hvm/domain.h | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 6c7c4f5aa6..f0be303517 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -52,21 +52,29 @@ struct hvm_ioreq_vcpu {
 #define MAX_NR_IO_RANGES 256
 
 struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
+    struct domain          *target;
     /* Lock to serialize toolstack modifications */
     spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
     struct rangeset        *range[NR_IO_RANGE_TYPES];
     bool                   enabled;
-    uint8_t                bufioreq_handling;
+    bool                   internal;
+
+    union {
+        struct {
+            struct domain          *emulator;
+            struct hvm_ioreq_page  ioreq;
+            struct list_head       ioreq_vcpu_list;
+            struct hvm_ioreq_page  bufioreq;
+
+            /* Lock to serialize access to buffered ioreq ring */
+            spinlock_t             bufioreq_lock;
+            evtchn_port_t          bufioreq_evtchn;
+            uint8_t                bufioreq_handling;
+        };
+        struct {
+            int (*handler)(struct vcpu *v, ioreq_t *);
+        };
+    };
 };
 
 /*
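As an illustration of the resulting layout, a minimal sketch (hypothetical
code, not part of the patch) of how a consumer must discriminate on the new
internal flag before touching union members:

    /* Hypothetical helper: union members are only valid for one flavour. */
    static bool ioreq_server_has_pages(const struct hvm_ioreq_server *s)
    {
        /*
         * The external-only fields (ioreq, bufioreq, ...) live in the
         * union, so s->internal must be checked before dereferencing them.
         */
        if ( s->internal )
            return false; /* internal servers have no shared ioreq pages */

        return s->ioreq.page || s->bufioreq.page;
    }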
From patchwork Wed Aug 21 14:58:58 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107271
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:58:58 +0200
Message-ID: <20190821145903.45934-3-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 2/7] ioreq: add internal ioreq initialization support
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Add support for internal ioreq servers to the initialization and
deinitialization routines, prevent some functions from being executed on
internal ioreq servers, and add guards so that only internal callers can
modify internal ioreq servers. External callers (i.e. hypercalls) are only
allowed to deal with external ioreq servers.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/dm.c           |   9 +-
 xen/arch/x86/hvm/ioreq.c        | 150 +++++++++++++++++++++-----------
 xen/include/asm-x86/hvm/ioreq.h |   8 +-
 3 files changed, 108 insertions(+), 59 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..5ca8b66d67 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -417,7 +417,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+                                     &data->id, false);
 
         break;
     }
@@ -452,7 +452,7 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+                                              data->start, data->end, false);
 
         break;
     }
@@ -466,7 +466,8 @@ static int dm_op(const struct dmop_args *op_args)
             break;
 
         rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+                                                  data->start, data->end,
+                                                  false);
 
         break;
     }
@@ -529,7 +530,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled, false);
 
         break;
     }

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index a79cabb680..23ef9b0c02 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -89,6 +89,9 @@ bool hvm_io_pending(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -193,6 +196,9 @@ bool handle_hvm_io_completion(struct vcpu *v)
     {
         struct hvm_ioreq_vcpu *sv;
 
+        if ( s->internal )
+            continue;
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -431,6 +437,9 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
+
         if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
         {
             found = true;
@@ -696,15 +705,18 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    if ( !s->internal )
+    {
+        hvm_remove_ioreq_gfn(s, false);
+        hvm_remove_ioreq_gfn(s, true);
 
-    s->enabled = true;
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+            hvm_update_ioreq_evtchn(s, sv);
+    }
 
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+    s->enabled = true;
 
   done:
     spin_unlock(&s->lock);
@@ -717,8 +729,11 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    if ( !s->internal )
+    {
+        hvm_add_ioreq_gfn(s, true);
+        hvm_add_ioreq_gfn(s, false);
+    }
 
     s->enabled = false;
 
@@ -728,40 +743,47 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
 static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+                                 ioservid_t id, bool internal)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
     int rc;
 
+    s->internal = internal;
     s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
     spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
 
     rc = hvm_ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
+    if ( !internal )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
+        get_knownalive_domain(currd);
+
+        s->emulator = currd;
+        INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+        spin_lock_init(&s->bufioreq_lock);
+
+        s->ioreq.gfn = INVALID_GFN;
+        s->bufioreq.gfn = INVALID_GFN;
+
+        s->bufioreq_handling = bufioreq_handling;
+
+        for_each_vcpu ( d, v )
+        {
+            rc = hvm_ioreq_server_add_vcpu(s, v);
+            if ( rc )
+                goto fail_add;
+        }
     }
+    else
+        s->handler = NULL;
 
     return 0;
 
  fail_add:
+    ASSERT(!internal);
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
@@ -774,27 +796,31 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
 static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
 
     hvm_ioreq_server_free_rangesets(s);
 
-    put_domain(s->emulator);
+    if ( !s->internal )
+    {
+        hvm_ioreq_server_remove_all_vcpus(s);
+
+        /*
+         * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+         *       hvm_ioreq_server_free_pages() in that order.
+         *       This is because the former will do nothing if the pages
+         *       are not mapped, leaving the page to be freed by the latter.
+         *       However if the pages are mapped then the former will set
+         *       the page_info pointer to NULL, meaning the latter will do
+         *       nothing.
+         */
+        hvm_ioreq_server_unmap_pages(s);
+        hvm_ioreq_server_free_pages(s);
+
+        put_domain(s->emulator);
+    }
 }
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+                            ioservid_t *id, bool internal)
 {
     struct hvm_ioreq_server *s;
     unsigned int i;
@@ -826,7 +852,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i, internal);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -863,7 +889,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /* NB: internal servers cannot be destroyed. */
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     domain_pause(d);
@@ -908,7 +935,11 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to fetch information about internal
+     * ioreq servers.
+     */
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     if ( ioreq_gfn || bufioreq_gfn )
@@ -955,7 +986,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     rc = hvm_ioreq_server_alloc_pages(s);
@@ -991,7 +1022,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end)
+                                     uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1009,7 +1040,12 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     * servers.
+     */
+    if ( (s->internal != internal) ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     switch ( type )
@@ -1043,7 +1079,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end)
+                                         uint64_t end, bool internal)
 {
     struct hvm_ioreq_server *s;
     struct rangeset *r;
@@ -1061,7 +1097,12 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    /*
+     * NB: don't allow external callers to modify the ranges of internal
+     * servers.
+     */
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     switch ( type )
@@ -1122,7 +1163,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal || s->emulator != current->domain )
         goto out;
 
     rc = p2m_set_ioreq_server(d, flags, s);
@@ -1142,7 +1183,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
 }
 
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+                               bool enabled, bool internal)
 {
     struct hvm_ioreq_server *s;
     int rc;
@@ -1156,7 +1197,8 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
         goto out;
 
     rc = -EPERM;
-    if ( s->emulator != current->domain )
+    if ( s->internal != internal ||
+         (!internal && s->emulator != current->domain) )
         goto out;
 
     domain_pause(d);
@@ -1185,6 +1227,8 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
+        if ( s->internal )
+            continue;
         rc = hvm_ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
@@ -1218,7 +1262,11 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( s->internal )
+            continue;
         hvm_ioreq_server_remove_vcpu(s, v);
+    }
 
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }

diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..e8119b26a6 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -24,7 +24,7 @@ bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
+                            ioservid_t *id, bool internal);
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
 int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *ioreq_gfn,
@@ -34,14 +34,14 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn);
 int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
-                                     uint64_t end);
+                                     uint64_t end, bool internal);
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
-                                         uint64_t end);
+                                         uint64_t end, bool internal);
 int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
+                               bool enabled, bool internal);
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
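For reference, a sketch of how the two flavours are now created — external
servers via the device model hypercall path in dm.c, internal ones directly
from hypervisor code (hypothetical caller; only the internal argument
differs):

    /* External: reachable via XEN_DMOP_create_ioreq_server (dm.c). */
    rc = hvm_create_ioreq_server(d, data->handle_bufioreq, &data->id, false);

    /*
     * Internal: only callable from inside the hypervisor; buffered ioreqs
     * and shared pages are meaningless here, so none are set up.
     */
    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);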
From patchwork Wed Aug 21 14:58:59 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107277
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:58:59 +0200
Message-ID: <20190821145903.45934-4-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 3/7] ioreq: allow dispatching ioreqs to internal servers
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Internal ioreq servers are always processed first, and ioreqs are
dispatched to them by calling the handler function directly. If no
internal server has registered for an ioreq, it is then forwarded to the
external ioreq servers.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/ioreq.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 23ef9b0c02..3fb6fe9585 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1305,6 +1305,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     uint8_t type;
     uint64_t addr;
     unsigned int id;
+    bool internal = true;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;
@@ -1345,11 +1346,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         addr = p->addr;
     }
 
+ retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
 
-        if ( !s->enabled )
+        if ( !s->enabled || s->internal != internal )
             continue;
 
         r = s->range[type];
@@ -1387,6 +1389,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         }
     }
 
+    if ( internal )
+    {
+        internal = false;
+        goto retry;
+    }
+
     return NULL;
 }
 
@@ -1492,9 +1500,18 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 
     ASSERT(s);
 
+    if ( s->internal && buffered )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
     if ( buffered )
         return hvm_send_buffered_ioreq(s, proto_p);
 
+    if ( s->internal )
+        return s->handler(curr, proto_p);
+
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
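Since the handler's return value is propagated as the hvm_send_ioreq()
result, an internal handler is expected to return X86EMUL_* codes. A
minimal sketch of a conforming handler (hypothetical, for illustration
only):

    static int dummy_ioreq_handler(struct vcpu *v, ioreq_t *req)
    {
        /* Reads must fill req->data; the caller completes the emulation. */
        if ( req->dir == IOREQ_READ )
            req->data = ~0ul;

        /* Writes are simply discarded by this dummy handler. */
        return X86EMUL_OKAY;
    }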
From patchwork Wed Aug 21 14:59:00 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107273
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:59:00 +0200
Message-ID: <20190821145903.45934-5-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 4/7] ioreq: allow registering internal ioreq server handler
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Provide a routine to register the handler for an internal ioreq server.
Note that the handler can only be set once.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/ioreq.c        | 32 ++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |  3 +++
 2 files changed, 35 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 3fb6fe9585..d8fea191aa 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -486,6 +486,38 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->handler != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->handler = handler;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e8119b26a6..2131c944d4 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -55,6 +55,9 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
+                          int (*handler)(struct vcpu *v, ioreq_t *));
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
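Putting patches 1-4 together, a sketch of the expected bring-up sequence
for an internal server, reusing the hypothetical dummy_ioreq_handler()
from above (start/end are placeholders; error unwinding elided):

    ioservid_t id;
    int rc;

    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
    if ( rc )
        return rc;

    /* The handler can only be set once; a second call returns -EBUSY. */
    rc = hvm_add_ioreq_handler(d, id, dummy_ioreq_handler);
    if ( rc )
        return rc;

    /* Claim a range and enable; note internal = true on both calls. */
    rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
                                          start, end, true);
    if ( !rc )
        rc = hvm_set_ioreq_server_state(d, id, true, true);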
From patchwork Wed Aug 21 14:59:01 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107263
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:59:01 +0200
Message-ID: <20190821145903.45934-6-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 5/7] ioreq: allow decoding accesses to MMCFG regions
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monne

Build on the infrastructure already added for vPCI and allow ioreq to
decode accesses to MMCFG regions registered for a domain. This
infrastructure is still only accessible from internal callers, so MMCFG
regions can only be registered from the internal domain builder used by
PVH dom0.

Note that the vPCI infrastructure to decode and handle accesses to MMCFG
regions will be removed in the following patches, when vPCI is switched to
become an internal ioreq server.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/hvm.c          |  2 +-
 xen/arch/x86/hvm/io.c           | 36 +++++---------
 xen/arch/x86/hvm/ioreq.c        | 88 +++++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/io.h    | 12 ++++-
 xen/include/asm-x86/hvm/ioreq.h |  6 +++
 5 files changed, 113 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 029eea3b85..b7a53377a5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -741,7 +741,7 @@ void hvm_domain_destroy(struct domain *d)
         xfree(ioport);
     }
 
-    destroy_vpci_mmcfg(d);
+    hvm_ioreq_free_mmcfg(d);
 }
 
 static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a5b0a23f06..6585767c03 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
     return CF8_ADDR_LO(cf8) | (addr & 3);
 }
 
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf)
+{
+    addr -= mmcfg->addr;
+    sbdf->bdf = MMCFG_BDF(addr);
+    sbdf->bus += mmcfg->start_bus;
+    sbdf->seg = mmcfg->segment;
+
+    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
+}
+
+
 /* Do some sanity checks. */
 static bool vpci_access_allowed(unsigned int reg, unsigned int len)
 {
@@ -383,14 +395,6 @@ void register_vpci_portio_handler(struct domain *d)
     handler->ops = &vpci_portio_ops;
 }
 
-struct hvm_mmcfg {
-    struct list_head next;
-    paddr_t addr;
-    unsigned int size;
-    uint16_t segment;
-    uint8_t start_bus;
-};
-
 /* Handlers to trap PCI MMCFG config accesses. */
 static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
                                                paddr_t addr)
@@ -558,22 +562,6 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     return 0;
 }
 
-void destroy_vpci_mmcfg(struct domain *d)
-{
-    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
-
-    write_lock(&d->arch.hvm.mmcfg_lock);
-    while ( !list_empty(mmcfg_regions) )
-    {
-        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
-                                                   struct hvm_mmcfg, next);
-
-        list_del(&mmcfg->next);
-        xfree(mmcfg);
-    }
-    write_unlock(&d->arch.hvm.mmcfg_lock);
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d8fea191aa..10c0f7a574 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -690,6 +690,22 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
+void hvm_ioreq_free_mmcfg(struct domain *d)
+{
+    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    while ( !list_empty(mmcfg_regions) )
+    {
+        struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
+                                                   struct hvm_mmcfg, next);
+
+        list_del(&mmcfg->next);
+        xfree(mmcfg);
+    }
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+}
+
 static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
                                             ioservid_t id)
 {
@@ -1329,6 +1345,19 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
+static const struct hvm_mmcfg *mmcfg_find(const struct domain *d,
+                                          paddr_t addr)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
+            return mmcfg;
+
+    return NULL;
+}
+
+
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p)
 {
@@ -1338,27 +1367,34 @@
     uint64_t addr;
     unsigned int id;
     bool internal = true;
+    const struct hvm_mmcfg *mmcfg;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return NULL;
 
     cf8 = d->arch.hvm.pci_cf8;
 
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
+    read_lock(&d->arch.hvm.mmcfg_lock);
+    if ( (p->type == IOREQ_TYPE_PIO &&
+          (p->addr & ~3) == 0xcfc &&
+          CF8_ENABLED(cf8)) ||
+         (p->type == IOREQ_TYPE_COPY &&
+          (mmcfg = mmcfg_find(d, p->addr)) != NULL) )
     {
         uint32_t x86_fam;
         pci_sbdf_t sbdf;
         unsigned int reg;
 
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+        reg = p->type == IOREQ_TYPE_PIO ? hvm_pci_decode_addr(cf8, p->addr,
+                                                              &sbdf)
+                                        : hvm_mmcfg_decode_addr(mmcfg, p->addr,
+                                                                &sbdf);
 
         /* PCI config data cycle */
         type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf.sbdf << 32) | reg;
         /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
+        if ( p->type == IOREQ_TYPE_PIO && CF8_ADDR_HI(cf8) &&
              d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
              (x86_fam = get_cpu_family(
                  d->arch.cpuid->basic.raw_fms, NULL, NULL)) > 0x10 &&
@@ -1377,6 +1413,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
  retry:
     FOR_EACH_IOREQ_SERVER(d, id, s)
@@ -1629,6 +1666,47 @@ void hvm_ioreq_init(struct domain *d)
         register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg)
+{
+    struct hvm_mmcfg *mmcfg, *new;
+
+    if ( start_bus > end_bus )
+        return -EINVAL;
+
+    new = xmalloc(struct hvm_mmcfg);
+    if ( !new )
+        return -ENOMEM;
+
+    new->addr = addr + (start_bus << 20);
+    new->start_bus = start_bus;
+    new->segment = seg;
+    new->size = (end_bus - start_bus + 1) << 20;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( new->addr < mmcfg->addr + mmcfg->size &&
+             mmcfg->addr < new->addr + new->size )
+        {
+            int ret = -EEXIST;
+
+            if ( new->addr == mmcfg->addr &&
+                 new->start_bus == mmcfg->start_bus &&
+                 new->segment == mmcfg->segment &&
+                 new->size == mmcfg->size )
+                ret = 0;
+            write_unlock(&d->arch.hvm.mmcfg_lock);
+            xfree(new);
+            return ret;
+        }
+
+    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 7ceb119b64..26f0489171 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -165,9 +165,19 @@ void stdvga_deinit(struct domain *d);
 
 extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
 
-/* Decode a PCI port IO access into a bus/slot/func/reg. */
+struct hvm_mmcfg {
+    struct list_head next;
+    paddr_t addr;
+    unsigned int size;
+    uint16_t segment;
+    uint8_t start_bus;
+};
+
+/* Decode a PCI port IO or MMCFG access into a bus/slot/func/reg. */
 unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
                                  pci_sbdf_t *sbdf);
+unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
+                                   paddr_t addr, pci_sbdf_t *sbdf);
 
 /*
  * HVM port IO handler that performs forwarding of guest IO ports into machine
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 2131c944d4..10b9586885 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -58,6 +58,12 @@ void hvm_ioreq_init(struct domain *d);
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
 
+int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
+                             unsigned int start_bus, unsigned int end_bus,
+                             unsigned int seg);
+
+void hvm_ioreq_free_mmcfg(struct domain *d);
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
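To make the decode arithmetic concrete: within an MMCFG region each bus
owns a 1MiB (1 << 20) window and each device function a 4KiB window, so
bits 12-31 of the region offset carry the BDF and bits 0-11 the config
register. A worked example under assumed values (hypothetical region, for
illustration only):

    /* Hypothetical region: base 0xe0000000, segment 0, buses 0-255. */
    paddr_t access = 0xe00a8010;       /* guest MMCFG access             */
    paddr_t off = access - 0xe0000000; /* 0xa8010                        */
    unsigned int bdf = off >> 12;      /* 0xa8 -> bus 0, dev 21, func 0  */
    unsigned int reg = off & 0xfff;    /* 0x10 -> BAR0 register          */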
From patchwork Wed Aug 21 14:59:02 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107275
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:59:02 +0200
Message-ID: <20190821145903.45934-7-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 6/7] vpci: register as an internal ioreq server
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant, Jan Beulich, Roger Pau Monne

Switch vPCI to become an internal ioreq server, and hence drop all the
vPCI-specific decoding and trapping of PCI IO ports and MMCFG regions.

This unifies the vPCI code with the ioreq infrastructure, opening the door
to domains having PCI accesses handled by vPCI and other ioreq servers at
the same time.
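A sketch of the shape such a registration takes, assuming a handler
vpci_ioreq_server_handler() that forwards decoded accesses to
vpci_read()/vpci_write() — a hypothetical reconstruction for illustration,
not the literal vpci.c hunk:

    int vpci_register_ioreq(struct domain *d)
    {
        ioservid_t id;
        int rc;

        if ( !has_vpci(d) )
            return 0;

        rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
        if ( rc )
            return rc;

        rc = hvm_add_ioreq_handler(d, id, vpci_ioreq_server_handler);
        if ( rc )
            return rc;

        /* Claim the whole PCI config range and enable the server. */
        rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
                                              0, ~(uint64_t)0, true);
        if ( rc )
            return rc;

        return hvm_set_ioreq_server_state(d, id, true, true);
    }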
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/dom0_build.c       |   9 +-
 xen/arch/x86/hvm/hvm.c              |   5 +-
 xen/arch/x86/hvm/io.c               | 272 ----------------------------
 xen/arch/x86/hvm/ioreq.c            |   5 +
 xen/arch/x86/physdev.c              |   7 +-
 xen/drivers/passthrough/x86/iommu.c |   2 +-
 xen/drivers/vpci/vpci.c             |  54 ++++++
 xen/include/asm-x86/hvm/io.h        |   2 +-
 xen/include/xen/vpci.h              |   3 +
 9 files changed, 77 insertions(+), 282 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 8845399ae9..7925189fed 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -29,6 +29,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1117,10 +1118,10 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
 
     for ( i = 0; i < pci_mmcfg_config_num; i++ )
     {
-        rc = register_vpci_mmcfg_handler(d, pci_mmcfg_config[i].address,
-                                         pci_mmcfg_config[i].start_bus_number,
-                                         pci_mmcfg_config[i].end_bus_number,
-                                         pci_mmcfg_config[i].pci_segment);
+        rc = hvm_ioreq_register_mmcfg(d, pci_mmcfg_config[i].address,
+                                      pci_mmcfg_config[i].start_bus_number,
+                                      pci_mmcfg_config[i].end_bus_number,
+                                      pci_mmcfg_config[i].pci_segment);
         if ( rc )
             printk("Unable to setup MMCFG handler at %#lx for segment %u\n",
                    pci_mmcfg_config[i].address,
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b7a53377a5..3fcf46779b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -644,10 +644,13 @@ int hvm_domain_initialise(struct domain *d)
     d->arch.hvm.io_bitmap = hvm_io_bitmap;
 
     register_g2m_portio_handler(d);
-    register_vpci_portio_handler(d);
 
     hvm_ioreq_init(d);
 
+    rc = vpci_register_ioreq(d);
+    if ( rc )
+        goto fail1;
+
     hvm_init_guest_time(d);
 
     d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 6585767c03..9c323d17ef 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -290,278 +290,6 @@ unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
     return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
 }
 
-
-/* Do some sanity checks. */
-static bool vpci_access_allowed(unsigned int reg, unsigned int len)
-{
-    /* Check access size. */
-    if ( len != 1 && len != 2 && len != 4 && len != 8 )
-        return false;
-
-    /* Check that access is size aligned. */
-    if ( (reg & (len - 1)) )
-        return false;
-
-    return true;
-}
-
-/* vPCI config space IO ports handlers (0xcf8/0xcfc). */
-static bool vpci_portio_accept(const struct hvm_io_handler *handler,
-                               const ioreq_t *p)
-{
-    return (p->addr == 0xcf8 && p->size == 4) || (p->addr & ~3) == 0xcfc;
-}
-
-static int vpci_portio_read(const struct hvm_io_handler *handler,
-                            uint64_t addr, uint32_t size, uint64_t *data)
-{
-    const struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    *data = ~(uint64_t)0;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        *data = d->arch.hvm.pci_cf8;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    *data = vpci_read(sbdf, reg, size);
-
-    return X86EMUL_OKAY;
-}
-
-static int vpci_portio_write(const struct hvm_io_handler *handler,
-                             uint64_t addr, uint32_t size, uint64_t data)
-{
-    struct domain *d = current->domain;
-    unsigned int reg;
-    pci_sbdf_t sbdf;
-    uint32_t cf8;
-
-    if ( addr == 0xcf8 )
-    {
-        ASSERT(size == 4);
-        d->arch.hvm.pci_cf8 = data;
-        return X86EMUL_OKAY;
-    }
-
-    ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
-    if ( !CF8_ENABLED(cf8) )
-        return X86EMUL_UNHANDLEABLE;
-
-    reg = hvm_pci_decode_addr(cf8, addr, &sbdf);
-
-    if ( !vpci_access_allowed(reg, size) )
-        return X86EMUL_OKAY;
-
-    vpci_write(sbdf, reg, size, data);
-
-    return X86EMUL_OKAY;
-}
-
-static const struct hvm_io_ops vpci_portio_ops = {
-    .accept = vpci_portio_accept,
-    .read = vpci_portio_read,
-    .write = vpci_portio_write,
-};
-
-void register_vpci_portio_handler(struct domain *d)
-{
-    struct hvm_io_handler *handler;
-
-    if ( !has_vpci(d) )
-        return;
-
-    handler = hvm_next_io_handler(d);
-    if ( !handler )
-        return;
-
-    handler->type = IOREQ_TYPE_PIO;
-    handler->ops = &vpci_portio_ops;
-}
-
-/* Handlers to trap PCI MMCFG config accesses. */
*/ -static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d, - paddr_t addr) -{ - const struct hvm_mmcfg *mmcfg; - - list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next ) - if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size ) - return mmcfg; - - return NULL; -} - -bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr) -{ - return vpci_mmcfg_find(d, addr); -} - -static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg, - paddr_t addr, pci_sbdf_t *sbdf) -{ - addr -= mmcfg->addr; - sbdf->bdf = MMCFG_BDF(addr); - sbdf->bus += mmcfg->start_bus; - sbdf->seg = mmcfg->segment; - - return addr & (PCI_CFG_SPACE_EXP_SIZE - 1); -} - -static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr) -{ - struct domain *d = v->domain; - bool found; - - read_lock(&d->arch.hvm.mmcfg_lock); - found = vpci_mmcfg_find(d, addr); - read_unlock(&d->arch.hvm.mmcfg_lock); - - return found; -} - -static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr, - unsigned int len, unsigned long *data) -{ - struct domain *d = v->domain; - const struct hvm_mmcfg *mmcfg; - unsigned int reg; - pci_sbdf_t sbdf; - - *data = ~0ul; - - read_lock(&d->arch.hvm.mmcfg_lock); - mmcfg = vpci_mmcfg_find(d, addr); - if ( !mmcfg ) - { - read_unlock(&d->arch.hvm.mmcfg_lock); - return X86EMUL_RETRY; - } - - reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf); - read_unlock(&d->arch.hvm.mmcfg_lock); - - if ( !vpci_access_allowed(reg, len) || - (reg + len) > PCI_CFG_SPACE_EXP_SIZE ) - return X86EMUL_OKAY; - - /* - * According to the PCIe 3.1A specification: - * - Configuration Reads and Writes must usually be DWORD or smaller - * in size. - * - Because Root Complex implementations are not required to support - * accesses to a RCRB that cross DW boundaries [...] software - * should take care not to cause the generation of such accesses - * when accessing a RCRB unless the Root Complex will support the - * access. - * Xen however supports 8byte accesses by splitting them into two - * 4byte accesses. 
- */ - *data = vpci_read(sbdf, reg, min(4u, len)); - if ( len == 8 ) - *data |= (uint64_t)vpci_read(sbdf, reg + 4, 4) << 32; - - return X86EMUL_OKAY; -} - -static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr, - unsigned int len, unsigned long data) -{ - struct domain *d = v->domain; - const struct hvm_mmcfg *mmcfg; - unsigned int reg; - pci_sbdf_t sbdf; - - read_lock(&d->arch.hvm.mmcfg_lock); - mmcfg = vpci_mmcfg_find(d, addr); - if ( !mmcfg ) - { - read_unlock(&d->arch.hvm.mmcfg_lock); - return X86EMUL_RETRY; - } - - reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf); - read_unlock(&d->arch.hvm.mmcfg_lock); - - if ( !vpci_access_allowed(reg, len) || - (reg + len) > PCI_CFG_SPACE_EXP_SIZE ) - return X86EMUL_OKAY; - - vpci_write(sbdf, reg, min(4u, len), data); - if ( len == 8 ) - vpci_write(sbdf, reg + 4, 4, data >> 32); - - return X86EMUL_OKAY; -} - -static const struct hvm_mmio_ops vpci_mmcfg_ops = { - .check = vpci_mmcfg_accept, - .read = vpci_mmcfg_read, - .write = vpci_mmcfg_write, -}; - -int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr, - unsigned int start_bus, unsigned int end_bus, - unsigned int seg) -{ - struct hvm_mmcfg *mmcfg, *new; - - ASSERT(is_hardware_domain(d)); - - if ( start_bus > end_bus ) - return -EINVAL; - - new = xmalloc(struct hvm_mmcfg); - if ( !new ) - return -ENOMEM; - - new->addr = addr + (start_bus << 20); - new->start_bus = start_bus; - new->segment = seg; - new->size = (end_bus - start_bus + 1) << 20; - - write_lock(&d->arch.hvm.mmcfg_lock); - list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next ) - if ( new->addr < mmcfg->addr + mmcfg->size && - mmcfg->addr < new->addr + new->size ) - { - int ret = -EEXIST; - - if ( new->addr == mmcfg->addr && - new->start_bus == mmcfg->start_bus && - new->segment == mmcfg->segment && - new->size == mmcfg->size ) - ret = 0; - write_unlock(&d->arch.hvm.mmcfg_lock); - xfree(new); - return ret; - } - - if ( list_empty(&d->arch.hvm.mmcfg_regions) ) - register_mmio_handler(d, &vpci_mmcfg_ops); - - list_add(&new->next, &d->arch.hvm.mmcfg_regions); - write_unlock(&d->arch.hvm.mmcfg_lock); - - return 0; -} - /* * Local variables: * mode: C diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 10c0f7a574..b2582bd3a0 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -1707,6 +1707,11 @@ int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr, return 0; } +bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr) +{ + return mmcfg_find(d, addr); +} + /* * Local variables: * mode: C diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c index 3a3c15890b..a48b220fc3 100644 --- a/xen/arch/x86/physdev.c +++ b/xen/arch/x86/physdev.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -562,9 +563,9 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg) * For HVM (PVH) domains try to add the newly found MMCFG to the * domain. 
             */
-            ret = register_vpci_mmcfg_handler(currd, info.address,
-                                              info.start_bus, info.end_bus,
-                                              info.segment);
+            ret = hvm_ioreq_register_mmcfg(currd, info.address,
+                                           info.start_bus, info.end_bus,
+                                           info.segment);
         }
         break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index fd05075bb5..e0f3da91ce 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -244,7 +244,7 @@ static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
      * TODO: runtime added MMCFG regions are not checked to make sure they
      * don't overlap with already mapped regions, thus preventing trapping.
      */
-    if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
+    if ( hvm_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
        return false;

    return true;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 758d9420e7..510e3ee771 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -20,6 +20,8 @@
 #include
 #include
+#include
+
 /* Internal struct to store the emulated PCI registers. */
 struct vpci_register {
     vpci_read_t *read;
@@ -473,6 +475,58 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
     spin_unlock(&pdev->vpci->lock);
 }
 
+static int ioreq_handler(struct vcpu *v, ioreq_t *req)
+{
+    pci_sbdf_t sbdf;
+
+    if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
+    {
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    sbdf.sbdf = req->addr >> 32;
+
+    if ( req->dir )
+        req->data = vpci_read(sbdf, req->addr, req->size);
+    else
+        vpci_write(sbdf, req->addr, req->size, req->data);
+
+    return X86EMUL_OKAY;
+}
+
+int vpci_register_ioreq(struct domain *d)
+{
+    ioservid_t id;
+    int rc;
+
+    if ( !has_vpci(d) )
+        return 0;
+
+    rc = hvm_create_ioreq_server(d, HVM_IOREQSRV_BUFIOREQ_OFF, &id, true);
+    if ( rc )
+        return rc;
+
+    rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+
+    if ( is_hardware_domain(d) )
+    {
+        /* Handle all devices in vpci. */
+        rc = hvm_map_io_range_to_ioreq_server(d, id, XEN_DMOP_IO_RANGE_PCI,
+                                              0, ~(uint64_t)0, true);
+        if ( rc )
+            return rc;
+    }
+
+    rc = hvm_set_ioreq_server_state(d, id, true, true);
+    if ( rc )
+        return rc;
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 26f0489171..75a24f33bc 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -196,7 +196,7 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
 void destroy_vpci_mmcfg(struct domain *d);
 
 /* Check if an address is between a MMCFG region for a domain. */
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
+bool hvm_is_mmcfg_address(const struct domain *d, paddr_t addr);
 
 #endif /* __ASM_X86_HVM_IO_H__ */
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 4cf233c779..666dd1ca68 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -23,6 +23,9 @@ typedef int vpci_register_init_t(struct pci_dev *dev);
   static vpci_register_init_t *const x##_entry \
                __used_section(".data.vpci." p) = x
 
+/* Register vPCI handler with ioreq. */
+int vpci_register_ioreq(struct domain *d);
+
 /* Add vPCI handlers to device. */
 int __must_check vpci_add_handlers(struct pci_dev *dev);
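[Editor's note: the ioreq_handler() added to vpci.c above is the canonical shape of an internal ioreq server handler. As a purely illustrative sketch (not part of the patch), a hypothetical internal server emulating a single 32-bit scratch register could look like the following; all example_* names are invented, while ioreq_t, the IOREQ_TYPE_PCI_CONFIG encoding (SBDF in the upper 32 bits of req->addr) and the X86EMUL_* return codes are as used by the patch.]

/*
 * Hypothetical internal ioreq handler (editor's sketch, not series code).
 * Emulates one 32-bit scratch register for every config access forwarded
 * to this server.
 */
static uint32_t example_scratch;

static int example_ioreq_handler(struct vcpu *v, ioreq_t *req)
{
    pci_sbdf_t sbdf;

    /* Internal servers are only handed the request types they registered. */
    if ( req->type != IOREQ_TYPE_PCI_CONFIG || req->data_is_ptr )
        return X86EMUL_UNHANDLEABLE;

    sbdf.sbdf = req->addr >> 32;    /* segment/bus/device/function */

    if ( req->dir )                 /* read: return the register value */
        req->data = example_scratch;
    else                            /* write: latch the new value */
        example_scratch = req->data;

    return X86EMUL_OKAY;
}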
From patchwork Wed Aug 21 14:59:03 2019
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:59:03 +0200
Message-ID: <20190821145903.45934-8-roger.pau@citrix.com>
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Jan Beulich, Roger Pau Monne

...and switch vPCI to use this infrastructure for long-running physmap
modification operations.

This allows getting rid of the vPCI-specific modifications to
handle_hvm_io_completion and generalizes the support for long-running
operations so that other internal ioreq servers can use it.

Such support is implemented as a specific handler that can be registered
by internal ioreq servers and that will be called to check for pending
work. Returning true from this handler prevents the vcpu from running
until the handler returns false.
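[Editor's note: to make that contract concrete before the diff, here is a minimal hypothetical pending handler. All example_* names are invented; hvm_add_ioreq_pending_handler() and the "return true while work remains" semantics are the ones introduced below.]

/*
 * Editor's sketch, not part of the patch: shape of a long-running
 * ("pending") handler for an internal ioreq server.
 */
static unsigned long example_work_left;

static bool example_pending(struct vcpu *v)
{
    /* Do a bounded chunk of the long-running operation... */
    if ( example_work_left )
        example_work_left--;    /* stand-in for e.g. one physmap update */

    /* ...and keep the vcpu from resuming while anything remains. */
    return example_work_left != 0;
}

static int example_setup(struct domain *d, ioservid_t id)
{
    /* Pair the regular ioreq handler with the pending handler. */
    return hvm_add_ioreq_pending_handler(d, id, example_pending);
}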
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c         | 55 ++++++++++++++++++++++++++++----
 xen/drivers/vpci/vpci.c          |  3 ++
 xen/include/asm-x86/hvm/domain.h |  1 +
 xen/include/asm-x86/hvm/ioreq.h  |  2 ++
 4 files changed, 55 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index b2582bd3a0..8e160a0a14 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -186,18 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
         if ( s->internal )
+        {
+            if ( s->pending && s->pending(v) )
+            {
+                /*
+                 * Need to raise a scheduler softirq in order to prevent the
+                 * guest vcpu from resuming execution.
+                 *
+                 * Note this is not required for external ioreq operations
+                 * because in that case the vcpu is marked as blocked, but
+                 * this cannot be done for long-running internal operations,
+                 * since it would prevent the vcpu from being scheduled and
+                 * thus the long-running operation from finishing.
+                 */
+                raise_softirq(SCHEDULE_SOFTIRQ);
+                return false;
+            }
             continue;
+        }
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -518,6 +529,38 @@ int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->pending != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->pending = pending;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 510e3ee771..54b0f31612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -508,6 +508,9 @@ int vpci_register_ioreq(struct domain *d)
         return rc;
 
     rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+    rc = hvm_add_ioreq_pending_handler(d, id, vpci_process_pending);
     if ( rc )
         return rc;
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f0be303517..80a38ffe48 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -73,6 +73,7 @@ struct hvm_ioreq_server {
         };
         struct {
             int (*handler)(struct vcpu *v, ioreq_t *);
+            bool (*pending)(struct vcpu *v);
         };
     };
 };
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 10b9586885..cc3e27d059 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -57,6 +57,8 @@ void hvm_ioreq_init(struct domain *d);
 
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v));
 
 int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
                              unsigned int start_bus, unsigned int end_bus,
                              unsigned int seg);
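[Editor's note: as a closing illustration of how the internal/external union in struct hvm_ioreq_server is assumed to be discriminated after this series. The s->internal flag, s->handler and s->pending members are from the series; the dispatch function itself is invented.]

/*
 * Editor's sketch, not series code: the "internal" flag selects which
 * member of the union in struct hvm_ioreq_server is valid.
 */
static int example_dispatch(struct hvm_ioreq_server *s, struct vcpu *v,
                            ioreq_t *req)
{
    if ( s->internal )
        /* Internal servers: call the in-hypervisor handler directly. */
        return s->handler(v, req);

    /*
     * External servers: the request is forwarded to the emulator via the
     * shared ioreq page (path unchanged by this series, elided here).
     */
    return X86EMUL_RETRY;
}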