From patchwork Tue Sep 3 16:14:20 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11128363
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 3 Sep 2019 18:14:20 +0200
Message-ID: <20190903161428.7159-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com>
References: <20190903161428.7159-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 03/11] ioreq: switch selection and forwarding
 to use ioservid_t
List-Id: Xen developer discussion
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Jan Beulich, Roger Pau Monne

hvm_select_ioreq_server and hvm_send_ioreq were both using
struct hvm_ioreq_server directly; switch to using ioservid_t in order
to select and forward ioreqs.

This is a preparatory change, since future patches will use the ioreq
server id in order to differentiate between internal and external
ioreq servers.

Signed-off-by: Roger Pau Monné
Reviewed-by: Paul Durrant
Acked-by: Jan Beulich
---
Changes since v1:
 - New in this version.
---
 xen/arch/x86/hvm/dm.c           |  2 +-
 xen/arch/x86/hvm/emulate.c      | 14 +++++++-------
 xen/arch/x86/hvm/ioreq.c        | 24 ++++++++++++------------
 xen/arch/x86/hvm/stdvga.c       |  8 ++++----
 xen/arch/x86/mm/p2m.c           | 20 ++++++++++----------
 xen/include/asm-x86/hvm/ioreq.h |  5 ++---
 xen/include/asm-x86/p2m.h       |  9 ++++-----
 xen/include/public/hvm/dm_op.h  |  1 +
 8 files changed, 41 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index d6d0e8be89..c2fca9f729 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -263,7 +263,7 @@ static int set_mem_type(struct domain *d,
             return -EOPNOTSUPP;
 
         /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
-        if ( !p2m_get_ioreq_server(d, &flags) )
+        if ( p2m_get_ioreq_server(d, &flags) == XEN_INVALID_IOSERVID )
             return -EINVAL;
     }
 
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index d75d3e6fd6..51d2fcba2d 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -254,7 +254,7 @@ static int hvmemul_do_io(
      * However, there's no cheap approach to avoid above situations in xen,
      * so the device model side needs to check the incoming ioreq event.
      */
-    struct hvm_ioreq_server *s = NULL;
+    ioservid_t id = XEN_INVALID_IOSERVID;
     p2m_type_t p2mt = p2m_invalid;
 
     if ( is_mmio )
@@ -267,9 +267,9 @@ static int hvmemul_do_io(
         {
             unsigned int flags;
 
-            s = p2m_get_ioreq_server(currd, &flags);
+            id = p2m_get_ioreq_server(currd, &flags);
 
-            if ( s == NULL )
+            if ( id == XEN_INVALID_IOSERVID )
             {
                 rc = X86EMUL_RETRY;
                 vio->io_req.state = STATE_IOREQ_NONE;
@@ -289,18 +289,18 @@ static int hvmemul_do_io(
         }
     }
 
-    if ( !s )
-        s = hvm_select_ioreq_server(currd, &p);
+    if ( id == XEN_INVALID_IOSERVID )
+        id = hvm_select_ioreq_server(currd, &p);
 
     /* If there is no suitable backing DM, just ignore accesses */
-    if ( !s )
+    if ( id == XEN_INVALID_IOSERVID )
     {
         rc = hvm_process_io_intercept(&null_handler, &p);
         vio->io_req.state = STATE_IOREQ_NONE;
     }
     else
     {
-        rc = hvm_send_ioreq(s, &p, 0);
+        rc = hvm_send_ioreq(id, &p, 0);
         if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
             vio->io_req.state = STATE_IOREQ_NONE;
         else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 69652e1080..95492bc111 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -39,6 +39,7 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    BUILD_BUG_ON(MAX_NR_IOREQ_SERVERS >= XEN_INVALID_IOSERVID);
 
     d->arch.hvm.ioreq_server.server[id] = s;
 }
@@ -868,7 +869,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    p2m_set_ioreq_server(d, 0, id);
 
     hvm_ioreq_server_disable(s);
 
@@ -1131,7 +1132,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = p2m_set_ioreq_server(d, flags, s);
+    rc = p2m_set_ioreq_server(d, flags, id);
 
  out:
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1255,8 +1256,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p)
 {
     struct hvm_ioreq_server *s;
     uint32_t cf8;
@@ -1265,7 +1265,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     unsigned int id;
 
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
-        return NULL;
+        return XEN_INVALID_IOSERVID;
 
     cf8 = d->arch.hvm.pci_cf8;
 
@@ -1320,7 +1320,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             start = addr;
             end = start + p->size - 1;
             if ( rangeset_contains_range(r, start, end) )
-                return s;
+                return id;
 
             break;
 
@@ -1329,7 +1329,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             end = hvm_mmio_last_byte(p);
 
             if ( rangeset_contains_range(r, start, end) )
-                return s;
+                return id;
 
             break;
 
@@ -1338,14 +1338,14 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             {
                 p->type = IOREQ_TYPE_PCI_CONFIG;
                 p->addr = addr;
-                return s;
+                return id;
             }
 
             break;
         }
     }
 
-    return NULL;
+    return XEN_INVALID_IOSERVID;
 }
 
 static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
@@ -1441,12 +1441,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return X86EMUL_OKAY;
 }
 
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
+int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
     struct hvm_ioreq_vcpu *sv;
+    struct hvm_ioreq_server *s = get_ioreq_server(d, id);
 
     ASSERT(s);
 
@@ -1512,7 +1512,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(id, p, buffered) == X86EMUL_UNHANDLEABLE )
             failed++;
     }
 
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bd398dbb1b..a689269712 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    ioservid_t id;
 
     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }
 
  done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
-    if ( !srv )
+    id = hvm_select_ioreq_server(current->domain, &p);
+    if ( id == XEN_INVALID_IOSERVID )
         return X86EMUL_UNHANDLEABLE;
 
-    return hvm_send_ioreq(srv, &p, 1);
+    return hvm_send_ioreq(id, &p, 1);
 }
 
 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8a5229ee21..43849cbbd9 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -102,6 +102,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
         p2m_pt_init(p2m);
 
     spin_lock_init(&p2m->ioreq.lock);
+    p2m->ioreq.server = XEN_INVALID_IOSERVID;
 
     return ret;
 }
@@ -361,7 +362,7 @@ void p2m_memory_type_changed(struct domain *d)
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         ioservid_t id)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -376,16 +377,16 @@ int p2m_set_ioreq_server(struct domain *d,
     if ( flags == 0 )
     {
         rc = -EINVAL;
-        if ( p2m->ioreq.server != s )
+        if ( p2m->ioreq.server != id )
             goto out;
 
-        p2m->ioreq.server = NULL;
+        p2m->ioreq.server = XEN_INVALID_IOSERVID;
         p2m->ioreq.flags = 0;
     }
     else
     {
         rc = -EBUSY;
-        if ( p2m->ioreq.server != NULL )
+        if ( p2m->ioreq.server != XEN_INVALID_IOSERVID )
             goto out;
 
         /*
@@ -397,7 +398,7 @@ int p2m_set_ioreq_server(struct domain *d,
         if ( read_atomic(&p2m->ioreq.entry_count) )
             goto out;
 
-        p2m->ioreq.server = s;
+        p2m->ioreq.server = id;
         p2m->ioreq.flags = flags;
     }
 
@@ -409,19 +410,18 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }
 
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+ioservid_t p2m_get_ioreq_server(struct domain *d, unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    ioservid_t id;
 
     spin_lock(&p2m->ioreq.lock);
 
-    s = p2m->ioreq.server;
+    id = p2m->ioreq.server;
     *flags = p2m->ioreq.flags;
 
     spin_unlock(&p2m->ioreq.lock);
-    return s;
+    return id;
 }
 
 void p2m_enable_hardware_log_dirty(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e912f..65491c48d2 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -47,9 +47,8 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+ioservid_t hvm_select_ioreq_server(struct domain *d, ioreq_t *p);
+int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 94285db1b4..99a1dab311 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -354,7 +354,7 @@ struct p2m_domain {
          * ioreq server who's responsible for the emulation of
          * gfns with specific p2m type(for now, p2m_ioreq_server).
          */
-        struct hvm_ioreq_server *server;
+        ioservid_t server;
         /*
          * flags specifies whether read, write or both operations
          * are to be emulated by an ioreq server.
@@ -819,7 +819,7 @@ static inline p2m_type_t p2m_recalc_type_range(bool recalc, p2m_type_t t,
     if ( !recalc || !p2m_is_changeable(t) )
         return t;
 
-    if ( t == p2m_ioreq_server && p2m->ioreq.server != NULL )
+    if ( t == p2m_ioreq_server && p2m->ioreq.server != XEN_INVALID_IOSERVID )
         return t;
 
     return p2m_is_logdirty_range(p2m, gfn_start, gfn_end) ? p2m_ram_logdirty
@@ -938,9 +938,8 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }
 
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         ioservid_t id);
+ioservid_t p2m_get_ioreq_server(struct domain *d, unsigned int *flags);
 
 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index d3b554d019..8725cc20d3 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -54,6 +54,7 @@
  */
 
 typedef uint16_t ioservid_t;
+#define XEN_INVALID_IOSERVID 0xffff
 
 /*
  * XEN_DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a