From patchwork Tue Sep 3 16:14:28 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11128367
envelope-from="roger.pau@citrix.com"; x-sender="postmaster@mail.citrix.com"; x-conformance=sidf_compatible IronPort-SDR: JUFmZo9N7wVLT7SRJiZC6ZHCvyss0ikSARoRAAf2BD5hWH2AJIAjrDmGDrwlY+s94DN3DFN88r neStFtCIyw1+U9JJdgVPkkPdoyNKAmow3kfgYinsxfMOI3qslgzCu+aEwn8cSK4MHwiUtzOjrQ 43P99xBx+D+UzRIG1xTWTTqrAEElYyWC7eWY/ySdQ5VdeXyIHF1saNLg3JAeVc0mE+98JOpl6T OJNpVU4eSL3PrLVFMkyd84/u4RZo3XeKjO8Llgf/p3t2fL9C27yVVhcngbvWX1q2Y/nC2JN+Ll eb8= X-SBRS: 2.7 X-MesageID: 5068915 X-Ironport-Server: esa2.hc3370-68.iphmx.com X-Remote-IP: 162.221.158.21 X-Policy: $RELAYED X-IronPort-AV: E=Sophos;i="5.64,463,1559534400"; d="scan'208";a="5068915" From: Roger Pau Monne To: Date: Tue, 3 Sep 2019 18:14:28 +0200 Message-ID: <20190903161428.7159-12-roger.pau@citrix.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190903161428.7159-1-roger.pau@citrix.com> References: <20190903161428.7159-1-roger.pau@citrix.com> MIME-Version: 1.0 Subject: [Xen-devel] [PATCH v2 11/11] ioreq: provide support for long-running operations... X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Stefano Stabellini , Wei Liu , Konrad Rzeszutek Wilk , George Dunlap , Andrew Cooper , Ian Jackson , Tim Deegan , Julien Grall , Paul Durrant , Jan Beulich , Roger Pau Monne Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" ...and switch vPCI to use this infrastructure for long running physmap modification operations. This allows to get rid of the vPCI specific modifications done to handle_hvm_io_completion and allows generalizing the support for long-running operations to other internal ioreq servers. Such support is implemented as a specific handler that can be registers by internal ioreq servers and that will be called to check for pending work. Returning true from this handler will prevent the vcpu from running until the handler returns false. Signed-off-by: Roger Pau Monné --- xen/arch/x86/hvm/ioreq.c | 55 +++++++++++++++++++++++++----- xen/drivers/vpci/header.c | 61 ++++++++++++++++++---------------- xen/drivers/vpci/vpci.c | 8 ++++- xen/include/asm-x86/hvm/vcpu.h | 3 +- xen/include/xen/vpci.h | 6 ---- 5 files changed, 89 insertions(+), 44 deletions(-) diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c index 33c56b880c..caa53dfa84 100644 --- a/xen/arch/x86/hvm/ioreq.c +++ b/xen/arch/x86/hvm/ioreq.c @@ -239,16 +239,48 @@ bool handle_hvm_io_completion(struct vcpu *v) enum hvm_io_completion io_completion; unsigned int id; - if ( has_vpci(d) && vpci_process_pending(v) ) - { - raise_softirq(SCHEDULE_SOFTIRQ); - return false; - } - - FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s) + FOR_EACH_IOREQ_SERVER(d, id, s) { struct hvm_ioreq_vcpu *sv; + if ( hvm_ioreq_is_internal(id) ) + { + if ( vio->io_req.state == STATE_IOREQ_INPROCESS ) + { + ioreq_t req = vio->io_req; + + /* + * Check and convert the PIO/MMIO ioreq to a PCI config space + * access. + */ + convert_pci_ioreq(d, &req); + + if ( s->handler(v, &req, s->data) == X86EMUL_RETRY ) + { + /* + * Need to raise a scheduler irq in order to prevent the + * guest vcpu from resuming execution. + * + * Note this is not required for external ioreq operations + * because in that case the vcpu is marked as blocked, but + * this cannot be done for long-running internal + * operations, since it would prevent the vcpu from being + * scheduled and thus the long running operation from + * finishing. 
+                     */
+                    raise_softirq(SCHEDULE_SOFTIRQ);
+                    return false;
+                }
+
+                /* Finished processing the ioreq. */
+                if ( hvm_ioreq_needs_completion(&vio->io_req) )
+                    vio->io_req.state = STATE_IORESP_READY;
+                else
+                    vio->io_req.state = STATE_IOREQ_NONE;
+            }
+            continue;
+        }
+
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
                               list_entry )
@@ -1582,7 +1614,14 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p, bool buffered)
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( hvm_ioreq_is_internal(id) )
-        return s->handler(curr, proto_p, s->data);
+    {
+        int rc = s->handler(curr, proto_p, s->data);
+
+        if ( rc == X86EMUL_RETRY )
+            curr->arch.hvm.hvm_io.io_req.state = STATE_IOREQ_INPROCESS;
+
+        return rc;
+    }
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return X86EMUL_RETRY;
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 3c794f486d..f1c1a69492 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -129,37 +129,42 @@ static void modify_decoding(const struct pci_dev *pdev, uint16_t cmd,
 
 bool vpci_process_pending(struct vcpu *v)
 {
-    if ( v->vpci.mem )
+    struct map_data data = {
+        .d = v->domain,
+        .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
+    };
+    int rc;
+
+    if ( !v->vpci.mem )
     {
-        struct map_data data = {
-            .d = v->domain,
-            .map = v->vpci.cmd & PCI_COMMAND_MEMORY,
-        };
-        int rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
-
-        if ( rc == -ERESTART )
-            return true;
-
-        spin_lock(&v->vpci.pdev->vpci->lock);
-        /* Disable memory decoding unconditionally on failure. */
-        modify_decoding(v->vpci.pdev,
-                        rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
-                        !rc && v->vpci.rom_only);
-        spin_unlock(&v->vpci.pdev->vpci->lock);
-
-        rangeset_destroy(v->vpci.mem);
-        v->vpci.mem = NULL;
-        if ( rc )
-            /*
-             * FIXME: in case of failure remove the device from the domain.
-             * Note that there might still be leftover mappings. While this is
-             * safe for Dom0, for DomUs the domain will likely need to be
-             * killed in order to avoid leaking stale p2m mappings on
-             * failure.
-             */
-            vpci_remove_device(v->vpci.pdev);
+        ASSERT_UNREACHABLE();
+        return false;
     }
 
+    rc = rangeset_consume_ranges(v->vpci.mem, map_range, &data);
+
+    if ( rc == -ERESTART )
+        return true;
+
+    spin_lock(&v->vpci.pdev->vpci->lock);
+    /* Disable memory decoding unconditionally on failure. */
+    modify_decoding(v->vpci.pdev,
+                    rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd,
+                    !rc && v->vpci.rom_only);
+    spin_unlock(&v->vpci.pdev->vpci->lock);
+
+    rangeset_destroy(v->vpci.mem);
+    v->vpci.mem = NULL;
+    if ( rc )
+        /*
+         * FIXME: in case of failure remove the device from the domain.
+         * Note that there might still be leftover mappings. While this is
+         * safe for Dom0, for DomUs the domain will likely need to be
+         * killed in order to avoid leaking stale p2m mappings on
+         * failure.
+         */
+        vpci_remove_device(v->vpci.pdev);
+
     return false;
 }
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 5664020c2d..6069dff612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -498,6 +498,12 @@ static int ioreq_handler(struct vcpu *v, ioreq_t *req, void *data)
         return X86EMUL_UNHANDLEABLE;
     }
 
+    if ( v->vpci.mem )
+    {
+        ASSERT(req->state == STATE_IOREQ_INPROCESS);
+        return vpci_process_pending(v) ? X86EMUL_RETRY : X86EMUL_OKAY;
+    }
+
     sbdf.sbdf = req->addr >> 32;
 
     if ( req->dir )
@@ -505,7 +511,7 @@
     else
         write(sbdf, req->addr, req->size, req->data);
 
-    return X86EMUL_OKAY;
+    return v->vpci.mem ? X86EMUL_RETRY : X86EMUL_OKAY;
 }
 
 int vpci_register_ioreq(struct domain *d)
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 38f5c2bb9b..4563746466 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -92,7 +92,8 @@ struct hvm_vcpu_io {
 
 static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
 {
-    return ioreq->state == STATE_IOREQ_READY &&
+    return (ioreq->state == STATE_IOREQ_READY ||
+            ioreq->state == STATE_IOREQ_INPROCESS) &&
            !ioreq->data_is_ptr &&
            (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
 }
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 36f435ed5b..a65491e0c9 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -225,12 +225,6 @@ static inline int vpci_register_ioreq(struct domain *d)
 }
 
 static inline void vpci_dump_msi(void) { }
-
-static inline bool vpci_process_pending(struct vcpu *v)
-{
-    ASSERT_UNREACHABLE();
-    return false;
-}
 #endif
 
 #endif
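
For illustration only (not part of the patch): a minimal sketch of how another
internal ioreq server could use this long-running support. The example_*()
helpers and struct example_state are hypothetical; the handler signature, the
X86EMUL_RETRY return value and the STATE_IOREQ_INPROCESS re-entry behaviour
are taken from the changes above.

/*
 * Illustrative sketch only.  example_setup() and struct example_state are
 * made-up names; only the handler prototype and return values match the
 * internal ioreq server interface used by vPCI above.
 */
struct example_state {
    unsigned long remaining;    /* work units still to process */
};

static int example_handler(struct vcpu *v, ioreq_t *req, void *data)
{
    struct example_state *st = data;

    if ( req->state != STATE_IOREQ_INPROCESS )
        /* First invocation for this ioreq: work out how much there is to do. */
        st->remaining = example_setup(v, req);

    /* Process a bounded chunk so a single invocation never runs for too long. */
    st->remaining -= min(st->remaining, 64UL);

    /*
     * More work left: returning X86EMUL_RETRY keeps the ioreq in
     * STATE_IOREQ_INPROCESS, so handle_hvm_io_completion() raises
     * SCHEDULE_SOFTIRQ and re-invokes this handler instead of letting the
     * guest vcpu resume.
     */
    if ( st->remaining )
        return X86EMUL_RETRY;

    return X86EMUL_OKAY;
}

The vPCI handler above follows the same pattern, using v->vpci.mem as the
"work pending" indicator and vpci_process_pending() as the chunked worker.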