From patchwork Wed Aug 21 14:59:03 2019
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 11107267
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Date: Wed, 21 Aug 2019 16:59:03 +0200
Message-ID: <20190821145903.45934-8-roger.pau@citrix.com>
X-Mailer: git-send-email 2.22.0
In-Reply-To: <20190821145903.45934-1-roger.pau@citrix.com>
References: <20190821145903.45934-1-roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 7/7] ioreq: provide support for long-running operations...
Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk, George Dunlap,
 Andrew Cooper, Ian Jackson, Tim Deegan, Julien Grall, Paul Durrant,
 Jan Beulich, Roger Pau Monne

...and switch vPCI to use this infrastructure for long-running physmap
modification operations.

This allows getting rid of the vPCI-specific modifications done to
handle_hvm_io_completion and generalizes the support for long-running
operations so that other internal ioreq servers can use it.

Such support is implemented as a specific handler that can be
registered by internal ioreq servers and that will be called to check
for pending work. Returning true from this handler prevents the vcpu
from running until the handler returns false.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c         | 55 ++++++++++++++++++++++++++++----
 xen/drivers/vpci/vpci.c          |  3 ++
 xen/include/asm-x86/hvm/domain.h |  1 +
 xen/include/asm-x86/hvm/ioreq.h  |  2 ++
 4 files changed, 55 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index b2582bd3a0..8e160a0a14 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -186,18 +186,29 @@ bool handle_hvm_io_completion(struct vcpu *v)
     enum hvm_io_completion io_completion;
     unsigned int id;
 
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct hvm_ioreq_vcpu *sv;
 
         if ( s->internal )
+        {
+            if ( s->pending && s->pending(v) )
+            {
+                /*
+                 * Need to raise a scheduler irq in order to prevent the guest
+                 * vcpu from resuming execution.
+                 *
+                 * Note this is not required for external ioreq operations
+                 * because in that case the vcpu is marked as blocked, but this
+                 * cannot be done for long-running internal operations, since
+                 * it would prevent the vcpu from being scheduled and thus the
+                 * long running operation from finishing.
+                 */
+                raise_softirq(SCHEDULE_SOFTIRQ);
+                return false;
+            }
             continue;
+        }
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -518,6 +529,38 @@ int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v))
+{
+    struct hvm_ioreq_server *s;
+    int rc = 0;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    s = get_ioreq_server(d, id);
+    if ( !s )
+    {
+        rc = -ENOENT;
+        goto out;
+    }
+    if ( !s->internal )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+    if ( s->pending != NULL )
+    {
+        rc = -EBUSY;
+        goto out;
+    }
+
+    s->pending = pending;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
 static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
                                     struct hvm_ioreq_vcpu *sv)
 {
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 510e3ee771..54b0f31612 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -508,6 +508,9 @@ int vpci_register_ioreq(struct domain *d)
         return rc;
 
     rc = hvm_add_ioreq_handler(d, id, ioreq_handler);
+    if ( rc )
+        return rc;
+    rc = hvm_add_ioreq_pending_handler(d, id, vpci_process_pending);
     if ( rc )
         return rc;
 
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f0be303517..80a38ffe48 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -73,6 +73,7 @@ struct hvm_ioreq_server {
         };
         struct {
             int (*handler)(struct vcpu *v, ioreq_t *);
+            bool (*pending)(struct vcpu *v);
         };
     };
 };
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 10b9586885..cc3e27d059 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -57,6 +57,8 @@ void hvm_ioreq_init(struct domain *d);
 
 int hvm_add_ioreq_handler(struct domain *d, ioservid_t id,
                           int (*handler)(struct vcpu *v, ioreq_t *));
+int hvm_add_ioreq_pending_handler(struct domain *d, ioservid_t id,
+                                  bool (*pending)(struct vcpu *v));
 int hvm_ioreq_register_mmcfg(struct domain *d, paddr_t addr,
                              unsigned int start_bus, unsigned int end_bus,
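
For reference, the registration pattern for the new hook mirrors the
vpci.c hunk above. The following is a minimal sketch of how another
internal ioreq server could use it; the foo_* names are hypothetical,
and only hvm_add_ioreq_handler(), hvm_add_ioreq_pending_handler() and
their error semantics come from this patch:

/*
 * Sketch only: foo_* identifiers are made up for illustration. The
 * contract is the one introduced by this patch: while the registered
 * pending handler returns true, handle_hvm_io_completion() raises
 * SCHEDULE_SOFTIRQ and returns false, so the guest vcpu keeps being
 * re-scheduled instead of resuming execution.
 */
static bool foo_process_pending(struct vcpu *v)
{
    /* Do a bounded chunk of the long-running work; true == more left. */
    return foo_work_remaining(v);
}

static int foo_register(struct domain *d, ioservid_t id)
{
    /* id must refer to an internal ioreq server, or -EINVAL results. */
    int rc = hvm_add_ioreq_handler(d, id, foo_ioreq_handler);

    if ( rc )
        return rc;

    /*
     * At most one pending handler per server: a second registration
     * fails with -EBUSY, and an unknown id with -ENOENT.
     */
    return hvm_add_ioreq_pending_handler(d, id, foo_process_pending);
}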