From patchwork Fri Oct 8 05:55:35 2021
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 12544435
List-Id: Xen developer discussion
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
    volodymyr_babchuk@epam.com, Artem_Mygaiev@epam.com, roger.pau@citrix.com,
    jbeulich@suse.com, andrew.cooper3@citrix.com, george.dunlap@citrix.com,
    paul@xen.org, bertrand.marquis@arm.com, rahul.singh@arm.com,
    Oleksandr Andrushchenko
Subject: [PATCH v5 10/10] xen/arm: Process pending vPCI map/unmap operations
Date: Fri, 8 Oct 2021 08:55:35 +0300
Message-Id: <20211008055535.337436-11-andr2000@gmail.com>
In-Reply-To: <20211008055535.337436-1-andr2000@gmail.com>
References: <20211008055535.337436-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

vPCI may map and unmap PCI device memory (BARs) being passed through,
which may take a long time. For this reason those operations may be
deferred, so that they can be safely preempted.

Currently this deferred processing happens in the common IOREQ code,
which does not seem to be the right place for x86 and is even more
doubtful on Arm, where IOREQ may not be enabled at all. So, if the
processing is left in the common IOREQ code only, the pending vPCI work
may have no chance to be executed on Arm. For that reason, make vPCI
processing happen in arch-specific code.

Please be aware that there are a few outstanding TODOs affecting this
code path, see xen/drivers/vpci/header.c:map_range and
xen/drivers/vpci/header.c:vpci_process_pending.
Signed-off-by: Oleksandr Andrushchenko
[x86 changes]
Acked-by: Jan Beulich
Reviewed-by: Stefano Stabellini
Reviewed-by: Rahul Singh
Tested-by: Rahul Singh
---
Cc: Andrew Cooper
Cc: Paul Durrant

Since v2:
- update commit message with more insight on x86, IOREQ and Arm
- restored order of invocation for IOREQ and vPCI processing (Jan)
Since v1:
- Moved the check for pending vpci work from the common IOREQ code to
  hvm_do_resume on x86
- Re-worked the code for Arm to ensure we don't miss pending vPCI work
---
 xen/arch/arm/traps.c   | 13 +++++++++++++
 xen/arch/x86/hvm/hvm.c |  6 ++++++
 xen/common/ioreq.c     |  9 ---------
 3 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 219ab3c3fbde..b246f51086e3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -2304,6 +2305,18 @@ static bool check_for_vcpu_work(void)
     }
 #endif

+    if ( has_vpci(v->domain) )
+    {
+        bool pending;
+
+        local_irq_enable();
+        pending = vpci_process_pending(v);
+        local_irq_disable();
+
+        if ( pending )
+            return true;
+    }
+
     if ( likely(!v->arch.need_flush_to_ram) )
         return false;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index aa418a3ca1b7..c491242e4b8b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -546,6 +546,12 @@ void hvm_do_resume(struct vcpu *v)

     pt_restore_timer(v);

+    if ( has_vpci(v->domain) && vpci_process_pending(v) )
+    {
+        raise_softirq(SCHEDULE_SOFTIRQ);
+        return;
+    }
+
     if ( !vcpu_ioreq_handle_completion(v) )
         return;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index d732dc045df9..689d256544c8 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -25,9 +25,7 @@
 #include
 #include
 #include
-#include
 #include
-#include
 #include
 #include
@@ -212,19 +210,12 @@ static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)

 bool vcpu_ioreq_handle_completion(struct vcpu *v)
 {
-    struct domain *d = v->domain;
     struct vcpu_io *vio = &v->io;
     struct ioreq_server *s;
     struct ioreq_vcpu *sv;
     enum vio_completion completion;
     bool res = true;

-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
     while ( (sv = get_pending_vcpu(v, &s)) != NULL )
         if ( !wait_for_io(sv, get_ioreq(s, v)) )
            return false;