From patchwork Tue Mar 10 12:43:53 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall, Paul Durrant, Andrew Cooper, Varad Gautam, Jan Beulich,
 Roger Pau Monné
Date: Tue, 10 Mar 2020 12:43:53 +0000
Message-ID: <20200310124353.4337-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
Subject: [Xen-devel] [PATCH v5] x86: irq: Do not BUG_ON multiple unbind
 calls for shared pirqs

From: Varad Gautam

XEN_DOMCTL_destroydomain creates a continuation if domain_kill() returns
-ERESTART.
In that scenario, it is possible to receive multiple __pirq_guest_unbind
calls for the same pirq from domain_kill, if the pirq has not yet been
removed from the domain's pirq_tree, as:

  domain_kill()
    -> domain_relinquish_resources()
      -> pci_release_devices()
        -> pci_clean_dpci_irq()
          -> pirq_guest_unbind()
            -> __pirq_guest_unbind()

For a shared pirq (nr_guests > 1), the first call would zap the current
domain from the pirq's guests[] list, but the action handler is never
freed as there are other guests using this pirq. As a result, on the
second call, __pirq_guest_unbind searches for the current domain, which
has already been removed from the guests[] list, and hits a BUG_ON.

Make __pirq_guest_unbind safe to be called multiple times by letting Xen
continue if a shared pirq has already been unbound from this guest. The
pirq will be cleaned up from the domain's pirq_tree during destruction in
complete_domain_destroy anyway.

Signed-off-by: Varad Gautam
[taking over from Varad at v4]
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Julien Grall
Cc: Roger Pau Monné
Cc: Andrew Cooper

Roger suggested cleaning the entry from the domain's pirq_tree so that we
need not make it safe to re-call __pirq_guest_unbind(). This seems like a
reasonable suggestion, but the semantics of the code are almost
impenetrable (e.g. 'pirq' is used to mean an index, a pointer, and is also
the name of a struct, so you generally have little idea what it actually
means), so I prefer to stick with a small fix that I can actually reason
about.

v5:
 - BUG_ON(!shareable) rather than ASSERT(shareable)
 - Drop ASSERT on nr_guests

v4:
 - Re-work the guest array search to make it clearer

v3:
 - Style fixups

v2:
 - Split the check on action->nr_guests > 0 and make it an ASSERT
---
 xen/arch/x86/irq.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index cc2eb8e925..a3701354e6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1680,9 +1680,22 @@ static irq_guest_action_t *__pirq_guest_unbind(
 
     BUG_ON(!(desc->status & IRQ_GUEST));
 
-    for ( i = 0; (i < action->nr_guests) && (action->guest[i] != d); i++ )
-        continue;
-    BUG_ON(i == action->nr_guests);
+    for ( i = 0; i < action->nr_guests; i++ )
+        if ( action->guest[i] == d )
+            break;
+
+    if ( i == action->nr_guests ) /* No matching entry */
+    {
+        /*
+         * In case the pirq was shared, unbound for this domain in an earlier
+         * call, but still existed on the domain's pirq_tree, we still reach
+         * here if there are any later unbind calls on the same pirq. Return
+         * if such an unbind happens.
+         */
+        BUG_ON(!action->shareable);
+        return NULL;
+    }
+
     memmove(&action->guest[i], &action->guest[i+1],
             (action->nr_guests-i-1) * sizeof(action->guest[0]));
     action->nr_guests--;
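
P.S. For anyone who wants to see the failure mode in isolation, below is a
standalone sketch of the new search logic. It is not part of the patch:
'action_t', 'unbind_one' and the plain pointers standing in for struct
domain are simplified stand-ins invented for this illustration, not Xen
APIs; the assert() mirrors the BUG_ON(!action->shareable) in the hunk
above.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for Xen's irq_guest_action_t. */
    typedef struct {
        int shareable;          /* pirq may be bound by multiple guests */
        unsigned int nr_guests; /* current number of bound guests */
        void *guest[8];         /* bound domains (opaque here) */
    } action_t;

    /*
     * Mimics the fixed search in __pirq_guest_unbind(): if the domain is
     * no longer in guest[], that is only legitimate for a shareable pirq
     * that was already unbound for this domain by an earlier call.
     */
    static int unbind_one(action_t *action, void *d)
    {
        unsigned int i;

        for ( i = 0; i < action->nr_guests; i++ )
            if ( action->guest[i] == d )
                break;

        if ( i == action->nr_guests ) /* No matching entry */
        {
            assert(action->shareable); /* BUG_ON(!action->shareable) */
            return 0;                  /* tolerate the repeated unbind */
        }

        memmove(&action->guest[i], &action->guest[i + 1],
                (action->nr_guests - i - 1) * sizeof(action->guest[0]));
        action->nr_guests--;
        return 1;
    }

    int main(void)
    {
        int d1, d2; /* dummies; their addresses stand in for domains */
        action_t a = { .shareable = 1, .nr_guests = 2,
                       .guest = { &d1, &d2 } };

        printf("first unbind:  %d\n", unbind_one(&a, &d1)); /* removed */
        printf("second unbind: %d\n", unbind_one(&a, &d1)); /* tolerated */
        return 0;
    }

With this logic, the second unbind_one() call for the same domain returns
without touching guest[], where the old search would have tripped
BUG_ON(i == action->nr_guests).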