From patchwork Wed Feb 16 15:16:25 2022
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 12748700
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, julien@xen.org,
    sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
    volodymyr_babchuk@epam.com, artem_mygaiev@epam.com,
    bertrand.marquis@arm.com, rahul.singh@arm.com,
    Oleksandr Andrushchenko
Subject: [PATCH 1/4] pci: add rwlock to pcidevs_lock machinery
Date: Wed, 16 Feb 2022 17:16:25 +0200
Message-Id: <20220216151628.1610777-2-andr2000@gmail.com>
In-Reply-To: <20220216151628.1610777-1-andr2000@gmail.com>
References: <20220216151628.1610777-1-andr2000@gmail.com>

Currently the pcidevs lock is a global recursive spinlock, which is fine
for the existing use cases. It is used both to protect pdev instances
themselves from being removed while in use and to make sure that updates
of the relevant pdev properties are synchronized.

As we move towards using vPCI for guests, this becomes problematic in
terms of lock contention. For example, during vpci_{read|write} the
access to pdev must be protected to prevent the pdev from disappearing
under our feet, which needs to be done with the help of
pcidevs_{lock|unlock}. On the other hand, it is highly undesirable to
serialize all other pdev accesses which only use pdevs in read mode,
e.g. those which do not remove or add pdevs.

For the above reasons, introduce a read/write lock which helps prevent
lock contention between pdev readers and writers:

- make pci_{add|remove}_device and setup_hwdom_pci_devices use the new
  write lock
- keep all the rest using the existing API (pcidevs_{lock|unlock}), but
  extend the latter to also acquire the rwlock in read mode.

This is in preparation for vPCI being used for guests.
Signed-off-by: Oleksandr Andrushchenko
---
 xen/drivers/passthrough/pci.c | 45 ++++++++++++++++++++++++++---------
 xen/include/xen/pci.h         |  4 ++++
 2 files changed, 38 insertions(+), 11 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index e8b09d77d880..2a0d3d37a69f 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -51,20 +51,38 @@ struct pci_seg {
 };
 
 static spinlock_t _pcidevs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_RWLOCK(_pcidevs_rwlock);
 
 void pcidevs_lock(void)
 {
+    read_lock(&_pcidevs_rwlock);
     spin_lock_recursive(&_pcidevs_lock);
 }
 
 void pcidevs_unlock(void)
 {
     spin_unlock_recursive(&_pcidevs_lock);
+    read_unlock(&_pcidevs_rwlock);
 }
 
 bool_t pcidevs_locked(void)
 {
-    return !!spin_is_locked(&_pcidevs_lock);
+    return !!spin_is_locked(&_pcidevs_lock) || pcidevs_write_locked();
+}
+
+void pcidevs_write_lock(void)
+{
+    write_lock(&_pcidevs_rwlock);
+}
+
+void pcidevs_write_unlock(void)
+{
+    write_unlock(&_pcidevs_rwlock);
+}
+
+bool pcidevs_write_locked(void)
+{
+    return !!rw_is_write_locked(&_pcidevs_rwlock);
 }
 
 static struct radix_tree_root pci_segments;
@@ -758,7 +776,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
         ret = -ENOMEM;
 
-    pcidevs_lock();
+    pcidevs_write_lock();
     pseg = alloc_pseg(seg);
     if ( !pseg )
         goto out;
@@ -854,7 +872,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
     pci_enable_acs(pdev);
 
  out:
-    pcidevs_unlock();
+    pcidevs_write_unlock();
     if ( !ret )
     {
         printk(XENLOG_DEBUG "PCI add %s %pp\n", pdev_type, &pdev->sbdf);
@@ -885,7 +903,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     if ( !pseg )
         return -ENODEV;
 
-    pcidevs_lock();
+    pcidevs_write_lock();
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
         {
@@ -899,7 +917,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
             break;
         }
 
-    pcidevs_unlock();
+    pcidevs_write_unlock();
 
     return ret;
 }
@@ -1176,6 +1194,11 @@ static void __hwdom_init setup_one_hwdom_device(const
                ctxt->d->domain_id, err);
 }
 
+/*
+ * It's safe to drop and re-acquire the write lock in this context without
+ * risking pdev disappearing because devices cannot be removed until the
+ * initial domain has been started.
+ */
 static int __hwdom_init _setup_hwdom_pci_devices(struct pci_seg *pseg, void *arg)
 {
     struct setup_hwdom *ctxt = arg;
@@ -1208,17 +1231,17 @@ static int __hwdom_init _setup_hwdom_pci_devices(struct pci_seg *pseg, void *arg
 
             if ( iommu_verbose )
             {
-                pcidevs_unlock();
+                pcidevs_write_unlock();
                 process_pending_softirqs();
-                pcidevs_lock();
+                pcidevs_write_lock();
             }
         }
 
         if ( !iommu_verbose )
         {
-            pcidevs_unlock();
+            pcidevs_write_unlock();
             process_pending_softirqs();
-            pcidevs_lock();
+            pcidevs_write_lock();
         }
     }
 
@@ -1230,9 +1253,9 @@ void __hwdom_init setup_hwdom_pci_devices(
 {
     struct setup_hwdom ctxt = { .d = d, .handler = handler };
 
-    pcidevs_lock();
+    pcidevs_write_lock();
     pci_segments_iterate(_setup_hwdom_pci_devices, &ctxt);
-    pcidevs_unlock();
+    pcidevs_write_unlock();
 }
 
 /* APEI not supported on ARM yet. */
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index b6d7e454f814..e814d9542bfc 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -152,6 +152,10 @@ void pcidevs_lock(void);
 void pcidevs_unlock(void);
 bool_t __must_check pcidevs_locked(void);
 
+void pcidevs_write_lock(void);
+void pcidevs_write_unlock(void);
+bool __must_check pcidevs_write_locked(void);
+
 bool_t pci_known_segment(u16 seg);
 bool_t pci_device_detect(u16 seg, u8 bus, u8 dev, u8 func);
 int scan_pci_devices(void);

From patchwork Wed Feb 16 15:16:26 2022
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 12748698
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, julien@xen.org,
    sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
    volodymyr_babchuk@epam.com, artem_mygaiev@epam.com,
    bertrand.marquis@arm.com, rahul.singh@arm.com,
    Oleksandr Andrushchenko
Subject: [PATCH 2/4] vpci: restrict unhandled read/write operations for guests
Date: Wed, 16 Feb 2022 17:16:26 +0200
Message-Id: <20220216151628.1610777-3-andr2000@gmail.com>
In-Reply-To: <20220216151628.1610777-1-andr2000@gmail.com>
References: <20220216151628.1610777-1-andr2000@gmail.com>

A guest would be able to read and write those registers which are not
emulated and have no respective vPCI handlers, making it possible for it
to access the hardware directly. In order to prevent a guest from
reading and writing unhandled registers, make sure only the hardware
domain can access the hardware directly and restrict guests from doing
so.
Suggested-by: Roger Pau Monné
Signed-off-by: Oleksandr Andrushchenko
---
Since v6:
- do not use is_hwdom parameter for vpci_{read|write}_hw and use
  current->domain internally
- update commit message
New in v6
---
 xen/drivers/vpci/vpci.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index fb0947179b79..f564572a51cb 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -213,6 +213,10 @@ static uint32_t vpci_read_hw(pci_sbdf_t sbdf, unsigned int reg,
 {
     uint32_t data;
 
+    /* Guest domains are not allowed to read real hardware. */
+    if ( !is_hardware_domain(current->domain) )
+        return ~(uint32_t)0;
+
     switch ( size )
     {
     case 4:
@@ -253,9 +257,13 @@ static uint32_t vpci_read_hw(pci_sbdf_t sbdf, unsigned int reg,
     return data;
 }
 
-static void vpci_write_hw(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
-                          uint32_t data)
+static void vpci_write_hw(pci_sbdf_t sbdf, unsigned int reg,
+                          unsigned int size, uint32_t data)
 {
+    /* Guest domains are not allowed to write real hardware. */
+    if ( !is_hardware_domain(current->domain) )
+        return;
+
     switch ( size )
     {
     case 4:

From patchwork Wed Feb 16 15:16:27 2022
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 12748701
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, julien@xen.org,
    sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
    volodymyr_babchuk@epam.com, artem_mygaiev@epam.com,
    bertrand.marquis@arm.com, rahul.singh@arm.com,
    Oleksandr Andrushchenko
Subject: [PATCH 3/4] vpci: use pcidevs locking to protect MMIO handlers
Date: Wed, 16 Feb 2022 17:16:27 +0200
Message-Id: <20220216151628.1610777-4-andr2000@gmail.com>
In-Reply-To: <20220216151628.1610777-1-andr2000@gmail.com>
References: <20220216151628.1610777-1-andr2000@gmail.com>

vPCI MMIO handlers access pdevs without protecting that access with
pcidevs_{lock|unlock}. This is not a problem as of now, as these are
only used by Dom0, but as vPCI is also going to be used for guests we
need to properly protect pdev and pdev->vpci from being removed while
still in use. For that, add new locking helpers, pcidevs_read_{un}lock,
together with a means to check whether the lock is held in read mode.

Note that pcidevs_read_{un}lock doesn't acquire the _pcidevs_lock
recursive lock, because its users are not expected to modify a pdev's
contents other than pdev->vpci, which is protected by pdev->vpci->lock
(where appropriate). These new helpers are also suitable for simple pdev
list traversals such as for_each_pdev, pci_get_pdev_by_domain and
others.

This patch adds ASSERTs in the code to check that the rwlock is taken
and in the appropriate mode. Some of these checks require changes to the
initialization of local variables that may be accessed before the ASSERT
checks the locking; for example, see init_bars and mask_write.
Signed-off-by: Oleksandr Andrushchenko --- xen/arch/x86/hvm/vmsi.c | 24 +++++++++++++-- xen/drivers/passthrough/pci.c | 20 +++++++++++++ xen/drivers/vpci/header.c | 24 +++++++++++++-- xen/drivers/vpci/msi.c | 21 +++++++++---- xen/drivers/vpci/msix.c | 55 ++++++++++++++++++++++++++++++----- xen/drivers/vpci/vpci.c | 22 ++++++++++++-- xen/include/xen/pci.h | 5 ++++ xen/include/xen/vpci.h | 2 +- 8 files changed, 151 insertions(+), 22 deletions(-) diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c index 13e2a190b439..5ef8f37ab0fc 100644 --- a/xen/arch/x86/hvm/vmsi.c +++ b/xen/arch/x86/hvm/vmsi.c @@ -889,10 +889,16 @@ void vpci_msix_arch_init_entry(struct vpci_msix_entry *entry) entry->arch.pirq = INVALID_PIRQ; } -int vpci_msix_arch_print(const struct vpci_msix *msix) +int vpci_msix_arch_print(const struct domain *d, const struct vpci_msix *msix) { unsigned int i; + /* + * FIXME: this is not immediately correct, as the lock can be grabbed + * by a different CPU. But this is better then nothing. + */ + ASSERT(pcidevs_read_locked()); + for ( i = 0; i < msix->max_entries; i++ ) { const struct vpci_msix_entry *entry = &msix->entries[i]; @@ -909,11 +915,23 @@ int vpci_msix_arch_print(const struct vpci_msix *msix) if ( i && !(i % 64) ) { struct pci_dev *pdev = msix->pdev; + pci_sbdf_t sbdf = pdev->sbdf; spin_unlock(&msix->pdev->vpci->lock); + pcidevs_read_unlock(); + + /* NB: we still hold rcu_read_lock(&domlist_read_lock); here. */ process_pending_softirqs(); - /* NB: we assume that pdev cannot go away for an alive domain. */ - if ( !pdev->vpci || !spin_trylock(&pdev->vpci->lock) ) + + if ( !pcidevs_read_trylock() ) + return -EBUSY; + pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn); + /* + * FIXME: we may find a re-allocated pdev's copy here. + * Even occupying the same address as before. Do our best. 
+ */ + if ( !pdev || (pdev != msix->pdev) || !pdev->vpci || + !spin_trylock(&pdev->vpci->lock) ) return -EBUSY; if ( pdev->vpci->msix != msix ) { diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c index 2a0d3d37a69f..74fe1c94cf71 100644 --- a/xen/drivers/passthrough/pci.c +++ b/xen/drivers/passthrough/pci.c @@ -70,6 +70,26 @@ bool_t pcidevs_locked(void) return !!spin_is_locked(&_pcidevs_lock) || pcidevs_write_locked(); } +void pcidevs_read_lock(void) +{ + read_lock(&_pcidevs_rwlock); +} + +int pcidevs_read_trylock(void) +{ + return read_trylock(&_pcidevs_rwlock); +} + +void pcidevs_read_unlock(void) +{ + read_unlock(&_pcidevs_rwlock); +} + +bool pcidevs_read_locked(void) +{ + return !!rw_is_locked(&_pcidevs_rwlock); +} + void pcidevs_write_lock(void) { write_lock(&_pcidevs_rwlock); diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c index 40ff79c33f8f..75e972740106 100644 --- a/xen/drivers/vpci/header.c +++ b/xen/drivers/vpci/header.c @@ -142,16 +142,19 @@ bool vpci_process_pending(struct vcpu *v) if ( rc == -ERESTART ) return true; + pcidevs_read_lock(); spin_lock(&v->vpci.pdev->vpci->lock); /* Disable memory decoding unconditionally on failure. */ modify_decoding(v->vpci.pdev, rc ? v->vpci.cmd & ~PCI_COMMAND_MEMORY : v->vpci.cmd, !rc && v->vpci.rom_only); spin_unlock(&v->vpci.pdev->vpci->lock); + pcidevs_read_unlock(); rangeset_destroy(v->vpci.mem); v->vpci.mem = NULL; if ( rc ) + { /* * FIXME: in case of failure remove the device from the domain. * Note that there might still be leftover mappings. While this is @@ -159,7 +162,10 @@ bool vpci_process_pending(struct vcpu *v) * killed in order to avoid leaking stale p2m mappings on * failure. 
*/ + pcidevs_write_lock(); vpci_remove_device(v->vpci.pdev); + pcidevs_write_unlock(); + } } return false; @@ -172,7 +178,16 @@ static int __init apply_map(struct domain *d, const struct pci_dev *pdev, int rc; while ( (rc = rangeset_consume_ranges(mem, map_range, &data)) == -ERESTART ) + { + /* + * It's safe to drop and re-acquire the lock in this context + * without risking pdev disappearing because devices cannot be + * removed until the initial domain has been started. + */ + pcidevs_write_unlock(); process_pending_softirqs(); + pcidevs_write_lock(); + } rangeset_destroy(mem); if ( !rc ) modify_decoding(pdev, cmd, false); @@ -450,10 +465,15 @@ static int init_bars(struct pci_dev *pdev) uint16_t cmd; uint64_t addr, size; unsigned int i, num_bars, rom_reg; - struct vpci_header *header = &pdev->vpci->header; - struct vpci_bar *bars = header->bars; + struct vpci_header *header; + struct vpci_bar *bars; int rc; + ASSERT(pcidevs_write_locked()); + + header = &pdev->vpci->header; + bars = header->bars; + switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f ) { case PCI_HEADER_TYPE_NORMAL: diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c index 5757a7aed20f..4d19b036c98f 100644 --- a/xen/drivers/vpci/msi.c +++ b/xen/drivers/vpci/msi.c @@ -184,12 +184,17 @@ static void mask_write(const struct pci_dev *pdev, unsigned int reg, static int init_msi(struct pci_dev *pdev) { - uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn); - unsigned int pos = pci_find_cap_offset(pdev->seg, pdev->bus, slot, func, - PCI_CAP_ID_MSI); + uint8_t slot, func; + unsigned int pos; uint16_t control; int ret; + ASSERT(pcidevs_write_locked()); + + slot = PCI_SLOT(pdev->devfn); + func = PCI_FUNC(pdev->devfn); + pos = pci_find_cap_offset(pdev->seg, pdev->bus, slot, func, PCI_CAP_ID_MSI); + if ( !pos ) return 0; @@ -277,6 +282,9 @@ void vpci_dump_msi(void) printk("vPCI MSI/MSI-X d%d\n", d->domain_id); + if ( !pcidevs_read_trylock() ) + continue; + for_each_pdev ( d, 
pdev ) { const struct vpci_msi *msi; @@ -310,7 +318,7 @@ void vpci_dump_msi(void) printk(" entries: %u maskall: %d enabled: %d\n", msix->max_entries, msix->masked, msix->enabled); - rc = vpci_msix_arch_print(msix); + rc = vpci_msix_arch_print(d, msix); if ( rc ) { /* @@ -318,12 +326,13 @@ void vpci_dump_msi(void) * holding the lock. */ printk("unable to print all MSI-X entries: %d\n", rc); - process_pending_softirqs(); - continue; + goto pdev_done; } } spin_unlock(&pdev->vpci->lock); + pdev_done: + pcidevs_read_unlock(); process_pending_softirqs(); } } diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c index 846f1b8d7038..f0adfb6c1dbc 100644 --- a/xen/drivers/vpci/msix.c +++ b/xen/drivers/vpci/msix.c @@ -144,9 +144,13 @@ static struct vpci_msix *msix_find(const struct domain *d, unsigned long addr) list_for_each_entry ( msix, &d->arch.hvm.msix_tables, next ) { - const struct vpci_bar *bars = msix->pdev->vpci->header.bars; + const struct vpci_bar *bars; unsigned int i; + if ( !msix->pdev->vpci ) + continue; + + bars = msix->pdev->vpci->header.bars; for ( i = 0; i < ARRAY_SIZE(msix->tables); i++ ) if ( bars[msix->tables[i] & PCI_MSIX_BIRMASK].enabled && VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, i) ) @@ -158,7 +162,13 @@ static struct vpci_msix *msix_find(const struct domain *d, unsigned long addr) static int msix_accept(struct vcpu *v, unsigned long addr) { - return !!msix_find(v->domain, addr); + int rc; + + pcidevs_read_lock(); + rc = !!msix_find(v->domain, addr); + pcidevs_read_unlock(); + + return rc; } static bool access_allowed(const struct pci_dev *pdev, unsigned long addr, @@ -186,17 +196,26 @@ static int msix_read(struct vcpu *v, unsigned long addr, unsigned int len, unsigned long *data) { const struct domain *d = v->domain; - struct vpci_msix *msix = msix_find(d, addr); + struct vpci_msix *msix; const struct vpci_msix_entry *entry; unsigned int offset; *data = ~0ul; + pcidevs_read_lock(); + + msix = msix_find(d, addr); if ( !msix ) + { + 
pcidevs_read_unlock(); return X86EMUL_RETRY; + } if ( !access_allowed(msix->pdev, addr, len) ) + { + pcidevs_read_unlock(); return X86EMUL_OKAY; + } if ( VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, VPCI_MSIX_PBA) ) { @@ -255,6 +274,7 @@ static int msix_read(struct vcpu *v, unsigned long addr, unsigned int len, break; } spin_unlock(&msix->pdev->vpci->lock); + pcidevs_read_unlock(); return X86EMUL_OKAY; } @@ -263,15 +283,24 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len, unsigned long data) { const struct domain *d = v->domain; - struct vpci_msix *msix = msix_find(d, addr); + struct vpci_msix *msix; struct vpci_msix_entry *entry; unsigned int offset; + pcidevs_read_lock(); + + msix = msix_find(d, addr); if ( !msix ) + { + pcidevs_read_unlock(); return X86EMUL_RETRY; + } if ( !access_allowed(msix->pdev, addr, len) ) + { + pcidevs_read_unlock(); return X86EMUL_OKAY; + } if ( VMSIX_ADDR_IN_RANGE(addr, msix->pdev->vpci, VPCI_MSIX_PBA) ) { @@ -294,6 +323,7 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len, } } + pcidevs_read_unlock(); return X86EMUL_OKAY; } @@ -371,6 +401,7 @@ static int msix_write(struct vcpu *v, unsigned long addr, unsigned int len, break; } spin_unlock(&msix->pdev->vpci->lock); + pcidevs_read_unlock(); return X86EMUL_OKAY; } @@ -383,9 +414,13 @@ static const struct hvm_mmio_ops vpci_msix_table_ops = { int vpci_make_msix_hole(const struct pci_dev *pdev) { - struct domain *d = pdev->domain; + struct domain *d; unsigned int i; + ASSERT(pcidevs_read_locked()); + + d = pdev->domain; + if ( !pdev->vpci->msix ) return 0; @@ -430,13 +465,19 @@ int vpci_make_msix_hole(const struct pci_dev *pdev) static int init_msix(struct pci_dev *pdev) { - struct domain *d = pdev->domain; - uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn); + struct domain *d; + uint8_t slot, func; unsigned int msix_offset, i, max_entries; uint16_t control; struct vpci_msix *msix; int rc; + 
ASSERT(pcidevs_write_locked()); + + d = pdev->domain; + slot = PCI_SLOT(pdev->devfn); + func = PCI_FUNC(pdev->devfn); + msix_offset = pci_find_cap_offset(pdev->seg, pdev->bus, slot, func, PCI_CAP_ID_MSIX); if ( !msix_offset ) diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c index f564572a51cb..119219220a70 100644 --- a/xen/drivers/vpci/vpci.c +++ b/xen/drivers/vpci/vpci.c @@ -37,7 +37,9 @@ extern vpci_register_init_t *const __end_vpci_array[]; void vpci_remove_device(struct pci_dev *pdev) { - if ( !has_vpci(pdev->domain) ) + ASSERT(pcidevs_write_locked()); + + if ( !has_vpci(pdev->domain) || !pdev->vpci ) return; spin_lock(&pdev->vpci->lock); @@ -62,6 +64,8 @@ int vpci_add_handlers(struct pci_dev *pdev) unsigned int i; int rc = 0; + ASSERT(pcidevs_write_locked()); + if ( !has_vpci(pdev->domain) ) return 0; @@ -136,6 +140,8 @@ int vpci_add_register(struct vpci *vpci, vpci_read_t *read_handler, struct list_head *prev; struct vpci_register *r; + ASSERT(pcidevs_write_locked()); + /* Some sanity checks. */ if ( (size != 1 && size != 2 && size != 4) || offset >= PCI_CFG_SPACE_EXP_SIZE || (offset & (size - 1)) || @@ -183,6 +189,8 @@ int vpci_remove_register(struct vpci *vpci, unsigned int offset, const struct vpci_register r = { .offset = offset, .size = size }; struct vpci_register *rm; + ASSERT(pcidevs_write_locked()); + spin_lock(&vpci->lock); list_for_each_entry ( rm, &vpci->handlers, node ) { @@ -330,10 +338,14 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size) return data; } + pcidevs_read_lock(); /* Find the PCI dev matching the address. 
*/ pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn); - if ( !pdev ) + if ( !pdev || (pdev && !pdev->vpci) ) + { + pcidevs_read_unlock(); return vpci_read_hw(sbdf, reg, size); + } spin_lock(&pdev->vpci->lock); @@ -379,6 +391,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size) ASSERT(data_offset < size); } spin_unlock(&pdev->vpci->lock); + pcidevs_read_unlock(); if ( data_offset < size ) { @@ -441,9 +454,11 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size, * Find the PCI dev matching the address. * Passthrough everything that's not trapped. */ + pcidevs_read_lock(); pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn); - if ( !pdev ) + if ( !pdev || (pdev && !pdev->vpci) ) { + pcidevs_read_unlock(); vpci_write_hw(sbdf, reg, size, data); return; } @@ -484,6 +499,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size, ASSERT(data_offset < size); } spin_unlock(&pdev->vpci->lock); + pcidevs_read_unlock(); if ( data_offset < size ) /* Tailing gap, write the remaining. 
 */
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index e814d9542bfc..53cea3167b1f 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -152,6 +152,11 @@ void pcidevs_lock(void);
 void pcidevs_unlock(void);
 bool_t __must_check pcidevs_locked(void);
 
+void pcidevs_read_lock(void);
+int pcidevs_read_trylock(void);
+void pcidevs_read_unlock(void);
+bool __must_check pcidevs_read_locked(void);
+
 void pcidevs_write_lock(void);
 void pcidevs_write_unlock(void);
 bool __must_check pcidevs_write_locked(void);
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index e8ac1eb39513..c5f0054026a4 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -173,7 +173,7 @@ int __must_check vpci_msix_arch_enable_entry(struct vpci_msix_entry *entry,
 int __must_check vpci_msix_arch_disable_entry(struct vpci_msix_entry *entry,
                                               const struct pci_dev *pdev);
 void vpci_msix_arch_init_entry(struct vpci_msix_entry *entry);
-int vpci_msix_arch_print(const struct vpci_msix *msix);
+int vpci_msix_arch_print(const struct domain *d, const struct vpci_msix *msix);
 
 /*
  * Helper functions to fetch MSIX related data.
 * They are used by both the

From patchwork Wed Feb 16 15:16:28 2022
X-Patchwork-Submitter: Oleksandr Andrushchenko
X-Patchwork-Id: 12748697
From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, julien@xen.org,
    sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
    volodymyr_babchuk@epam.com, artem_mygaiev@epam.com,
    bertrand.marquis@arm.com, rahul.singh@arm.com,
    Oleksandr Andrushchenko
Subject: [PATCH 4/4] vpci: resolve possible clash while removing BAR overlaps
Date: Wed, 16 Feb 2022 17:16:28 +0200
Message-Id: <20220216151628.1610777-5-andr2000@gmail.com>
In-Reply-To: <20220216151628.1610777-1-andr2000@gmail.com>
References: <20220216151628.1610777-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

modify_bars checks whether the BAR memory being mapped has already been
mapped while handling another device's BARs or, while unmapping, whether
it is still in use by other devices. With the existing locking scheme
other CPUs may run the same check in parallel, because we only hold the
read lock and do not acquire the recursive pcidevs_lock. To prevent
this, upgrade the read lock to the normal pcidevs_lock for the duration
of the BAR overlap check.

Signed-off-by: Oleksandr Andrushchenko

---
 xen/drivers/vpci/header.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 75e972740106..c80a8bb5e3e0 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -281,7 +281,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
     /*
      * Check for overlaps with other BARs. Note that only BARs that are
      * currently mapped (enabled) are checked for overlaps.
+     * We are holding pcidevs_read_lock here, but need to access several
+     * devices at once, so upgrade the current read lock to the normal
+     * pcidevs_lock.
      */
+    pcidevs_lock();
     for_each_pdev ( pdev->domain, tmp )
     {
         if ( tmp == pdev )
@@ -321,10 +325,12 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
                 printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
                        start, end, rc);
                 rangeset_destroy(mem);
+                pcidevs_unlock();
                 return rc;
             }
         }
     }
+    pcidevs_unlock();
 
     ASSERT(dev);