From patchwork Tue Mar 10 07:28:50 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian, Stefano Stabellini,
    Julien Grall, Jun Nakajima, Wei Liu, Andrew Cooper, Ian Jackson,
    George Dunlap, Jan Beulich, Roger Pau Monné
Date: Tue, 10 Mar 2020 08:28:50 +0100
Message-Id: <20200310072853.27567-4-jgross@suse.com>
In-Reply-To: <20200310072853.27567-1-jgross@suse.com>
References: <20200310072853.27567-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH v4 3/6] xen: add process_pending_softirqs_norcu() for keyhandlers

Some keyhandlers are calling process_pending_softirqs() while holding
an rcu_read_lock(). This is wrong, as process_pending_softirqs() might
activate RCU processing, which must not happen inside an
rcu_read_lock() section.

Add process_pending_softirqs_norcu(), which will not do any RCU
activity, and use it in those keyhandlers.
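To make the problem concrete, here is a minimal sketch of the pattern
the affected keyhandlers share (the handler name is hypothetical,
modelled on dump_numa() and vpci_dump_msi() below). Before this patch
the marked call was process_pending_softirqs(), which may invoke
rcu_check_callbacks() and thereby let a grace period complete while
the CPU is still inside the read-side critical section:

/* Hypothetical keyhandler, for illustration only - not part of this patch. */
static void dump_example(unsigned char key)
{
    struct domain *d;

    /* for_each_domain() requires the domain list RCU read lock. */
    rcu_read_lock(&domlist_read_lock);

    for_each_domain ( d )
    {
        /*
         * Drain softirqs periodically so a long dump doesn't trip the
         * watchdog. Using process_pending_softirqs() here could start
         * RCU processing inside the read-side critical section, hence
         * the norcu variant.
         */
        process_pending_softirqs_norcu();

        printk("d%d\n", d->domain_id);
    }

    rcu_read_unlock(&domlist_read_lock);
}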
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- add RCU_SOFTIRQ to ignore in process_pending_softirqs_norcu()
  (Roger Pau Monné)
---
 xen/arch/x86/mm/p2m-ept.c                   |  2 +-
 xen/arch/x86/numa.c                         |  4 ++--
 xen/common/keyhandler.c                     |  6 +++---
 xen/common/softirq.c                        | 17 +++++++++++++----
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  2 +-
 xen/drivers/passthrough/vtd/iommu.c         |  2 +-
 xen/drivers/vpci/msi.c                      |  4 ++--
 xen/include/xen/softirq.h                   |  2 ++
 8 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index eb0f0edfef..f6e813e061 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1344,7 +1344,7 @@ static void ept_dump_p2m_table(unsigned char key)
                            c ?: ept_entry->ipat ? '!' : ' ');
 
                 if ( !(record_counter++ % 100) )
-                    process_pending_softirqs();
+                    process_pending_softirqs_norcu();
             }
             unmap_domain_page(table);
         }
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index f1066c59c7..cf6fcc9966 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -418,7 +418,7 @@ static void dump_numa(unsigned char key)
     printk("Memory location of each domain:\n");
     for_each_domain ( d )
     {
-        process_pending_softirqs();
+        process_pending_softirqs_norcu();
 
         printk("Domain %u (total: %u):\n", d->domain_id,
                domain_tot_pages(d));
@@ -462,7 +462,7 @@ static void dump_numa(unsigned char key)
         for ( j = 0; j < d->max_vcpus; j++ )
         {
             if ( !(j & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             if ( vnuma->vcpu_to_vnode[j] == i )
             {
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..0d32bc4e2a 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -263,7 +263,7 @@ static void dump_domains(unsigned char key)
     {
         unsigned int i;
 
-        process_pending_softirqs();
+        process_pending_softirqs_norcu();
 
         printk("General information for domain %u:\n", d->domain_id);
         printk("    refcnt=%d dying=%d pause_count=%d\n",
@@ -307,7 +307,7 @@ static void dump_domains(unsigned char key)
             for_each_sched_unit_vcpu ( unit, v )
             {
                 if ( !(v->vcpu_id & 0x3f) )
-                    process_pending_softirqs();
+                    process_pending_softirqs_norcu();
 
                 printk("    VCPU%d: CPU%d [has=%c] poll=%d "
                        "upcall_pend=%02x upcall_mask=%02x ",
@@ -337,7 +337,7 @@ static void dump_domains(unsigned char key)
         for_each_vcpu ( d, v )
        {
             if ( !(v->vcpu_id & 0x3f) )
-                process_pending_softirqs();
+                process_pending_softirqs_norcu();
 
             printk("Notifying guest %d:%d (virq %d, port %d)\n",
                    d->domain_id, v->vcpu_id,
diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index b83ad96d6c..30beb27ae9 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -25,7 +25,7 @@ static softirq_handler softirq_handlers[NR_SOFTIRQS];
 static DEFINE_PER_CPU(cpumask_t, batch_mask);
 static DEFINE_PER_CPU(unsigned int, batching);
 
-static void __do_softirq(unsigned long ignore_mask)
+static void __do_softirq(unsigned long ignore_mask, bool rcu_allowed)
 {
     unsigned int i, cpu;
     unsigned long pending;
@@ -38,7 +38,7 @@ static void __do_softirq(unsigned long ignore_mask)
      */
     cpu = smp_processor_id();
 
-    if ( rcu_pending(cpu) )
+    if ( rcu_allowed && rcu_pending(cpu) )
         rcu_check_callbacks(cpu);
 
     if ( ((pending = (softirq_pending(cpu) & ~ignore_mask)) == 0)
@@ -55,13 +55,22 @@ void process_pending_softirqs(void)
 {
     ASSERT(!in_irq() && local_irq_is_enabled());
     /* Do not enter scheduler as it can preempt the calling context. */
-    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ));
+    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ),
+                 true);
+}
+
+void process_pending_softirqs_norcu(void)
+{
+    ASSERT(!in_irq() && local_irq_is_enabled());
+    /* Do not enter scheduler as it can preempt the calling context. */
+    __do_softirq((1ul << SCHEDULE_SOFTIRQ) | (1ul << SCHED_SLAVE_SOFTIRQ) |
+                 (1ul << RCU_SOFTIRQ), false);
 }
 
 void do_softirq(void)
 {
     ASSERT_NOT_IN_ATOMIC();
-    __do_softirq(0);
+    __do_softirq(0, true);
 }
 
 void open_softirq(int nr, softirq_handler handler)
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 3112653960..880d64c748 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -587,7 +587,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
         struct amd_iommu_pte *pde = &table_vaddr[index];
 
         if ( !(index % 2) )
-            process_pending_softirqs();
+            process_pending_softirqs_norcu();
 
         if ( !pde->pr )
             continue;
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 3d60976dd5..c7bd8d4ada 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2646,7 +2646,7 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     for ( i = 0; i < PTE_NUM; i++ )
     {
         if ( !(i % 2) )
-            process_pending_softirqs();
+            process_pending_softirqs_norcu();
 
         pte = &pt_vaddr[i];
         if ( !dma_pte_present(*pte) )
diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c
index 75010762ed..1d337604cc 100644
--- a/xen/drivers/vpci/msi.c
+++ b/xen/drivers/vpci/msi.c
@@ -321,13 +321,13 @@ void vpci_dump_msi(void)
                      * holding the lock.
                      */
                     printk("unable to print all MSI-X entries: %d\n", rc);
-                    process_pending_softirqs();
+                    process_pending_softirqs_norcu();
                     continue;
                 }
             }
 
             spin_unlock(&pdev->vpci->lock);
-            process_pending_softirqs();
+            process_pending_softirqs_norcu();
         }
     }
     rcu_read_unlock(&domlist_read_lock);
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..b5bf3b83b1 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -37,7 +37,9 @@ void cpu_raise_softirq_batch_finish(void);
 * Process pending softirqs on this CPU. This should be called periodically
 * when performing work that prevents softirqs from running in a timely manner.
 * Use this instead of do_softirq() when you do not want to be preempted.
+ * The norcu variant is to be used while holding an rcu_read_lock().
 */
 void process_pending_softirqs(void);
+void process_pending_softirqs_norcu(void);
 
 #endif /* __XEN_SOFTIRQ_H__ */
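
As a usage note (a reading of the hunks above, not additional patch
content): the norcu variant suppresses RCU work on both paths in
__do_softirq(). rcu_check_callbacks() is skipped because rcu_allowed
is false, and RCU_SOFTIRQ is part of ignore_mask, so a raised RCU
softirq is not lost but simply stays pending until a later
unrestricted run, e.g. do_softirq() from the idle loop. A sketch of
the resulting call-site rule, with a hypothetical function name:

/* Illustrative only - not part of this patch. */
static void norcu_usage_example(void)
{
    /*
     * No RCU read-side lock held here: the plain variant is fine and
     * lets RCU callbacks make progress.
     */
    process_pending_softirqs();

    rcu_read_lock(&domlist_read_lock);

    /*
     * Inside the read-side critical section no RCU processing may
     * happen; a pending RCU_SOFTIRQ stays set in softirq_pending(cpu)
     * for a later unrestricted run.
     */
    process_pending_softirqs_norcu();

    rcu_read_unlock(&domlist_read_lock);
}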