From patchwork Mon Feb 17 07:20:05 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11385477
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich
Date: Mon, 17 Feb 2020 08:20:05 +0100
Message-Id: <20200217072006.20211-2-jgross@suse.com>
In-Reply-To: <20200217072006.20211-1-jgross@suse.com>
References: <20200217072006.20211-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 1/2] xen/rcu: use rcu softirq for forcing quiescent state

As rcu callbacks are processed in __do_softirq() there is no need to use
the scheduling softirq for forcing quiescent state. Any other softirq
would do the job, and the scheduling one is the most expensive. So use
the already existing rcu softirq for that purpose.

To tell apart why the rcu softirq was raised, add a flag recording the
current usage; a simplified model of the resulting dispatch follows the
diff below.
Signed-off-by: Juergen Gross
---
 xen/common/rcupdate.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 91d4ad0fd8..079ea9d8a1 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -89,6 +89,8 @@ struct rcu_data {
     /* 3) idle CPUs handling */
     struct timer idle_timer;
     bool idle_timer_active;
+
+    bool process_callbacks;
 };
 
 /*
@@ -194,7 +196,7 @@ static void force_quiescent_state(struct rcu_data *rdp,
                                   struct rcu_ctrlblk *rcp)
 {
     cpumask_t cpumask;
-    raise_softirq(SCHEDULE_SOFTIRQ);
+    raise_softirq(RCU_SOFTIRQ);
     if (unlikely(rdp->qlen - rdp->last_rs_qlen > rsinterval)) {
         rdp->last_rs_qlen = rdp->qlen;
         /*
@@ -202,7 +204,7 @@ static void force_quiescent_state(struct rcu_data *rdp,
          * rdp->cpu is the current cpu.
          */
         cpumask_andnot(&cpumask, &rcp->cpumask, cpumask_of(rdp->cpu));
-        cpumask_raise_softirq(&cpumask, SCHEDULE_SOFTIRQ);
+        cpumask_raise_softirq(&cpumask, RCU_SOFTIRQ);
     }
 }
 
@@ -259,7 +261,10 @@ static void rcu_do_batch(struct rcu_data *rdp)
     if (!rdp->donelist)
         rdp->donetail = &rdp->donelist;
     else
+    {
+        rdp->process_callbacks = true;
         raise_softirq(RCU_SOFTIRQ);
+    }
 }
 
 /*
@@ -410,7 +415,13 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp,
 
 static void rcu_process_callbacks(void)
 {
-    __rcu_process_callbacks(&rcu_ctrlblk, &this_cpu(rcu_data));
+    struct rcu_data *rdp = &this_cpu(rcu_data);
+
+    if ( rdp->process_callbacks )
+    {
+        rdp->process_callbacks = false;
+        __rcu_process_callbacks(&rcu_ctrlblk, rdp);
+    }
 }
 
 static int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
@@ -518,6 +529,9 @@ static void rcu_idle_timer_handler(void* data)
 
 void rcu_check_callbacks(int cpu)
 {
+    struct rcu_data *rdp = &this_cpu(rcu_data);
+
+    rdp->process_callbacks = true;
     raise_softirq(RCU_SOFTIRQ);
 }
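For readers unfamiliar with the softirq plumbing, the following stand-alone C
model sketches the dispatch this patch introduces. It is not Xen code: the
names cpu_rcu_state, raise_rcu_softirq and rcu_softirq_handler are invented
for illustration, and the real per-CPU state and softirq machinery live in
xen/common/rcupdate.c and xen/common/softirq.c.

```c
/*
 * User-space model of the dispatch added by the patch, NOT Xen code.
 * The per-CPU flag records *why* RCU_SOFTIRQ was raised, so a raise done
 * purely to force a quiescent state does not trigger callback processing.
 */
#include <stdbool.h>
#include <stdio.h>

struct cpu_rcu_state {
    bool softirq_pending;     /* models the raised RCU_SOFTIRQ bit */
    bool process_callbacks;   /* models rdp->process_callbacks */
    int pending_callbacks;    /* models the length of rdp->donelist */
};

static void raise_rcu_softirq(struct cpu_rcu_state *rdp, bool process)
{
    if (process)
        rdp->process_callbacks = true;
    rdp->softirq_pending = true;
}

/* Models rcu_process_callbacks() after the patch. */
static void rcu_softirq_handler(int cpu, struct cpu_rcu_state *rdp)
{
    rdp->softirq_pending = false;

    if (rdp->process_callbacks) {
        rdp->process_callbacks = false;
        printf("cpu%d: processing %d callbacks\n", cpu, rdp->pending_callbacks);
        rdp->pending_callbacks = 0;
    } else {
        /* Raised only to force a quiescent state: nothing to process. */
        printf("cpu%d: softirq pass only, callbacks stay queued\n", cpu);
    }
}

int main(void)
{
    struct cpu_rcu_state rdp = { .pending_callbacks = 3 };

    /* force_quiescent_state(): raise without requesting callback work. */
    raise_rcu_softirq(&rdp, false);
    rcu_softirq_handler(0, &rdp);

    /* rcu_check_callbacks()/rcu_do_batch(): raise and request processing. */
    raise_rcu_softirq(&rdp, true);
    rcu_softirq_handler(0, &rdp);

    return 0;
}
```

In the model, as in the patch, the raise used for forcing a quiescent state
costs only one pass through the softirq handler instead of a run of the
scheduler.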
From patchwork Mon Feb 17 07:20:06 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11385473
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Wei Liu,
 Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson, Jan Beulich
Date: Mon, 17 Feb 2020 08:20:06 +0100
Message-Id: <20200217072006.20211-3-jgross@suse.com>
In-Reply-To: <20200217072006.20211-1-jgross@suse.com>
References: <20200217072006.20211-1-jgross@suse.com>
Subject: [Xen-devel] [PATCH 2/2] xen/rcu: don't use stop_machine_run() for rcu_barrier()

Today rcu_barrier() calls stop_machine_run() to synchronize all physical
cpus in order to ensure all pending rcu calls have finished when it
returns. As stop_machine_run() uses tasklets, this requires scheduling of
idle vcpus on all cpus, imposing the need to call rcu_barrier() on idle
cpus only in case of core scheduling being active, as otherwise a
scheduling deadlock would occur.

There is no need at all to do the syncing of the cpus in tasklets, as rcu
activity is started in __do_softirq(), which is called whenever softirq
activity is allowed. So rcu_barrier() can easily be modified to use
softirq for synchronization of the cpus, no longer requiring any
scheduling activity. As there already is an rcu softirq, reuse it for the
synchronization; a stand-alone model of the counting scheme follows the
diff below.

Finally, switch rcu_barrier() to return void, as it can no longer fail.

Signed-off-by: Juergen Gross
---
 xen/common/rcupdate.c      | 49 ++++++++++++++++++++++++++--------------------
 xen/include/xen/rcupdate.h |  2 +-
 2 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 079ea9d8a1..1f02a804e3 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -143,47 +143,51 @@ static int qhimark = 10000;
 static int qlowmark = 100;
 static int rsinterval = 1000;
 
-struct rcu_barrier_data {
-    struct rcu_head head;
-    atomic_t *cpu_count;
-};
+/*
+ * rcu_barrier() handling:
+ * cpu_count holds the number of cpu required to finish barrier handling.
+ * Cpus are synchronized via softirq mechanism. rcu_barrier() is regarded to
+ * be active if cpu_count is not zero. In case rcu_barrier() is called on
+ * multiple cpus it is enough to check for cpu_count being not zero on entry
+ * and to call process_pending_softirqs() in a loop until cpu_count drops to
+ * zero, as syncing has been requested already and we don't need to sync
+ * multiple times.
+ */
+static atomic_t cpu_count = ATOMIC_INIT(0);
 
 static void rcu_barrier_callback(struct rcu_head *head)
 {
-    struct rcu_barrier_data *data = container_of(
-        head, struct rcu_barrier_data, head);
-    atomic_inc(data->cpu_count);
+    atomic_dec(&cpu_count);
 }
 
-static int rcu_barrier_action(void *_cpu_count)
+static void rcu_barrier_action(void)
 {
-    struct rcu_barrier_data data = { .cpu_count = _cpu_count };
-
-    ASSERT(!local_irq_is_enabled());
-    local_irq_enable();
+    struct rcu_head head;
 
     /*
      * When callback is executed, all previously-queued RCU work on this CPU
      * is completed. When all CPUs have executed their callback, data.cpu_count
      * will have been incremented to include every online CPU.
      */
-    call_rcu(&data.head, rcu_barrier_callback);
+    call_rcu(&head, rcu_barrier_callback);
 
-    while ( atomic_read(data.cpu_count) != num_online_cpus() )
+    while ( atomic_read(&cpu_count) )
     {
         process_pending_softirqs();
         cpu_relax();
     }
-
-    local_irq_disable();
-
-    return 0;
 }
 
-int rcu_barrier(void)
+void rcu_barrier(void)
 {
-    atomic_t cpu_count = ATOMIC_INIT(0);
-    return stop_machine_run(rcu_barrier_action, &cpu_count, NR_CPUS);
+    if ( !atomic_cmpxchg(&cpu_count, 0, num_online_cpus()) )
+        cpumask_raise_softirq(&cpu_online_map, RCU_SOFTIRQ);
+
+    while ( atomic_read(&cpu_count) )
+    {
+        process_pending_softirqs();
+        cpu_relax();
+    }
 }
 
 /* Is batch a before batch b ? */
@@ -422,6 +426,9 @@ static void rcu_process_callbacks(void)
         rdp->process_callbacks = false;
         __rcu_process_callbacks(&rcu_ctrlblk, rdp);
     }
+
+    if ( atomic_read(&cpu_count) )
+        rcu_barrier_action();
 }
 
 static int __rcu_pending(struct rcu_ctrlblk *rcp, struct rcu_data *rdp)
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 174d058113..87f35b7704 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -143,7 +143,7 @@ void rcu_check_callbacks(int cpu);
 void call_rcu(struct rcu_head *head,
               void (*func)(struct rcu_head *head));
 
-int rcu_barrier(void);
+void rcu_barrier(void);
 
 void rcu_idle_enter(unsigned int cpu);
 void rcu_idle_exit(unsigned int cpu);
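To make the new synchronization easier to follow, here is a stand-alone
user-space model of the counting scheme (compile with `cc -pthread`). It is
not Xen code: threads stand in for CPUs, sched_yield() stands in for
process_pending_softirqs(), barrier_requested stands in for the raised
RCU_SOFTIRQ, and all names are invented for illustration. In the hypervisor
the calling CPU is itself one of the online CPUs and participates through its
own softirq handler rather than a separate thread.

```c
/* User-space model of the rcu_barrier() counting scheme, NOT Xen code. */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>
#include <sched.h>

#define NCPUS 4

static atomic_int cpu_count;          /* models the global cpu_count */
static atomic_int barrier_requested;  /* models the raised RCU_SOFTIRQ */

/* Models rcu_barrier_callback(): runs after all prior rcu work on a CPU. */
static void barrier_callback(int cpu)
{
    printf("cpu%d: prior rcu work done, decrementing\n", cpu);
    atomic_fetch_sub(&cpu_count, 1);
}

/* Models the per-CPU softirq handler part added to rcu_process_callbacks(). */
static void *cpu_softirq_loop(void *arg)
{
    int cpu = (int)(long)arg;

    while (!atomic_load(&barrier_requested))
        sched_yield();
    if (atomic_load(&cpu_count))
        barrier_callback(cpu);        /* models rcu_barrier_action() */
    return NULL;
}

/* Models the new rcu_barrier(): request once, then wait for the count. */
static void model_rcu_barrier(void)
{
    int expected = 0;

    if (atomic_compare_exchange_strong(&cpu_count, &expected, NCPUS))
        atomic_store(&barrier_requested, 1);   /* "raise" on all CPUs */

    while (atomic_load(&cpu_count))
        sched_yield();                         /* process_pending_softirqs() */
}

int main(void)
{
    pthread_t cpus[NCPUS];

    for (long i = 0; i < NCPUS; i++)
        pthread_create(&cpus[i], NULL, cpu_softirq_loop, (void *)i);

    model_rcu_barrier();
    printf("rcu_barrier complete: all %d cpus reported\n", NCPUS);

    for (int i = 0; i < NCPUS; i++)
        pthread_join(cpus[i], NULL);
    return 0;
}
```

The cmpxchg from 0 is what lets concurrent callers coexist: a second caller
finds cpu_count already non-zero and simply waits for the barrier in progress
to drain, which is exactly the behaviour described in the comment the patch
adds above cpu_count.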