From patchwork Mon Nov 7 09:55:13 2022
X-Patchwork-Submitter: Bagas Sanjaya
X-Patchwork-Id: 13034222
Date: Mon, 7 Nov 2022 16:55:13 +0700
From: Bagas Sanjaya
To: "Paul E. McKenney"
McKenney" Cc: Stephen Rothwell , Linux Kernel Mailing List , Linux Next Mailing List , Frederic Weisbecker , Neeraj Upadhyay , Josh Triplett , Steven Rostedt , Mathieu Desnoyers , Lai Jiangshan , Joel Fernandes , Jonathan Corbet , rcu@vger.kernel.org, linux-doc@vger.kernel.org Subject: [PATCH] Documentation: RCU: use code blocks with autogenerated line (was: Re: linux-next: build warning after merge of the rcu tree) Message-ID: References: <20221107142641.527396ea@canb.auug.org.au> <20221107050212.GG28461@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 Content-Disposition: inline In-Reply-To: <20221107050212.GG28461@paulmck-ThinkPad-P17-Gen-1> Precedence: bulk List-ID: X-Mailing-List: rcu@vger.kernel.org On Sun, Nov 06, 2022 at 09:02:12PM -0800, Paul E. McKenney wrote: > On Mon, Nov 07, 2022 at 02:26:41PM +1100, Stephen Rothwell wrote: > > Hi all, > > > > After merging the rcu tree, today's linux-next build (htmldocs) > > produced this warning: > > > > Documentation/RCU/rcubarrier.rst:205: WARNING: Literal block ends without a blank line; unexpected unindent. > > > > Introduced by commit > > > > 21c2e3909721 ("doc: Update rcubarrier.rst") > > Huh. I guess that numbered code samples are not supposed to have more > than nine lines? Ah well, easy to fix by going back to left-justified > numbers. I was wondering about that! > I think the proper fix is just let Sphinx generates line number: ---- >8 ---- From 5cea54b61b2dbf2ec5e00479c6c8f9e190b49091 Mon Sep 17 00:00:00 2001 From: Bagas Sanjaya Date: Mon, 7 Nov 2022 15:41:31 +0700 Subject: [PATCH] Documentation: RCU: use code blocks with autogenerated line numbers in RCU barrier doc Recently Stephen Rothwell reported htmldocs warning in Documentation/RCU/rcubarrier.rst when merging rcu tree [1], which has been fixed by left-justifying the culprit line numbers. However, similar issues can occur when line numbers are manually added without caution. Instead, use code-block syntax with :linenos: option, which automatically generates line numbers in code snippets. Link: https://lore.kernel.org/linux-next/20221107142641.527396ea@canb.auug.org.au/ [1] Signed-off-by: Bagas Sanjaya --- Documentation/RCU/rcubarrier.rst | 213 +++++++++++++++++-------------- 1 file changed, 114 insertions(+), 99 deletions(-) base-commit: 2c9ea2f2e0a56cbf6931992812bffe47506f23d0 Thanks. diff --git a/Documentation/RCU/rcubarrier.rst b/Documentation/RCU/rcubarrier.rst index 5a643e5233d5f6..79adf39838653e 100644 --- a/Documentation/RCU/rcubarrier.rst +++ b/Documentation/RCU/rcubarrier.rst @@ -70,81 +70,87 @@ If your module uses multiple srcu_struct structures, then it must also use multiple invocations of srcu_barrier() when unloading that module. For example, if it uses call_rcu(), call_srcu() on srcu_struct_1, and call_srcu() on srcu_struct_2, then the following three lines of code -will be required when unloading:: +will be required when unloading: - 1 rcu_barrier(); - 2 srcu_barrier(&srcu_struct_1); - 3 srcu_barrier(&srcu_struct_2); +.. code-block:: c + :linenos: + + rcu_barrier(); + srcu_barrier(&srcu_struct_1); + srcu_barrier(&srcu_struct_2); If latency is of the essence, workqueues could be used to run these three functions concurrently. 
 
 An ancient version of the rcutorture module makes use of rcu_barrier()
-in its exit function as follows::
+in its exit function as follows:
 
- 1 static void
- 2 rcu_torture_cleanup(void)
- 3 {
- 4 int i;
- 5
- 6 fullstop = 1;
- 7 if (shuffler_task != NULL) {
- 8 VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
- 9 kthread_stop(shuffler_task);
- 10 }
- 11 shuffler_task = NULL;
- 12
- 13 if (writer_task != NULL) {
- 14 VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
- 15 kthread_stop(writer_task);
- 16 }
- 17 writer_task = NULL;
- 18
- 19 if (reader_tasks != NULL) {
- 20 for (i = 0; i < nrealreaders; i++) {
- 21 if (reader_tasks[i] != NULL) {
- 22 VERBOSE_PRINTK_STRING(
- 23 "Stopping rcu_torture_reader task");
- 24 kthread_stop(reader_tasks[i]);
- 25 }
- 26 reader_tasks[i] = NULL;
- 27 }
- 28 kfree(reader_tasks);
- 29 reader_tasks = NULL;
- 30 }
- 31 rcu_torture_current = NULL;
- 32
- 33 if (fakewriter_tasks != NULL) {
- 34 for (i = 0; i < nfakewriters; i++) {
- 35 if (fakewriter_tasks[i] != NULL) {
- 36 VERBOSE_PRINTK_STRING(
- 37 "Stopping rcu_torture_fakewriter task");
- 38 kthread_stop(fakewriter_tasks[i]);
- 39 }
- 40 fakewriter_tasks[i] = NULL;
- 41 }
- 42 kfree(fakewriter_tasks);
- 43 fakewriter_tasks = NULL;
- 44 }
- 45
- 46 if (stats_task != NULL) {
- 47 VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
- 48 kthread_stop(stats_task);
- 49 }
- 50 stats_task = NULL;
- 51
- 52 /* Wait for all RCU callbacks to fire. */
- 53 rcu_barrier();
- 54
- 55 rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
- 56
- 57 if (cur_ops->cleanup != NULL)
- 58 cur_ops->cleanup();
- 59 if (atomic_read(&n_rcu_torture_error))
- 60 rcu_torture_print_module_parms("End of test: FAILURE");
- 61 else
- 62 rcu_torture_print_module_parms("End of test: SUCCESS");
- 63 }
+.. code-block:: c
+   :linenos:
+
+   static void
+   rcu_torture_cleanup(void)
+   {
+       int i;
+
+       fullstop = 1;
+       if (shuffler_task != NULL) {
+           VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
+           kthread_stop(shuffler_task);
+       }
+       shuffler_task = NULL;
+
+       if (writer_task != NULL) {
+           VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
+           kthread_stop(writer_task);
+       }
+       writer_task = NULL;
+
+       if (reader_tasks != NULL) {
+           for (i = 0; i < nrealreaders; i++) {
+               if (reader_tasks[i] != NULL) {
+                   VERBOSE_PRINTK_STRING(
+                       "Stopping rcu_torture_reader task");
+                   kthread_stop(reader_tasks[i]);
+               }
+               reader_tasks[i] = NULL;
+           }
+           kfree(reader_tasks);
+           reader_tasks = NULL;
+       }
+       rcu_torture_current = NULL;
+
+       if (fakewriter_tasks != NULL) {
+           for (i = 0; i < nfakewriters; i++) {
+               if (fakewriter_tasks[i] != NULL) {
+                   VERBOSE_PRINTK_STRING(
+                       "Stopping rcu_torture_fakewriter task");
+                   kthread_stop(fakewriter_tasks[i]);
+               }
+               fakewriter_tasks[i] = NULL;
+           }
+           kfree(fakewriter_tasks);
+           fakewriter_tasks = NULL;
+       }
+
+       if (stats_task != NULL) {
+           VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
+           kthread_stop(stats_task);
+       }
+       stats_task = NULL;
+
+       /* Wait for all RCU callbacks to fire. */
+       rcu_barrier();
+
+       rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
+
+       if (cur_ops->cleanup != NULL)
+           cur_ops->cleanup();
+       if (atomic_read(&n_rcu_torture_error))
+           rcu_torture_print_module_parms("End of test: FAILURE");
+       else
+           rcu_torture_print_module_parms("End of test: SUCCESS");
+   }
 
 Line 6 sets a global variable that prevents any RCU callbacks from
 re-posting themselves. This will not be necessary in most cases, since
@@ -191,21 +197,24 @@ queues.
 His implementation queues an RCU callback on each of the per-CPU callback
 queues, and then waits until they have all started executing, at which
 point, all earlier RCU callbacks are guaranteed to have completed.
-The original code for rcu_barrier() was roughly as follows::
+The original code for rcu_barrier() was roughly as follows:
 
- 1 void rcu_barrier(void)
- 2 {
- 3 BUG_ON(in_interrupt());
- 4 /* Take cpucontrol mutex to protect against CPU hotplug */
- 5 mutex_lock(&rcu_barrier_mutex);
- 6 init_completion(&rcu_barrier_completion);
- 7 atomic_set(&rcu_barrier_cpu_count, 1);
- 8 on_each_cpu(rcu_barrier_func, NULL, 0, 1);
- 9 if (atomic_dec_and_test(&rcu_barrier_cpu_count))
- 10 complete(&rcu_barrier_completion);
- 11 wait_for_completion(&rcu_barrier_completion);
- 12 mutex_unlock(&rcu_barrier_mutex);
- 13 }
+.. code-block:: c
+   :linenos:
+
+   void rcu_barrier(void)
+   {
+       BUG_ON(in_interrupt());
+       /* Take cpucontrol mutex to protect against CPU hotplug */
+       mutex_lock(&rcu_barrier_mutex);
+       init_completion(&rcu_barrier_completion);
+       atomic_set(&rcu_barrier_cpu_count, 1);
+       on_each_cpu(rcu_barrier_func, NULL, 0, 1);
+       if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+           complete(&rcu_barrier_completion);
+       wait_for_completion(&rcu_barrier_completion);
+       mutex_unlock(&rcu_barrier_mutex);
+   }
 
 Line 3 verifies that the caller is in process context, and lines 5 and 12
 use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
@@ -230,18 +239,21 @@ This code was rewritten in 2008 and several times thereafter, but this
 still gives the general idea.
 
 The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
-to post an RCU callback, as follows::
+to post an RCU callback, as follows:
 
- 1 static void rcu_barrier_func(void *notused)
- 2 {
- 3 int cpu = smp_processor_id();
- 4 struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
- 5 struct rcu_head *head;
- 6
- 7 head = &rdp->barrier;
- 8 atomic_inc(&rcu_barrier_cpu_count);
- 9 call_rcu(head, rcu_barrier_callback);
- 10 }
+.. code-block:: c
+   :linenos:
+
+   static void rcu_barrier_func(void *notused)
+   {
+       int cpu = smp_processor_id();
+       struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
+       struct rcu_head *head;
+
+       head = &rdp->barrier;
+       atomic_inc(&rcu_barrier_cpu_count);
+       call_rcu(head, rcu_barrier_callback);
+   }
 
 Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure, which
 contains the struct rcu_head that needed for the later call to
@@ -252,13 +264,16 @@ the current CPU's queue.
 
 The rcu_barrier_callback() function simply atomically decrements the
 rcu_barrier_cpu_count variable and finalizes the completion when it
-reaches zero, as follows::
+reaches zero, as follows:
 
- 1 static void rcu_barrier_callback(struct rcu_head *notused)
- 2 {
- 3 if (atomic_dec_and_test(&rcu_barrier_cpu_count))
- 4 complete(&rcu_barrier_completion);
- 5 }
+.. code-block:: c
+   :linenos:
+
+   static void rcu_barrier_callback(struct rcu_head *notused)
+   {
+       if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+           complete(&rcu_barrier_completion);
+   }
 
 .. _rcubarrier_quiz_3:
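
As an aside, the "workqueues could be used to run these three functions
concurrently" remark in the first hunk could look roughly like the sketch
below. This is only an illustration, not part of the patch:
my_module_barriers() is a made-up name, and srcu_struct_1/srcu_struct_2
stand in for a module's own, already-initialized SRCU domains.

#include <linux/rcupdate.h>
#include <linux/srcu.h>
#include <linux/workqueue.h>

static struct srcu_struct srcu_struct_1;	/* stand-ins for the module's */
static struct srcu_struct srcu_struct_2;	/* real SRCU domains */

static void srcu_1_barrier_work(struct work_struct *unused)
{
	srcu_barrier(&srcu_struct_1);
}

static void srcu_2_barrier_work(struct work_struct *unused)
{
	srcu_barrier(&srcu_struct_2);
}

static DECLARE_WORK(srcu_1_barrier_w, srcu_1_barrier_work);
static DECLARE_WORK(srcu_2_barrier_w, srcu_2_barrier_work);

/* Hypothetical module-exit helper: overlap the three barriers. */
static void my_module_barriers(void)
{
	/* Kick off both srcu_barrier() calls in workqueue context... */
	schedule_work(&srcu_1_barrier_w);
	schedule_work(&srcu_2_barrier_w);

	/* ...and wait for plain RCU callbacks in the meantime. */
	rcu_barrier();

	/* Then wait for the two srcu_barrier() work items to finish. */
	flush_work(&srcu_1_barrier_w);
	flush_work(&srcu_2_barrier_w);
}

Using flush_work() on the two specific work items keeps the wait scoped to
just these barriers rather than draining an entire workqueue.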