
[XEN,1/4] x86/mce: address MISRA C:2012 Rule 5.3

Message ID 52ec7caf08089e3aaaad2bcf709a7d387d55d58f.1690969271.git.nicola.vetrini@bugseng.com (mailing list archive)
State Superseded
Series x86: address some violations of MISRA C:2012 Rule 5.3

Commit Message

Nicola Vetrini Aug. 2, 2023, 9:44 a.m. UTC
Suitable mechanical renames are made to avoid shadowing, thus
addressing violations of MISRA C:2012 Rule 5.3:
"An identifier declared in an inner scope shall not hide an
identifier declared in an outer scope"

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/arch/x86/cpu/mcheck/barrier.c | 8 ++++----
 xen/arch/x86/cpu/mcheck/barrier.h | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

Comments

Stefano Stabellini Aug. 3, 2023, 1:45 a.m. UTC | #1
On Wed, 2 Aug 2023, Nicola Vetrini wrote:
> Suitable mechanical renames are made to avoid shadowing, thus
> addressing violations of MISRA C:2012 Rule 5.3:
> "An identifier declared in an inner scope shall not hide an
> identifier declared in an outer scope"
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> ---
>  xen/arch/x86/cpu/mcheck/barrier.c | 8 ++++----
>  xen/arch/x86/cpu/mcheck/barrier.h | 8 ++++----
>  2 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/barrier.c b/xen/arch/x86/cpu/mcheck/barrier.c
> index a7e5b19a44..51a1d37a76 100644
> --- a/xen/arch/x86/cpu/mcheck/barrier.c
> +++ b/xen/arch/x86/cpu/mcheck/barrier.c
> @@ -16,11 +16,11 @@ void mce_barrier_dec(struct mce_softirq_barrier *bar)
>      atomic_dec(&bar->val);
>  }
>  
> -void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
> +void mce_barrier_enter(struct mce_softirq_barrier *bar, bool do_wait)

"wait" clashes with xen/common/sched/core.c:wait, which is globally
exported, right?

I think it would be good to add this info to the commit message in this
kind of patch.


>  {
>      int gen;
>  
> -    if ( !wait )
> +    if ( !do_wait )
>          return;
>      atomic_inc(&bar->ingen);
>      gen = atomic_read(&bar->outgen);
> @@ -34,11 +34,11 @@ void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
>      }
>  }
>  
> -void mce_barrier_exit(struct mce_softirq_barrier *bar, bool wait)
> +void mce_barrier_exit(struct mce_softirq_barrier *bar, bool do_wait)
>  {
>      int gen;
>  
> -    if ( !wait )
> +    if ( !do_wait )
>          return;
>      atomic_inc(&bar->outgen);
>      gen = atomic_read(&bar->ingen);
> diff --git a/xen/arch/x86/cpu/mcheck/barrier.h b/xen/arch/x86/cpu/mcheck/barrier.h
> index c4d52b6192..5cd1b4e4bf 100644
> --- a/xen/arch/x86/cpu/mcheck/barrier.h
> +++ b/xen/arch/x86/cpu/mcheck/barrier.h
> @@ -32,14 +32,14 @@ void mce_barrier_init(struct mce_softirq_barrier *);
>  void mce_barrier_dec(struct mce_softirq_barrier *);
>  
>  /*
> - * If @wait is false, mce_barrier_enter/exit() will return immediately
> + * If @do_wait is false, mce_barrier_enter/exit() will return immediately
>   * without touching the barrier. It's used when handling a
>   * non-broadcasting MCE (e.g. MCE on some old Intel CPU, MCE on AMD
>   * CPU and LMCE on Intel Skylake-server CPU) which is received on only
>   * one CPU and thus does not invoke mce_barrier_enter/exit() calls on
>   * all CPUs.
>   *
> - * If @wait is true, mce_barrier_enter/exit() will handle the given
> + * If @do_wait is true, mce_barrier_enter/exit() will handle the given
>   * barrier as below.
>   *
>   * Increment the generation number and the value. The generation number
> @@ -53,8 +53,8 @@ void mce_barrier_dec(struct mce_softirq_barrier *);
>   * These barrier functions should always be paired, so that the
>   * counter value will reach 0 again after all CPUs have exited.
>   */
> -void mce_barrier_enter(struct mce_softirq_barrier *, bool wait);
> -void mce_barrier_exit(struct mce_softirq_barrier *, bool wait);
> +void mce_barrier_enter(struct mce_softirq_barrier *, bool do_wait);
> +void mce_barrier_exit(struct mce_softirq_barrier *, bool do_wait);

You might as well add "bar" as first parameter?


>  void mce_barrier(struct mce_softirq_barrier *);
>  
> -- 
> 2.34.1
>
Nicola Vetrini Aug. 3, 2023, 8:06 a.m. UTC | #2
On 03/08/2023 03:45, Stefano Stabellini wrote:
> On Wed, 2 Aug 2023, Nicola Vetrini wrote:
>> Suitable mechanical renames are made to avoid shadowing, thus
>> addressing violations of MISRA C:2012 Rule 5.3:
>> "An identifier declared in an inner scope shall not hide an
>> identifier declared in an outer scope"
>> 
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> ---
>>  xen/arch/x86/cpu/mcheck/barrier.c | 8 ++++----
>>  xen/arch/x86/cpu/mcheck/barrier.h | 8 ++++----
>>  2 files changed, 8 insertions(+), 8 deletions(-)
>> 
>> diff --git a/xen/arch/x86/cpu/mcheck/barrier.c 
>> b/xen/arch/x86/cpu/mcheck/barrier.c
>> index a7e5b19a44..51a1d37a76 100644
>> --- a/xen/arch/x86/cpu/mcheck/barrier.c
>> +++ b/xen/arch/x86/cpu/mcheck/barrier.c
>> @@ -16,11 +16,11 @@ void mce_barrier_dec(struct mce_softirq_barrier 
>> *bar)
>>      atomic_dec(&bar->val);
>>  }
>> 
>> -void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
>> +void mce_barrier_enter(struct mce_softirq_barrier *bar, bool do_wait)
> 
> "wait" clashes with xen/common/sched/core.c:wait, which is globally
> exported, right?
> 
> I think it would be good to add this info to the commit message in this
> kind of patch.
> 

Correct; it's the declaration in 'xen/include/xen/wait.h' that makes it
visible in the file modified by the patch. I'll add it in v2.

>> -void mce_barrier_enter(struct mce_softirq_barrier *, bool wait);
>> -void mce_barrier_exit(struct mce_softirq_barrier *, bool wait);
>> +void mce_barrier_enter(struct mce_softirq_barrier *, bool do_wait);
>> +void mce_barrier_exit(struct mce_softirq_barrier *, bool do_wait);
> 
> You might as well add "bar" as first parameter?
> 
> 
>>  void mce_barrier(struct mce_softirq_barrier *);
>> 

Will do. I checked that this would not interfere with other patches
related to Rules 8.2 and 8.3.

Patch

diff --git a/xen/arch/x86/cpu/mcheck/barrier.c b/xen/arch/x86/cpu/mcheck/barrier.c
index a7e5b19a44..51a1d37a76 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.c
+++ b/xen/arch/x86/cpu/mcheck/barrier.c
@@ -16,11 +16,11 @@  void mce_barrier_dec(struct mce_softirq_barrier *bar)
     atomic_dec(&bar->val);
 }
 
-void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
+void mce_barrier_enter(struct mce_softirq_barrier *bar, bool do_wait)
 {
     int gen;
 
-    if ( !wait )
+    if ( !do_wait )
         return;
     atomic_inc(&bar->ingen);
     gen = atomic_read(&bar->outgen);
@@ -34,11 +34,11 @@  void mce_barrier_enter(struct mce_softirq_barrier *bar, bool wait)
     }
 }
 
-void mce_barrier_exit(struct mce_softirq_barrier *bar, bool wait)
+void mce_barrier_exit(struct mce_softirq_barrier *bar, bool do_wait)
 {
     int gen;
 
-    if ( !wait )
+    if ( !do_wait )
         return;
     atomic_inc(&bar->outgen);
     gen = atomic_read(&bar->ingen);
diff --git a/xen/arch/x86/cpu/mcheck/barrier.h b/xen/arch/x86/cpu/mcheck/barrier.h
index c4d52b6192..5cd1b4e4bf 100644
--- a/xen/arch/x86/cpu/mcheck/barrier.h
+++ b/xen/arch/x86/cpu/mcheck/barrier.h
@@ -32,14 +32,14 @@  void mce_barrier_init(struct mce_softirq_barrier *);
 void mce_barrier_dec(struct mce_softirq_barrier *);
 
 /*
- * If @wait is false, mce_barrier_enter/exit() will return immediately
+ * If @do_wait is false, mce_barrier_enter/exit() will return immediately
  * without touching the barrier. It's used when handling a
  * non-broadcasting MCE (e.g. MCE on some old Intel CPU, MCE on AMD
  * CPU and LMCE on Intel Skylake-server CPU) which is received on only
  * one CPU and thus does not invoke mce_barrier_enter/exit() calls on
  * all CPUs.
  *
- * If @wait is true, mce_barrier_enter/exit() will handle the given
+ * If @do_wait is true, mce_barrier_enter/exit() will handle the given
  * barrier as below.
  *
  * Increment the generation number and the value. The generation number
@@ -53,8 +53,8 @@  void mce_barrier_dec(struct mce_softirq_barrier *);
  * These barrier functions should always be paired, so that the
  * counter value will reach 0 again after all CPUs have exited.
  */
-void mce_barrier_enter(struct mce_softirq_barrier *, bool wait);
-void mce_barrier_exit(struct mce_softirq_barrier *, bool wait);
+void mce_barrier_enter(struct mce_softirq_barrier *, bool do_wait);
+void mce_barrier_exit(struct mce_softirq_barrier *, bool do_wait);
 
 void mce_barrier(struct mce_softirq_barrier *);