| Message ID | 20200123202051.8106-1-cai@lca.pw (mailing list archive) |
| --- | --- |
| State | Mainlined |
| Commit | 345d52c184dc7de98cff63f1bfa6f90e9db19809 |
| Series | [-next,v2] arm64/spinlock: fix a -Wunused-function warning |
On 1/23/20 3:20 PM, Qian Cai wrote:
> The commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
> arm64") introduced a warning from Clang because vcpu_is_preempted() is
> compiled away,
>
> kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu'
> [-Wunused-function]
> static inline int node_cpu(struct optimistic_spin_node *node)
>                   ^
> 1 warning generated.
>
> Fix it by converting vcpu_is_preempted() to a static inline function.
>
> Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
> Signed-off-by: Qian Cai <cai@lca.pw>
> ---
>
> v2: convert vcpu_is_preempted() to a static inline function.
>
>  arch/arm64/include/asm/spinlock.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index 102404dc1e13..9083d6992603 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -18,6 +18,10 @@
>   * See:
>   * https://lore.kernel.org/lkml/20200110100612.GC2827@hirez.programming.kicks-ass.net
>   */
> -#define vcpu_is_preempted(cpu)	false
> +#define vcpu_is_preempted vcpu_is_preempted
> +static inline bool vcpu_is_preempted(int cpu)
> +{
> +	return false;
> +}
>
>  #endif /* __ASM_SPINLOCK_H */

Acked-by: Waiman Long <longman@redhat.com>
On Thu, Jan 23, 2020 at 03:20:51PM -0500, Qian Cai wrote:
> The commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
> arm64") introduced a warning from Clang because vcpu_is_preempted() is
> compiled away,
>
> kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu'
> [-Wunused-function]
> static inline int node_cpu(struct optimistic_spin_node *node)
>                   ^
> 1 warning generated.
>
> Fix it by converting vcpu_is_preempted() to a static inline function.
>
> Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
> Signed-off-by: Qian Cai <cai@lca.pw>
> ---
>
> v2: convert vcpu_is_preempted() to a static inline function.
>
>  arch/arm64/include/asm/spinlock.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index 102404dc1e13..9083d6992603 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -18,6 +18,10 @@
>   * See:
>   * https://lore.kernel.org/lkml/20200110100612.GC2827@hirez.programming.kicks-ass.net
>   */
> -#define vcpu_is_preempted(cpu)	false
> +#define vcpu_is_preempted vcpu_is_preempted
> +static inline bool vcpu_is_preempted(int cpu)
> +{
> +	return false;
> +}

Cheers, I'll queue this at -rc1.

Will
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 102404dc1e13..9083d6992603 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -18,6 +18,10 @@
  * See:
  * https://lore.kernel.org/lkml/20200110100612.GC2827@hirez.programming.kicks-ass.net
  */
-#define vcpu_is_preempted(cpu)	false
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	return false;
+}
 
 #endif /* __ASM_SPINLOCK_H */
The commit f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for
arm64") introduced a warning from Clang because vcpu_is_preempted() is
compiled away,

kernel/locking/osq_lock.c:25:19: warning: unused function 'node_cpu'
[-Wunused-function]
static inline int node_cpu(struct optimistic_spin_node *node)
                  ^
1 warning generated.

Fix it by converting vcpu_is_preempted() to a static inline function.

Fixes: f5bfdc8e3947 ("locking/osq: Use optimized spinning loop for arm64")
Signed-off-by: Qian Cai <cai@lca.pw>
---

v2: convert vcpu_is_preempted() to a static inline function.

 arch/arm64/include/asm/spinlock.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
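For readers wondering why the conversion silences the warning: with the old function-like macro, a call such as `vcpu_is_preempted(node_cpu(node->prev))` expands to plain `false` during preprocessing, so `node_cpu()` is never referenced anywhere in the translation unit and Clang flags it as an unused static function. A real `static inline bool vcpu_is_preempted(int cpu)` keeps the argument expression, and therefore the call to `node_cpu()`, in the compiled code; the optimizer still folds the result to `false`, but nothing is left unused. The following standalone C sketch illustrates the pattern. It is not kernel code: `spin_would_block()` and the struct layout are invented scaffolding, and the exact diagnostic behaviour depends on the compiler (the warning quoted in the patch comes from Clang).

```c
/*
 * Standalone sketch of the macro-vs-inline difference (not kernel code).
 * Build: clang -Wunused-function -O2 -c sketch.c [-DUSE_MACRO]
 */
#include <stdbool.h>

struct optimistic_spin_node {
	struct optimistic_spin_node *prev;
	int cpu;
};

/* Stand-in for node_cpu() from kernel/locking/osq_lock.c. */
static inline int node_cpu(struct optimistic_spin_node *node)
{
	return node->cpu - 1;
}

#ifdef USE_MACRO
/*
 * Old arrangement: the function-like macro discards its argument at
 * preprocessing time, so node_cpu() is never referenced and Clang can
 * report it as an unused static function.
 */
#define vcpu_is_preempted(cpu)	false
#else
/*
 * New arrangement (as in the patch): a static inline keeps the call to
 * node_cpu() in the source, so nothing is unused; the compiler still
 * folds the whole expression to false.
 */
#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	return false;
}
#endif

/* Hypothetical caller mirroring the osq_lock() spin condition. */
bool spin_would_block(struct optimistic_spin_node *node)
{
	/* With USE_MACRO this whole expression collapses to `false`. */
	return vcpu_is_preempted(node_cpu(node->prev));
}
```

Building the sketch with `-DUSE_MACRO` should reproduce a warning along the lines of the one quoted in the patch, while the default inline-function path should compile cleanly under the same flags.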