Message ID: 20160429094218.61b26849@gandalf.local.home (mailing list archive)
State:      New, archived
On Friday 29 April 2016 09:42:18 Steven Rostedt wrote:
> On Fri, 29 Apr 2016 10:52:32 +0200
> Arnd Bergmann <arnd@arndb.de> wrote:
>
> > This reverts the earlier fix attempt and works around the problem
> > by including both linux/mmu_context.h and asm/mmu_context.h from
> > kernel/sched/core.c. This is not a good solution but seems less
> > hacky than the alternatives.
>
> What about simply not compiling finish_arch_post_lock_switch() when
> building modules?
>
> (untested, not compiled or anything)
>
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

It should work as well.

I think I suggested doing that the last time the problem came up
a few years ago, but we ended up not including the header instead,
so I kept doing that.

	Arnd
On Fri, Apr 29, 2016 at 8:37 AM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Friday 29 April 2016 09:42:18 Steven Rostedt wrote:
>> On Fri, 29 Apr 2016 10:52:32 +0200
>> Arnd Bergmann <arnd@arndb.de> wrote:
>>
>> > This reverts the earlier fix attempt and works around the problem
>> > by including both linux/mmu_context.h and asm/mmu_context.h from
>> > kernel/sched/core.c. This is not a good solution but seems less
>> > hacky than the alternatives.
>>
>> What about simply not compiling finish_arch_post_lock_switch() when
>> building modules?
>>
>> (untested, not compiled or anything)
>>
>> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
>
> It should work as well.
>
> I think I suggested doing that the last time the problem came up
> a few years ago, but we ended up not including the header instead,
> so I kept doing that.
>
> 	Arnd

This variant looks considerably nicer to me.

--Andy
On Fri, Apr 29, 2016 at 6:42 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Fri, 29 Apr 2016 10:52:32 +0200
> Arnd Bergmann <arnd@arndb.de> wrote:
>
>> This reverts the earlier fix attempt and works around the problem
>> by including both linux/mmu_context.h and asm/mmu_context.h from
>> kernel/sched/core.c. This is not a good solution but seems less
>> hacky than the alternatives.
>
> What about simply not compiling finish_arch_post_lock_switch() when
> building modules?
>
> (untested, not compiled or anything)
>
> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> ---
> diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
> index fa5b42d44985..3f22d1b6bac8 100644
> --- a/arch/arm/include/asm/mmu_context.h
> +++ b/arch/arm/include/asm/mmu_context.h
> @@ -66,6 +66,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
>  		cpu_switch_mm(mm->pgd, mm);
>  }
>
> +#ifndef MODULE
>  #define finish_arch_post_lock_switch \
>  	finish_arch_post_lock_switch
>  static inline void finish_arch_post_lock_switch(void)
> @@ -87,6 +88,7 @@ static inline void finish_arch_post_lock_switch(void)
>  		preempt_enable_no_resched();
>  	}
>  }
> +#endif /* !MODULE */
>
>  #endif /* CONFIG_MMU */

Can someone in arm land ack this so Ingo can apply it?

--Andy
On Thu, May 12, 2016 at 10:46:56PM -0700, Andy Lutomirski wrote:
> On Fri, Apr 29, 2016 at 6:42 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > On Fri, 29 Apr 2016 10:52:32 +0200
> > Arnd Bergmann <arnd@arndb.de> wrote:
> >
> >> This reverts the earlier fix attempt and works around the problem
> >> by including both linux/mmu_context.h and asm/mmu_context.h from
> >> kernel/sched/core.c. This is not a good solution but seems less
> >> hacky than the alternatives.
> >
> > What about simply not compiling finish_arch_post_lock_switch() when
> > building modules?
> >
> > (untested, not compiled or anything)
> >
> > Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> > ---
> > diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
> > index fa5b42d44985..3f22d1b6bac8 100644
> > --- a/arch/arm/include/asm/mmu_context.h
> > +++ b/arch/arm/include/asm/mmu_context.h
> > @@ -66,6 +66,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
> >  		cpu_switch_mm(mm->pgd, mm);
> >  }
> >
> > +#ifndef MODULE
> >  #define finish_arch_post_lock_switch \
> >  	finish_arch_post_lock_switch
> >  static inline void finish_arch_post_lock_switch(void)
> > @@ -87,6 +88,7 @@ static inline void finish_arch_post_lock_switch(void)
> >  		preempt_enable_no_resched();
> >  	}
> >  }
> > +#endif /* !MODULE */
> >
> >  #endif /* CONFIG_MMU */
>
> Can someone in arm land ack this so Ingo can apply it?

Sorry, I'm simply unable to read every message that comes in.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
On Thursday 12 May 2016 22:46:56 Andy Lutomirski wrote:
> On Fri, Apr 29, 2016 at 6:42 AM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > On Fri, 29 Apr 2016 10:52:32 +0200
> > Arnd Bergmann <arnd@arndb.de> wrote:
> >
> >> This reverts the earlier fix attempt and works around the problem
> >> by including both linux/mmu_context.h and asm/mmu_context.h from
> >> kernel/sched/core.c. This is not a good solution but seems less
> >> hacky than the alternatives.
> >
> > What about simply not compiling finish_arch_post_lock_switch() when
> > building modules?
> >
> > (untested, not compiled or anything)
> >
> > Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
> > ---
> > diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
> > index fa5b42d44985..3f22d1b6bac8 100644
> > --- a/arch/arm/include/asm/mmu_context.h
> > +++ b/arch/arm/include/asm/mmu_context.h
> > @@ -66,6 +66,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
> >  		cpu_switch_mm(mm->pgd, mm);
> >  }
> >
> > +#ifndef MODULE
> >  #define finish_arch_post_lock_switch \
> >  	finish_arch_post_lock_switch
> >  static inline void finish_arch_post_lock_switch(void)
> > @@ -87,6 +88,7 @@ static inline void finish_arch_post_lock_switch(void)
> >  		preempt_enable_no_resched();
> >  	}
> >  }
> > +#endif /* !MODULE */
> >
> >  #endif /* CONFIG_MMU */
>
> Can someone in arm land ack this so Ingo can apply it?

Sorry I forgot about this when I had my original patch in the randconfig
patch stack. I've reverted this now and am testing with Steve's version.
If I see no other regressions, I'll resend this with a proper changelog
and Russell's Ack.

	Arnd
diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index fa5b42d44985..3f22d1b6bac8 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -66,6 +66,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
 		cpu_switch_mm(mm->pgd, mm);
 }
 
+#ifndef MODULE
 #define finish_arch_post_lock_switch \
 	finish_arch_post_lock_switch
 static inline void finish_arch_post_lock_switch(void)
@@ -87,6 +88,7 @@ static inline void finish_arch_post_lock_switch(void)
 		preempt_enable_no_resched();
 	}
 }
+#endif /* !MODULE */
 
 #endif /* CONFIG_MMU */