| Message ID | 20190930055925.25842-1-yamada.masahiro@socionext.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | ARM: fix __get_user_check() in case uaccess_* calls are not inlined |
On Mon, Sep 30, 2019 at 8:01 AM Masahiro Yamada <yamada.masahiro@socionext.com> wrote: > > KernelCI reports that bcm2835_defconfig is no longer booting since > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly"): > > https://lkml.org/lkml/2019/9/26/825 > > I also received a regression report from Nicolas Saenz Julienne: > > https://lkml.org/lkml/2019/9/27/263 > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > to prefer not inlining functions with -Os. I was able to reproduce > it with other boards and defconfig files by manually enabling > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > The __get_user_check() specifically uses r0, r1, r2 registers. > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > in order to avoid those registers being overwritten in the callees. > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > inlining functions, except on x86. > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > So, __always_inline is now the only guaranteed way of forcible inlining. > > I want to keep as much compiler's freedom as possible about the inlining > decision. So, I changed the function call order instead of adding > __always_inline around. > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > uaccess_restore() after evacuating the __e ("r0"). 
> > Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING") > Reported-by: "kernelci.org bot" <bot@kernelci.org> > Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Arnd Bergmann <arnd@arndb.de> The patch looks reasonable to me, but I think we should also revert commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") in mainline for now, as it caused this to happen all the time rather than only for users that explicitly select CONFIG_OPTIMIZE_INLINING. We have had other bug reports starting with that commit that run into similar issues, and I'm sure there are more of them. I don't mind having it in linux-next for a while longer, but I think it was premature to have it merged into mainline. Arnd
On Mon, 2019-09-30 at 14:59 +0900, Masahiro Yamada wrote: > KernelCI reports that bcm2835_defconfig is no longer booting since > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly"): > > https://lkml.org/lkml/2019/9/26/825 > > I also received a regression report from Nicolas Saenz Julienne: > > https://lkml.org/lkml/2019/9/27/263 > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > to prefer not inlining functions with -Os. I was able to reproduce > it with other boards and defconfig files by manually enabling > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > The __get_user_check() specifically uses r0, r1, r2 registers. > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > in order to avoid those registers being overwritten in the callees. > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > inlining functions, except on x86. > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > So, __always_inline is now the only guaranteed way of forcible inlining. > > I want to keep as much compiler's freedom as possible about the inlining > decision. So, I changed the function call order instead of adding > __always_inline around. > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > uaccess_restore() after evacuating the __e ("r0"). > > Fixes: 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING") > Reported-by: "kernelci.org bot" <bot@kernelci.org> > Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> > --- Thanks! Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Hi Arnd, On Mon, Sep 30, 2019 at 4:57 PM Arnd Bergmann <arnd@arndb.de> wrote: > > On Mon, Sep 30, 2019 at 8:01 AM Masahiro Yamada > <yamada.masahiro@socionext.com> wrote: > > > > KernelCI reports that bcm2835_defconfig is no longer booting since > > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > > forcibly"): > > > > https://lkml.org/lkml/2019/9/26/825 > > > > I also received a regression report from Nicolas Saenz Julienne: > > > > https://lkml.org/lkml/2019/9/27/263 > > > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > > to prefer not inlining functions with -Os. I was able to reproduce > > it with other boards and defconfig files by manually enabling > > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > > > The __get_user_check() specifically uses r0, r1, r2 registers. > > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > > in order to avoid those registers being overwritten in the callees. > > > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > > inlining functions, except on x86. > > > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > > So, __always_inline is now the only guaranteed way of forcible inlining. > > > > I want to keep as much compiler's freedom as possible about the inlining > > decision. So, I changed the function call order instead of adding > > __always_inline around. > > > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > > uaccess_restore() after evacuating the __e ("r0"). 
> > > > Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING") > > Reported-by: "kernelci.org bot" <bot@kernelci.org> > > Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> > > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> > > Acked-by: Arnd Bergmann <arnd@arndb.de> > > The patch looks reasonable to me, but I think we should also revert > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly") in mainline for now, as it caused this to happen all the time rather > than only for users that explicitly select CONFIG_OPTIMIZE_INLINING. > > We have had other bug reports starting with that commit that run into > similar issues, and I'm sure there are more of them. I don't mind having it > in linux-next for a while longer, but I think it was premature to have it merged > into mainline. > > Arnd Hmm, I know you are running randconfig build tests, but how many people are testing the kernel boot in linux-next? People and kernelci started to report the issue immediately after it landed in the mainline...
> From: linux-kernel-owner@vger.kernel.org <linux-kernel-owner@vger.kernel.org> On Behalf Of Masahiro Yamada > Sent: 30 September 2019 06:59 > Subject: [PATCH] ARM: fix __get_user_check() in case uaccess_* calls are not inlined > > KernelCI reports that bcm2835_defconfig is no longer booting since > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly"): > > https://lkml.org/lkml/2019/9/26/825 > > I also received a regression report from Nicolas Saenz Julienne: > > https://lkml.org/lkml/2019/9/27/263 > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > to prefer not inlining functions with -Os. I was able to reproduce > it with other boards and defconfig files by manually enabling > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > The __get_user_check() specifically uses r0, r1, r2 registers. > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > in order to avoid those registers being overwritten in the callees. > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > inlining functions, except on x86. > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > So, __always_inline is now the only guaranteed way of forcible inlining. > > I want to keep as much compiler's freedom as possible about the inlining > decision. So, I changed the function call order instead of adding > __always_inline around. > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > uaccess_restore() after evacuating the __e ("r0"). 
> Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
> Reported-by: "kernelci.org bot" <bot@kernelci.org>
> Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

Tested-by: Fabrizio Castro <fabrizio.castro@bp.renesas.com>

> ---
>
> arch/arm/include/asm/uaccess.h | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 303248e5b990..559f252d7e3c 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *);
>  #define __get_user_check(x, p)					\
>  ({								\
>  	unsigned long __limit = current_thread_info()->addr_limit - 1; \
> +	unsigned int __ua_flags = uaccess_save_and_enable();	\
>  	register typeof(*(p)) __user *__p asm("r0") = (p);	\
>  	register __inttype(x) __r2 asm("r2");			\
>  	register unsigned long __l asm("r1") = __limit;		\
>  	register int __e asm("r0");				\
> -	unsigned int __ua_flags = uaccess_save_and_enable();	\
> +	unsigned int __err;					\
>  	switch (sizeof(*(__p))) {				\
>  	case 1:							\
>  		if (sizeof((x)) >= 8)				\
> @@ -223,9 +224,10 @@ extern int __get_user_64t_4(void *);
>  		break;						\
>  	default: __e = __get_user_bad(); break;			\
>  	}							\
> -	uaccess_restore(__ua_flags);				\
> +	__err = __e;						\
>  	x = (typeof(*(p))) __r2;				\
> -	__e;							\
> +	uaccess_restore(__ua_flags);				\
> +	__err;							\
>  })
>
>  #define get_user(x, p)						\
> --
> 2.17.1
On Mon, Sep 30, 2019 at 02:59:25PM +0900, Masahiro Yamada wrote: > KernelCI reports that bcm2835_defconfig is no longer booting since > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly"): > > https://lkml.org/lkml/2019/9/26/825 > > I also received a regression report from Nicolas Saenz Julienne: > > https://lkml.org/lkml/2019/9/27/263 > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > to prefer not inlining functions with -Os. I was able to reproduce > it with other boards and defconfig files by manually enabling > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > The __get_user_check() specifically uses r0, r1, r2 registers. > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > in order to avoid those registers being overwritten in the callees. > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > inlining functions, except on x86. > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > So, __always_inline is now the only guaranteed way of forcible inlining. > > I want to keep as much compiler's freedom as possible about the inlining > decision. So, I changed the function call order instead of adding > __always_inline around. > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > uaccess_restore() after evacuating the __e ("r0"). 
> Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
> Reported-by: "kernelci.org bot" <bot@kernelci.org>
> Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
> ---
>
> arch/arm/include/asm/uaccess.h | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 303248e5b990..559f252d7e3c 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *);
>  #define __get_user_check(x, p)					\
>  ({								\
>  	unsigned long __limit = current_thread_info()->addr_limit - 1; \
> +	unsigned int __ua_flags = uaccess_save_and_enable();	\

If the compiler is moving uaccess_save_and_enable(), that's something
we really don't want - the idea is to _minimise_ the number of kernel
memory accesses between enabling userspace access and performing the
actual access.

Fixing it in this way widens the window for the kernel to be doing
something it shouldn't in userspace.

So, the right solution is to ensure that the compiler always inlines
the uaccess_*() helpers - which should be nothing more than four
instructions for uaccess_save_and_enable() and two for the restore.

I.O.W. it should look something like this:

 144:	ee134f10	mrc	15, 0, r4, cr3, cr0, {0}
 148:	e3c4200c	bic	r2, r4, #12
 14c:	e24e1001	sub	r1, lr, #1
 150:	e3822004	orr	r2, r2, #4
 154:	ee032f10	mcr	15, 0, r2, cr3, cr0, {0}
 158:	f57ff06f	isb	sy
 15c:	ebfffffe	bl	0 <__get_user_4>
 160:	ee034f10	mcr	15, 0, r4, cr3, cr0, {0}
 164:	f57ff06f	isb	sy
On Sun, Sep 29, 2019 at 11:00 PM Masahiro Yamada <yamada.masahiro@socionext.com> wrote: > > KernelCI reports that bcm2835_defconfig is no longer booting since > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly"): > > https://lkml.org/lkml/2019/9/26/825 > > I also received a regression report from Nicolas Saenz Julienne: > > https://lkml.org/lkml/2019/9/27/263 > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > to prefer not inlining functions with -Os. I was able to reproduce > it with other boards and defconfig files by manually enabling > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > The __get_user_check() specifically uses r0, r1, r2 registers. Yep, that part is obvious, but... > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > in order to avoid those registers being overwritten in the callees. Right, r0, r1, r2 are caller saved, meaning that __get_user_check must save/restore them when making function calls. So uaccess_save_and_enable() and uaccess_restore() should either be made into macros (macros and typecheck (see include/linux/typecheck.h) are peanut butter and chocolate), or occur at different points in the function when those register variables are no longer in use. > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > inlining functions, except on x86. > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > So, __always_inline is now the only guaranteed way of forcible inlining. > > I want to keep as much compiler's freedom as possible about the inlining > decision. So, I changed the function call order instead of adding > __always_inline around. > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > uaccess_restore() after evacuating the __e ("r0"). 
> Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
> Reported-by: "kernelci.org bot" <bot@kernelci.org>
> Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
> ---
>
> arch/arm/include/asm/uaccess.h | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 303248e5b990..559f252d7e3c 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *);
>  #define __get_user_check(x, p)					\
>  ({								\
>  	unsigned long __limit = current_thread_info()->addr_limit - 1; \
> +	unsigned int __ua_flags = uaccess_save_and_enable();	\
>  	register typeof(*(p)) __user *__p asm("r0") = (p);	\
>  	register __inttype(x) __r2 asm("r2");			\
>  	register unsigned long __l asm("r1") = __limit;		\
>  	register int __e asm("r0");				\

What does it mean for there to be two different local variables pinned
to the same register? Ie. it looks like __e and __p are defined to
exist in r0. Would having one variable and an explicit cast result in
differing storage?

> -	unsigned int __ua_flags = uaccess_save_and_enable();	\
> +	unsigned int __err;					\
>  	switch (sizeof(*(__p))) {				\
>  	case 1:							\
>  		if (sizeof((x)) >= 8)				\
> @@ -223,9 +224,10 @@ extern int __get_user_64t_4(void *);
>  		break;						\
>  	default: __e = __get_user_bad(); break;			\

^ I think this assignment to __e should be replaced with an assignment
to __err? We no longer need the register at this point and could skip
the assignment of x.

>  	}							\
> -	uaccess_restore(__ua_flags);				\
> +	__err = __e;						\
>  	x = (typeof(*(p))) __r2;				\
> -	__e;							\
> +	uaccess_restore(__ua_flags);				\
> +	__err;							\
>  })
>
>  #define get_user(x, p)						\
> --
> 2.17.1
Hi Russell, On Tue, Oct 1, 2019 at 2:50 AM Russell King - ARM Linux admin <linux@armlinux.org.uk> wrote: > > On Mon, Sep 30, 2019 at 02:59:25PM +0900, Masahiro Yamada wrote: > > KernelCI reports that bcm2835_defconfig is no longer booting since > > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > > forcibly"): > > > > https://lkml.org/lkml/2019/9/26/825 > > > > I also received a regression report from Nicolas Saenz Julienne: > > > > https://lkml.org/lkml/2019/9/27/263 > > > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > > to prefer not inlining functions with -Os. I was able to reproduce > > it with other boards and defconfig files by manually enabling > > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > > > The __get_user_check() specifically uses r0, r1, r2 registers. > > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > > in order to avoid those registers being overwritten in the callees. > > > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > > inlining functions, except on x86. > > > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > > So, __always_inline is now the only guaranteed way of forcible inlining. > > > > I want to keep as much compiler's freedom as possible about the inlining > > decision. So, I changed the function call order instead of adding > > __always_inline around. > > > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > > uaccess_restore() after evacuating the __e ("r0"). 
> > > > Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING") > > Reported-by: "kernelci.org bot" <bot@kernelci.org> > > Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> > > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> > > --- > > > > arch/arm/include/asm/uaccess.h | 8 +++++--- > > 1 file changed, 5 insertions(+), 3 deletions(-) > > > > diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h > > index 303248e5b990..559f252d7e3c 100644 > > --- a/arch/arm/include/asm/uaccess.h > > +++ b/arch/arm/include/asm/uaccess.h > > @@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *); > > #define __get_user_check(x, p) \ > > ({ \ > > unsigned long __limit = current_thread_info()->addr_limit - 1; \ > > + unsigned int __ua_flags = uaccess_save_and_enable(); \ > > If the compiler is moving uaccess_save_and_enable(), that's something > we really don't want Hmm, based on my poor knowledge about compilers, I do not know if this re-arrangement happens... > - the idea is to _minimise_ the number of kernel > memory accesses between enabling userspace access and performing the > actual access. > > Fixing it in this way widens the window for the kernel to be doing > something it shoulding in userspace. > > So, the right solution is to ensure that the compiler always inlines > the uaccess_*() helpers - which should be nothing more than four > instructions for uaccess_save_and_enable() and two for the > restore. > OK, I will use __always_inline to avoid any potential behavior change. Thanks.
Hi Nick, On Tue, Oct 1, 2019 at 7:19 AM Nick Desaulniers <ndesaulniers@google.com> wrote: > > On Sun, Sep 29, 2019 at 11:00 PM Masahiro Yamada > <yamada.masahiro@socionext.com> wrote: > > > > KernelCI reports that bcm2835_defconfig is no longer booting since > > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > > forcibly"): > > > > https://lkml.org/lkml/2019/9/26/825 > > > > I also received a regression report from Nicolas Saenz Julienne: > > > > https://lkml.org/lkml/2019/9/27/263 > > > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > > to prefer not inlining functions with -Os. I was able to reproduce > > it with other boards and defconfig files by manually enabling > > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > > > The __get_user_check() specifically uses r0, r1, r2 registers. > > Yep, that part is obvious, but... > > > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > > in order to avoid those registers being overwritten in the callees. > > Right, r0, r1, r2 are caller saved, meaning that __get_user_check must > save/restore them when making function calls. So > uaccess_save_and_enable() and uaccess_restore() should either be made > into macros (macros and typecheck (see include/linux/typecheck.h) are > peanut butter and chocolate), or occur at different points in the > function when those register variables are no longer in use. > > > > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > > inlining functions, except on x86. > > > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > > So, __always_inline is now the only guaranteed way of forcible inlining. > > > > I want to keep as much compiler's freedom as possible about the inlining > > decision. 
> > So, I changed the function call order instead of adding
> > __always_inline around.
> >
> > Call uaccess_save_and_enable() before assigning the __p ("r0"), and
> > uaccess_restore() after evacuating the __e ("r0").
> >
> > Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
> > Reported-by: "kernelci.org bot" <bot@kernelci.org>
> > Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
> > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
> > ---
> >
> > arch/arm/include/asm/uaccess.h | 8 +++++---
> > 1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> > index 303248e5b990..559f252d7e3c 100644
> > --- a/arch/arm/include/asm/uaccess.h
> > +++ b/arch/arm/include/asm/uaccess.h
> > @@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *);
> >  #define __get_user_check(x, p)					\
> >  ({								\
> >  	unsigned long __limit = current_thread_info()->addr_limit - 1; \
> > +	unsigned int __ua_flags = uaccess_save_and_enable();	\
> >  	register typeof(*(p)) __user *__p asm("r0") = (p);	\
> >  	register __inttype(x) __r2 asm("r2");			\
> >  	register unsigned long __l asm("r1") = __limit;		\
> >  	register int __e asm("r0");				\
>
> What does it mean for there to be two different local variables pinned
> to the same register? Ie. it looks like __e and __p are defined to
> exist in r0. Would having one variable and an explicit cast result in
> differing storage?

In my understanding,

__p is input (a pointer to the user-space)
__e is output (return value)

Maybe, does it use two variables for clarification?
> > -	unsigned int __ua_flags = uaccess_save_and_enable();	\
> > +	unsigned int __err;					\
> >  	switch (sizeof(*(__p))) {				\
> >  	case 1:							\
> >  		if (sizeof((x)) >= 8)				\
> > @@ -223,9 +224,10 @@ extern int __get_user_64t_4(void *);
> >  		break;						\
> >  	default: __e = __get_user_bad(); break;			\
>
> ^ I think this assignment to __e should be replaced with an assignment
> to __err? We no longer need the register at this point and could skip
> the assignment of x.

Right, but '__err = __e' is necessary for non-default cases.
On Mon, Sep 30, 2019 at 8:01 AM Masahiro Yamada <yamada.masahiro@socionext.com> wrote: > KernelCI reports that bcm2835_defconfig is no longer booting since > commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING > forcibly"): > > https://lkml.org/lkml/2019/9/26/825 > > I also received a regression report from Nicolas Saenz Julienne: > > https://lkml.org/lkml/2019/9/27/263 > > This problem has cropped up on arch/arm/config/bcm2835_defconfig > because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends > to prefer not inlining functions with -Os. I was able to reproduce > it with other boards and defconfig files by manually enabling > CONFIG_CC_OPTIMIZE_FOR_SIZE. > > The __get_user_check() specifically uses r0, r1, r2 registers. > So, uaccess_save_and_enable() and uaccess_restore() must be inlined > in order to avoid those registers being overwritten in the callees. > > Prior to commit 9012d011660e ("compiler: allow all arches to enable > CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for > inlining functions, except on x86. > > Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING. > So, __always_inline is now the only guaranteed way of forcible inlining. > > I want to keep as much compiler's freedom as possible about the inlining > decision. So, I changed the function call order instead of adding > __always_inline around. > > Call uaccess_save_and_enable() before assigning the __p ("r0"), and > uaccess_restore() after evacuating the __e ("r0"). > > Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING") > Reported-by: "kernelci.org bot" <bot@kernelci.org> > Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> > Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Thanks, this fixes the issues I was seeing on r8a7791/koelsch. Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Gr{oetje,eeting}s, Geert
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 303248e5b990..559f252d7e3c 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -191,11 +191,12 @@ extern int __get_user_64t_4(void *);
 #define __get_user_check(x, p)					\
 ({								\
 	unsigned long __limit = current_thread_info()->addr_limit - 1; \
+	unsigned int __ua_flags = uaccess_save_and_enable();	\
 	register typeof(*(p)) __user *__p asm("r0") = (p);	\
 	register __inttype(x) __r2 asm("r2");			\
 	register unsigned long __l asm("r1") = __limit;		\
 	register int __e asm("r0");				\
-	unsigned int __ua_flags = uaccess_save_and_enable();	\
+	unsigned int __err;					\
 	switch (sizeof(*(__p))) {				\
 	case 1:							\
 		if (sizeof((x)) >= 8)				\
@@ -223,9 +224,10 @@ extern int __get_user_64t_4(void *);
 		break;						\
 	default: __e = __get_user_bad(); break;			\
 	}							\
-	uaccess_restore(__ua_flags);				\
+	__err = __e;						\
 	x = (typeof(*(p))) __r2;				\
-	__e;							\
+	uaccess_restore(__ua_flags);				\
+	__err;							\
 })

 #define get_user(x, p)						\
KernelCI reports that bcm2835_defconfig is no longer booting since
commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING
forcibly"):

  https://lkml.org/lkml/2019/9/26/825

I also received a regression report from Nicolas Saenz Julienne:

  https://lkml.org/lkml/2019/9/27/263

This problem has cropped up on arch/arm/config/bcm2835_defconfig
because it enables CONFIG_CC_OPTIMIZE_FOR_SIZE. The compiler tends
to prefer not inlining functions with -Os. I was able to reproduce
it with other boards and defconfig files by manually enabling
CONFIG_CC_OPTIMIZE_FOR_SIZE.

The __get_user_check() specifically uses the r0, r1, r2 registers.
So, uaccess_save_and_enable() and uaccess_restore() must be inlined
in order to avoid those registers being overwritten in the callees.

Prior to commit 9012d011660e ("compiler: allow all arches to enable
CONFIG_OPTIMIZE_INLINING"), the 'inline' marker was always enough for
inlining functions, except on x86.

Since that commit, all architectures can enable CONFIG_OPTIMIZE_INLINING,
so __always_inline is now the only guaranteed way of forcible inlining.

I want to preserve as much of the compiler's freedom as possible over
the inlining decision. So, I changed the function call order instead
of adding __always_inline.

Call uaccess_save_and_enable() before assigning __p ("r0"), and
uaccess_restore() after the result has been moved out of __e ("r0").

Fixes: 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING")
Reported-by: "kernelci.org bot" <bot@kernelci.org>
Reported-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
---

 arch/arm/include/asm/uaccess.h | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)