
arm64: mm: Increase MODULES_VSIZE to 512MB

Message ID 20230326170756.3021936-1-sdonthineni@nvidia.com (mailing list archive)
State New, archived
Series arm64: mm: Increase MODULES_VSIZE to 512MB

Commit Message

Shanker Donthineni March 26, 2023, 5:07 p.m. UTC
The allocation of modules occurs in two regions: the first region
MODULES_VSIZE (128MB) is shared with the core kernel, while
the second region (2GB) is shared with other vmalloc callers.
Depending on the size of the core kernel, the 128MB region may
quickly fill up after loading a few modules, causing the system
to switch to the 2GB region. Unfortunately, even the 2GB region
can run out of space if previously loaded modules and the other
kernel subsystems consume the entire area, leaving no space for
additional modules.

This issue usually occurs when the system has a large number of
CPU cores, PCIe host-bridge controllers, and I/O devices. For
instance, the ECAM region of one host-bridge controller can use
up to 256MB of vmalloc space, while eight controllers can occupy
the entire 2GB.

One potential solution to address this issue is to increase the
size of the MODULES_VSIZE region to 512MB, which would enhance
the system's ability to support a greater number of dynamically
loaded modules and drivers.

Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
---

I am seeking your guidance and feedback on the proposed solution
to address the module load failures, specifically regarding any
potential side effects that I need to be aware of. Additionally,
I would appreciate your suggestions on any alternative solutions
to resolve the issue.

On an NVIDIA T241 system with Ubuntu 22.04, we are hitting boot failures
due to vmalloc/vmap allocation errors when loading modules.
dmesg:
 [   64.181308] ipmi_ssif: IPMI SSIF Interface driver
 [   64.184494] usbcore: registered new interface driver r8152
 [   64.242492] vmap allocation for size 393216 failed: use vmalloc=<size> to increase size
 [   64.242499] systemd-udevd: vmalloc error: size 327680, vm_struct allocation failed, mode:0xcc0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0-3
 [   64.242510] CPU: 32 PID: 2910 Comm: systemd-udevd Tainted: G           OE      6.2-generic-64k 
 [   64.242513] Hardware name: NVIDIA T241, BIOS v1.1.0 2023-03-18T21:32:31+00:00
 [   64.242515] Call trace:
 [   64.242516]  dump_backtrace+0xe0/0x130
 [   64.242523]  show_stack+0x20/0x60
 [   64.242525]  dump_stack_lvl+0x68/0x84
 [   64.242530]  dump_stack+0x18/0x34
 [   64.242532]  warn_alloc+0x11c/0x1b0
 [   64.242537]  __vmalloc_node_range+0xe0/0x20c
 [   64.242540]  module_alloc+0x118/0x160
 [   64.242543]  move_module+0x2c/0x190
 [   64.242546]  layout_and_allocate+0xfc/0x160
 [   64.242548]  load_module+0x260/0xbc4
 [   64.242549]  __do_sys_finit_module+0xac/0x130
 [   64.242551]  __arm64_sys_finit_module+0x28/0x34
 [   64.242552]  invoke_syscall+0x78/0x100
 [   64.242553]  el0_svc_common.constprop.0+0x170/0x194
 [   64.242555]  do_el0_svc+0x38/0x4c
 [   64.242556]  el0_svc+0x2c/0xc0
 [   64.242558]  el0t_64_sync_handler+0xbc/0x13c
 [   64.242560]  el0t_64_sync+0x1a0/0x1a4

 Documentation/arm64/memory.rst  | 8 ++++----
 arch/arm64/include/asm/memory.h | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)
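
For context, the "two regions" described above correspond to the two
__vmalloc_node_range() calls in the arm64 module_alloc() implementation.
A simplified sketch of the v6.2 code follows (KASAN conditions and error
handling elided, so treat this as illustrative rather than exact):

 void *module_alloc(unsigned long size)
 {
         u64 module_alloc_end = module_alloc_base + MODULES_VSIZE;
         gfp_t gfp_mask = GFP_KERNEL;
         void *p;

         /* don't warn if the preferred 128MB region is already full */
         if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
                 gfp_mask |= __GFP_NOWARN;

         /* first try: the MODULES_VSIZE (128MB) module region */
         p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
                                  module_alloc_end, gfp_mask, PAGE_KERNEL,
                                  VM_DEFER_KMEMLEAK, NUMA_NO_NODE,
                                  __builtin_return_address(0));

         /* fallback: a 2GB window shared with other vmalloc callers;
          * out-of-range branches are handled via module PLTs */
         if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS))
                 p = __vmalloc_node_range(size, MODULE_ALIGN,
                                          module_alloc_base,
                                          module_alloc_base + SZ_2G,
                                          GFP_KERNEL, PAGE_KERNEL, 0,
                                          NUMA_NO_NODE,
                                          __builtin_return_address(0));

         return kasan_reset_tag(p);
 }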

Comments

Ard Biesheuvel March 26, 2023, 5:35 p.m. UTC | #1
On Sun, 26 Mar 2023 at 19:08, Shanker Donthineni <sdonthineni@nvidia.com> wrote:
>
> The allocation of modules occurs in two regions: the first region
> MODULES_VSIZE (128MB) is shared with the core kernel, while
> the second region (2GB) is shared with other vmalloc callers.
> Depending on the size of the core kernel, the 128MB region may
> quickly fill up after loading a few modules, causing the system
> to switch to the 2GB region.

How much module space are you actually using? This 128 MiB region is
not shared with vmalloc() so it should be dedicated to modules
entirely.

If you are doing EFI boot, you may need the following patch to ensure
that the 128 MiB region is actually the one being used.

commit 010338d729c1090036eb40d2a60b7b7bce2445b8
Author: Ard Biesheuvel <ardb@kernel.org>
Date:   Thu Feb 23 21:41:01 2023 +0100

    arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN


> Unfortunately, even the 2GB region
> can run out of space if previously loaded modules and the other
> kernel subsystems consume the entire area, leaving no space for
> additional modules.
>
> This issue usually occurs when the system has a large number of
> CPU cores, PCIe host-bridge controllers, and I/O devices. For
> instance, the ECAM region of one host-bridge controller can use
> up to 256MB of vmalloc space, while eight controllers can occupy
> the entire 2GB.
>
> One potential solution to address this issue is to increase the
> size of the MODULES_VSIZE region to 512MB, which would enhance
> the system's ability to support a greater number of dynamically
> loaded modules and drivers.
>
> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
> ---
>
> I am seeking your guidance and feedback on the proposed solution
> to address the module load failures, specifically regarding any
> potential side effects that I need to be aware of. Additionally,
> I would appreciate your suggestions on any alternative solutions
> to resolve the issue.
>
> On an NVIDIA T241 system with Ubuntu 22.04, we are hitting boot failures
> due to vmalloc/vmap allocation errors when loading modules.
> dmesg:
>  [   64.181308] ipmi_ssif: IPMI SSIF Interface driver
>  [   64.184494] usbcore: registered new interface driver r8152
>  [   64.242492] vmap allocation for size 393216 failed: use vmalloc=<size> to increase size
>  [   64.242499] systemd-udevd: vmalloc error: size 327680, vm_struct allocation failed, mode:0xcc0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0-3
>  [   64.242510] CPU: 32 PID: 2910 Comm: systemd-udevd Tainted: G           OE      6.2-generic-64k
>  [   64.242513] Hardware name: NVIDIA T241, BIOS v1.1.0 2023-03-18T21:32:31+00:00
>  [   64.242515] Call trace:
>  [   64.242516]  dump_backtrace+0xe0/0x130
>  [   64.242523]  show_stack+0x20/0x60
>  [   64.242525]  dump_stack_lvl+0x68/0x84
>  [   64.242530]  dump_stack+0x18/0x34
>  [   64.242532]  warn_alloc+0x11c/0x1b0
>  [   64.242537]  __vmalloc_node_range+0xe0/0x20c
>  [   64.242540]  module_alloc+0x118/0x160
>  [   64.242543]  move_module+0x2c/0x190
>  [   64.242546]  layout_and_allocate+0xfc/0x160
>  [   64.242548]  load_module+0x260/0xbc4
>  [   64.242549]  __do_sys_finit_module+0xac/0x130
>  [   64.242551]  __arm64_sys_finit_module+0x28/0x34
>  [   64.242552]  invoke_syscall+0x78/0x100
>  [   64.242553]  el0_svc_common.constprop.0+0x170/0x194
>  [   64.242555]  do_el0_svc+0x38/0x4c
>  [   64.242556]  el0_svc+0x2c/0xc0
>  [   64.242558]  el0t_64_sync_handler+0xbc/0x13c
>  [   64.242560]  el0t_64_sync+0x1a0/0x1a4
>
>  Documentation/arm64/memory.rst  | 8 ++++----
>  arch/arm64/include/asm/memory.h | 2 +-
>  2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/Documentation/arm64/memory.rst b/Documentation/arm64/memory.rst
> index 2a641ba7be3b..76c2fd8bbbf7 100644
> --- a/Documentation/arm64/memory.rst
> +++ b/Documentation/arm64/memory.rst
> @@ -33,8 +33,8 @@ AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
>    0000000000000000     0000ffffffffffff         256TB          user
>    ffff000000000000     ffff7fffffffffff         128TB          kernel logical memory map
>   [ffff600000000000     ffff7fffffffffff]         32TB          [kasan shadow region]
> -  ffff800000000000     ffff800007ffffff         128MB          modules
> -  ffff800008000000     fffffbffefffffff         124TB          vmalloc
> +  ffff800000000000     ffff80001fffffff         512MB          modules
> +  ffff800020000000     fffffbffefffffff         124TB          vmalloc
>    fffffbfff0000000     fffffbfffdffffff         224MB          fixed mappings (top down)
>    fffffbfffe000000     fffffbfffe7fffff           8MB          [guard region]
>    fffffbfffe800000     fffffbffff7fffff          16MB          PCI I/O space
> @@ -50,8 +50,8 @@ AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support):
>    0000000000000000     000fffffffffffff           4PB          user
>    fff0000000000000     ffff7fffffffffff          ~4PB          kernel logical memory map
>   [fffd800000000000     ffff7fffffffffff]        512TB          [kasan shadow region]
> -  ffff800000000000     ffff800007ffffff         128MB          modules
> -  ffff800008000000     fffffbffefffffff         124TB          vmalloc
> +  ffff800000000000     ffff80001fffffff         512MB          modules
> +  ffff800020000000     fffffbffefffffff         124TB          vmalloc
>    fffffbfff0000000     fffffbfffdffffff         224MB          fixed mappings (top down)
>    fffffbfffe000000     fffffbfffe7fffff           8MB          [guard region]
>    fffffbfffe800000     fffffbffff7fffff          16MB          PCI I/O space
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 78e5163836a0..dd5d634e235f 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -46,7 +46,7 @@
>  #define KIMAGE_VADDR           (MODULES_END)
>  #define MODULES_END            (MODULES_VADDR + MODULES_VSIZE)
>  #define MODULES_VADDR          (_PAGE_END(VA_BITS_MIN))
> -#define MODULES_VSIZE          (SZ_128M)
> +#define MODULES_VSIZE          (SZ_512M)
>  #define VMEMMAP_START          (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
>  #define VMEMMAP_END            (VMEMMAP_START + VMEMMAP_SIZE)
>  #define PCI_IO_END             (VMEMMAP_START - SZ_8M)
> --
> 2.25.1
>
Shanker Donthineni March 26, 2023, 6:59 p.m. UTC | #2
Thanks Ard for the quick feedback.

On 3/26/23 12:35, Ard Biesheuvel wrote:
> On Sun, 26 Mar 2023 at 19:08, Shanker Donthineni <sdonthineni@nvidia.com> wrote:
>>
>> The allocation of modules occurs in two regions: the first region
>> MODULES_VSIZE (128MB) is shared with the core kernel, while
>> the second region (2GB) is shared with other vmalloc callers.
>> Depending on the size of the core kernel, the 128MB region may
>> quickly fill up after loading a few modules, causing the system
>> to switch to the 2GB region.
> 
> How much module space are you actually using? This 128 MiB region is
> not shared with vmalloc() so it should be dedicated to modules
> entirely.
> 
Is it correct to say that if the KASLR feature is disabled, 128MB is
being shared between the kernel and modules? Approximately 110MB is used
by the NVIDIA GPU driver, resulting in the usage of more than 128MB.

root@localhost:~# cat /proc/kallsyms | grep -wE '_etext|_stext|_end'
ffff8000081d0000 T _stext
ffff800009390000 D _etext
ffff80000b4d0000 B _end

root@localhost:~# cat /proc/vmallocinfo | more
0xffff800001390000-0xffff800001450000  786432 move_module+0x2c/0x190 pages=11 vmalloc N0=11
0xffff800001450000-0xffff8000014b0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
0xffff8000014f0000-0xffff800001550000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
0xffff800001590000-0xffff8000015f0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
0xffff800001630000-0xffff800001690000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
0xffff8000016d0000-0xffff800001740000  458752 move_module+0x2c/0x190 pages=6 vmalloc N0=6
0xffff800001780000-0xffff8000017e0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
0xffff800001820000-0xffff800001880000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
...

The first module was loaded at address 0xffff800001390000.

Less than 128MB is available for modules if KASLR is disabled.
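
Spelling out the arithmetic (assuming the non-randomized default of
module_alloc_base = (u64)_etext - MODULES_VSIZE from mainline, which
appears to be what is in effect here -- please correct me if not):

  module_alloc_base = _etext - MODULES_VSIZE
                    = 0xffff800009390000 - 0x08000000
                    = 0xffff800001390000   <- first module address above

  KIMAGE_VADDR (no KASLR) = MODULES_END = 0xffff800008000000, so the
  usable module space is roughly

  0xffff800008000000 - 0xffff800001390000 ~= 108MB

  i.e. the remaining ~20MB of the 128MB window overlaps the kernel
  image itself.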

> If you are doing EFI boot, you may need the following patch to ensure
> that the 128 MiB region is actually the one being used.
> 
> commit 010338d729c1090036eb40d2a60b7b7bce2445b8
> Author: Ard Biesheuvel <ardb@kernel.org>
> Date:   Thu Feb 23 21:41:01 2023 +0100
> 
>      arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN
> 
> 
I have included your patch to prevent the incorrect detection of the
KASLR feature. Otherwise, I hit a different error,
"overflow in relocation type 261" (R_AARCH64_PREL32), which seems to
be due to the incorrect initialization of module_alloc_base.

>> Unfortunately, even the 2GB region
>> can run out of space if previously loaded modules and the other
>> kernel subsystems consume the entire area, leaving no space for
>> additional modules.
>>
>> This issue usually occurs when the system has a large number of
>> CPU cores, PCIe host-bridge controllers, and I/O devices. For
>> instance, the ECAM region of one host-bridge controller can use
>> up to 256MB of vmalloc space, while eight controllers can occupy
>> the entire 2GB.
>>
>> One potential solution to address this issue is to increase the
>> size of the MODULES_VSIZE region to 512MB, which would enhance
>> the system's ability to support a greater number of dynamically
>> loaded modules and drivers.
>>
>> Signed-off-by: Shanker Donthineni <sdonthineni@nvidia.com>
>> ---
>>
>> I am seeking your guidance and feedback on the proposed solution
>> to address the module load failures, specifically regarding any
>> potential side effects that I need to be aware of. Additionally,
>> I would appreciate your suggestions on any alternative solutions
>> to resolve the issue.
>>
>> On an NVIDIA T241 system with Ubuntu 22.04, we are hitting boot failures
>> due to vmalloc/vmap allocation errors when loading modules.
>> dmesg:
>>   [   64.181308] ipmi_ssif: IPMI SSIF Interface driver
>>   [   64.184494] usbcore: registered new interface driver r8152
>>   [   64.242492] vmap allocation for size 393216 failed: use vmalloc=<size> to increase size
>>   [   64.242499] systemd-udevd: vmalloc error: size 327680, vm_struct allocation failed, mode:0xcc0(GFP_KERNEL), nodemask=(null),cpuset=/,mems_allowed=0-3
>>   [   64.242510] CPU: 32 PID: 2910 Comm: systemd-udevd Tainted: G           OE      6.2-generic-64k
>>   [   64.242513] Hardware name: NVIDIA T241, BIOS v1.1.0 2023-03-18T21:32:31+00:00
>>   [   64.242515] Call trace:
>>   [   64.242516]  dump_backtrace+0xe0/0x130
>>   [   64.242523]  show_stack+0x20/0x60
>>   [   64.242525]  dump_stack_lvl+0x68/0x84
>>   [   64.242530]  dump_stack+0x18/0x34
>>   [   64.242532]  warn_alloc+0x11c/0x1b0
>>   [   64.242537]  __vmalloc_node_range+0xe0/0x20c
>>   [   64.242540]  module_alloc+0x118/0x160
>>   [   64.242543]  move_module+0x2c/0x190
>>   [   64.242546]  layout_and_allocate+0xfc/0x160
>>   [   64.242548]  load_module+0x260/0xbc4
>>   [   64.242549]  __do_sys_finit_module+0xac/0x130
>>   [   64.242551]  __arm64_sys_finit_module+0x28/0x34
>>   [   64.242552]  invoke_syscall+0x78/0x100
>>   [   64.242553]  el0_svc_common.constprop.0+0x170/0x194
>>   [   64.242555]  do_el0_svc+0x38/0x4c
>>   [   64.242556]  el0_svc+0x2c/0xc0
>>   [   64.242558]  el0t_64_sync_handler+0xbc/0x13c
>>   [   64.242560]  el0t_64_sync+0x1a0/0x1a4
>>
>>   Documentation/arm64/memory.rst  | 8 ++++----
>>   arch/arm64/include/asm/memory.h | 2 +-
>>   2 files changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/arm64/memory.rst b/Documentation/arm64/memory.rst
>> index 2a641ba7be3b..76c2fd8bbbf7 100644
>> --- a/Documentation/arm64/memory.rst
>> +++ b/Documentation/arm64/memory.rst
>> @@ -33,8 +33,8 @@ AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
>>     0000000000000000     0000ffffffffffff         256TB          user
>>     ffff000000000000     ffff7fffffffffff         128TB          kernel logical memory map
>>    [ffff600000000000     ffff7fffffffffff]         32TB          [kasan shadow region]
>> -  ffff800000000000     ffff800007ffffff         128MB          modules
>> -  ffff800008000000     fffffbffefffffff         124TB          vmalloc
>> +  ffff800000000000     ffff80001fffffff         512MB          modules
>> +  ffff800020000000     fffffbffefffffff         124TB          vmalloc
>>     fffffbfff0000000     fffffbfffdffffff         224MB          fixed mappings (top down)
>>     fffffbfffe000000     fffffbfffe7fffff           8MB          [guard region]
>>     fffffbfffe800000     fffffbffff7fffff          16MB          PCI I/O space
>> @@ -50,8 +50,8 @@ AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support):
>>     0000000000000000     000fffffffffffff           4PB          user
>>     fff0000000000000     ffff7fffffffffff          ~4PB          kernel logical memory map
>>    [fffd800000000000     ffff7fffffffffff]        512TB          [kasan shadow region]
>> -  ffff800000000000     ffff800007ffffff         128MB          modules
>> -  ffff800008000000     fffffbffefffffff         124TB          vmalloc
>> +  ffff800000000000     ffff80001fffffff         512MB          modules
>> +  ffff800020000000     fffffbffefffffff         124TB          vmalloc
>>     fffffbfff0000000     fffffbfffdffffff         224MB          fixed mappings (top down)
>>     fffffbfffe000000     fffffbfffe7fffff           8MB          [guard region]
>>     fffffbfffe800000     fffffbffff7fffff          16MB          PCI I/O space
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 78e5163836a0..dd5d634e235f 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -46,7 +46,7 @@
>>   #define KIMAGE_VADDR           (MODULES_END)
>>   #define MODULES_END            (MODULES_VADDR + MODULES_VSIZE)
>>   #define MODULES_VADDR          (_PAGE_END(VA_BITS_MIN))
>> -#define MODULES_VSIZE          (SZ_128M)
>> +#define MODULES_VSIZE          (SZ_512M)
>>   #define VMEMMAP_START          (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
>>   #define VMEMMAP_END            (VMEMMAP_START + VMEMMAP_SIZE)
>>   #define PCI_IO_END             (VMEMMAP_START - SZ_8M)
>> --
>> 2.25.1
>>
Ard Biesheuvel March 29, 2023, 4:07 p.m. UTC | #3
On Sun, 26 Mar 2023 at 20:59, Shanker Donthineni <sdonthineni@nvidia.com> wrote:
>
> Thanks Ard for the quick feedback.
>
> On 3/26/23 12:35, Ard Biesheuvel wrote:
> > On Sun, 26 Mar 2023 at 19:08, Shanker Donthineni <sdonthineni@nvidia.com> wrote:
> >>
> >> The allocation of modules occurs in two regions: the first region
> >> MODULES_VSIZE (128MB) is shared with the core kernel, while
> >> the second region (2GB) is shared with other vmalloc callers.
> >> Depending on the size of the core kernel, the 128MB region may
> >> quickly fill up after loading a few modules, causing the system
> >> to switch to the 2GB region.
> >
> > How much module space are you actually using? This 128 MiB region is
> > not shared with vmalloc() so it should be dedicated to modules
> > entirely.
> >
> Is it correct to say that if the KASLR feature is disabled, 128MB is
> being shared between the kernel and modules? Approximately 110MB is used
> by the NVIDIA GPU driver, resulting in the usage of more than 128MB.
>
> root@localhost:~# cat /proc/kallsyms | grep -wE '_etext|_stext|_end'
> ffff8000081d0000 T _stext
> ffff800009390000 D _etext
> ffff80000b4d0000 B _end
>
> root@localhost:~# cat /proc/vmallocinfo | more
> 0xffff800001390000-0xffff800001450000  786432 move_module+0x2c/0x190 pages=11 vmalloc N0=11
> 0xffff800001450000-0xffff8000014b0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
> 0xffff8000014f0000-0xffff800001550000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
> 0xffff800001590000-0xffff8000015f0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
> 0xffff800001630000-0xffff800001690000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
> 0xffff8000016d0000-0xffff800001740000  458752 move_module+0x2c/0x190 pages=6 vmalloc N0=6
> 0xffff800001780000-0xffff8000017e0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
> 0xffff800001820000-0xffff800001880000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
> ...
>
> The first module was loaded at address 0xffff800001390000.
>
> Less than 128MB is available for modules if KASLR is disabled.
>
> > If you are doing EFI boot, you may need the following patch to ensure
> > that the 128 MiB region is actually the one being used.
> >
> > commit 010338d729c1090036eb40d2a60b7b7bce2445b8
> > Author: Ard Biesheuvel <ardb@kernel.org>
> > Date:   Thu Feb 23 21:41:01 2023 +0100
> >
> >      arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN
> >
> >
> I have included your patch to prevent the incorrect detection of the
> KASLR feature. Otherwise, I hit a different error,
> "overflow in relocation type 261" (R_AARCH64_PREL32), which seems to
> be due to the incorrect initialization of module_alloc_base.
>

Hmm, not sure - there was a report about this a while ago but I forgot
the details.

In any case, could we perhaps try something like the below? That way,
we still prefer allocating from the 128 MiB region that is within
direct branching range from the core kernel.

--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -46,7 +46,7 @@
 #define KIMAGE_VADDR           (MODULES_END)
 #define MODULES_END            (MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR          (_PAGE_END(VA_BITS_MIN))
-#define MODULES_VSIZE          (SZ_128M)
+#define MODULES_VSIZE          (SZ_2G)
 #define VMEMMAP_START          (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
 #define VMEMMAP_END            (VMEMMAP_START + VMEMMAP_SIZE)
 #define PCI_IO_END             (VMEMMAP_START - SZ_8M)
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index 5af4975caeb58ff7..b4affe775f23e84f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -37,7 +37,7 @@ void *module_alloc(unsigned long size)
                /* don't exceed the static module region - see below */
                module_alloc_end = MODULES_END;

-       p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+       p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_end - SZ_128M,
                                module_alloc_end, gfp_mask,
PAGE_KERNEL, VM_DEFER_KMEMLEAK,
                                NUMA_NO_NODE, __builtin_return_address(0));
Shanker Donthineni March 29, 2023, 5:47 p.m. UTC | #4
On 3/29/23 11:07, Ard Biesheuvel wrote:
> On Sun, 26 Mar 2023 at 20:59, Shanker Donthineni <sdonthineni@nvidia.com> wrote:
>>
>> Thanks Ard for the quick feedback.
>>
>> On 3/26/23 12:35, Ard Biesheuvel wrote:
>>> External email: Use caution opening links or attachments
>>>
>>>
>>> On Sun, 26 Mar 2023 at 19:08, Shanker Donthineni <sdonthineni@nvidia.com> wrote:
>>>>
>>>> The allocation of modules occurs in two regions: the first region
>>>> MODULES_VSIZE (128MB) is shared with the core kernel, while
>>>> the second region (2GB) is shared with other vmalloc callers.
>>>> Depending on the size of the core kernel, the 128MB region may
>>>> quickly fill up after loading a few modules, causing the system
>>>> to switch to the 2GB region.
>>>
>>> How much module space are you actually using? This 128 MiB region is
>>> not shared with vmalloc() so it should be dedicated to modules
>>> entirely.
>>>
>> Is it correct to say that if the KASLR feature is disabled, 128MB is
>> being shared between the kernel and modules? Approximately 110MB is used
>> by the NVIDIA GPU driver, resulting in the usage of more than 128MB.
>>
>> root@localhost:~# cat /proc/kallsyms | grep -wE '_etext|_stext|_end'
>> ffff8000081d0000 T _stext
>> ffff800009390000 D _etext
>> ffff80000b4d0000 B _end
>>
>> root@localhost:~# cat /proc/vmallocinfo | more
>> 0xffff800001390000-0xffff800001450000  786432 move_module+0x2c/0x190 pages=11 vmalloc N0=11
>> 0xffff800001450000-0xffff8000014b0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
>> 0xffff8000014f0000-0xffff800001550000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
>> 0xffff800001590000-0xffff8000015f0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
>> 0xffff800001630000-0xffff800001690000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
>> 0xffff8000016d0000-0xffff800001740000  458752 move_module+0x2c/0x190 pages=6 vmalloc N0=6
>> 0xffff800001780000-0xffff8000017e0000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
>> 0xffff800001820000-0xffff800001880000  393216 move_module+0x2c/0x190 pages=5 vmalloc N0=5
>> ...
>>
>> The first module was loaded at address 0xffff800001390000.
>>
>> Less than 128MB is available for modules if KASLR is disabled.
>>
>>> If you are doing EFI boot, you may need the following patch to ensure
>>> that the 128 MiB region is actually the one being used.
>>>
>>> commit 010338d729c1090036eb40d2a60b7b7bce2445b8
>>> Author: Ard Biesheuvel <ardb@kernel.org>
>>> Date:   Thu Feb 23 21:41:01 2023 +0100
>>>
>>>       arm64: kaslr: don't pretend KASLR is enabled if offset < MIN_KIMG_ALIGN
>>>
>>>
>> I have included your patch to prevent the incorrect detection of the
>> KASLR feature. Otherwise, I hit a different error,
>> "overflow in relocation type 261" (R_AARCH64_PREL32), which seems to
>> be due to the incorrect initialization of module_alloc_base.
>>
> 
> Hmm, not sure - there was a report about this a while ago but I forgot
> the details.
> 
> In any case, could we perhaps try something like the below? That way,
> we still prefer allocating from the 128 MiB region that is within
> direct branching range from the core kernel.
> 
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -46,7 +46,7 @@
>   #define KIMAGE_VADDR           (MODULES_END)
>   #define MODULES_END            (MODULES_VADDR + MODULES_VSIZE)
>   #define MODULES_VADDR          (_PAGE_END(VA_BITS_MIN))
> -#define MODULES_VSIZE          (SZ_128M)
> +#define MODULES_VSIZE          (SZ_2G)
>   #define VMEMMAP_START          (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
>   #define VMEMMAP_END            (VMEMMAP_START + VMEMMAP_SIZE)
>   #define PCI_IO_END             (VMEMMAP_START - SZ_8M)
> diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
> index 5af4975caeb58ff7..b4affe775f23e84f 100644
> --- a/arch/arm64/kernel/module.c
> +++ b/arch/arm64/kernel/module.c
> @@ -37,7 +37,7 @@ void *module_alloc(unsigned long size)
>                  /* don't exceed the static module region - see below */
>                  module_alloc_end = MODULES_END;
> 
> -       p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
> +       p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_end - SZ_128M,
>                                  module_alloc_end, gfp_mask,
> PAGE_KERNEL, VM_DEFER_KMEMLEAK,
>                                  NUMA_NO_NODE, __builtin_return_address(0));

Thanks Ard, I'll include your suggested changes in the v2 patch.

Patch

diff --git a/Documentation/arm64/memory.rst b/Documentation/arm64/memory.rst
index 2a641ba7be3b..76c2fd8bbbf7 100644
--- a/Documentation/arm64/memory.rst
+++ b/Documentation/arm64/memory.rst
@@ -33,8 +33,8 @@  AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
   0000000000000000	0000ffffffffffff	 256TB		user
   ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
  [ffff600000000000	ffff7fffffffffff]	  32TB		[kasan shadow region]
-  ffff800000000000	ffff800007ffffff	 128MB		modules
-  ffff800008000000	fffffbffefffffff	 124TB		vmalloc
+  ffff800000000000	ffff80001fffffff	 512MB		modules
+  ffff800020000000	fffffbffefffffff	 124TB		vmalloc
   fffffbfff0000000	fffffbfffdffffff	 224MB		fixed mappings (top down)
   fffffbfffe000000	fffffbfffe7fffff	   8MB		[guard region]
   fffffbfffe800000	fffffbffff7fffff	  16MB		PCI I/O space
@@ -50,8 +50,8 @@  AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support):
   0000000000000000	000fffffffffffff	   4PB		user
   fff0000000000000	ffff7fffffffffff	  ~4PB		kernel logical memory map
  [fffd800000000000	ffff7fffffffffff]	 512TB		[kasan shadow region]
-  ffff800000000000	ffff800007ffffff	 128MB		modules
-  ffff800008000000	fffffbffefffffff	 124TB		vmalloc
+  ffff800000000000	ffff80001fffffff	 512MB		modules
+  ffff800020000000	fffffbffefffffff	 124TB		vmalloc
   fffffbfff0000000	fffffbfffdffffff	 224MB		fixed mappings (top down)
   fffffbfffe000000	fffffbfffe7fffff	   8MB		[guard region]
   fffffbfffe800000	fffffbffff7fffff	  16MB		PCI I/O space
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 78e5163836a0..dd5d634e235f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -46,7 +46,7 @@ 
 #define KIMAGE_VADDR		(MODULES_END)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(_PAGE_END(VA_BITS_MIN))
-#define MODULES_VSIZE		(SZ_128M)
+#define MODULES_VSIZE		(SZ_512M)
 #define VMEMMAP_START		(-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
 #define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_8M)
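
For anyone who wants to sanity-check the new boundaries in memory.rst,
here is a small stand-alone userspace calculation (illustrative only,
assuming VA_BITS_MIN = 48 so that MODULES_VADDR = _PAGE_END(48) =
-(1UL << 47) = 0xffff800000000000, which covers both tables above):

 #include <stdio.h>
 #include <stdint.h>

 int main(void)
 {
         const uint64_t modules_vaddr = -(1ULL << 47);
         const uint64_t vsize[] = { 128ULL << 20, 512ULL << 20 };

         for (int i = 0; i < 2; i++) {
                 uint64_t modules_end = modules_vaddr + vsize[i];

                 /* modules occupy [MODULES_VADDR, MODULES_END); the
                  * vmalloc area starts at MODULES_END */
                 printf("MODULES_VSIZE=%3lluMB: modules %llx-%llx, vmalloc starts at %llx\n",
                        (unsigned long long)(vsize[i] >> 20),
                        (unsigned long long)modules_vaddr,
                        (unsigned long long)(modules_end - 1),
                        (unsigned long long)modules_end);
         }
         return 0;
 }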