Message ID | 9eea4b61f7b6300def3b6582d8e465ef4207501e.1715286093.git.namcao@linutronix.de (mailing list archive)
---|---
State | Superseded
Series | remove size limit on XIP kernel

Context | Check | Description |
---|---|---
conchuod/vmtest-fixes-PR | fail | merge-conflict |
On 10/05/2024 08:28, Nam Cao wrote:
> XIP_OFFSET is the hard-coded offset of writable data section within the
> kernel.
>
> By hard-coding this value, the read-only section of the kernel (which is
> placed before the writable data section) is restricted in size. This causes
> build failures if the kernel get too big (an example is in Closes:).

s/get/gets

I think you can use:
Closes: https://lore.kernel.org/oe-kbuild-all/202404211031.J6l2AfJk-lkp@intel.com/ [1]

And instead use: s/an example is in Closes:/[1]

>
> Remove this limit.
>
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202404211031.J6l2AfJk-lkp@intel.com/
> Signed-off-by: Nam Cao <namcao@linutronix.de>
> ---
>  arch/riscv/include/asm/pgtable.h | 7 -------
>  arch/riscv/kernel/vmlinux-xip.lds.S | 4 ++--
>  2 files changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index fbf342f4afee..75f4a92ea5bb 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -104,13 +104,6 @@
>
>  #endif
>
> -#ifdef CONFIG_XIP_KERNEL
> -#define XIP_OFFSET SZ_32M
> -#define XIP_OFFSET_MASK (SZ_32M - 1)
> -#else
> -#define XIP_OFFSET 0
> -#endif
> -
>  #ifndef __ASSEMBLY__
>
>  #include <asm/page.h>
> diff --git a/arch/riscv/kernel/vmlinux-xip.lds.S b/arch/riscv/kernel/vmlinux-xip.lds.S
> index 8c3daa1b0531..01f73f2ffecc 100644
> --- a/arch/riscv/kernel/vmlinux-xip.lds.S
> +++ b/arch/riscv/kernel/vmlinux-xip.lds.S
> @@ -65,10 +65,10 @@ SECTIONS
>  	 * From this point, stuff is considered writable and will be copied to RAM
>  	 */
>  	__data_loc = ALIGN(PAGE_SIZE);	/* location in file */
> -	. = KERNEL_LINK_ADDR + XIP_OFFSET;	/* location in memory */
> +	. = ALIGN(SZ_2M);	/* location in memory */

You can't use SZ_2M here since it corresponds to PMD_SIZE for rv64 but on
rv32 (which is allowed to use xip kernels), it's 4MB. Use SECTION_ALIGN
instead.

>
>  #undef LOAD_OFFSET
> -#define LOAD_OFFSET (KERNEL_LINK_ADDR + XIP_OFFSET - (__data_loc & XIP_OFFSET_MASK))
> +#define LOAD_OFFSET (KERNEL_LINK_ADDR + _sdata - __data_loc)
>
>  	_sdata = .;	/* Start of data section */
>  	_data = .;

When the above comment is fixed, you can add:

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

And many thanks for this big cleanup.

Alex
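[Editorial note: for context on the 2 MB vs 4 MB figures Alex mentions, they follow from the page-table geometry of the two RISC-V MMU modes. A small illustration; the EXAMPLE_* names are made up for this note (PMD_SIZE, SZ_2M and SZ_4K are the real kernel symbols), only the arithmetic matters:

    /*
     * Sv39 (rv64): 4 KiB pages, 512 entries per table, so one second-level
     * (PMD) entry maps 512 * 4 KiB = 2 MiB  -> PMD_SIZE == SZ_2M.
     * Sv32 (rv32): 4 KiB pages, 1024 entries per table, so one first-level
     * entry maps 1024 * 4 KiB = 4 MiB -> the smallest block mapping is 4 MiB.
     */
    #define EXAMPLE_RV64_BLOCK_SIZE (512UL  * SZ_4K)   /* 2 MiB */
    #define EXAMPLE_RV32_BLOCK_SIZE (1024UL * SZ_4K)   /* 4 MiB */

This is why an alignment that is "one block mapping" cannot simply be spelled SZ_2M on both 32-bit and 64-bit.]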
On Mon, May 27, 2024 at 02:58:14PM +0200, Alexandre Ghiti wrote:
> > diff --git a/arch/riscv/kernel/vmlinux-xip.lds.S b/arch/riscv/kernel/vmlinux-xip.lds.S
> > index 8c3daa1b0531..01f73f2ffecc 100644
> > --- a/arch/riscv/kernel/vmlinux-xip.lds.S
> > +++ b/arch/riscv/kernel/vmlinux-xip.lds.S
> > @@ -65,10 +65,10 @@ SECTIONS
> >  	 * From this point, stuff is considered writable and will be copied to RAM
> >  	 */
> >  	__data_loc = ALIGN(PAGE_SIZE);	/* location in file */
> > -	. = KERNEL_LINK_ADDR + XIP_OFFSET;	/* location in memory */
> > +	. = ALIGN(SZ_2M);	/* location in memory */
>
> You can't use SZ_2M here since it corresponds to PMD_SIZE for rv64 but on
> rv32 (which is allowed to use xip kernels), it's 4MB. Use SECTION_ALIGN
> instead.

SECTION_ALIGN doesn't work unfortunately. For XIP, SECTION_ALIGN is
L1_CACHE_BYTES, which is 64 bytes, but we need at least PMD_SIZE alignment
to set up the virtual mapping.

Ideally we would use PMD_SIZE here, but I can't #include that header file.
Probably we could refactor the header files so that the header file that
PMD_SIZE is in can be #included. But I am not sure if it's worth it.

I'm thinking just go for:

#ifdef CONFIG_64BIT
	. = ALIGN(SZ_2M);
#else
	. = ALIGN(SZ_4M);
#endif

Or even simpler, just:

	. = ALIGN(SZ_4M);

As much as I hate magic numbers, I think we can give the linker script some
leeway. Perhaps with an explanation of why this alignment is chosen?

Or do you have a better idea?

Best regards,
Nam
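[Editorial note: a minimal sketch of what the #ifdef variant proposed above could look like in context in vmlinux-xip.lds.S. This is illustrative only, not the posted v2, and it assumes SZ_2M/SZ_4M remain visible to the preprocessed linker script through its existing includes:

    	__data_loc = ALIGN(PAGE_SIZE);	/* location in file */
    #ifdef CONFIG_64BIT
    	. = ALIGN(SZ_2M);		/* rv64: matches PMD_SIZE */
    #else
    	. = ALIGN(SZ_4M);		/* rv32: matches the 4 MiB block-mapping size */
    #endif

Either branch keeps the data section on a boundary that the boot code can map with a single block entry, without reintroducing a hard-coded offset from the start of the kernel.]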
On Sun, Jun 02, 2024 at 09:32:17AM +0200, Nam Cao wrote:
> On Mon, May 27, 2024 at 02:58:14PM +0200, Alexandre Ghiti wrote:
> > > diff --git a/arch/riscv/kernel/vmlinux-xip.lds.S b/arch/riscv/kernel/vmlinux-xip.lds.S
> > > index 8c3daa1b0531..01f73f2ffecc 100644
> > > --- a/arch/riscv/kernel/vmlinux-xip.lds.S
> > > +++ b/arch/riscv/kernel/vmlinux-xip.lds.S
> > > @@ -65,10 +65,10 @@ SECTIONS
> > >  	 * From this point, stuff is considered writable and will be copied to RAM
> > >  	 */
> > >  	__data_loc = ALIGN(PAGE_SIZE);	/* location in file */
> > > -	. = KERNEL_LINK_ADDR + XIP_OFFSET;	/* location in memory */
> > > +	. = ALIGN(SZ_2M);	/* location in memory */
> >
> > You can't use SZ_2M here since it corresponds to PMD_SIZE for rv64 but on
> > rv32 (which is allowed to use xip kernels), it's 4MB. Use SECTION_ALIGN
> > instead.
>
> SECTION_ALIGN doesn't work unfortunately. For XIP, SECTION_ALIGN is
> L1_CACHE_BYTES, which is 64 bytes, but we need at least PMD_SIZE alignment
> to set up the virtual mapping.

Sorry, I think I had tunnel vision. The solution is so obvious. I will send
v2 shortly.

Thanks so much for spending time reviewing.

Best regards,
Nam

> Ideally we would use PMD_SIZE here, but I can't #include that header file.
> Probably we could refactor the header files so that the header file that
> PMD_SIZE is in can be #included. But I am not sure if it's worth it.
>
> I'm thinking just go for:
>
> #ifdef CONFIG_64BIT
> 	. = ALIGN(SZ_2M);
> #else
> 	. = ALIGN(SZ_4M);
> #endif
>
> Or even simpler, just:
>
> 	. = ALIGN(SZ_4M);
>
> As much as I hate magic numbers, I think we can give the linker script some
> leeway. Perhaps with an explanation of why this alignment is chosen?
>
> Or do you have a better idea?
>
> Best regards,
> Nam
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index fbf342f4afee..75f4a92ea5bb 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -104,13 +104,6 @@
 
 #endif
 
-#ifdef CONFIG_XIP_KERNEL
-#define XIP_OFFSET SZ_32M
-#define XIP_OFFSET_MASK (SZ_32M - 1)
-#else
-#define XIP_OFFSET 0
-#endif
-
 #ifndef __ASSEMBLY__
 
 #include <asm/page.h>
diff --git a/arch/riscv/kernel/vmlinux-xip.lds.S b/arch/riscv/kernel/vmlinux-xip.lds.S
index 8c3daa1b0531..01f73f2ffecc 100644
--- a/arch/riscv/kernel/vmlinux-xip.lds.S
+++ b/arch/riscv/kernel/vmlinux-xip.lds.S
@@ -65,10 +65,10 @@ SECTIONS
 	 * From this point, stuff is considered writable and will be copied to RAM
 	 */
 	__data_loc = ALIGN(PAGE_SIZE);		/* location in file */
-	. = KERNEL_LINK_ADDR + XIP_OFFSET;	/* location in memory */
+	. = ALIGN(SZ_2M);			/* location in memory */
 
 #undef LOAD_OFFSET
-#define LOAD_OFFSET (KERNEL_LINK_ADDR + XIP_OFFSET - (__data_loc & XIP_OFFSET_MASK))
+#define LOAD_OFFSET (KERNEL_LINK_ADDR + _sdata - __data_loc)
 
 	_sdata = .;			/* Start of data section */
 	_data = .;
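[Editorial note: for readers unfamiliar with the LOAD_OFFSET convention, the generic section macros in include/asm-generic/vmlinux.lds.h emit each output section with AT(ADDR(section) - LOAD_OFFSET), i.e. load address (LMA) = link address - LOAD_OFFSET. A sketch of what the two definitions resolve to for the data section, assuming KERNEL_LINK_ADDR is SZ_32M-aligned and the read-only part fits below 32 MB (the very assumption the patch removes):

    old: LMA = (KERNEL_LINK_ADDR + XIP_OFFSET)
               - (KERNEL_LINK_ADDR + XIP_OFFSET - (__data_loc & XIP_OFFSET_MASK))
             = __data_loc & XIP_OFFSET_MASK
    new: LMA = _sdata - (KERNEL_LINK_ADDR + _sdata - __data_loc)
             = __data_loc - KERNEL_LINK_ADDR

Under the old 32 MB assumption both expressions give the offset of __data_loc within the XIP image; the new form simply no longer bakes that limit in.]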
XIP_OFFSET is the hard-coded offset of writable data section within the
kernel.

By hard-coding this value, the read-only section of the kernel (which is
placed before the writable data section) is restricted in size. This causes
build failures if the kernel get too big (an example is in Closes:).

Remove this limit.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202404211031.J6l2AfJk-lkp@intel.com/
Signed-off-by: Nam Cao <namcao@linutronix.de>
---
 arch/riscv/include/asm/pgtable.h    | 7 -------
 arch/riscv/kernel/vmlinux-xip.lds.S | 4 ++--
 2 files changed, 2 insertions(+), 9 deletions(-)
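[Editorial note: a rough illustration of the restriction described above; the layout is inferred from the linker script, the sizes from XIP_OFFSET = SZ_32M:

    KERNEL_LINK_ADDR + 0      : .text, .rodata, ...   executed in place, must fit in 32 MB
    KERNEL_LINK_ADDR + SZ_32M : .data, ...            writable, copied to RAM at boot

Because the writable data is always linked 32 MB past the start of the kernel, a configuration whose read-only portion grows beyond 32 MB can no longer be laid out, which is the kind of build failure the kbuild robot reported in the Closes: link.]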