Message ID: 35FD53F367049845BC99AC72306C23D103D6DB4915FB@CNBJMBX05.corpusers.net (mailing list archive)
State: New, archived
On Mon, Sep 15, 2014 at 06:26:43PM +0800, Wang, Yalin wrote:
> this patch change the __init_end address to a page align address, so that free_initmem()
> can free the whole .init section, because if the end address is not page aligned,
> it will round down to a page align address, then the tail unligned page will not be freed.

Please wrap commit messages at or before column 72 - this makes "git log"
much easier to read once the change has been committed.

I have no objection to the arch/arm part of this patch.  However, since
different people deal with arch/arm and arch/arm64, this patch needs to
be split.

Also, it may be worth patching include/asm-generic/vmlinux.lds.h to
indicate that __initrd_end should be page aligned - this seems to be a
requirement by the (new-ish) free_reserved_area() function, otherwise it
does indeed round down.

(Added Jiang Liu as the person responsible for free_reserved_area() for
any further comments.)

> Signed-off-by: Yalin wang <yalin.wang@sonymobile.com>
> ---
>  arch/arm/kernel/vmlinux.lds.S   | 2 +-
>  arch/arm64/kernel/vmlinux.lds.S | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
> index 6f57cb9..8e95aa4 100644
> --- a/arch/arm/kernel/vmlinux.lds.S
> +++ b/arch/arm/kernel/vmlinux.lds.S
> @@ -219,8 +219,8 @@ SECTIONS
>  	__data_loc = ALIGN(4);		/* location in binary */
>  	. = PAGE_OFFSET + TEXT_OFFSET;
>  #else
> -	__init_end = .;
>  	. = ALIGN(THREAD_SIZE);
> +	__init_end = .;
>  	__data_loc = .;
>  #endif
>
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index 97f0c04..edf8715 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -97,9 +97,9 @@ SECTIONS
>
>  	PERCPU_SECTION(64)
>
> +	. = ALIGN(PAGE_SIZE);
>  	__init_end = .;
>
> -	. = ALIGN(PAGE_SIZE);
>  	_data = .;
>  	_sdata = .;
>  	RW_DATA_SECTION(64, PAGE_SIZE, THREAD_SIZE)
> --
> 1.9.2.msysgit.0
On Mon, Sep 15, 2014 at 11:55:25AM +0100, Russell King - ARM Linux wrote:
> On Mon, Sep 15, 2014 at 06:26:43PM +0800, Wang, Yalin wrote:
> > this patch change the __init_end address to a page align address, so that free_initmem()
> > can free the whole .init section, because if the end address is not page aligned,
> > it will round down to a page align address, then the tail unligned page will not be freed.
>
> Please wrap commit messages at or before column 72 - this makes "git log"
> much easier to read once the change has been committed.
>
> I have no objection to the arch/arm part of this patch.  However, since
> different people deal with arch/arm and arch/arm64, this patch needs to
> be split.

I don't mind how it goes in. If Russell is ok to take the whole patch:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 6f57cb9..8e95aa4 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -219,8 +219,8 @@ SECTIONS
 	__data_loc = ALIGN(4);		/* location in binary */
 	. = PAGE_OFFSET + TEXT_OFFSET;
 #else
-	__init_end = .;
 	. = ALIGN(THREAD_SIZE);
+	__init_end = .;
 	__data_loc = .;
 #endif

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 97f0c04..edf8715 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -97,9 +97,9 @@ SECTIONS

 	PERCPU_SECTION(64)

+	. = ALIGN(PAGE_SIZE);
 	__init_end = .;

-	. = ALIGN(PAGE_SIZE);
 	_data = .;
 	_sdata = .;
 	RW_DATA_SECTION(64, PAGE_SIZE, THREAD_SIZE)
This patch changes the __init_end address to a page-aligned address, so
that free_initmem() can free the whole .init section: if the end address
is not page aligned, it will be rounded down to a page-aligned address,
and the tail unaligned page will not be freed.

Signed-off-by: wang <yalin.wang2010@gmail.com>
---
 arch/arm/kernel/vmlinux.lds.S   | 2 +-
 arch/arm64/kernel/vmlinux.lds.S | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)