arm: skip nomap memblocks while finding the lowmem/highmem boundary

Message ID 20190822034425.25899-1-clin@suse.com (mailing list archive)
State Mainlined
Commit 1d31999cf04c21709f72ceb17e65b54a401330da
Series arm: skip nomap memblocks while finding the lowmem/highmem boundary

Commit Message

Chester Lin Aug. 22, 2019, 3:45 a.m. UTC
adjust_lowmem_bounds() checks every memblock in order to find the boundary
between lowmem and highmem. However, some memblocks could be marked as NOMAP,
so they are not used by the kernel and should be skipped while calculating
the boundary.

Signed-off-by: Chester Lin <clin@suse.com>
---
 arch/arm/mm/mmu.c | 3 +++
 1 file changed, 3 insertions(+)
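
For illustration, below is a minimal userspace sketch of the boundary scan with
the proposed NOMAP check. The struct, the region values and the simplified
limit logic are made up for this example and do not reproduce the full
adjust_lowmem_bounds() implementation.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct region {
	uint64_t base;
	uint64_t size;
	bool nomap;	/* MEMBLOCK_NOMAP: RAM the kernel must not map or use */
};

int main(void)
{
	const uint64_t vmalloc_limit = 0xC0000000ULL;	/* example lowmem ceiling */
	const struct region regions[] = {
		{ 0x80000000ULL, 0x20000000ULL, false },	/* usable RAM */
		{ 0xA0000000ULL, 0x10000000ULL, true  },	/* firmware-reserved, NOMAP */
	};
	uint64_t lowmem_limit = 0;

	for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++) {
		const struct region *reg = &regions[i];
		uint64_t block_end = reg->base + reg->size;

		/* the proposed fix: NOMAP blocks do not move the boundary */
		if (reg->nomap)
			continue;

		if (reg->base < vmalloc_limit && block_end > lowmem_limit)
			lowmem_limit = block_end < vmalloc_limit ?
				       block_end : vmalloc_limit;
	}

	/* prints 0xa0000000; without the nomap check it would be 0xb0000000 */
	printf("lowmem/highmem boundary: 0x%llx\n",
	       (unsigned long long)lowmem_limit);
	return 0;
}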

Comments

Chester Lin Aug. 22, 2019, 3:59 a.m. UTC | #1
On Thu, Aug 22, 2019 at 11:45:34AM +0800, Chester Lin wrote:
> adjust_lowmem_bounds() checks every memblock in order to find the boundary
> between lowmem and highmem. However, some memblocks could be marked as NOMAP,
> so they are not used by the kernel and should be skipped while calculating
> the boundary.
> 
> Signed-off-by: Chester Lin <clin@suse.com>
> ---
>  arch/arm/mm/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 426d9085396b..b86dba44d828 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1181,6 +1181,9 @@ void __init adjust_lowmem_bounds(void)
>  		phys_addr_t block_start = reg->base;
>  		phys_addr_t block_end = reg->base + reg->size;
>  
> +		if (memblock_is_nomap(reg))
> +			continue;
> +
>  		if (reg->base < vmalloc_limit) {
>  			if (block_end > lowmem_limit)
>  				/*
> -- 
> 2.22.0
>

Hi Russell, Mike and Ard,

Per the discussion in the thread "[PATCH] efi/arm: fix allocation failure ..."
(https://lkml.org/lkml/2019/8/21/163), I presume that the change to disregard
NOMAP memblocks in adjust_lowmem_bounds() should be split out as a separate patch.

Please let me know if you have any suggestions, thank you.

Mike Rapoport Aug. 22, 2019, 6:40 a.m. UTC | #2
On Thu, Aug 22, 2019 at 03:45:34AM +0000, Chester Lin wrote:
> adjust_lowmem_bounds() checks every memblock in order to find the boundary
> between lowmem and highmem. However, some memblocks could be marked as NOMAP,
> so they are not used by the kernel and should be skipped while calculating
> the boundary.
> 
> Signed-off-by: Chester Lin <clin@suse.com>

Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>

> ---
>  arch/arm/mm/mmu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 426d9085396b..b86dba44d828 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1181,6 +1181,9 @@ void __init adjust_lowmem_bounds(void)
>  		phys_addr_t block_start = reg->base;
>  		phys_addr_t block_end = reg->base + reg->size;
>  
> +		if (memblock_is_nomap(reg))
> +			continue;
> +
>  		if (reg->base < vmalloc_limit) {
>  			if (block_end > lowmem_limit)
>  				/*
> -- 
> 2.22.0
>

Mike Rapoport Aug. 22, 2019, 6:44 a.m. UTC | #3
On Thu, Aug 22, 2019 at 03:59:42AM +0000, Chester Lin wrote:
> On Thu, Aug 22, 2019 at 11:45:34AM +0800, Chester Lin wrote:
> > adjust_lowmem_bounds() checks every memblock in order to find the boundary
> > between lowmem and highmem. However, some memblocks could be marked as NOMAP,
> > so they are not used by the kernel and should be skipped while calculating
> > the boundary.
> > 
> > Signed-off-by: Chester Lin <clin@suse.com>
> > ---
> >  arch/arm/mm/mmu.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> > index 426d9085396b..b86dba44d828 100644
> > --- a/arch/arm/mm/mmu.c
> > +++ b/arch/arm/mm/mmu.c
> > @@ -1181,6 +1181,9 @@ void __init adjust_lowmem_bounds(void)
> >  		phys_addr_t block_start = reg->base;
> >  		phys_addr_t block_end = reg->base + reg->size;
> >  
> > +		if (memblock_is_nomap(reg))
> > +			continue;
> > +
> >  		if (reg->base < vmalloc_limit) {
> >  			if (block_end > lowmem_limit)
> >  				/*
> > -- 
> > 2.22.0
> >
> 
> Hi Russell, Mike and Ard,
> 
> Per the discussion in the thread "[PATCH] efi/arm: fix allocation failure ..."
> (https://lkml.org/lkml/2019/8/21/163), I presume that the change to disregard
> NOMAP memblocks in adjust_lowmem_bounds() should be split out as a separate patch.
> 
> Please let me know if you have any suggestions, thank you.

Let's add this one to the series: 

From 06a986e79d60c310c804b3e550bd50316597aec5 Mon Sep 17 00:00:00 2001
From: Mike Rapoport <rppt@linux.ibm.com>
Date: Thu, 22 Aug 2019 09:27:40 +0300
Subject: [PATCH] arm: ensure that usable memory in bank 0 starts from a
 PMD-aligned address

The calculation of memblock_limit in adjust_lowmem_bounds() assumes that
bank 0 starts from a PMD-aligned address. However, the beginning of the
first bank may be NOMAP memory, in which case the start of usable memory
will not be aligned to a PMD boundary. In that case memblock_limit will
be set to the end of the NOMAP region, which will prevent any memblock
allocations.

Mark the region between the end of the NOMAP area and the next PMD-aligned
address as NOMAP as well, so that usable memory starts at a PMD-aligned
address.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm/mm/mmu.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4495a26..25da9b2 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1177,6 +1177,22 @@ void __init adjust_lowmem_bounds(void)
 	 */
 	vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
 
+	/*
+	 * The first usable region must be PMD aligned. Mark its start
+	 * as MEMBLOCK_NOMAP if it isn't
+	 */
+	for_each_memblock(memory, reg) {
+		if (!memblock_is_nomap(reg)) {
+			if (!IS_ALIGNED(reg->base, PMD_SIZE)) {
+				phys_addr_t len;
+
+				len = round_up(reg->base, PMD_SIZE) - reg->base;
+				memblock_mark_nomap(reg->base, len);
+			}
+			break;
+		}
+	}
+
 	for_each_memblock(memory, reg) {
 		phys_addr_t block_start = reg->base;
 		phys_addr_t block_end = reg->base + reg->size;
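
As a standalone worked example of the alignment described in the commit message
above (the start address, the helper and the PMD_SIZE value here are illustrative
assumptions, not part of the patch): a first usable region starting at a
non-PMD-aligned address has the gap up to the next 2 MiB boundary marked NOMAP,
so usable memory begins PMD-aligned.

#include <stdint.h>
#include <stdio.h>

#define PMD_SIZE	0x200000ULL	/* assuming 2 MiB PMD/section size on 32-bit ARM */

/* round x up to the next multiple of align (align must be a power of two) */
static uint64_t round_up_to(uint64_t x, uint64_t align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	uint64_t base = 0x80004000ULL;	/* hypothetical, non-PMD-aligned region start */
	uint64_t aligned = round_up_to(base, PMD_SIZE);
	uint64_t len = aligned - base;	/* extra span to mark as NOMAP */

	/* prints: mark [0x80004000, 0x80200000) NOMAP; usable memory starts at 0x80200000 */
	printf("mark [0x%llx, 0x%llx) NOMAP (len 0x%llx); usable memory starts at 0x%llx\n",
	       (unsigned long long)base, (unsigned long long)aligned,
	       (unsigned long long)len, (unsigned long long)aligned);
	return 0;
}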

Ard Biesheuvel Aug. 22, 2019, 6:46 a.m. UTC | #4
On Thu, 22 Aug 2019 at 09:44, Mike Rapoport <rppt@linux.ibm.com> wrote:
>
> On Thu, Aug 22, 2019 at 03:59:42AM +0000, Chester Lin wrote:
> > On Thu, Aug 22, 2019 at 11:45:34AM +0800, Chester Lin wrote:
> > > adjust_lowmem_bounds() checks every memblock in order to find the boundary
> > > between lowmem and highmem. However, some memblocks could be marked as NOMAP,
> > > so they are not used by the kernel and should be skipped while calculating
> > > the boundary.
> > >
> > > Signed-off-by: Chester Lin <clin@suse.com>
> > > ---
> > >  arch/arm/mm/mmu.c | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> > > index 426d9085396b..b86dba44d828 100644
> > > --- a/arch/arm/mm/mmu.c
> > > +++ b/arch/arm/mm/mmu.c
> > > @@ -1181,6 +1181,9 @@ void __init adjust_lowmem_bounds(void)
> > >             phys_addr_t block_start = reg->base;
> > >             phys_addr_t block_end = reg->base + reg->size;
> > >
> > > +           if (memblock_is_nomap(reg))
> > > +                   continue;
> > > +
> > >             if (reg->base < vmalloc_limit) {
> > >                     if (block_end > lowmem_limit)
> > >                             /*
> > > --
> > > 2.22.0
> > >
> >
> > Hi Russell, Mike and Ard,
> >
> > Per the discussion in the thread "[PATCH] efi/arm: fix allocation failure ..."
> > (https://lkml.org/lkml/2019/8/21/163), I presume that the change to disregard
> > NOMAP memblocks in adjust_lowmem_bounds() should be split out as a separate patch.
> >
> > Please let me know if you have any suggestions, thank you.
>
> Let's add this one to the series:
>
> From 06a986e79d60c310c804b3e550bd50316597aec5 Mon Sep 17 00:00:00 2001
> From: Mike Rapoport <rppt@linux.ibm.com>
> Date: Thu, 22 Aug 2019 09:27:40 +0300
> Subject: [PATCH] arm: ensure that usable memory in bank 0 starts from a
>  PMD-aligned address
>
> The calculation of memblock_limit in adjust_lowmem_bounds() assumes that
> bank 0 starts from a PMD-aligned address. However, the beginning of the
> first bank may be NOMAP memory, in which case the start of usable memory
> will not be aligned to a PMD boundary. In that case memblock_limit will
> be set to the end of the NOMAP region, which will prevent any memblock
> allocations.
>
> Mark the region between the end of the NOMAP area and the next PMD-aligned
> address as NOMAP as well, so that usable memory starts at a PMD-aligned
> address.
>
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

> ---
>  arch/arm/mm/mmu.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 4495a26..25da9b2 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -1177,6 +1177,22 @@ void __init adjust_lowmem_bounds(void)
>          */
>         vmalloc_limit = (u64)(uintptr_t)vmalloc_min - PAGE_OFFSET + PHYS_OFFSET;
>
> +       /*
> +        * The first usable region must be PMD aligned. Mark its start
> +        * as MEMBLOCK_NOMAP if it isn't
> +        */
> +       for_each_memblock(memory, reg) {
> +               if (!memblock_is_nomap(reg)) {
> +                       if (!IS_ALIGNED(reg->base, PMD_SIZE)) {
> +                               phys_addr_t len;
> +
> +                               len = round_up(reg->base, PMD_SIZE) - reg->base;
> +                               memblock_mark_nomap(reg->base, len);
> +                       }
> +                       break;
> +               }
> +       }
> +
>         for_each_memblock(memory, reg) {
>                 phys_addr_t block_start = reg->base;
>                 phys_addr_t block_end = reg->base + reg->size;
> --
> 2.7.4
>
>
> --
> Sincerely yours,
> Mike.
>

Patch

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 426d9085396b..b86dba44d828 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1181,6 +1181,9 @@ void __init adjust_lowmem_bounds(void)
 		phys_addr_t block_start = reg->base;
 		phys_addr_t block_end = reg->base + reg->size;
 
+		if (memblock_is_nomap(reg))
+			continue;
+
 		if (reg->base < vmalloc_limit) {
 			if (block_end > lowmem_limit)
 				/*