
arm64: mm: correct the start of physical address in linear map

Message ID 20210213012316.1525419-1-pasha.tatashin@soleen.com (mailing list archive)
State New, archived
Series: arm64: mm: correct the start of physical address in linear map

Commit Message

Pasha Tatashin Feb. 13, 2021, 1:23 a.m. UTC
Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
linear map range is not checked correctly.

The start physical address that the linear map covers can actually be at
the end of the range because of randomization. Check for this case and,
if so, reduce the start to 0.

This can be verified on QEMU with setting kaslr-seed to ~0ul:

memstart_offset_seed = 0xffff
START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
END:   __pa(PAGE_END - 1) =  1000bfffffff

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
---
 arch/arm64/mm/mmu.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
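
To make the failure mode concrete, here is a minimal userspace C sketch
(not part of the patch): the two linear-map values are taken from the QEMU
log above, and the hotplugged region is hypothetical.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Values from the kaslr-seed = ~0ul QEMU run above. */
	uint64_t start_linear_pa = 0xffff9000c0000000ULL; /* __pa(_PAGE_OFFSET(vabits_actual)) */
	uint64_t end_linear_pa   = 0x00001000bfffffffULL; /* __pa(PAGE_END - 1) */

	/* Hypothetical hotplugged region: 1 GiB at 4 GiB. */
	uint64_t start = 0x100000000ULL, size = 0x40000000ULL;

	/* Unpatched check: rejects the region because start_linear_pa wrapped. */
	printf("unpatched: %d\n",
	       start >= start_linear_pa && (start + size - 1) <= end_linear_pa);

	/* Patched check: clamp the wrapped start to 0 first. */
	if (start_linear_pa > end_linear_pa)
		start_linear_pa = 0;
	printf("patched:   %d\n",
	       start >= start_linear_pa && (start + size - 1) <= end_linear_pa);
	return 0;
}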

Comments

Tyler Hicks Feb. 13, 2021, 5:51 p.m. UTC | #1
On 2021-02-12 20:23:16, Pavel Tatashin wrote:
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
> 
> The start physical address that the linear map covers can actually be at
> the end of the range because of randomization. Check for this case and,
> if so, reduce the start to 0.
> 
> This can be verified on QEMU with setting kaslr-seed to ~0ul:
> 
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END:   __pa(PAGE_END - 1) =  1000bfffffff
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")

Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com>

This fixes a memory hot plug bug that I was seeing on 5.10 after the
introduction of 58284a901b42.

One comment below...

> ---
>  arch/arm64/mm/mmu.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ae0c3d023824..6057ecaea897 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1444,14 +1444,25 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>  
>  static bool inside_linear_region(u64 start, u64 size)
>  {
> +	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> +	u64 end_linear_pa = __pa(PAGE_END - 1);
> +
> +	/*
> +	 * Check for a wrap: with a randomized linear mapping, the start
> +	 * physical address can be bigger than the end physical address. In
> +	 * this case set the start to zero, because the [0, end_linear_pa]
> +	 * range must still cover all addressable physical addresses.
> +	 */
> +	if (start_linear_pa > end_linear_pa)
> +		start_linear_pa = 0;

We're ignoring the portion from the linear mapping's start PA to the
point of wraparound. Could the start and end of the hot plugged memory
fall within this range and, as a result, the hot plug operation be
incorrectly blocked?

Tyler

> +
>  	/*
>  	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
>  	 * accommodating both its ends but excluding PAGE_END. Max physical
>  	 * range which can be mapped inside this linear mapping range, must
>  	 * also be derived from its end points.
>  	 */
> -	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
> -	       (start + size - 1) <= __pa(PAGE_END - 1);
> +	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
>  }
>  
>  int arch_add_memory(int nid, u64 start, u64 size,
> -- 
> 2.25.1
>
Pasha Tatashin Feb. 13, 2021, 6:17 p.m. UTC | #2
> We're ignoring the portion from the linear mapping's start PA to the
> point of wraparound. Could the start and end of the hot plugged memory
> fall within this range and, as a result, the hot plug operation be
> incorrectly blocked?

Hi Tyler,

Thank you for looking at this fix. The maximum addressable PAs can be
seen in this function: id_aa64mmfr0_parange_to_phys_shift(). For
example, for a PA shift of 32, the linear map must be able to cover any
physical address from 0 to 1 << 32. Therefore, the range 0 to
__pa(PAGE_END - 1) must include 0 to 1 << 32.

The randomization of the linear map tries to hide where exactly within
the linear map the [0, max_phys] addresses are located by changing
PHYS_OFFSET (the linear map space is usually much bigger than the PA
space). Therefore, the beginning or end of the linear map can actually
convert to completely bogus high PA addresses, but this is normal.
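
For reference, the decoding that id_aa64mmfr0_parange_to_phys_shift()
performs maps the ID_AA64MMFR0_EL1.PARange field to a physical address
shift roughly as follows. This is a simplified standalone sketch of the
architectural encoding, not the kernel's exact code, and the function
name here is illustrative.

#include <stdio.h>

/* Simplified ID_AA64MMFR0_EL1.PARange decoding per the ARMv8 ARM. */
static int parange_to_phys_shift(int parange)
{
	switch (parange) {
	case 0: return 32;	/* 4 GiB */
	case 1: return 36;	/* 64 GiB */
	case 2: return 40;	/* 1 TiB */
	case 3: return 42;	/* 4 TiB */
	case 4: return 44;	/* 16 TiB */
	case 5: return 48;	/* 256 TiB */
	case 6: return 52;	/* 4 PiB */
	default: return 48;	/* assumption: treat unknown values as 48 bits */
	}
}

int main(void)
{
	printf("PARange 0 -> %d-bit PA\n", parange_to_phys_shift(0));
	return 0;
}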

Thank you,
Pasha

>
> Tyler
>
> > +
> >       /*
> >        * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
> >        * accommodating both its ends but excluding PAGE_END. Max physical
> >        * range which can be mapped inside this linear mapping range, must
> >        * also be derived from its end points.
> >        */
> > -     return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
> > -            (start + size - 1) <= __pa(PAGE_END - 1);
> > +     return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
> >  }
> >
> >  int arch_add_memory(int nid, u64 start, u64 size,
> > --
> > 2.25.1
> >
Anshuman Khandual Feb. 15, 2021, 5:26 a.m. UTC | #3
Hello Pavel,

On 2/13/21 6:53 AM, Pavel Tatashin wrote:
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
> 
> The start physical address that the linear map covers can actually be at
> the end of the range because of randomization. Check for this case and,
> if so, reduce the start to 0.

Looking at the code, this seems possible if memstart_addr, which is a
signed value, falls below 0 during arm64_memblock_init() and hence
appears very large when treated as unsigned.
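
A rough illustration of that wrap (the __pa(PAGE_OFFSET) == memstart_addr
identity below is a simplification of the arm64 macros):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* memstart_addr is s64 in the kernel; randomization can push it
	 * below zero. This value reproduces START from the commit message. */
	int64_t memstart_addr = (int64_t)0xffff9000c0000000ULL;

	/* Simplified: __pa(PAGE_OFFSET) evaluates to PHYS_OFFSET, i.e.
	 * memstart_addr, but as an unsigned phys_addr_t it looks huge. */
	uint64_t start_linear_pa = (uint64_t)memstart_addr;

	printf("memstart_addr (s64): %lld\n", (long long)memstart_addr);
	printf("start_linear_pa    : %#llx\n", (unsigned long long)start_linear_pa);
	return 0;
}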

> 
> This can be verified on QEMU with setting kaslr-seed to ~0ul:
> 
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END:   __pa(PAGE_END - 1) =  1000bfffffff
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> ---
>  arch/arm64/mm/mmu.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ae0c3d023824..6057ecaea897 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1444,14 +1444,25 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>  
>  static bool inside_linear_region(u64 start, u64 size)
>  {
> +	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> +	u64 end_linear_pa = __pa(PAGE_END - 1);
> +
> +	/*
> +	 * Check for a wrap: with a randomized linear mapping, the start
> +	 * physical address can be bigger than the end physical address. In
> +	 * this case set the start to zero, because the [0, end_linear_pa]
> +	 * range must still cover all addressable physical addresses.
> +	 */

If this is possible only with a randomized linear mapping, could you please
add an IS_ENABLED(CONFIG_RANDOMIZE_BASE) check during the switch over?
Wondering if a WARN_ON(start_linear_pa > end_linear_pa) should be added
otherwise, i.e. when linear map randomization is not enabled.
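
A sketch of what that suggested variant might look like; this is only the
reviewer's idea spelled out, not code from the patch:

static bool inside_linear_region(u64 start, u64 size)
{
	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
	u64 end_linear_pa = __pa(PAGE_END - 1);

	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
		/* A wrap is expected when the linear map is randomized. */
		if (start_linear_pa > end_linear_pa)
			start_linear_pa = 0;
	} else {
		/* Without randomization a wrap would indicate a bug. */
		WARN_ON(start_linear_pa > end_linear_pa);
	}

	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
}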

> +	if (start_linear_pa > end_linear_pa)
> +		start_linear_pa = 0;

This looks okay, but I will double check and give it some more testing.

> +
>  	/*
>  	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
>  	 * accommodating both its ends but excluding PAGE_END. Max physical
>  	 * range which can be mapped inside this linear mapping range, must
>  	 * also be derived from its end points.
>  	 */
> -	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
> -	       (start + size - 1) <= __pa(PAGE_END - 1);
> +	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
>  }
>  
>  int arch_add_memory(int nid, u64 start, u64 size,
> 

- Anshuman
Pasha Tatashin Feb. 15, 2021, 1:42 p.m. UTC | #4
On Mon, Feb 15, 2021 at 12:26 AM Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> Hello Pavel,
>
> On 2/13/21 6:53 AM, Pavel Tatashin wrote:
> > Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> > linear map range is not checked correctly.
> >
> > The start physical address that the linear map covers can actually be at
> > the end of the range because of randomization. Check for this case and,
> > if so, reduce the start to 0.
>
> Looking at the code, this seems possible if memstart_addr, which is a
> signed value, falls below 0 during arm64_memblock_init() and hence
> appears very large when treated as unsigned.

Right.

>
> >
> > This can be verified on QEMU with setting kaslr-seed to ~0ul:
> >
> > memstart_offset_seed = 0xffff
> > START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> > END:   __pa(PAGE_END - 1) =  1000bfffffff
> >
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> > ---
> >  arch/arm64/mm/mmu.c | 15 +++++++++++++--
> >  1 file changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index ae0c3d023824..6057ecaea897 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -1444,14 +1444,25 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
> >
> >  static bool inside_linear_region(u64 start, u64 size)
> >  {
> > +     u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> > +     u64 end_linear_pa = __pa(PAGE_END - 1);
> > +
> > +     /*
> > +      * Check for a wrap: with a randomized linear mapping, the start
> > +      * physical address can be bigger than the end physical address. In
> > +      * this case set the start to zero, because the [0, end_linear_pa]
> > +      * range must still cover all addressable physical addresses.
> > +      */
>
> If this is possible only with a randomized linear mapping, could you please
> add an IS_ENABLED(CONFIG_RANDOMIZE_BASE) check during the switch over?
> Wondering if a WARN_ON(start_linear_pa > end_linear_pa) should be added
> otherwise, i.e. when linear map randomization is not enabled.

Yeah, good idea, I will add an ifdef for CONFIG_RANDOMIZE_BASE.

>
> > +     if (start_linear_pa > end_linear_pa)
> > +             start_linear_pa = 0;
>
> This looks okay, but I will double check and give it some more testing.

Thank you,
Pasha

Patch

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ae0c3d023824..6057ecaea897 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1444,14 +1444,25 @@  static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
 
 static bool inside_linear_region(u64 start, u64 size)
 {
+	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
+	u64 end_linear_pa = __pa(PAGE_END - 1);
+
+	/*
+	 * Check for a wrap: with a randomized linear mapping, the start
+	 * physical address can be bigger than the end physical address. In
+	 * this case set the start to zero, because the [0, end_linear_pa]
+	 * range must still cover all addressable physical addresses.
+	 */
+	if (start_linear_pa > end_linear_pa)
+		start_linear_pa = 0;
+
 	/*
 	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
 	 * accommodating both its ends but excluding PAGE_END. Max physical
 	 * range which can be mapped inside this linear mapping range, must
 	 * also be derived from its end points.
 	 */
-	return start >= __pa(_PAGE_OFFSET(vabits_actual)) &&
-	       (start + size - 1) <= __pa(PAGE_END - 1);
+	return start >= start_linear_pa && (start + size - 1) <= end_linear_pa;
 }
 
 int arch_add_memory(int nid, u64 start, u64 size,