[v4,01/12] arm64/mm: Update non-range tlb invalidation routines for FEAT_LPA2

Message ID 20231009185008.3803879-2-ryan.roberts@arm.com (mailing list archive)
State New, archived
Series KVM: arm64: Support FEAT_LPA2 at hyp s1 and vm s2

Commit Message

Ryan Roberts Oct. 9, 2023, 6:49 p.m. UTC
FEAT_LPA2 impacts tlb invalidation in 2 ways; Firstly, the TTL field in
the non-range tlbi instructions can now validly take a 0 value for the
4KB granule (this is due to the extra level of translation). Secondly,
the BADDR field in the range tlbi instructions must be aligned to 64KB
when LPA2 is in use (TCR.DS=1). Changes are required for tlbi to
continue to operate correctly when LPA2 is in use.

KVM only uses the non-range (__tlbi_level()) routines. Therefore we only
solve the first problem with this patch.

It is solved by always adding the level hint if the level is between [0,
3] (previously anything other than 0 was hinted, which breaks in the new
level -1 case from kvm). When running on non-LPA2 HW, 0 is still safe to
hint as the HW will fall back to non-hinted. While we are at it, we
replace the notion of 0 being the non-hinted sentinel with a macro,
TLBI_TTL_UNKNOWN. This means callers won't need updating if/when
translation depth increases in future.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/tlb.h      |  9 ++++---
 arch/arm64/include/asm/tlbflush.h | 43 +++++++++++++++++++------------
 2 files changed, 31 insertions(+), 21 deletions(-)

Comments

Marc Zyngier Oct. 19, 2023, 8:03 a.m. UTC | #1
On Mon, 09 Oct 2023 19:49:57 +0100,
Ryan Roberts <ryan.roberts@arm.com> wrote:
> 
> FEAT_LPA2 impacts tlb invalidation in 2 ways; Firstly, the TTL field in
> the non-range tlbi instructions can now validly take a 0 value for the
> 4KB granule (this is due to the extra level of translation). Secondly,

nit: 0 was always valid. It just didn't indicate any level.

> the BADDR field in the range tlbi instructions must be aligned to 64KB
> when LPA2 is in use (TCR.DS=1). Changes are required for tlbi to
> continue to operate correctly when LPA2 is in use.
> 
> KVM only uses the non-range (__tlbi_level()) routines. Therefore we only
> solve the first problem with this patch.

Is this still true? This patch changes __TLBI_VADDR_RANGE() and co.

>
> It is solved by always adding the level hint if the level is between [0,
> 3] (previously anything other than 0 was hinted, which breaks in the new
> level -1 case from kvm). When running on non-LPA2 HW, 0 is still safe to
> hint as the HW will fall back to non-hinted. While we are at it, we
> replace the notion of 0 being the non-hinted seninel with a macro,
> TLBI_TTL_UNKNOWN. This means callers won't need updating if/when
> translation depth increases in future.
> 
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
>  arch/arm64/include/asm/tlb.h      |  9 ++++---
>  arch/arm64/include/asm/tlbflush.h | 43 +++++++++++++++++++------------
>  2 files changed, 31 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> index 2c29239d05c3..93c537635dbb 100644
> --- a/arch/arm64/include/asm/tlb.h
> +++ b/arch/arm64/include/asm/tlb.h
> @@ -22,15 +22,16 @@ static void tlb_flush(struct mmu_gather *tlb);
>  #include <asm-generic/tlb.h>
>  
>  /*
> - * get the tlbi levels in arm64.  Default value is 0 if more than one
> - * of cleared_* is set or neither is set.
> + * get the tlbi levels in arm64.  Default value is TLBI_TTL_UNKNOWN if more than
> + * one of cleared_* is set or neither is set - this elides the level hinting to
> + * the hardware.
>   * Arm64 doesn't support p4ds now.
>   */
>  static inline int tlb_get_level(struct mmu_gather *tlb)
>  {
>  	/* The TTL field is only valid for the leaf entry. */
>  	if (tlb->freed_tables)
> -		return 0;
> +		return TLBI_TTL_UNKNOWN;
>  
>  	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
>  				   tlb->cleared_puds ||
> @@ -47,7 +48,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
>  				   tlb->cleared_p4ds))
>  		return 1;
>  
> -	return 0;
> +	return TLBI_TTL_UNKNOWN;
>  }
>  
>  static inline void tlb_flush(struct mmu_gather *tlb)
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index b149cf9f91bc..e688246b3b13 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -94,19 +94,22 @@ static inline unsigned long get_trans_granule(void)
>   * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
>   * the level at which the invalidation must take place. If the level is
>   * wrong, no invalidation may take place. In the case where the level
> - * cannot be easily determined, a 0 value for the level parameter will
> - * perform a non-hinted invalidation.
> + * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
> + * a non-hinted invalidation. Any provided level outside the hint range
> + * will also cause fall-back to non-hinted invalidation.
>   *
>   * For Stage-2 invalidation, use the level values provided to that effect
>   * in asm/stage2_pgtable.h.
>   */
>  #define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
>  
> +#define TLBI_TTL_UNKNOWN	(-1)

I find this value somehow confusing, as it represents an actual level
number. It just happens to be one that cannot be provided as a TTL. So
having that as a return value from tlb_get_level() isn't great, and
I'd rather have something that cannot be mistaken for a valid level.

> +
>  #define __tlbi_level(op, addr, level) do {				\
>  	u64 arg = addr;							\
>  									\
>  	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
> -	    level) {							\
> +	    level >= 0 && level <= 3) {					\
>  		u64 ttl = level & 3;					\
>  		ttl |= get_trans_granule() << 2;			\
>  		arg &= ~TLBI_TTL_MASK;					\
> @@ -134,16 +137,17 @@ static inline unsigned long get_trans_granule(void)
>   * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
>   *
>   */
> -#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)		\
> -	({							\
> -		unsigned long __ta = (addr) >> PAGE_SHIFT;	\
> -		__ta &= GENMASK_ULL(36, 0);			\
> -		__ta |= (unsigned long)(ttl) << 37;		\
> -		__ta |= (unsigned long)(num) << 39;		\
> -		__ta |= (unsigned long)(scale) << 44;		\
> -		__ta |= get_trans_granule() << 46;		\
> -		__ta |= (unsigned long)(asid) << 48;		\
> -		__ta;						\
> +#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)				\
> +	({									\
> +		unsigned long __ta = (addr) >> PAGE_SHIFT;			\
> +		unsigned long __ttl = (ttl >= 1 && ttl <= 3) ? ttl : 0;		\
> +		__ta &= GENMASK_ULL(36, 0);					\
> +		__ta |= __ttl << 37;						\
> +		__ta |= (unsigned long)(num) << 39;				\
> +		__ta |= (unsigned long)(scale) << 44;				\
> +		__ta |= get_trans_granule() << 46;				\
> +		__ta |= (unsigned long)(asid) << 48;				\
> +		__ta;								\
>  	})
>  
>  /* These macros are used by the TLBI RANGE feature. */
> @@ -216,12 +220,16 @@ static inline unsigned long get_trans_granule(void)
>   *		CPUs, ensuring that any walk-cache entries associated with the
>   *		translation are also invalidated.
>   *
> - *	__flush_tlb_range(vma, start, end, stride, last_level)
> + *	__flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
>   *		Invalidate the virtual-address range '[start, end)' on all
>   *		CPUs for the user address space corresponding to 'vma->mm'.
>   *		The invalidation operations are issued at a granularity
>   *		determined by 'stride' and only affect any walk-cache entries
> - *		if 'last_level' is equal to false.
> + *		if 'last_level' is equal to false. tlb_level is the level at
> + *		which the invalidation must take place. If the level is wrong,
> + *		no invalidation may take place. In the case where the level
> + *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
> + *		perform a non-hinted invalidation.
>   *
>   *
>   *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
> @@ -442,9 +450,10 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
>  	/*
>  	 * We cannot use leaf-only invalidation here, since we may be invalidating
>  	 * table entries as part of collapsing hugepages or moving page tables.
> -	 * Set the tlb_level to 0 because we can not get enough information here.
> +	 * Set the tlb_level to TLBI_TTL_UNKNOWN because we can not get enough
> +	 * information here.
>  	 */
> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
> +	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
>  }
>  
>  static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)

It feels like this range stuff would be better located in the second
patch. Not a huge deal though.

	M.
Ryan Roberts Oct. 19, 2023, 9:22 a.m. UTC | #2
On 19/10/2023 09:03, Marc Zyngier wrote:
> On Mon, 09 Oct 2023 19:49:57 +0100,
> Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> FEAT_LPA2 impacts tlb invalidation in 2 ways; Firstly, the TTL field in
>> the non-range tlbi instructions can now validly take a 0 value for the
>> 4KB granule (this is due to the extra level of translation). Secondly,
> 
> nit: 0 was always valid. It just didn't indicate any level.

True. I'll change to "can now validly take a 0 value as a TTL hint".

> 
>> the BADDR field in the range tlbi instructions must be aligned to 64KB
>> when LPA2 is in use (TCR.DS=1). Changes are required for tlbi to
>> continue to operate correctly when LPA2 is in use.
>>
>> KVM only uses the non-range (__tlbi_level()) routines. Therefore we only
>> solve the first problem with this patch.
> 
> Is this still true? This patch changes __TLBI_VADDR_RANGE() and co.

It is no longer true that KVM only uses the non-range routines. v6.6 adds a
series where KVM will now use the range-based routines too. So that text is out
of date and I should have spotted it when doing the rebase - I'll fix. KVM now
using range-based ops is the reason I added patch 2 to this series.

However, this patch doesn't really change __TLBI_VADDR_RANGE()'s behavior; it
just makes it robust to the presence of TLBI_TTL_UNKNOWN, instead of 0, which was
previously used as the "don't know" value.
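
(As a toy model of just that clamping - this is not kernel code, and
everything except the macro name is made up for illustration:)

#define TLBI_TTL_UNKNOWN	(-1)

/* Any ttl outside [1, 3], including TLBI_TTL_UNKNOWN, encodes as 0 in
 * the range-TLBI address, i.e. the hardware falls back to a non-hinted
 * invalidation. */
static unsigned long encode_range_ttl(int ttl)
{
	return (ttl >= 1 && ttl <= 3) ? (unsigned long)ttl : 0;
}

/* encode_range_ttl(2) == 2, encode_range_ttl(0) == 0,
 * encode_range_ttl(TLBI_TTL_UNKNOWN) == 0 */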

> 
>>
>> It is solved by always adding the level hint if the level is between [0,
>> 3] (previously anything other than 0 was hinted, which breaks in the new
>> level -1 case from kvm). When running on non-LPA2 HW, 0 is still safe to
>> hint as the HW will fall back to non-hinted. While we are at it, we
>> replace the notion of 0 being the non-hinted seninel with a macro,
>> TLBI_TTL_UNKNOWN. This means callers won't need updating if/when
>> translation depth increases in future.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>> ---
>>  arch/arm64/include/asm/tlb.h      |  9 ++++---
>>  arch/arm64/include/asm/tlbflush.h | 43 +++++++++++++++++++------------
>>  2 files changed, 31 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
>> index 2c29239d05c3..93c537635dbb 100644
>> --- a/arch/arm64/include/asm/tlb.h
>> +++ b/arch/arm64/include/asm/tlb.h
>> @@ -22,15 +22,16 @@ static void tlb_flush(struct mmu_gather *tlb);
>>  #include <asm-generic/tlb.h>
>>  
>>  /*
>> - * get the tlbi levels in arm64.  Default value is 0 if more than one
>> - * of cleared_* is set or neither is set.
>> + * get the tlbi levels in arm64.  Default value is TLBI_TTL_UNKNOWN if more than
>> + * one of cleared_* is set or neither is set - this elides the level hinting to
>> + * the hardware.
>>   * Arm64 doesn't support p4ds now.
>>   */
>>  static inline int tlb_get_level(struct mmu_gather *tlb)
>>  {
>>  	/* The TTL field is only valid for the leaf entry. */
>>  	if (tlb->freed_tables)
>> -		return 0;
>> +		return TLBI_TTL_UNKNOWN;
>>  
>>  	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
>>  				   tlb->cleared_puds ||
>> @@ -47,7 +48,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
>>  				   tlb->cleared_p4ds))
>>  		return 1;
>>  
>> -	return 0;
>> +	return TLBI_TTL_UNKNOWN;
>>  }
>>  
>>  static inline void tlb_flush(struct mmu_gather *tlb)
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index b149cf9f91bc..e688246b3b13 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -94,19 +94,22 @@ static inline unsigned long get_trans_granule(void)
>>   * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
>>   * the level at which the invalidation must take place. If the level is
>>   * wrong, no invalidation may take place. In the case where the level
>> - * cannot be easily determined, a 0 value for the level parameter will
>> - * perform a non-hinted invalidation.
>> + * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
>> + * a non-hinted invalidation. Any provided level outside the hint range
>> + * will also cause fall-back to non-hinted invalidation.
>>   *
>>   * For Stage-2 invalidation, use the level values provided to that effect
>>   * in asm/stage2_pgtable.h.
>>   */
>>  #define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
>>  
>> +#define TLBI_TTL_UNKNOWN	(-1)
> 
> I find this value somehow confusing, as it represent an actual level
> number. It just happen to be one that cannot be provided as a TTL. So
> having that as a return value from tlb_get_level() isn't great, and
> I'd rather have something that cannot be mistaken for a valid level.

OK, how about INT_MAX?
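
Something like this, as a sketch (assuming the [0, 3] check in
__tlbi_level() and the [1, 3] clamp in __TLBI_VADDR_RANGE() stay as in
this patch, so any out-of-range sentinel simply elides the hint):

#include <linux/limits.h>

/* Sketch only: INT_MAX can never be mistaken for a real level, and the
 * existing range checks already treat any out-of-range value as
 * "don't hint". */
#define TLBI_TTL_UNKNOWN	INT_MAX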

> 
>> +
>>  #define __tlbi_level(op, addr, level) do {				\
>>  	u64 arg = addr;							\
>>  									\
>>  	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
>> -	    level) {							\
>> +	    level >= 0 && level <= 3) {					\
>>  		u64 ttl = level & 3;					\
>>  		ttl |= get_trans_granule() << 2;			\
>>  		arg &= ~TLBI_TTL_MASK;					\
>> @@ -134,16 +137,17 @@ static inline unsigned long get_trans_granule(void)
>>   * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
>>   *
>>   */
>> -#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)		\
>> -	({							\
>> -		unsigned long __ta = (addr) >> PAGE_SHIFT;	\
>> -		__ta &= GENMASK_ULL(36, 0);			\
>> -		__ta |= (unsigned long)(ttl) << 37;		\
>> -		__ta |= (unsigned long)(num) << 39;		\
>> -		__ta |= (unsigned long)(scale) << 44;		\
>> -		__ta |= get_trans_granule() << 46;		\
>> -		__ta |= (unsigned long)(asid) << 48;		\
>> -		__ta;						\
>> +#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)				\
>> +	({									\
>> +		unsigned long __ta = (addr) >> PAGE_SHIFT;			\
>> +		unsigned long __ttl = (ttl >= 1 && ttl <= 3) ? ttl : 0;		\
>> +		__ta &= GENMASK_ULL(36, 0);					\
>> +		__ta |= __ttl << 37;						\
>> +		__ta |= (unsigned long)(num) << 39;				\
>> +		__ta |= (unsigned long)(scale) << 44;				\
>> +		__ta |= get_trans_granule() << 46;				\
>> +		__ta |= (unsigned long)(asid) << 48;				\
>> +		__ta;								\
>>  	})
>>  
>>  /* These macros are used by the TLBI RANGE feature. */
>> @@ -216,12 +220,16 @@ static inline unsigned long get_trans_granule(void)
>>   *		CPUs, ensuring that any walk-cache entries associated with the
>>   *		translation are also invalidated.
>>   *
>> - *	__flush_tlb_range(vma, start, end, stride, last_level)
>> + *	__flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
>>   *		Invalidate the virtual-address range '[start, end)' on all
>>   *		CPUs for the user address space corresponding to 'vma->mm'.
>>   *		The invalidation operations are issued at a granularity
>>   *		determined by 'stride' and only affect any walk-cache entries
>> - *		if 'last_level' is equal to false.
>> + *		if 'last_level' is equal to false. tlb_level is the level at
>> + *		which the invalidation must take place. If the level is wrong,
>> + *		no invalidation may take place. In the case where the level
>> + *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
>> + *		perform a non-hinted invalidation.
>>   *
>>   *
>>   *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
>> @@ -442,9 +450,10 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
>>  	/*
>>  	 * We cannot use leaf-only invalidation here, since we may be invalidating
>>  	 * table entries as part of collapsing hugepages or moving page tables.
>> -	 * Set the tlb_level to 0 because we can not get enough information here.
>> +	 * Set the tlb_level to TLBI_TTL_UNKNOWN because we can not get enough
>> +	 * information here.
>>  	 */
>> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
>> +	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
>>  }
>>  
>>  static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> 
> It feels like this range stuff would be better located in the second
> patch. Not a huge deal though.

As I said, this is the minimal change to the range-based side of things to
robustly deal with the introduction of TLBI_TTL_UNKNOWN.

But I wonder if I'm actually better off squashing the 2 patches into one.
The only reason I split it previously was because KVM was only using the
level-based ops.

Thanks for the review!

Ryan


> 
> 	M.
>
Marc Zyngier Oct. 20, 2023, 8:05 a.m. UTC | #3
On Thu, 19 Oct 2023 10:22:37 +0100,
Ryan Roberts <ryan.roberts@arm.com> wrote:
> 
> On 19/10/2023 09:03, Marc Zyngier wrote:
> > On Mon, 09 Oct 2023 19:49:57 +0100,
> > Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> FEAT_LPA2 impacts tlb invalidation in 2 ways; Firstly, the TTL field in
> >> the non-range tlbi instructions can now validly take a 0 value for the
> >> 4KB granule (this is due to the extra level of translation). Secondly,
> > 
> > nit: 0 was always valid. It just didn't indicate any level.
> 
> True. I'll change to "can now validly take a 0 value as a TTL hint".
> 
> > 
> >> the BADDR field in the range tlbi instructions must be aligned to 64KB
> >> when LPA2 is in use (TCR.DS=1). Changes are required for tlbi to
> >> continue to operate correctly when LPA2 is in use.
> >>
> >> KVM only uses the non-range (__tlbi_level()) routines. Therefore we only
> >> solve the first problem with this patch.
> > 
> > Is this still true? This patch changes __TLBI_VADDR_RANGE() and co.
> 
> It is no longer true that KVM only uses the non-range routines. v6.6 adds a
> series where KVM will now use the range-based routines too. So that text is out
> of date and I should have spotted it when doing the rebase - I'll fix. KVM now
> using range-based ops is the reason I added patch 2 to this series.
> 
> However, this patch doesn't really change __TLBI_VADDR_RANGE()'s behavior, it
> just makes it robust to the presence of TLBI_TTL_UNKNOWN, instead of 0 which was
> previously used as the "don't know" value.
> 
> > 
> >>
> >> It is solved by always adding the level hint if the level is between [0,
> >> 3] (previously anything other than 0 was hinted, which breaks in the new
> >> level -1 case from kvm). When running on non-LPA2 HW, 0 is still safe to
> >> hint as the HW will fall back to non-hinted. While we are at it, we
> >> replace the notion of 0 being the non-hinted seninel with a macro,
> >> TLBI_TTL_UNKNOWN. This means callers won't need updating if/when
> >> translation depth increases in future.
> >>
> >> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> >> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> >> ---
> >>  arch/arm64/include/asm/tlb.h      |  9 ++++---
> >>  arch/arm64/include/asm/tlbflush.h | 43 +++++++++++++++++++------------
> >>  2 files changed, 31 insertions(+), 21 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> >> index 2c29239d05c3..93c537635dbb 100644
> >> --- a/arch/arm64/include/asm/tlb.h
> >> +++ b/arch/arm64/include/asm/tlb.h
> >> @@ -22,15 +22,16 @@ static void tlb_flush(struct mmu_gather *tlb);
> >>  #include <asm-generic/tlb.h>
> >>  
> >>  /*
> >> - * get the tlbi levels in arm64.  Default value is 0 if more than one
> >> - * of cleared_* is set or neither is set.
> >> + * get the tlbi levels in arm64.  Default value is TLBI_TTL_UNKNOWN if more than
> >> + * one of cleared_* is set or neither is set - this elides the level hinting to
> >> + * the hardware.
> >>   * Arm64 doesn't support p4ds now.
> >>   */
> >>  static inline int tlb_get_level(struct mmu_gather *tlb)
> >>  {
> >>  	/* The TTL field is only valid for the leaf entry. */
> >>  	if (tlb->freed_tables)
> >> -		return 0;
> >> +		return TLBI_TTL_UNKNOWN;
> >>  
> >>  	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
> >>  				   tlb->cleared_puds ||
> >> @@ -47,7 +48,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
> >>  				   tlb->cleared_p4ds))
> >>  		return 1;
> >>  
> >> -	return 0;
> >> +	return TLBI_TTL_UNKNOWN;
> >>  }
> >>  
> >>  static inline void tlb_flush(struct mmu_gather *tlb)
> >> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> >> index b149cf9f91bc..e688246b3b13 100644
> >> --- a/arch/arm64/include/asm/tlbflush.h
> >> +++ b/arch/arm64/include/asm/tlbflush.h
> >> @@ -94,19 +94,22 @@ static inline unsigned long get_trans_granule(void)
> >>   * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
> >>   * the level at which the invalidation must take place. If the level is
> >>   * wrong, no invalidation may take place. In the case where the level
> >> - * cannot be easily determined, a 0 value for the level parameter will
> >> - * perform a non-hinted invalidation.
> >> + * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
> >> + * a non-hinted invalidation. Any provided level outside the hint range
> >> + * will also cause fall-back to non-hinted invalidation.
> >>   *
> >>   * For Stage-2 invalidation, use the level values provided to that effect
> >>   * in asm/stage2_pgtable.h.
> >>   */
> >>  #define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
> >>  
> >> +#define TLBI_TTL_UNKNOWN	(-1)
> > 
> > I find this value somehow confusing, as it represent an actual level
> > number. It just happen to be one that cannot be provided as a TTL. So
> > having that as a return value from tlb_get_level() isn't great, and
> > I'd rather have something that cannot be mistaken for a valid level.
> 
> OK, how about INT_MAX?

Works for me.

> 
> > 
> >> +
> >>  #define __tlbi_level(op, addr, level) do {				\
> >>  	u64 arg = addr;							\
> >>  									\
> >>  	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
> >> -	    level) {							\
> >> +	    level >= 0 && level <= 3) {					\
> >>  		u64 ttl = level & 3;					\
> >>  		ttl |= get_trans_granule() << 2;			\
> >>  		arg &= ~TLBI_TTL_MASK;					\
> >> @@ -134,16 +137,17 @@ static inline unsigned long get_trans_granule(void)
> >>   * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
> >>   *
> >>   */
> >> -#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)		\
> >> -	({							\
> >> -		unsigned long __ta = (addr) >> PAGE_SHIFT;	\
> >> -		__ta &= GENMASK_ULL(36, 0);			\
> >> -		__ta |= (unsigned long)(ttl) << 37;		\
> >> -		__ta |= (unsigned long)(num) << 39;		\
> >> -		__ta |= (unsigned long)(scale) << 44;		\
> >> -		__ta |= get_trans_granule() << 46;		\
> >> -		__ta |= (unsigned long)(asid) << 48;		\
> >> -		__ta;						\
> >> +#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)				\
> >> +	({									\
> >> +		unsigned long __ta = (addr) >> PAGE_SHIFT;			\
> >> +		unsigned long __ttl = (ttl >= 1 && ttl <= 3) ? ttl : 0;		\
> >> +		__ta &= GENMASK_ULL(36, 0);					\
> >> +		__ta |= __ttl << 37;						\
> >> +		__ta |= (unsigned long)(num) << 39;				\
> >> +		__ta |= (unsigned long)(scale) << 44;				\
> >> +		__ta |= get_trans_granule() << 46;				\
> >> +		__ta |= (unsigned long)(asid) << 48;				\
> >> +		__ta;								\
> >>  	})
> >>  
> >>  /* These macros are used by the TLBI RANGE feature. */
> >> @@ -216,12 +220,16 @@ static inline unsigned long get_trans_granule(void)
> >>   *		CPUs, ensuring that any walk-cache entries associated with the
> >>   *		translation are also invalidated.
> >>   *
> >> - *	__flush_tlb_range(vma, start, end, stride, last_level)
> >> + *	__flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
> >>   *		Invalidate the virtual-address range '[start, end)' on all
> >>   *		CPUs for the user address space corresponding to 'vma->mm'.
> >>   *		The invalidation operations are issued at a granularity
> >>   *		determined by 'stride' and only affect any walk-cache entries
> >> - *		if 'last_level' is equal to false.
> >> + *		if 'last_level' is equal to false. tlb_level is the level at
> >> + *		which the invalidation must take place. If the level is wrong,
> >> + *		no invalidation may take place. In the case where the level
> >> + *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
> >> + *		perform a non-hinted invalidation.
> >>   *
> >>   *
> >>   *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
> >> @@ -442,9 +450,10 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
> >>  	/*
> >>  	 * We cannot use leaf-only invalidation here, since we may be invalidating
> >>  	 * table entries as part of collapsing hugepages or moving page tables.
> >> -	 * Set the tlb_level to 0 because we can not get enough information here.
> >> +	 * Set the tlb_level to TLBI_TTL_UNKNOWN because we can not get enough
> >> +	 * information here.
> >>  	 */
> >> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
> >> +	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
> >>  }
> >>  
> >>  static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> > 
> > It feels like this range stuff would be better located in the second
> > patch. Not a huge deal though.
> 
> As I said, this is the minimal change to the range-based side of things to
> robustly deal with the introduction of TLBI_TTL_UNKNOWN.
> 
> But I wonder if I'm actually better of squashing both of the 2 patches into one.
> The only reason I split it previously was because KVM was only using the
> level-based ops.

Maybe. There is something to be said about making the range rework
(decreasing scale) an independent patch, as it is a significant change
on its own. But maybe the rest of the plumbing can be grouped
together.

Thanks,

	M.
Ryan Roberts Oct. 20, 2023, 12:39 p.m. UTC | #4
On 20/10/2023 09:05, Marc Zyngier wrote:
> On Thu, 19 Oct 2023 10:22:37 +0100,
> Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 19/10/2023 09:03, Marc Zyngier wrote:
>>> On Mon, 09 Oct 2023 19:49:57 +0100,
>>> Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>
>>>> FEAT_LPA2 impacts tlb invalidation in 2 ways; Firstly, the TTL field in
>>>> the non-range tlbi instructions can now validly take a 0 value for the
>>>> 4KB granule (this is due to the extra level of translation). Secondly,
>>>
>>> nit: 0 was always valid. It just didn't indicate any level.
>>
>> True. I'll change to "can now validly take a 0 value as a TTL hint".
>>
>>>
>>>> the BADDR field in the range tlbi instructions must be aligned to 64KB
>>>> when LPA2 is in use (TCR.DS=1). Changes are required for tlbi to
>>>> continue to operate correctly when LPA2 is in use.
>>>>
>>>> KVM only uses the non-range (__tlbi_level()) routines. Therefore we only
>>>> solve the first problem with this patch.
>>>
>>> Is this still true? This patch changes __TLBI_VADDR_RANGE() and co.
>>
>> It is no longer true that KVM only uses the non-range routines. v6.6 adds a
>> series where KVM will now use the range-based routines too. So that text is out
>> of date and I should have spotted it when doing the rebase - I'll fix. KVM now
>> using range-based ops is the reason I added patch 2 to this series.
>>
>> However, this patch doesn't really change __TLBI_VADDR_RANGE()'s behavior, it
>> just makes it robust to the presence of TLBI_TTL_UNKNOWN, instead of 0 which was
>> previously used as the "don't know" value.
>>
>>>
>>>>
>>>> It is solved by always adding the level hint if the level is between [0,
>>>> 3] (previously anything other than 0 was hinted, which breaks in the new
>>>> level -1 case from kvm). When running on non-LPA2 HW, 0 is still safe to
>>>> hint as the HW will fall back to non-hinted. While we are at it, we
>>>> replace the notion of 0 being the non-hinted seninel with a macro,
>>>> TLBI_TTL_UNKNOWN. This means callers won't need updating if/when
>>>> translation depth increases in future.
>>>>
>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>>>> ---
>>>>  arch/arm64/include/asm/tlb.h      |  9 ++++---
>>>>  arch/arm64/include/asm/tlbflush.h | 43 +++++++++++++++++++------------
>>>>  2 files changed, 31 insertions(+), 21 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
>>>> index 2c29239d05c3..93c537635dbb 100644
>>>> --- a/arch/arm64/include/asm/tlb.h
>>>> +++ b/arch/arm64/include/asm/tlb.h
>>>> @@ -22,15 +22,16 @@ static void tlb_flush(struct mmu_gather *tlb);
>>>>  #include <asm-generic/tlb.h>
>>>>  
>>>>  /*
>>>> - * get the tlbi levels in arm64.  Default value is 0 if more than one
>>>> - * of cleared_* is set or neither is set.
>>>> + * get the tlbi levels in arm64.  Default value is TLBI_TTL_UNKNOWN if more than
>>>> + * one of cleared_* is set or neither is set - this elides the level hinting to
>>>> + * the hardware.
>>>>   * Arm64 doesn't support p4ds now.
>>>>   */
>>>>  static inline int tlb_get_level(struct mmu_gather *tlb)
>>>>  {
>>>>  	/* The TTL field is only valid for the leaf entry. */
>>>>  	if (tlb->freed_tables)
>>>> -		return 0;
>>>> +		return TLBI_TTL_UNKNOWN;
>>>>  
>>>>  	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
>>>>  				   tlb->cleared_puds ||
>>>> @@ -47,7 +48,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
>>>>  				   tlb->cleared_p4ds))
>>>>  		return 1;
>>>>  
>>>> -	return 0;
>>>> +	return TLBI_TTL_UNKNOWN;
>>>>  }
>>>>  
>>>>  static inline void tlb_flush(struct mmu_gather *tlb)
>>>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>>>> index b149cf9f91bc..e688246b3b13 100644
>>>> --- a/arch/arm64/include/asm/tlbflush.h
>>>> +++ b/arch/arm64/include/asm/tlbflush.h
>>>> @@ -94,19 +94,22 @@ static inline unsigned long get_trans_granule(void)
>>>>   * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
>>>>   * the level at which the invalidation must take place. If the level is
>>>>   * wrong, no invalidation may take place. In the case where the level
>>>> - * cannot be easily determined, a 0 value for the level parameter will
>>>> - * perform a non-hinted invalidation.
>>>> + * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
>>>> + * a non-hinted invalidation. Any provided level outside the hint range
>>>> + * will also cause fall-back to non-hinted invalidation.
>>>>   *
>>>>   * For Stage-2 invalidation, use the level values provided to that effect
>>>>   * in asm/stage2_pgtable.h.
>>>>   */
>>>>  #define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
>>>>  
>>>> +#define TLBI_TTL_UNKNOWN	(-1)
>>>
>>> I find this value somehow confusing, as it represent an actual level
>>> number. It just happen to be one that cannot be provided as a TTL. So
>>> having that as a return value from tlb_get_level() isn't great, and
>>> I'd rather have something that cannot be mistaken for a valid level.
>>
>> OK, how about INT_MAX?
> 
> Works for me.
> 
>>
>>>
>>>> +
>>>>  #define __tlbi_level(op, addr, level) do {				\
>>>>  	u64 arg = addr;							\
>>>>  									\
>>>>  	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
>>>> -	    level) {							\
>>>> +	    level >= 0 && level <= 3) {					\
>>>>  		u64 ttl = level & 3;					\
>>>>  		ttl |= get_trans_granule() << 2;			\
>>>>  		arg &= ~TLBI_TTL_MASK;					\
>>>> @@ -134,16 +137,17 @@ static inline unsigned long get_trans_granule(void)
>>>>   * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
>>>>   *
>>>>   */
>>>> -#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)		\
>>>> -	({							\
>>>> -		unsigned long __ta = (addr) >> PAGE_SHIFT;	\
>>>> -		__ta &= GENMASK_ULL(36, 0);			\
>>>> -		__ta |= (unsigned long)(ttl) << 37;		\
>>>> -		__ta |= (unsigned long)(num) << 39;		\
>>>> -		__ta |= (unsigned long)(scale) << 44;		\
>>>> -		__ta |= get_trans_granule() << 46;		\
>>>> -		__ta |= (unsigned long)(asid) << 48;		\
>>>> -		__ta;						\
>>>> +#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)				\
>>>> +	({									\
>>>> +		unsigned long __ta = (addr) >> PAGE_SHIFT;			\
>>>> +		unsigned long __ttl = (ttl >= 1 && ttl <= 3) ? ttl : 0;		\
>>>> +		__ta &= GENMASK_ULL(36, 0);					\
>>>> +		__ta |= __ttl << 37;						\
>>>> +		__ta |= (unsigned long)(num) << 39;				\
>>>> +		__ta |= (unsigned long)(scale) << 44;				\
>>>> +		__ta |= get_trans_granule() << 46;				\
>>>> +		__ta |= (unsigned long)(asid) << 48;				\
>>>> +		__ta;								\
>>>>  	})
>>>>  
>>>>  /* These macros are used by the TLBI RANGE feature. */
>>>> @@ -216,12 +220,16 @@ static inline unsigned long get_trans_granule(void)
>>>>   *		CPUs, ensuring that any walk-cache entries associated with the
>>>>   *		translation are also invalidated.
>>>>   *
>>>> - *	__flush_tlb_range(vma, start, end, stride, last_level)
>>>> + *	__flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
>>>>   *		Invalidate the virtual-address range '[start, end)' on all
>>>>   *		CPUs for the user address space corresponding to 'vma->mm'.
>>>>   *		The invalidation operations are issued at a granularity
>>>>   *		determined by 'stride' and only affect any walk-cache entries
>>>> - *		if 'last_level' is equal to false.
>>>> + *		if 'last_level' is equal to false. tlb_level is the level at
>>>> + *		which the invalidation must take place. If the level is wrong,
>>>> + *		no invalidation may take place. In the case where the level
>>>> + *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
>>>> + *		perform a non-hinted invalidation.
>>>>   *
>>>>   *
>>>>   *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
>>>> @@ -442,9 +450,10 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
>>>>  	/*
>>>>  	 * We cannot use leaf-only invalidation here, since we may be invalidating
>>>>  	 * table entries as part of collapsing hugepages or moving page tables.
>>>> -	 * Set the tlb_level to 0 because we can not get enough information here.
>>>> +	 * Set the tlb_level to TLBI_TTL_UNKNOWN because we can not get enough
>>>> +	 * information here.
>>>>  	 */
>>>> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
>>>> +	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
>>>>  }
>>>>  
>>>>  static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>>>
>>> It feels like this range stuff would be better located in the second
>>> patch. Not a huge deal though.
>>
>> As I said, this is the minimal change to the range-based side of things to
>> robustly deal with the introduction of TLBI_TTL_UNKNOWN.
>>
>> But I wonder if I'm actually better of squashing both of the 2 patches into one.
>> The only reason I split it previously was because KVM was only using the
>> level-based ops.
> 
> Maybe. There is something to be said about making the range rework
> (decreasing scale) an independent patch, as it is a significant change
> on its own. But maybe the rest of the plumbing can be grouped
> together.

But that's effectively the split I have now, isn't it? The first patch
introduces TLBI_TTL_UNKNOWN to enable use of 0 as a ttl hint. Then the second
patch reworks the range stuff. I don't quite follow what you are suggesting.

> 
> Thanks,
> 
> 	M.
>
Marc Zyngier Oct. 20, 2023, 1:02 p.m. UTC | #5
On Fri, 20 Oct 2023 13:39:47 +0100,
Ryan Roberts <ryan.roberts@arm.com> wrote:
> 
> On 20/10/2023 09:05, Marc Zyngier wrote:
> > Maybe. There is something to be said about making the range rework
> > (decreasing scale) an independent patch, as it is a significant change
> > on its own. But maybe the rest of the plumbing can be grouped
> > together.
> 
> But that's effectively the split I have now, isn't it? The first patch
> introduces TLBI_TTL_UNKNOWN to enable use of 0 as a ttl hint. Then the second
> patch reworks the range stuff. I don't quite follow what you are suggesting.

Not quite.

What I'm proposing is that you pull the scale changes in their own
patch, and preferably without any change to the external API (i.e. no
change to the signature of the helper). Then any extra change, such as
the TTL rework, can go separately.

So while this is similar to your existing split, I'd like to see it
without any churn around the calling convention. Which means turning
the ordering around, and making use of a static key in the various
helpers that need to know about LPA2.

	M.
Ryan Roberts Oct. 20, 2023, 1:21 p.m. UTC | #6
On 20/10/2023 14:02, Marc Zyngier wrote:
> On Fri, 20 Oct 2023 13:39:47 +0100,
> Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 20/10/2023 09:05, Marc Zyngier wrote:
>>> Maybe. There is something to be said about making the range rework
>>> (decreasing scale) an independent patch, as it is a significant change
>>> on its own. But maybe the rest of the plumbing can be grouped
>>> together.
>>
>> But that's effectively the split I have now, isn't it? The first patch
>> introduces TLBI_TTL_UNKNOWN to enable use of 0 as a ttl hint. Then the second
>> patch reworks the range stuff. I don't quite follow what you are suggesting.
> 
> Not quite.
> 
> What I'm proposing is that you pull the scale changes in their own
> patch, and preferably without any change to the external API (i.e. no
> change to the signature of the helper). They any extra change, such as
> the TTL rework can go separately.
> 
> So while this is similar to your existing split, I'd like to see it
> without any churn around the calling convention. Which means turning
> the ordering around, and making use of a static key in the various
> helpers that need to know about LPA2.

I don't think we can embed the static key usage directly inside
__flush_tlb_range_op() (if that's what you were suggesting), because this macro
is used by both the kernel (for its stage 1) and the hypervisor (for stage 2).
And the kernel doesn't support LPA2 (until Ard's work is merged). So I think
this needs to be an argument to the macro.

Or are you asking that I make the scale change universally, even if LPA2 is not
in use? I could do that as its own change (which I could benchmark), then
add the rest in a separate change. But my thinking was that we would not want to
change the algorithm for !LPA2 since it is not as efficient (due to the LPA2 64K
alignment requirement).

Sorry for laboring the point - I just want to make sure I understand what you
are asking for.


> 
> 	M.
>
Marc Zyngier Oct. 20, 2023, 1:41 p.m. UTC | #7
On Fri, 20 Oct 2023 14:21:39 +0100,
Ryan Roberts <ryan.roberts@arm.com> wrote:
> 
> On 20/10/2023 14:02, Marc Zyngier wrote:
> > On Fri, 20 Oct 2023 13:39:47 +0100,
> > Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> On 20/10/2023 09:05, Marc Zyngier wrote:
> >>> Maybe. There is something to be said about making the range rework
> >>> (decreasing scale) an independent patch, as it is a significant change
> >>> on its own. But maybe the rest of the plumbing can be grouped
> >>> together.
> >>
> >> But that's effectively the split I have now, isn't it? The first patch
> >> introduces TLBI_TTL_UNKNOWN to enable use of 0 as a ttl hint. Then the second
> >> patch reworks the range stuff. I don't quite follow what you are suggesting.
> > 
> > Not quite.
> > 
> > What I'm proposing is that you pull the scale changes in their own
> > patch, and preferably without any change to the external API (i.e. no
> > change to the signature of the helper). They any extra change, such as
> > the TTL rework can go separately.
> > 
> > So while this is similar to your existing split, I'd like to see it
> > without any churn around the calling convention. Which means turning
> > the ordering around, and making use of a static key in the various
> > helpers that need to know about LPA2.
> 
> I don't think we can embed the static key usage directly inside
> __flush_tlb_range_op() (if that's what you were suggesting), because this macro
> is used by both the kernel (for its stage 1) and the hypervisor (for stage 2).
> And the kernel doesn't support LPA2 (until Ard's work is merged). So I think
> this needs to be an argument to the macro.

I can see two outcomes here:

- either you create separate helpers that abstract the LPA2-ness for
  KVM and stick to non-LPA2 for the kernel (until Ard's series makes
  it in)

- or you leave the whole thing disabled until we have full LPA2
  support.

Eventually, you replace the whole extra parameter with a static key,
and nobody sees any churn.
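
Roughly this shape, purely as a sketch (all names are invented here; the
real plumbing depends on where LPA2 support actually lands):

#include <linux/jump_label.h>

/* The helper asks a global static key whether LPA2 encodings are in
 * use, instead of callers threading an extra lpa2 argument through the
 * common macro. */
DEFINE_STATIC_KEY_FALSE(lpa2_tlbi_enabled);

static inline bool tlbi_uses_lpa2(void)
{
	return static_branch_unlikely(&lpa2_tlbi_enabled);
}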

> Or are you asking that I make the scale change universally, even if LPA2 is not
> in use? I could do that as its own change change (which I could benchmark), then
> add the rest in a separate change. But my thinking was that we would not want to
> change the algorithm for !LAP2 since it is not as effcient (due to the LPA2 64K
> alignment requirement).

I'm all for simplicity. If having an extra 15 potential TLBIs is
acceptable from a performance perspective, I won't complain. But I can
imagine that NV would be suffering from that (TLBIs on S2 have to
trap).

	M.

Patch

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 2c29239d05c3..93c537635dbb 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -22,15 +22,16 @@  static void tlb_flush(struct mmu_gather *tlb);
 #include <asm-generic/tlb.h>
 
 /*
- * get the tlbi levels in arm64.  Default value is 0 if more than one
- * of cleared_* is set or neither is set.
+ * get the tlbi levels in arm64.  Default value is TLBI_TTL_UNKNOWN if more than
+ * one of cleared_* is set or neither is set - this elides the level hinting to
+ * the hardware.
  * Arm64 doesn't support p4ds now.
  */
 static inline int tlb_get_level(struct mmu_gather *tlb)
 {
 	/* The TTL field is only valid for the leaf entry. */
 	if (tlb->freed_tables)
-		return 0;
+		return TLBI_TTL_UNKNOWN;
 
 	if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
 				   tlb->cleared_puds ||
@@ -47,7 +48,7 @@  static inline int tlb_get_level(struct mmu_gather *tlb)
 				   tlb->cleared_p4ds))
 		return 1;
 
-	return 0;
+	return TLBI_TTL_UNKNOWN;
 }
 
 static inline void tlb_flush(struct mmu_gather *tlb)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b149cf9f91bc..e688246b3b13 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -94,19 +94,22 @@  static inline unsigned long get_trans_granule(void)
  * When ARMv8.4-TTL exists, TLBI operations take an additional hint for
  * the level at which the invalidation must take place. If the level is
  * wrong, no invalidation may take place. In the case where the level
- * cannot be easily determined, a 0 value for the level parameter will
- * perform a non-hinted invalidation.
+ * cannot be easily determined, the value TLBI_TTL_UNKNOWN will perform
+ * a non-hinted invalidation. Any provided level outside the hint range
+ * will also cause fall-back to non-hinted invalidation.
  *
  * For Stage-2 invalidation, use the level values provided to that effect
  * in asm/stage2_pgtable.h.
  */
 #define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
 
+#define TLBI_TTL_UNKNOWN	(-1)
+
 #define __tlbi_level(op, addr, level) do {				\
 	u64 arg = addr;							\
 									\
 	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
-	    level) {							\
+	    level >= 0 && level <= 3) {					\
 		u64 ttl = level & 3;					\
 		ttl |= get_trans_granule() << 2;			\
 		arg &= ~TLBI_TTL_MASK;					\
@@ -134,16 +137,17 @@  static inline unsigned long get_trans_granule(void)
  * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
  *
  */
-#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)		\
-	({							\
-		unsigned long __ta = (addr) >> PAGE_SHIFT;	\
-		__ta &= GENMASK_ULL(36, 0);			\
-		__ta |= (unsigned long)(ttl) << 37;		\
-		__ta |= (unsigned long)(num) << 39;		\
-		__ta |= (unsigned long)(scale) << 44;		\
-		__ta |= get_trans_granule() << 46;		\
-		__ta |= (unsigned long)(asid) << 48;		\
-		__ta;						\
+#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)				\
+	({									\
+		unsigned long __ta = (addr) >> PAGE_SHIFT;			\
+		unsigned long __ttl = (ttl >= 1 && ttl <= 3) ? ttl : 0;		\
+		__ta &= GENMASK_ULL(36, 0);					\
+		__ta |= __ttl << 37;						\
+		__ta |= (unsigned long)(num) << 39;				\
+		__ta |= (unsigned long)(scale) << 44;				\
+		__ta |= get_trans_granule() << 46;				\
+		__ta |= (unsigned long)(asid) << 48;				\
+		__ta;								\
 	})
 
 /* These macros are used by the TLBI RANGE feature. */
@@ -216,12 +220,16 @@  static inline unsigned long get_trans_granule(void)
  *		CPUs, ensuring that any walk-cache entries associated with the
  *		translation are also invalidated.
  *
- *	__flush_tlb_range(vma, start, end, stride, last_level)
+ *	__flush_tlb_range(vma, start, end, stride, last_level, tlb_level)
  *		Invalidate the virtual-address range '[start, end)' on all
  *		CPUs for the user address space corresponding to 'vma->mm'.
  *		The invalidation operations are issued at a granularity
  *		determined by 'stride' and only affect any walk-cache entries
- *		if 'last_level' is equal to false.
+ *		if 'last_level' is equal to false. tlb_level is the level at
+ *		which the invalidation must take place. If the level is wrong,
+ *		no invalidation may take place. In the case where the level
+ *		cannot be easily determined, the value TLBI_TTL_UNKNOWN will
+ *		perform a non-hinted invalidation.
  *
  *
  *	Finally, take a look at asm/tlb.h to see how tlb_flush() is implemented
@@ -442,9 +450,10 @@  static inline void flush_tlb_range(struct vm_area_struct *vma,
 	/*
 	 * We cannot use leaf-only invalidation here, since we may be invalidating
 	 * table entries as part of collapsing hugepages or moving page tables.
-	 * Set the tlb_level to 0 because we can not get enough information here.
+	 * Set the tlb_level to TLBI_TTL_UNKNOWN because we can not get enough
+	 * information here.
 	 */
-	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
+	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
 }
 
 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)