
arm64: context: Fix ASID limit in boot warning

Message ID 20200224103934.137314-1-jean-philippe@linaro.org (mailing list archive)
State New, archived
Series arm64: context: Fix ASID limit in boot warning

Commit Message

Jean-Philippe Brucker Feb. 24, 2020, 10:39 a.m. UTC
Since commit f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI
is not in use"), the NUM_USER_ASIDS macro doesn't correspond to the
effective number of ASIDs when KPTI is enabled. Fix the number of
available ASIDs in the boot warning.

Fixes: f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI is not in use")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 arch/arm64/mm/context.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
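
For reference, a minimal sketch of the arithmetic behind the fix (illustrative userspace code, not part of the patch): with KPTI enabled, one ASID bit is spent pairing each user ASID with a kernel ASID, so the usable user ASID space is halved.

#include <stdbool.h>
#include <stdio.h>

/* Mirrors the patch: ASID #0 is reserved for init_mm, hence the "- 1". */
static unsigned long effective_user_asids(unsigned int asid_bits, bool kpti)
{
	return (1UL << (asid_bits - kpti)) - 1;
}

int main(void)
{
	/* arm64 CPUs implement either 8- or 16-bit ASIDs. */
	printf("16-bit ASIDs, KPTI off: %lu\n", effective_user_asids(16, false)); /* 65535 */
	printf("16-bit ASIDs, KPTI on:  %lu\n", effective_user_asids(16, true));  /* 32767 */
	return 0;
}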

Comments

Vladimir Murzin Feb. 24, 2020, 11:06 a.m. UTC | #1
Hi Jean,

On 2/24/20 10:39 AM, Jean-Philippe Brucker wrote:
> Since commit f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI
> is not in use"), the NUM_USER_ASIDS macro doesn't correspond to the
> effective number of ASIDs when KPTI is enabled. Fix the number of
> available ASIDs in the boot warning.
> 
> Fixes: f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI is not in use")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
>  arch/arm64/mm/context.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index 8ef73e89d514..121aba5b1941 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -262,12 +262,16 @@ asmlinkage void post_ttbr_update_workaround(void)
>  
>  static int asids_init(void)
>  {
> +	unsigned long num_available_asids;
> +	bool kpti = IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
> +
>  	asid_bits = get_cpu_asid_bits();
>  	/*
>  	 * Expect allocation after rollover to fail if we don't have at least
>  	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
>  	 */
> -	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
> +	num_available_asids = (1UL << (asid_bits - kpti)) - 1;
> +	WARN_ON(num_available_asids <= num_possible_cpus());
>  	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
>  	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
>  			   GFP_KERNEL);
> @@ -280,7 +284,7 @@ static int asids_init(void)
>  	 * caps are not finalized yet, so it is safer to assume KPTI
>  	 * and reserve kernel ASID's from beginning.
>  	 */
> -	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> +	if (kpti)
>  		set_kpti_asid_bits();
>  
>  	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
> 

I agree with what you say about NUM_USER_ASIDS, yet looking at the comment explaining
the warning, it seems that we may get extra ASIDs later on rollover, so the warning
might be seen as a false positive. Can we move the warning to later, once we know how
many ASIDs we have after all CPUs are up? Maybe late_initcall?

Cheers
Vladimir
Jean-Philippe Brucker Feb. 25, 2020, 12:13 p.m. UTC | #2
Hi Vladimir,

On Mon, Feb 24, 2020 at 11:06:43AM +0000, Vladimir Murzin wrote:
> >  static int asids_init(void)
> >  {
> > +	unsigned long num_available_asids;
> > +	bool kpti = IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
> > +
> >  	asid_bits = get_cpu_asid_bits();
> >  	/*
> >  	 * Expect allocation after rollover to fail if we don't have at least
> >  	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
> >  	 */
> > -	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
> > +	num_available_asids = (1UL << (asid_bits - kpti)) - 1;
> > +	WARN_ON(num_available_asids <= num_possible_cpus());
> >  	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
> >  	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
> >  			   GFP_KERNEL);
> > @@ -280,7 +284,7 @@ static int asids_init(void)
> >  	 * caps are not finalized yet, so it is safer to assume KPTI
> >  	 * and reserve kernel ASID's from beginning.
> >  	 */
> > -	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
> > +	if (kpti)
> >  		set_kpti_asid_bits();
> >  
> >  	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
> > 
> 
> I agree with what you say about NUM_USER_ASIDS, yet looking at the comment
> explaining the warning, it seems that we may get extra ASIDs later on rollover,
> so the warning might be seen as a false positive. Can we move the warning to
> later, once we know how many ASIDs we have after all CPUs are up?

Yes, I think that's better. I stumbled upon this issue while reworking my
ASID-pin patch [1], which will also need the precise maximum number of ASIDs
available.

> Maybe late_initcall?

I guess arch_initcall might be more appropriate. Any init level except
early should work, since they run after smp_init().
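
A rough sketch of how that could look (illustrative only: the helper name is
made up, this is untested, and the check would replace the WARN_ON currently
done in asids_init()):

static int __init asid_limit_check(void)
{
	unsigned long num_available_asids = NUM_USER_ASIDS;

	/*
	 * cpufeatures are finalized by now, so we know whether the kernel
	 * ASIDs were freed up or are still reserved for KPTI.
	 */
	if (arm64_kernel_unmapped_at_el0())
		num_available_asids /= 2;

	/*
	 * Expect allocation after rollover to fail if we don't have at
	 * least one more ASID than CPUs. ASID #0 is reserved for init_mm.
	 */
	WARN_ON(num_available_asids - 1 <= num_possible_cpus());
	return 0;
}
arch_initcall(asid_limit_check);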

Thanks,
Jean

[1] https://lore.kernel.org/linux-iommu/20200224182401.353359-8-jean-philippe@linaro.org/

Patch

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 8ef73e89d514..121aba5b1941 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -262,12 +262,16 @@  asmlinkage void post_ttbr_update_workaround(void)
 
 static int asids_init(void)
 {
+	unsigned long num_available_asids;
+	bool kpti = IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+
 	asid_bits = get_cpu_asid_bits();
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
-	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
+	num_available_asids = (1UL << (asid_bits - kpti)) - 1;
+	WARN_ON(num_available_asids <= num_possible_cpus());
 	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
 	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
 			   GFP_KERNEL);
@@ -280,7 +284,7 @@  static int asids_init(void)
 	 * caps are not finalized yet, so it is safer to assume KPTI
 	 * and reserve kernel ASID's from beginning.
 	 */
-	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+	if (kpti)
 		set_kpti_asid_bits();
 
 	pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);