Message ID | 20200226123738.582547-1-jean-philippe@linaro.org |
---|---|
State | New, archived |
Series | [v2] arm64: context: Fix ASID limit in boot warning |
On 2/26/20 12:37 PM, Jean-Philippe Brucker wrote:
> Since commit f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI
> is not in use"), the NUM_USER_ASIDS macro doesn't correspond to the
> effective number of ASIDs when KPTI is enabled. Get an accurate number
> of available ASIDs in an arch_initcall, once we've discovered all CPUs'
> capabilities and know if we still need to halve the ASID space for KPTI.
>
> Fixes: f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI is not in use")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v1->v2: move warning to arch_initcall(), post capabilities (e.g. E0PD)
> discovery
>
> This change may be a little invasive for just a validation warning, but
> it will likely be needed later, in the asid-pinning patch I'd like to
> introduce for IOMMU SVA.
> ---
>  arch/arm64/mm/context.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index 8ef73e89d514..efe98f0dcc89 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -260,14 +260,23 @@ asmlinkage void post_ttbr_update_workaround(void)
>  			CONFIG_CAVIUM_ERRATUM_27456));
>  }
>
> -static int asids_init(void)
> +static int asids_update_limit(void)
>  {
> -	asid_bits = get_cpu_asid_bits();
>  	/*
>  	 * Expect allocation after rollover to fail if we don't have at least
>  	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
>  	 */
> -	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
> +	bool kpti = arm64_kernel_unmapped_at_el0();
> +	unsigned long num_available_asids = (1UL << (asid_bits - kpti)) - 1;
> +
> +	WARN_ON(num_available_asids <= num_possible_cpus());
> +	return 0;
> +}

Since we have the actual number of ASIDs here, can we move the pr_info
here as well? No need to re-spin immediately.

Cheers,
Vladimir

> +arch_initcall(asids_update_limit);
> +
> +static int asids_init(void)
> +{
> +	asid_bits = get_cpu_asid_bits();
>  	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
>  	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
>  			   GFP_KERNEL);
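For illustration only, Vladimir's suggestion could be folded into the new
initcall roughly as in the sketch below. This is not part of the posted
series, and the pr_info wording is assumed to match the message currently
printed by asids_init():

	static int asids_update_limit(void)
	{
		bool kpti = arm64_kernel_unmapped_at_el0();
		unsigned long num_available_asids = (1UL << (asid_bits - kpti)) - 1;

		/*
		 * Expect allocation after rollover to fail if we don't have at
		 * least one more ASID than CPUs. ASID #0 is reserved for init_mm.
		 */
		WARN_ON(num_available_asids <= num_possible_cpus());

		/* Report the limit only once it is final, as suggested in review. */
		pr_info("ASID allocator initialised with %lu entries\n",
			num_available_asids);
		return 0;
	}
	arch_initcall(asids_update_limit);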
On 26/02/2020 12:37 pm, Jean-Philippe Brucker wrote:
> Since commit f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI
> is not in use"), the NUM_USER_ASIDS macro doesn't correspond to the
> effective number of ASIDs when KPTI is enabled. Get an accurate number
> of available ASIDs in an arch_initcall, once we've discovered all CPUs'
> capabilities and know if we still need to halve the ASID space for KPTI.
>
> Fixes: f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI is not in use")
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v1->v2: move warning to arch_initcall(), post capabilities (e.g. E0PD)
> discovery
>
> This change may be a little invasive for just a validation warning, but
> it will likely be needed later, in the asid-pinning patch I'd like to
> introduce for IOMMU SVA.
> ---
>  arch/arm64/mm/context.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
> index 8ef73e89d514..efe98f0dcc89 100644
> --- a/arch/arm64/mm/context.c
> +++ b/arch/arm64/mm/context.c
> @@ -260,14 +260,23 @@ asmlinkage void post_ttbr_update_workaround(void)
>  			CONFIG_CAVIUM_ERRATUM_27456));
>  }
>
> -static int asids_init(void)
> +static int asids_update_limit(void)
>  {
> -	asid_bits = get_cpu_asid_bits();
>  	/*
>  	 * Expect allocation after rollover to fail if we don't have at least
>  	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
>  	 */
> -	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
> +	bool kpti = arm64_kernel_unmapped_at_el0();
> +	unsigned long num_available_asids = (1UL << (asid_bits - kpti)) - 1;

Yikes! Could the adjustment be a little more obvious please? e.g.:

	if (arm64_kernel_unmapped_at_el0())
		num_available_asids /= 2;

I assume this isn't a path where we need to shave off every last cycle
possible.

Robin.

> +
> +	WARN_ON(num_available_asids <= num_possible_cpus());
> +	return 0;
> +}
> +arch_initcall(asids_update_limit);
> +
> +static int asids_init(void)
> +{
> +	asid_bits = get_cpu_asid_bits();
>  	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
>  	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
>  			   GFP_KERNEL);
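If Robin's suggestion were applied, the computation could read roughly as
below. This is only a sketch of the reviewed alternative, not the code that
was posted; the WARN_ON keeps the "- 1" to account for ASID #0:

	static int asids_update_limit(void)
	{
		unsigned long num_available_asids = NUM_USER_ASIDS;

		/*
		 * When KPTI is in use, ASIDs are handed out in pairs, so only
		 * half of the space is usable by user tasks.
		 */
		if (arm64_kernel_unmapped_at_el0())
			num_available_asids /= 2;

		/*
		 * Expect allocation after rollover to fail if we don't have at
		 * least one more ASID than CPUs. ASID #0 is reserved for init_mm.
		 */
		WARN_ON(num_available_asids - 1 <= num_possible_cpus());
		return 0;
	}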
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 8ef73e89d514..efe98f0dcc89 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -260,14 +260,23 @@ asmlinkage void post_ttbr_update_workaround(void)
 			CONFIG_CAVIUM_ERRATUM_27456));
 }
 
-static int asids_init(void)
+static int asids_update_limit(void)
 {
-	asid_bits = get_cpu_asid_bits();
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
 	 * one more ASID than CPUs. ASID #0 is reserved for init_mm.
 	 */
-	WARN_ON(NUM_USER_ASIDS - 1 <= num_possible_cpus());
+	bool kpti = arm64_kernel_unmapped_at_el0();
+	unsigned long num_available_asids = (1UL << (asid_bits - kpti)) - 1;
+
+	WARN_ON(num_available_asids <= num_possible_cpus());
+	return 0;
+}
+arch_initcall(asids_update_limit);
+
+static int asids_init(void)
+{
+	asid_bits = get_cpu_asid_bits();
 	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
 	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
 			   GFP_KERNEL);
Since commit f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI
is not in use"), the NUM_USER_ASIDS macro doesn't correspond to the
effective number of ASIDs when KPTI is enabled. Get an accurate number
of available ASIDs in an arch_initcall, once we've discovered all CPUs'
capabilities and know if we still need to halve the ASID space for KPTI.

Fixes: f88f42f853a8 ("arm64: context: Free up kernel ASIDs if KPTI is not in use")
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v1->v2: move warning to arch_initcall(), post capabilities (e.g. E0PD)
discovery

This change may be a little invasive for just a validation warning, but
it will likely be needed later, in the asid-pinning patch I'd like to
introduce for IOMMU SVA.
---
 arch/arm64/mm/context.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)
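To make the arithmetic behind the warning concrete, here is a small
stand-alone user-space sketch that mirrors the check from the patch. The
values (16-bit ASIDs, KPTI enabled, 256 possible CPUs) are assumed examples,
not kernel code:

	#include <stdbool.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int asid_bits = 16;	/* assumed: 16-bit ASIDs */
		bool kpti = true;		/* assumed: KPTI enabled */
		unsigned long ncpus = 256;	/* assumed possible CPU count */

		/*
		 * Same computation as the patch: halve the space when KPTI is
		 * on, then subtract the ASID reserved for init_mm.
		 */
		unsigned long num_available_asids = (1UL << (asid_bits - kpti)) - 1;

		printf("available user ASIDs: %lu\n", num_available_asids); /* 32767 */
		printf("warning fires: %s\n",
		       num_available_asids <= ncpus ? "yes" : "no");        /* no */
		return 0;
	}

With 16-bit ASIDs, halving the space still leaves 32767 usable ASIDs, so in
practice the warning is mainly relevant for 8-bit ASID systems with a large
number of possible CPUs.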