@@ -219,7 +219,7 @@ static void clear_asid_other(void)
* This is only expected to be set if we have disabled
* kernel _PAGE_GLOBAL pages.
*/
- if (!static_cpu_has(X86_FEATURE_PTI)) {
+ if (!static_cpu_has(X86_FEATURE_PTI) && !static_cpu_has(X86_FEATURE_ASI)) {
WARN_ON_ONCE(1);
return;
}
@@ -1178,15 +1178,19 @@ void flush_tlb_one_kernel(unsigned long addr)
* use PCID if we also use global PTEs for the kernel mapping, and
* INVLPG flushes global translations across all address spaces.
*
- * If PTI is on, then the kernel is mapped with non-global PTEs, and
- * __flush_tlb_one_user() will flush the given address for the current
- * kernel address space and for its usermode counterpart, but it does
- * not flush it for other address spaces.
+ * If PTI or ASI is on, then the kernel is mapped with non-global PTEs,
+ * and __flush_tlb_one_user() will flush the given address for the
+ * current kernel address space and, if PTI is on, for its usermode
+ * counterpart, but it does not flush it for other address spaces.
*/
flush_tlb_one_user(addr);
- if (!static_cpu_has(X86_FEATURE_PTI))
+ /* Nothing more to do if PTI and ASI are completely off. */
+ if (!static_cpu_has(X86_FEATURE_PTI) && !static_cpu_has(X86_FEATURE_ASI)) {
+ VM_WARN_ON_ONCE(static_cpu_has(X86_FEATURE_PCID) &&
+ !(__default_kernel_pte_mask & _PAGE_GLOBAL));
return;
+ }
/*
* See above. We need to propagate the flush to all other address
@@ -1275,6 +1279,13 @@ STATIC_NOPV void native_flush_tlb_local(void)
invalidate_user_asid(this_cpu_read(cpu_tlbstate.loaded_mm_asid));
+ /*
+ * Restricted ASI CR3 is unstable outside of the critical section, so
+ * we can't flush via a CR3 read/write.
+ */
+ if (!asi_in_critical_section())
+ asi_exit();
+
/* If current->mm == NULL then the read_cr3() "borrows" an mm */
native_write_cr3(__native_read_cr3());
}
This is the absolute minimum change for TLB flushing to be correct under
ASI. There are two arguably orthogonal changes in here, but they feel
small enough for a single commit.

.:: CR3 stabilization

As noted in the comment, ASI can destabilize CR3, but we can stabilize it
again by calling asi_exit, which makes it safe to read CR3 and write it
back.

This is enough to be correct - we don't have to worry about invalidating
the other ASI address space (i.e. we don't need to invalidate the
restricted address space if we are currently unrestricted, and vice versa)
because we currently never set the noflush bit in CR3 for ASI transitions.

Even without using CR3's noflush bit, there are trivial optimizations
still on the table here: where invpcid_flush_single_context is available
(i.e. with the INVPCID_SINGLE feature) we can use that in lieu of the CR3
read/write, and avoid the extremely costly asi_exit.

.:: Invalidating kernel mappings

Before ASI, with KPTI off we always either disable PCID or use global
mappings for kernel memory. However, ASI disables global kernel mappings
regardless of those factors, so we need to invalidate other address
spaces to trigger a flush when we switch into them.

Note that there is currently a pointless write of
cpu_tlbstate.invalidate_other in the case of KPTI and !PCID. We've added
another case of that (ASI, !KPTI and !PCID). I think that's preferable to
expanding the conditional in flush_tlb_one_kernel.

Signed-off-by: Brendan Jackman <jackmanb@google.com>
---
 arch/x86/mm/tlb.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)
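
For concreteness, here is a rough sketch (not part of this patch) of what
the INVPCID_SINGLE optimization mentioned above might look like, using the
existing invpcid_flush_single_context and kern_pcid helpers. It assumes
that, because ASI transitions never set the noflush bit in CR3, it is
enough to invalidate the kernel PCID of the currently loaded ASID, just as
the flushing CR3 write is enough today; the exact PCID handling for the
restricted address space is glossed over:

STATIC_NOPV void native_flush_tlb_local(void)
{
	u16 asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);

	WARN_ON_ONCE(preemptible());

	invalidate_user_asid(asid);

	if (static_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
		/*
		 * Flush the kernel PCID of the loaded ASID directly. CR3 is
		 * never read or written here, so there is no need to
		 * stabilize it with asi_exit.
		 */
		invpcid_flush_single_context(kern_pcid(asid));
		return;
	}

	/*
	 * Restricted ASI CR3 is unstable outside of the critical section,
	 * so we can't flush via a CR3 read/write.
	 */
	if (!asi_in_critical_section())
		asi_exit();

	/* If current->mm == NULL then the read_cr3() "borrows" an mm */
	native_write_cr3(__native_read_cr3());
}

The point of the shortcut is that CR3 is never touched on the INVPCID
path, so a local flush would no longer force the (expensive) asi_exit.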