[v2,08/54] accel/tcg: Flush entire tlb when a masked range wraps

Message ID 20241114160131.48616-9-richard.henderson@linaro.org
State New
Series accel/tcg: Convert victim tlb to IntervalTree

Commit Message

Richard Henderson Nov. 14, 2024, 4 p.m. UTC
We expect masked address spaces to be quite large, e.g. 56 bits
for AArch64 top-byte-ignore mode.  We do not expect addr+len to
wrap around, but it is possible with AArch64 guest flush range
instructions.

Convert this unlikely case to a full tlb flush.  This can simplify
the subroutines actually performing the range flush.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 10 ++++++++++
 1 file changed, 10 insertions(+)
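
To see what the new guard computes, here is a standalone sketch, illustrative only and not QEMU code: range_wraps is a hypothetical helper, uint64_t stands in for vaddr, and 56 models the AArch64 top-byte-ignore width from the commit message. The first and last byte of [addr, addr + len) agree in every bit at or above position 'bits' exactly when the range stays inside the 2^bits space, so a nonzero value after the shift means the range wrapped. The patch's bits < TARGET_LONG_BITS test additionally sidesteps a shift by the full register width, which C leaves undefined.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if [addr, addr + len) wraps past the top of a 'bits'-wide space. */
static bool range_wraps(uint64_t addr, uint64_t len, unsigned bits)
{
    /* High bits of the first and last byte differ iff the range wrapped. */
    return ((addr ^ (addr + len - 1)) >> bits) != 0;
}

int main(void)
{
    const unsigned bits = 56;               /* e.g. AArch64 TBI space */
    const uint64_t top = 1ULL << 56;

    /* Entirely below 2^56: high bits match, no wrap. */
    printf("%d\n", range_wraps(top - 0x2000, 0x1000, bits));    /* 0 */

    /* Crosses 2^56: high bits differ, the full-flush path is taken. */
    printf("%d\n", range_wraps(top - 0x1000, 0x2000, bits));    /* 1 */
    return 0;
}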

Comments

Pierrick Bouvier Nov. 14, 2024, 5:58 p.m. UTC | #1
On 11/14/24 08:00, Richard Henderson wrote:
> [...]

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>

Patch

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5510f40333..31c45a6213 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -802,6 +802,11 @@ void tlb_flush_range_by_mmuidx(CPUState *cpu, vaddr addr,
         tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
         return;
     }
+    /* If addr+len wraps in len bits, fall back to full flush. */
+    if (bits < TARGET_LONG_BITS && ((addr ^ (addr + len - 1)) >> bits)) {
+        tlb_flush_by_mmuidx(cpu, idxmap);
+        return;
+    }
 
     /* This should already be page aligned */
     d.addr = addr & TARGET_PAGE_MASK;
@@ -838,6 +843,11 @@ void tlb_flush_range_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
         tlb_flush_page_by_mmuidx_all_cpus_synced(src_cpu, addr, idxmap);
         return;
     }
+    /* If addr+len wraps in len bits, fall back to full flush. */
+    if (bits < TARGET_LONG_BITS && ((addr ^ (addr + len - 1)) >> bits)) {
+        tlb_flush_by_mmuidx_all_cpus_synced(src_cpu, idxmap);
+        return;
+    }
 
     /* This should already be page aligned */
     d.addr = addr & TARGET_PAGE_MASK;
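
To illustrate why the guard can simplify the range-flush subroutines, here is a hypothetical page-walking loop in the style they might use. This is an assumed sketch, not the actual QEMU code: flush_range, flush_one_page, and the page-size macros are all stand-ins. Once wrapping ranges have been diverted to a full flush, addr + len - 1 is known not to overflow, so first <= last holds and the loop needs no overflow handling.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12                 /* assumed 4 KiB pages */
#define TARGET_PAGE_SIZE (1ULL << TARGET_PAGE_BITS)
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

typedef uint64_t vaddr;

/* Hypothetical per-page worker standing in for the real flush. */
static void flush_one_page(vaddr page)
{
    printf("flush page 0x%" PRIx64 "\n", (uint64_t)page);
}

static void flush_range(vaddr addr, vaddr len)
{
    vaddr first = addr & TARGET_PAGE_MASK;
    vaddr last = (addr + len - 1) & TARGET_PAGE_MASK;

    /*
     * The caller guaranteed addr + len - 1 did not wrap, so
     * first <= last and stepping page by page must reach last.
     */
    for (vaddr page = first; ; page += TARGET_PAGE_SIZE) {
        flush_one_page(page);
        if (page == last) {
            break;
        }
    }
}

int main(void)
{
    flush_range(0x1234, 0x3000);    /* flushes pages 0x1000 .. 0x4000 */
    return 0;
}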