[1/2] ARM: ioremap: Flush PGDs for VMALLOC shadow

Message ID 20241015-arm-kasan-vmalloc-crash-v1-1-dbb23592ca83@linaro.org (mailing list archive)
State New, archived
Series Fix KASAN crash when using KASAN_VMALLOC

Commit Message

Linus Walleij Oct. 15, 2024, 9:37 p.m. UTC
When syncing the VMALLOC area to other CPUs, make sure to also
sync the KASAN shadow memory for the VMALLOC area, so that we
don't get stale entries for the shadow memory in the top-level PGD.

Cc: stable@vger.kernel.org
Fixes: 565cbaad83d8 ("ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC")
Link: https://lore.kernel.org/linux-arm-kernel/a1a1d062-f3a2-4d05-9836-3b098de9db6d@foss.st.com/
Reported-by: Clement LE GOFFIC <clement.legoffic@foss.st.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
 arch/arm/mm/ioremap.c | 7 +++++++
 1 file changed, 7 insertions(+)

Comments

Ard Biesheuvel Oct. 16, 2024, 11:32 a.m. UTC | #1
On Tue, 15 Oct 2024 at 23:37, Linus Walleij <linus.walleij@linaro.org> wrote:
>
> When syncing the VMALLOC area to other CPUs, make sure to also
> sync the KASAN shadow memory for the VMALLOC area, so that we
> don't get stale entries for the shadow memory in the top-level PGD.
>
> Cc: stable@vger.kernel.org
> Fixes: 565cbaad83d8 ("ARM: 9202/1: kasan: support CONFIG_KASAN_VMALLOC")
> Link: https://lore.kernel.org/linux-arm-kernel/a1a1d062-f3a2-4d05-9836-3b098de9db6d@foss.st.com/
> Reported-by: Clement LE GOFFIC <clement.legoffic@foss.st.com>
> Suggested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
> ---
>  arch/arm/mm/ioremap.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
> index 794cfea9f9d4..449f1f04814c 100644
> --- a/arch/arm/mm/ioremap.c
> +++ b/arch/arm/mm/ioremap.c
> @@ -23,6 +23,7 @@
>   */
>  #include <linux/module.h>
>  #include <linux/errno.h>
> +#include <linux/kasan.h>
>  #include <linux/mm.h>
>  #include <linux/vmalloc.h>
>  #include <linux/io.h>
> @@ -125,6 +126,12 @@ void __check_vmalloc_seq(struct mm_struct *mm)
>                        pgd_offset_k(VMALLOC_START),
>                        sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
>                                         pgd_index(VMALLOC_START)));
> +               if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> +                       memcpy(pgd_offset(mm, (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START)),
> +                              pgd_offset_k((unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START)),
> +                              sizeof(pgd_t) * (pgd_index((unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END)) -
> +                                               pgd_index((unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START))));
> +               }

+1 to Russell's suggestion to change this wall of text into something legible.
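
For illustration, one way the copy could be factored into something more
legible; this is a sketch only, and the memcpy_pgd() helper name is
hypothetical, not taken from the posted v2:

static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
		       unsigned long end)
{
	memcpy(pgd_offset(mm, start), pgd_offset_k(start),
	       sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
}

with the copies in __check_vmalloc_seq() then becoming:

	memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
		unsigned long start =
			(unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);
		unsigned long end =
			(unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);

		memcpy_pgd(mm, start, end);
	}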

Then, there is another part to this: in arch/arm/kernel/traps.c, we
have the following code

void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
    if (start < VMALLOC_END && end > VMALLOC_START)
        atomic_inc_return_release(&init_mm.context.vmalloc_seq);
}

where we only bump vmalloc_seq if the updated region overlaps with the
vmalloc region, so this will need a similar treatment afaict.
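
A sketch of what that similar treatment might look like, extending the
overlap check to also cover the shadow of the vmalloc area (illustrative
only, not a posted patch):

void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	bool hit = start < VMALLOC_END && end > VMALLOC_START;

	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
		/* Catch updates that only touch the shadow of the vmalloc area */
		unsigned long shadow_start =
			(unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START);
		unsigned long shadow_end =
			(unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END);

		hit |= start < shadow_end && end > shadow_start;
	}

	if (hit)
		atomic_inc_return_release(&init_mm.context.vmalloc_seq);
}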
Linus Walleij Oct. 16, 2024, 6:38 p.m. UTC | #2
On Wed, Oct 16, 2024 at 1:33 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> > @@ -125,6 +126,12 @@ void __check_vmalloc_seq(struct mm_struct *mm)
(...)
> Then, there is another part to this: in arch/arm/kernel/traps.c, we
> have the following code
>
> void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> {
>     if (start < VMALLOC_END && end > VMALLOC_START)
>         atomic_inc_return_release(&init_mm.context.vmalloc_seq);
> }
>
> where we only bump vmalloc_seq if the updated region overlaps with the
> vmalloc region, so this will need a similar treatment afaict.

Not really, right? We bump init_mm.context.vmalloc_seq if the updated
region overlaps the vmalloc area.

Then the previously patched __check_vmalloc_seq() will check that
atomic counter and copy the PGD entries, and with the code in this
patch it will also copy (sync) the corresponding shadow memory
at that point.
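
(For reference, the counter comparison that triggers the copy lives in
check_vmalloc_seq(); paraphrased from arch/arm/include/asm/mmu_context.h,
the exact condition may differ by kernel version:)

static inline void check_vmalloc_seq(struct mm_struct *mm)
{
	if (!IS_ENABLED(CONFIG_ARM_LPAE) &&
	    unlikely(atomic_read(&mm->context.vmalloc_seq) !=
		     atomic_read(&init_mm.context.vmalloc_seq)))
		__check_vmalloc_seq(mm);
}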

Yours,
Linus Walleij
Ard Biesheuvel Oct. 16, 2024, 6:50 p.m. UTC | #3
On Wed, 16 Oct 2024 at 20:38, Linus Walleij <linus.walleij@linaro.org> wrote:
>
> On Wed, Oct 16, 2024 at 1:33 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> > > @@ -125,6 +126,12 @@ void __check_vmalloc_seq(struct mm_struct *mm)
> (...)
> > Then, there is another part to this: in arch/arm/kernel/traps.c, we
> > have the following code
> >
> > void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> > {
> >     if (start < VMALLOC_END && end > VMALLOC_START)
> >         atomic_inc_return_release(&init_mm.context.vmalloc_seq);
> > }
> >
> > where we only bump vmalloc_seq if the updated region overlaps with the
> > vmalloc region, so this will need a similar treatment afaict.
>
> Not really, right? We bump init_mm.context.vmalloc_seq if the updated
> region overlaps the vmalloc area.
>
> Then the previously patched __check_vmalloc_seq() will check that
> atomic counter and copy the PGD entries, and with the code in this
> patch it will also copy (sync) the corresponding shadow memory
> at that point.
>

Yes, so we rely on the fact that changes to the vmalloc area and
changes to the associated shadow mappings always occur in combination,
right?

I think that should probably be safe, but we have to be sure.
Linus Walleij Oct. 16, 2024, 7:04 p.m. UTC | #4
On Wed, Oct 16, 2024 at 8:50 PM Ard Biesheuvel <ardb@kernel.org> wrote:

> Yes, so we rely on the fact that changes to the vmalloc area and
> changes to the associated shadow mappings always occur in combination,
> right?

Yes, otherwise it is pretty much the definition of a KASAN violation.

Mostly it "just works" because all low-level operations emitted by the
compiler, and all memcpy() (etc.), are instrumented to access the shadow
memory in tandem with the real memory; this vmalloc_seq thing was a big
source of confusion for me.
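
For context: the shadow translation is just a shift plus a fixed offset,
so the shadow of the vmalloc area occupies its own, separate range of PGD
entries, which is why __check_vmalloc_seq() has to copy it separately
(roughly as in include/linux/kasan.h):

static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}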

I'll send out the revised patches so people can test!

Yours,
Linus Walleij

Patch

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 794cfea9f9d4..449f1f04814c 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -23,6 +23,7 @@ 
  */
 #include <linux/module.h>
 #include <linux/errno.h>
+#include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
 #include <linux/io.h>
@@ -125,6 +126,12 @@ void __check_vmalloc_seq(struct mm_struct *mm)
 		       pgd_offset_k(VMALLOC_START),
 		       sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
 					pgd_index(VMALLOC_START)));
+		if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
+			memcpy(pgd_offset(mm, (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START)),
+			       pgd_offset_k((unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START)),
+			       sizeof(pgd_t) * (pgd_index((unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END)) -
+						pgd_index((unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START))));
+		}
 		/*
 		 * Use a store-release so that other CPUs that observe the
 		 * counter's new value are guaranteed to see the results of the