| Message ID | 20171013154426.GC4746@arm.com (mailing list archive) |
|---|---|
| State | New, archived |
> Thanks for sharing the .config and tree. It looks like the problem is that
> kimg_shadow_start and kimg_shadow_end are not page-aligned. Whilst I fix
> them up in kasan_map_populate, they remain unaligned when passed to
> kasan_populate_zero_shadow, which confuses the loop termination conditions
> in e.g. zero_pte_populate and the shadow isn't configured properly.

This makes sense. Thank you. I will insert these changes into your patch,
and send out a new series soon after sanity checking it.

Pavel
BTW, don't we need the same alignments inside the for_each_memblock() loop?
How about changing kasan_map_populate() to accept regular VA start and end
addresses, and converting them internally after aligning to PAGE_SIZE?

Thank you,
Pavel

On Fri, Oct 13, 2017 at 11:54 AM, Pavel Tatashin <pasha.tatashin@oracle.com> wrote:
>> Thanks for sharing the .config and tree. It looks like the problem is that
>> kimg_shadow_start and kimg_shadow_end are not page-aligned. Whilst I fix
>> them up in kasan_map_populate, they remain unaligned when passed to
>> kasan_populate_zero_shadow, which confuses the loop termination conditions
>> in e.g. zero_pte_populate and the shadow isn't configured properly.
>
> This makes sense. Thank you. I will insert these changes into your
> patch, and send out a new series soon after sanity checking it.
>
> Pavel
On Fri, Oct 13, 2017 at 12:00:27PM -0400, Pavel Tatashin wrote:
> BTW, don't we need the same alignments inside the for_each_memblock() loop?

Hmm, yes actually, given that we shift them right for the shadow address.

> How about changing kasan_map_populate() to accept regular VA start and end
> addresses, and converting them internally after aligning to PAGE_SIZE?

That's what my original patch did, but it doesn't help on its own, because
kasan_populate_zero_shadow would need the same change.

Will
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index b922826d9908..207b1acb823a 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -146,7 +146,7 @@ asmlinkage void __init kasan_early_init(void)
 static void __init kasan_map_populate(unsigned long start, unsigned long end,
 				      int node)
 {
-	kasan_pgd_populate(start & PAGE_MASK, PAGE_ALIGN(end), node, false);
+	kasan_pgd_populate(start, end, node, false);
 }
 
 /*
@@ -183,8 +183,8 @@ void __init kasan_init(void)
 	struct memblock_region *reg;
 	int i;
 
-	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text);
-	kimg_shadow_end = (u64)kasan_mem_to_shadow(_end);
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text) & PAGE_MASK;
+	kimg_shadow_end = PAGE_ALIGN((u64)kasan_mem_to_shadow(_end));
 
 	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
 	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);