From patchwork Thu Nov 26 13:14:46 2015
X-Patchwork-Submitter: Andrey Ryabinin
X-Patchwork-Id: 7706681
From: Andrey Ryabinin
To: Catalin Marinas, Will Deacon,
Cc: Mark Rutland, Yury, Arnd Bergmann, Ard Biesheuvel, Linus Walleij,
 "Suzuki K. Poulose", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Alexey Klimov, Alexander Potapenko, David Keitel, Andrey Ryabinin,
 Dmitry Vyukov
Subject: [PATCH RFT] arm64: kasan: Make KASAN work with 16K pages + 48 bit VA
Date: Thu, 26 Nov 2015 16:14:46 +0300
Message-ID: <1448543686-31869-1-git-send-email-aryabinin@virtuozzo.com>

Currently kasan assumes that shadow memory covers one or more entire pgds.
That is not true for 16K pages + 48-bit VA space, where PGDIR_SIZE is
bigger than the whole shadow memory. This patch tries to fix that case.

clear_page_tables() is a new replacement for clear_pgds(). Instead of
always clearing pgds, it clears only those top-level page table entries
that belong entirely to shadow memory.
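To put numbers on that: with a 16K granule and 48-bit VAs, arm64 uses four
translation levels, so a single pgd entry spans 2^47 bytes (128TB), while
the whole kasan shadow is 1/8 of the 2^48-byte kernel VA range, i.e. 2^45
bytes (32TB). A stand-alone sketch of the arithmetic (constants recomputed
from the architectural formulas for illustration, not taken verbatim from
the kernel headers):

#include <stdio.h>

#define PAGE_SHIFT	14			/* 16K pages */
#define VA_BITS		48
/* arm64 level-0 shift for a 4-level, 16K-granule layout */
#define PGDIR_SHIFT	((PAGE_SHIFT - 3) * 4 + 3)	/* = 47 */
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)		/* 128TB per pgd entry */
#define KASAN_SHADOW_SIZE	(1UL << (VA_BITS - 3))	/* 32TB: 1/8 of VA space */

int main(void)
{
	/* The entire shadow region fits inside one pgd entry, so
	 * clearing whole pgds would take out unrelated mappings too. */
	printf("PGDIR_SIZE = %lu TB, shadow = %lu TB\n",
	       PGDIR_SIZE >> 40, KASAN_SHADOW_SIZE >> 40);
	return 0;
}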
In addition to 'tmp_pg_dir' we now have 'tmp_pud', which is used to store
puds that might now be cleared by clear_page_tables().

Reported-by: Suzuki K. Poulose
Signed-off-by: Andrey Ryabinin
---

*** THIS is not tested with 16k pages ***

 arch/arm64/mm/kasan_init.c | 87 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 76 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index cf038c7..ea9f92a 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -22,6 +22,7 @@
 #include <asm/tlbflush.h>
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
+static pud_t tmp_pud[PAGE_SIZE/sizeof(pud_t)] __initdata __aligned(PAGE_SIZE);
 
 static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
 					unsigned long end)
@@ -92,20 +93,84 @@ asmlinkage void __init kasan_early_init(void)
 {
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END - (1UL << 61));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PUD_SIZE));
 
 	kasan_map_early_shadow();
 }
 
-static void __init clear_pgds(unsigned long start,
-			      unsigned long end)
+static void __init clear_pmds(pud_t *pud, unsigned long addr, unsigned long end)
 {
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+
+	do {
+		next = pmd_addr_end(addr, end);
+		if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE)
+			pmd_clear(pmd);
+
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void __init clear_puds(pgd_t *pgd, unsigned long addr, unsigned long end)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(pgd, addr);
+
+	do {
+		next = pud_addr_end(addr, end);
+		if (IS_ALIGNED(addr, PUD_SIZE) && end - addr >= PUD_SIZE)
+			pud_clear(pud);
+
+		if (!pud_none(*pud))
+			clear_pmds(pud, addr, next);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void __init clear_page_tables(unsigned long addr, unsigned long end)
+{
+	pgd_t *pgd;
+	unsigned long next;
+
+	pgd = pgd_offset_k(addr);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		if (IS_ALIGNED(addr, PGDIR_SIZE) && end - addr >= PGDIR_SIZE)
+			pgd_clear(pgd);
+
+		if (!pgd_none(*pgd))
+			clear_puds(pgd, addr, next);
+	} while (pgd++, addr = next, addr != end);
+}
+
+static void copy_pagetables(void)
+{
+	pgd_t *pgd = tmp_pg_dir + pgd_index(KASAN_SHADOW_START);
+
+	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
+
 	/*
-	 * Remove references to kasan page tables from
-	 * swapper_pg_dir. pgd_clear() can't be used
-	 * here because it's nop on 2,3-level pagetable setups
+	 * If the kasan shadow shares a pgd with other mappings,
+	 * clear_page_tables() will clear puds instead of the pgd,
+	 * so we need a temporary pud table to keep the early shadow mapped.
 	 */
-	for (; start < end; start += PGDIR_SIZE)
-		set_pgd(pgd_offset_k(start), __pgd(0));
+	if (PGDIR_SIZE > KASAN_SHADOW_END - KASAN_SHADOW_START) {
+		pud_t *pud;
+		pmd_t *pmd;
+		pte_t *pte;
+
+		memcpy(tmp_pud, pgd_page_vaddr(*pgd), sizeof(tmp_pud));
+
+		pgd_populate(&init_mm, pgd, tmp_pud);
+		pud = pud_offset(pgd, KASAN_SHADOW_START);
+		pmd = pmd_offset(pud, KASAN_SHADOW_START);
+		pud_populate(&init_mm, pud, pmd);
+		pte = pte_offset_kernel(pmd, KASAN_SHADOW_START);
+		pmd_populate_kernel(&init_mm, pmd, pte);
+	}
 }
 
 static void __init cpu_set_ttbr1(unsigned long ttbr1)
@@ -123,16 +188,16 @@ void __init kasan_init(void)
 
 	/*
 	 * We are going to perform proper setup of shadow memory.
-	 * At first we should unmap early shadow (clear_pgds() call bellow).
+	 * At first we should unmap early shadow (clear_page_tables()).
 	 * However, instrumented code couldn't execute without shadow memory.
 	 * tmp_pg_dir used to keep early shadow mapped until full shadow
 	 * setup will be finished.
 	 */
-	memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
+	copy_pagetables();
 	cpu_set_ttbr1(__pa(tmp_pg_dir));
 	flush_tlb_all();
 
-	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+	clear_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
 			kasan_mem_to_shadow((void *)MODULES_VADDR));
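(Aside for reviewers, not part of the patch.) All three new helpers apply
the same rule at their level: tear an entry down only when [addr, end)
covers it entirely, otherwise descend and let the next level decide. A toy
userspace model of that walk, where the hypothetical ENTRY_SIZE stands in
for PGDIR_SIZE/PUD_SIZE/PMD_SIZE and the addresses are made up:

#include <stdio.h>

#define ENTRY_SIZE	0x1000UL	/* stand-in for PGDIR/PUD/PMD_SIZE */

/* Same clamping behaviour as the kernel's pXd_addr_end() macros:
 * end of the current entry, or 'end' if that comes first. */
static unsigned long entry_addr_end(unsigned long addr, unsigned long end)
{
	unsigned long next = (addr + ENTRY_SIZE) & ~(ENTRY_SIZE - 1);

	return next < end ? next : end;
}

int main(void)
{
	unsigned long addr = 0x800UL, end = 0x3000UL, next;

	do {
		next = entry_addr_end(addr, end);
		if (!(addr & (ENTRY_SIZE - 1)) && end - addr >= ENTRY_SIZE)
			printf("clear entry at %#lx\n", addr);	/* fully covered */
		else
			printf("descend into %#lx..%#lx\n", addr, next);
		addr = next;
	} while (addr != end);

	return 0;
}

With 0x800..0x3000 this clears the entries at 0x1000 and 0x2000 and only
descends for the partially covered 0x800..0x1000 head, which is why
clear_page_tables() can safely run even when the shadow shares its pgd
(or pud) with other kernel mappings.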