From patchwork Sun Dec 1 01:55:00 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268369
Date: Sat, 30 Nov 2019 17:55:00 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, aryabinin@virtuozzo.com,
 christophe.leroy@c-s.fr, dja@axtens.net, dvyukov@google.com,
 glider@google.com, gor@linux.ibm.com, linux-mm@kvack.org,
 mark.rutland@arm.com, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org
Subject: [patch 096/158] x86/kasan: support KASAN_VMALLOC
Message-ID: <20191201015500.YucT296fu%akpm@linux-foundation.org>

From: Daniel Axtens
Subject: x86/kasan: support KASAN_VMALLOC

In the case where KASAN directly allocates memory to back vmalloc space,
don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required so that the entries are synced to the hardware page
tables at boot, allowing the lower levels of the page tables to be
filled dynamically.

Link: http://lkml.kernel.org/r/20191031093909.9228-5-dja@axtens.net
Signed-off-by: Daniel Axtens
Acked-by: Dmitry Vyukov
Reviewed-by: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Christophe Leroy
Cc: Mark Rutland
Cc: Vasily Gorbik
Signed-off-by: Andrew Morton
---

(An illustrative sketch of the shadow address arithmetic follows the
diff at the end of this mail.)

 arch/x86/Kconfig            |    1 
 arch/x86/mm/kasan_init_64.c |   61 ++++++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)

--- a/arch/x86/Kconfig~x86-kasan-support-kasan_vmalloc
+++ a/arch/x86/Kconfig
@@ -135,6 +135,7 @@ config X86
         select HAVE_ARCH_JUMP_LABEL
         select HAVE_ARCH_JUMP_LABEL_RELATIVE
         select HAVE_ARCH_KASAN                  if X86_64
+        select HAVE_ARCH_KASAN_VMALLOC          if X86_64
         select HAVE_ARCH_KGDB
         select HAVE_ARCH_MMAP_RND_BITS          if MMU
         select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if MMU && COMPAT

--- a/arch/x86/mm/kasan_init_64.c~x86-kasan-support-kasan_vmalloc
+++ a/arch/x86/mm/kasan_init_64.c
@@ -245,6 +245,49 @@ static void __init kasan_map_early_shado
         } while (pgd++, addr = next, addr != end);
 }
 
+static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
+                                               unsigned long addr,
+                                               unsigned long end)
+{
+        p4d_t *p4d;
+        unsigned long next;
+        void *p;
+
+        p4d = p4d_offset(pgd, addr);
+        do {
+                next = p4d_addr_end(addr, end);
+
+                if (p4d_none(*p4d)) {
+                        p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
+                        p4d_populate(&init_mm, p4d, p);
+                }
+        } while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_shallow_populate_pgds(void *start, void *end)
+{
+        unsigned long addr, next;
+        pgd_t *pgd;
+        void *p;
+
+        addr = (unsigned long)start;
+        pgd = pgd_offset_k(addr);
+        do {
+                next = pgd_addr_end(addr, (unsigned long)end);
+
+                if (pgd_none(*pgd)) {
+                        p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
+                        pgd_populate(&init_mm, pgd, p);
+                }
+
+                /*
+                 * we need to populate p4ds to be synced when running in
+                 * four level mode - see sync_global_pgds_l4()
+                 */
+                kasan_shallow_populate_p4ds(pgd, addr, next);
+        } while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
 #ifdef CONFIG_KASAN_INLINE
 static int kasan_die_handler(struct notifier_block *self,
                              unsigned long val,
@@ -354,6 +397,24 @@ void __init kasan_init(void)
 
         kasan_populate_early_shadow(
                 kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
-                shadow_cpu_entry_begin);
+                kasan_mem_to_shadow((void *)VMALLOC_START));
+
+        /*
+         * If we're in full vmalloc mode, don't back vmalloc space with early
+         * shadow pages. Instead, prepopulate pgds/p4ds so they are synced to
+         * the global table and we can populate the lower levels on demand.
+         */
+        if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
+                kasan_shallow_populate_pgds(
+                        kasan_mem_to_shadow((void *)VMALLOC_START),
+                        kasan_mem_to_shadow((void *)VMALLOC_END));
+        else
+                kasan_populate_early_shadow(
+                        kasan_mem_to_shadow((void *)VMALLOC_START),
+                        kasan_mem_to_shadow((void *)VMALLOC_END));
+
+        kasan_populate_early_shadow(
+                kasan_mem_to_shadow((void *)VMALLOC_END + 1),
+                shadow_cpu_entry_begin);
 
         kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
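
--

An illustrative aside for readers following along: the range handed to
kasan_shallow_populate_pgds() above comes from kasan_mem_to_shadow(),
which on x86-64 maps each 8 bytes of address space to one shadow byte.
Below is a minimal standalone sketch of that arithmetic, compilable as
ordinary userspace C. The scale shift is the generic KASAN value; the
shadow offset and vmalloc bounds are the documented x86-64
4-level-paging constants and are assumptions for illustration, not
something this patch defines.

#include <stdio.h>

/*
 * Hedged sketch, not kernel code: mirrors the shape of the kernel's
 * kasan_mem_to_shadow() (addr / 8 + offset), using the documented
 * x86-64 4-level-paging constants as assumptions.
 */
#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000ULL
#define VMALLOC_START       0xffffc90000000000ULL
#define VMALLOC_END         0xffffe8ffffffffffULL

static unsigned long long mem_to_shadow(unsigned long long addr)
{
        return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
        /* the shadow range whose pgds/p4ds this patch prepopulates */
        printf("vmalloc shadow: %#llx - %#llx\n",
               mem_to_shadow(VMALLOC_START),
               mem_to_shadow(VMALLOC_END));
        return 0;
}

Under these assumptions the 32 TB vmalloc area shadows to a ~4 TB
slice; the patch prepopulates only the pgd/p4d levels covering that
slice, leaving the lower-level shadow mappings to be allocated on
demand by the core KASAN_VMALLOC code.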