From patchwork Tue Oct 29 04:20:58 2019
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
    christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Daniel Axtens <dja@axtens.net>
Subject: [PATCH v10 4/5] x86/kasan: support KASAN_VMALLOC
Date: Tue, 29 Oct 2019 15:20:58 +1100
Message-Id: <20191029042059.28541-5-dja@axtens.net>
In-Reply-To: <20191029042059.28541-1-dja@axtens.net>
References: <20191029042059.28541-1-dja@axtens.net>

In the case where KASAN directly allocates memory to back vmalloc space,
don't map the early shadow page over it.

We prepopulate pgds/p4ds for the range that would otherwise be empty.
This is required to get it synced to hardware on boot, allowing the
lower levels of the page tables to be filled dynamically.

Acked-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Daniel Axtens <dja@axtens.net>
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---

v5: fix some checkpatch CHECK warnings. There are some that remain around
    lines ending with '(': I have not changed these because it's consistent
    with the rest of the file and it's not easy to see how to fix it
    without creating an overlong line or lots of temporary variables.
v2: move from faulting in shadow pgds to prepopulating
---
 arch/x86/Kconfig            |  1 +
 arch/x86/mm/kasan_init_64.c | 60 +++++++++++++++++++++++++++++++++++++
 2 files changed, 61 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 45699e458057..d65b0fcc9bc0 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -135,6 +135,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
+	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 296da58f3013..8f00f462709e 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -245,6 +245,51 @@ static void __init kasan_map_early_shadow(pgd_t *pgd)
 	} while (pgd++, addr = next, addr != end);
 }
 
+static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
+					       unsigned long addr,
+					       unsigned long end,
+					       int nid)
+{
+	p4d_t *p4d;
+	unsigned long next;
+	void *p;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (p4d_none(*p4d)) {
+			p = early_alloc(PAGE_SIZE, nid, true);
+			p4d_populate(&init_mm, p4d, p);
+		}
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void __init kasan_shallow_populate_pgds(void *start, void *end)
+{
+	unsigned long addr, next;
+	pgd_t *pgd;
+	void *p;
+	int nid = early_pfn_to_nid((unsigned long)start);
+
+	addr = (unsigned long)start;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, (unsigned long)end);
+
+		if (pgd_none(*pgd)) {
+			p = early_alloc(PAGE_SIZE, nid, true);
+			pgd_populate(&init_mm, pgd, p);
+		}
+
+		/*
+		 * we need to populate p4ds to be synced when running in
+		 * four level mode - see sync_global_pgds_l4()
+		 */
+		kasan_shallow_populate_p4ds(pgd, addr, next, nid);
+	} while (pgd++, addr = next, addr != (unsigned long)end);
+}
+
 #ifdef CONFIG_KASAN_INLINE
 static int kasan_die_handler(struct notifier_block *self,
 			     unsigned long val,
@@ -352,9 +397,24 @@ void __init kasan_init(void)
 	shadow_cpu_entry_end = (void *)round_up(
 			(unsigned long)shadow_cpu_entry_end, PAGE_SIZE);
 
+	/*
+	 * If we're in full vmalloc mode, don't back vmalloc space with early
+	 * shadow pages. Instead, prepopulate pgds/p4ds so they are synced to
+	 * the global table and we can populate the lower levels on demand.
+	 */
+#ifdef CONFIG_KASAN_VMALLOC
+	kasan_shallow_populate_pgds(
+		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
+		kasan_mem_to_shadow((void *)VMALLOC_END));
+
+	kasan_populate_early_shadow(
+		kasan_mem_to_shadow((void *)VMALLOC_END + 1),
+		shadow_cpu_entry_begin);
+#else
 	kasan_populate_early_shadow(
 		kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
 		shadow_cpu_entry_begin);
+#endif
 
 	kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
 			(unsigned long)shadow_cpu_entry_end, 0);
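
As a reviewer aid (not part of the patch): below is a standalone
userspace sketch of the arithmetic behind the kasan_shallow_populate_pgds()
call above. The constants are the default 4-level x86-64 values and the
vmalloc bounds are illustrative assumptions; the kernel derives all of
these from Kconfig and the memory layout, so treat this as a sketch, not
a statement of what any particular configuration uses.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define KASAN_SHADOW_SCALE_SHIFT 3	/* one shadow byte covers 8 bytes */
#define KASAN_SHADOW_OFFSET 0xdffffc0000000000ULL	/* x86-64 default */
#define PGDIR_SHIFT 39	/* 4-level paging: one pgd entry maps 512 GiB */
#define PGDIR_SIZE (1ULL << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE - 1))

/* mirrors kasan_mem_to_shadow(): shift by the scale, add the offset */
static uint64_t mem_to_shadow(uint64_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

/* mirrors the pgd_addr_end() clamping idiom used by the do/while loop */
static uint64_t pgd_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;

	return boundary - 1 < end - 1 ? boundary : end;
}

int main(void)
{
	/* illustrative 4-level x86-64 vmalloc bounds (assumptions) */
	uint64_t addr = mem_to_shadow(0xffffc90000000000ULL); /* VMALLOC_START */
	uint64_t end  = mem_to_shadow(0xffffe8ffffffffffULL); /* VMALLOC_END */
	unsigned int entries = 0;

	printf("vmalloc shadow: %#" PRIx64 " - %#" PRIx64 "\n", addr, end);

	/* walk the shadow range in pgd-sized steps, as the patch does */
	do {
		addr = pgd_end(addr, end);
		entries++;
	} while (addr != end);

	printf("top-level entries walked: %u\n", entries);
	return 0;
}

The point of the shallow pass shows up in the numbers: the ~4 TiB shadow
of the 32 TiB vmalloc area needs only a handful of top-level entries
inserted into init_mm at boot, which sync_global_pgds_l4() can then
propagate to every pgd; the pud/pmd/pte levels underneath are left to be
filled on demand as vmalloc shadow is needed.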