From patchwork Wed Mar 25 16:12:16 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458231
Date: Wed, 25 Mar 2020 17:12:16 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-6-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog
Subject: [PATCH v5 05/38] kmsan: reduce vmalloc space
From: glider@google.com
To: Vegard Nossum, Andrew Morton, Dmitry Vyukov, Marco Elver,
    Andrey Konovalov, linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
    aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org,
    arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com,
    davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com,
    edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org,
    harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com,
    mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk,
    m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com,
    schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com,
    mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw,
    rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com,
    rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de,
    gor@linux.ibm.com, wsa@the-dreams.de

KMSAN is going to use 3/4 of the existing vmalloc space to hold its
metadata, so we lower VMALLOC_END to make sure vmalloc() does not allocate
past the first 1/4.
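
With the default 4-level paging layout (VMALLOC_SIZE_TB == 32) this leaves
8 TB for vmalloc() allocations themselves. Judging by the macro names added
below, the resulting split of the old vmalloc area looks roughly like this
(a sketch for reference only):

  [VMALLOC_START, VMALLOC_END]                  vmalloc() allocations (first 1/4)
  [addr + VMALLOC_SHADOW_OFFSET, ...]           shadow metadata for vmalloc pages (second 1/4)
  [addr + VMALLOC_ORIGIN_OFFSET, ...]           origin metadata for vmalloc pages (third 1/4)
  [MODULES_SHADOW_START, MODULES_ORIGIN_END]    shadow and origin metadata for modules (last 1/4)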
Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Andrew Morton
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org
---
Change-Id: Iaa5e8e0fc2aa66c956f937f5a1de6e5ef40d57cc
---
 arch/x86/include/asm/pgtable_64_types.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d9..586629e204366 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,7 +139,22 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START		__VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
 
+#ifndef CONFIG_KMSAN
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages.
+ */
+#define VMALLOC_QUARTER_SIZE	((VMALLOC_SIZE_TB << 40) >> 2)
+#define VMALLOC_END		(VMALLOC_START + VMALLOC_QUARTER_SIZE - 1)
+#define VMALLOC_SHADOW_OFFSET	VMALLOC_QUARTER_SIZE
+#define VMALLOC_ORIGIN_OFFSET	(VMALLOC_QUARTER_SIZE * 2)
+#define VMALLOC_META_END	(VMALLOC_END + VMALLOC_ORIGIN_OFFSET)
+#define MODULES_SHADOW_START	(VMALLOC_META_END + 1)
+#define MODULES_ORIGIN_START	(MODULES_SHADOW_START + MODULES_LEN)
+#define MODULES_ORIGIN_END	(MODULES_ORIGIN_START + MODULES_LEN)
+#endif
 
 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */
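
For illustration only, not part of the patch: a minimal sketch of how the new
offsets could be consumed to locate the metadata of a vmalloc address,
assuming the macros above are in scope. The helper name vmalloc_meta() is
hypothetical here; the actual translation helpers are provided by the KMSAN
runtime introduced later in this series.

/*
 * Hypothetical helper: shadow and origin metadata for an address in the
 * (shrunken) vmalloc area live at fixed offsets above it.
 */
static inline void *vmalloc_meta(void *addr, bool is_origin)
{
	unsigned long a = (unsigned long)addr;
	unsigned long off = is_origin ? VMALLOC_ORIGIN_OFFSET :
					VMALLOC_SHADOW_OFFSET;

	/* Only [VMALLOC_START, VMALLOC_END] has vmalloc metadata. */
	if (a >= VMALLOC_START && a <= VMALLOC_END)
		return (void *)(a + off);
	return NULL;
}

For example, for a pointer p returned by vmalloc(), vmalloc_meta(p, false)
would point one quarter of the old area above p, and vmalloc_meta(p, true)
two quarters above it.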