From patchwork Thu Jul 2 02:54:29 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11637773
From: Daniel Axtens
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kasan-dev@googlegroups.com, christophe.leroy@c-s.fr, aneesh.kumar@linux.ibm.com, bsingharora@gmail.com
Cc: Daniel Axtens
Subject: [PATCH v8 1/4] kasan: define and use MAX_PTRS_PER_* for early shadow tables
Date: Thu, 2 Jul 2020 12:54:29 +1000
Message-Id: <20200702025432.16912-2-dja@axtens.net>
In-Reply-To: <20200702025432.16912-1-dja@axtens.net>
References: <20200702025432.16912-1-dja@axtens.net>

powerpc has a variable number of PTRS_PER_*, set at runtime based on the MMU that the kernel is booted under. This means the PTRS_PER_* values are no longer compile-time constants, which breaks the build. Define default MAX_PTRS_PER_*s in the same style as MAX_PTRS_PER_P4D. As KASAN is the only user at the moment, just define them in the kasan header, and have them default to PTRS_PER_* unless overridden in arch code.
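To make the build breakage and the fallback pattern concrete, here is a minimal, self-contained C sketch; the names and the value 512 are illustrative only, not the kernel's actual definitions:

    /* With a compile-time constant, a file-scope array is fine. */
    #define PTRS_PER_PTE 512
    static unsigned long early_shadow_pte[PTRS_PER_PTE];

    /*
     * If an arch instead defines PTRS_PER_PTE in terms of a variable that
     * is only known at boot, the declaration above becomes a variably
     * modified type at file scope and the build fails. Sizing the array
     * with a compile-time maximum avoids that; the maximum simply defaults
     * to PTRS_PER_PTE on arches that do not override it:
     */
    #ifndef MAX_PTRS_PER_PTE
    #define MAX_PTRS_PER_PTE PTRS_PER_PTE
    #endif
    static unsigned long early_shadow_pte_max[MAX_PTRS_PER_PTE];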
Suggested-by: Christophe Leroy Suggested-by: Balbir Singh Reviewed-by: Christophe Leroy Reviewed-by: Balbir Singh Signed-off-by: Daniel Axtens --- include/linux/kasan.h | 18 +++++++++++++++--- mm/kasan/init.c | 6 +++--- 2 files changed, 18 insertions(+), 6 deletions(-) diff --git a/include/linux/kasan.h b/include/linux/kasan.h index 82522e996c76..b6f94952333b 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -14,10 +14,22 @@ struct task_struct; #include #include +#ifndef MAX_PTRS_PER_PTE +#define MAX_PTRS_PER_PTE PTRS_PER_PTE +#endif + +#ifndef MAX_PTRS_PER_PMD +#define MAX_PTRS_PER_PMD PTRS_PER_PMD +#endif + +#ifndef MAX_PTRS_PER_PUD +#define MAX_PTRS_PER_PUD PTRS_PER_PUD +#endif + extern unsigned char kasan_early_shadow_page[PAGE_SIZE]; -extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE]; -extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD]; -extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD]; +extern pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE]; +extern pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD]; +extern pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD]; extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D]; int kasan_populate_early_shadow(const void *shadow_start, diff --git a/mm/kasan/init.c b/mm/kasan/init.c index fe6be0be1f76..42bca3d27db8 100644 --- a/mm/kasan/init.c +++ b/mm/kasan/init.c @@ -46,7 +46,7 @@ static inline bool kasan_p4d_table(pgd_t pgd) } #endif #if CONFIG_PGTABLE_LEVELS > 3 -pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss; +pud_t kasan_early_shadow_pud[MAX_PTRS_PER_PUD] __page_aligned_bss; static inline bool kasan_pud_table(p4d_t p4d) { return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud)); @@ -58,7 +58,7 @@ static inline bool kasan_pud_table(p4d_t p4d) } #endif #if CONFIG_PGTABLE_LEVELS > 2 -pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss; +pmd_t kasan_early_shadow_pmd[MAX_PTRS_PER_PMD] __page_aligned_bss; static inline bool kasan_pmd_table(pud_t pud) { return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd)); @@ -69,7 +69,7 @@ static inline bool kasan_pmd_table(pud_t pud) return false; } #endif -pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss; +pte_t kasan_early_shadow_pte[MAX_PTRS_PER_PTE] __page_aligned_bss; static inline bool kasan_pte_table(pmd_t pmd) { From patchwork Thu Jul 2 02:54:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniel Axtens X-Patchwork-Id: 11637775 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2E453913 for ; Thu, 2 Jul 2020 02:54:47 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EFEE820781 for ; Thu, 2 Jul 2020 02:54:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=axtens.net header.i=@axtens.net header.b="ZP48jJgv" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EFEE820781 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=axtens.net Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 104A68D0012; Wed, 1 Jul 2020 22:54:46 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 0B4DE8D0001; Wed, 1 Jul 2020 22:54:46 -0400 (EDT) X-Original-To: 
int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EE6FF8D0012; Wed, 1 Jul 2020 22:54:45 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0005.hostedemail.com [216.40.44.5]) by kanga.kvack.org (Postfix) with ESMTP id D92298D0001 for ; Wed, 1 Jul 2020 22:54:45 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id A75082DFC for ; Thu, 2 Jul 2020 02:54:45 +0000 (UTC) X-FDA: 76991618130.09.shame80_0f0e79826e85 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin09.hostedemail.com (Postfix) with ESMTP id 832E4180AD801 for ; Thu, 2 Jul 2020 02:54:45 +0000 (UTC) X-Spam-Summary: 1,0,0,8dde603296e0db8a,d41d8cd98f00b204,dja@axtens.net,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1542:1711:1730:1747:1777:1792:1981:2194:2199:2393:2559:2562:2740:3138:3139:3140:3141:3142:3353:3865:3866:3867:3868:3870:3871:3872:4250:4321:5007:6119:6261:6653:7903:10004:11026:11232:11473:11658:11914:12043:12048:12297:12438:12517:12519:12555:12895:12986:13141:13200:13229:13230:13894:14096:14181:14394:14721:21080:21444:21451:21627:21740:30054,0,RBL:209.85.215.195:@axtens.net:.lbl8.mailshell.net-62.2.0.100 66.100.201.201;04ygtkhhwyb93y7nab97emu3qjnpqop1gg9ts8p55yc1djgo41ib4eis7iaezdz.qqi5febyx8xx4qewsk3incuntwg739ummuiwhc168sqky73ucbzzxac968kxzck.e-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:ft,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: shame80_0f0e79826e85 X-Filterd-Recvd-Size: 5136 Received: from mail-pg1-f195.google.com (mail-pg1-f195.google.com [209.85.215.195]) by imf08.hostedemail.com (Postfix) with ESMTP for ; Thu, 2 Jul 2020 02:54:45 +0000 (UTC) Received: by mail-pg1-f195.google.com with SMTP id f3so12747177pgr.2 for ; Wed, 01 Jul 2020 19:54:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=axtens.net; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=D4Q7wlnC9uinQAfoiYOmKAt3GyYnF0nwS1m28sSQDUo=; b=ZP48jJgvWkI9Wmfys6LSUGRJKCAuUvIJ1CenCS7q4/qaZMUpIwAmGMeKJPT2wS0Wmy yQ4JtPRu7jJcldH2+IBxwZ7U8Dmekr3AiJj8akIHRdks0YXPwYaZJa5DOKZilVmzJlCP F/rL62iMEdh++SSXvJq21IJPk/pF89qIJJacE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=D4Q7wlnC9uinQAfoiYOmKAt3GyYnF0nwS1m28sSQDUo=; b=Sg7Op3R2Oo+t6yvUGvB1aJFuDT69k5FQESi79M88+Xb0W/+xbURiVih+Qh5SrO/MlV laGzlOUkkn+oPB5GaEGUvIT/kbtyp9+5l/kYB8xEKOdUU+uKFE3eP6yDso39XSMpw9Cq ogdXBK5gkm8+4ajaAnLMdROQGK7mjg9wwdNAdWr4XH5znrUQe+fRBlxVqF5/9BCwwawQ YJUrg5Emd6aXZEojSuxZ6nR3ogUDFEn/5WNeCXI7aocyYsYJtNqtfMUnBJUCS34+/Wgk icQkoundzxcscM2cxpUgcA7zVwWUy1MT4HEYNYH31TD7qs3XkNT+dNnf6rn7gl2QmNSK QD3Q== X-Gm-Message-State: AOAM531M7GrMuHoOVCc/a7AFFklPEaQbJH9UTwXRgYdDOxyrfMDeRPIU NTCEfi9jC85XmN45YP7hgzMx4Q== X-Google-Smtp-Source: ABdhPJyDd94QLnAKxB+F6OzOYld27r7l1OjUzszJAvVmLhfM9lYgzLjnElqe0vUfemvYhf3Y3Zpv8g== X-Received: by 2002:a63:20d:: with SMTP id 13mr22820110pgc.166.1593658484159; Wed, 01 Jul 2020 19:54:44 -0700 (PDT) Received: from localhost (2001-44b8-1113-6700-3c80-6152-10ca-83bc.static.ipv6.internode.on.net. 
[2001:44b8:1113:6700:3c80:6152:10ca:83bc]) by smtp.gmail.com with ESMTPSA id h6sm7031275pfo.123.2020.07.01.19.54.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 19:54:43 -0700 (PDT) From: Daniel Axtens To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kasan-dev@googlegroups.com, christophe.leroy@c-s.fr, aneesh.kumar@linux.ibm.com, bsingharora@gmail.com Cc: Daniel Axtens Subject: [PATCH v8 2/4] kasan: Document support on 32-bit powerpc Date: Thu, 2 Jul 2020 12:54:30 +1000 Message-Id: <20200702025432.16912-3-dja@axtens.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200702025432.16912-1-dja@axtens.net> References: <20200702025432.16912-1-dja@axtens.net> MIME-Version: 1.0 X-Rspamd-Queue-Id: 832E4180AD801 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam03 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KASAN is supported on 32-bit powerpc and the docs should reflect this. Document s390 support while we're at it. Suggested-by: Christophe Leroy Reviewed-by: Christophe Leroy Signed-off-by: Daniel Axtens --- Documentation/dev-tools/kasan.rst | 7 +++++-- Documentation/powerpc/kasan.txt | 12 ++++++++++++ 2 files changed, 17 insertions(+), 2 deletions(-) create mode 100644 Documentation/powerpc/kasan.txt diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst index c652d740735d..554cbee1d240 100644 --- a/Documentation/dev-tools/kasan.rst +++ b/Documentation/dev-tools/kasan.rst @@ -22,7 +22,8 @@ global variables yet. Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later. Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and -riscv architectures, and tag-based KASAN is supported only for arm64. +riscv architectures. It is also supported on 32-bit powerpc kernels. Tag-based +KASAN is supported only on arm64. Usage ----- @@ -255,7 +256,9 @@ CONFIG_KASAN_VMALLOC ~~~~~~~~~~~~~~~~~~~~ With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the -cost of greater memory usage. Currently this is only supported on x86. +cost of greater memory usage. Currently this supported on x86, s390 +and 32-bit powerpc. It is optional, except on 32-bit powerpc kernels +with module support, where it is required. This works by hooking into vmalloc and vmap, and dynamically allocating real shadow memory to back the mappings. diff --git a/Documentation/powerpc/kasan.txt b/Documentation/powerpc/kasan.txt new file mode 100644 index 000000000000..26bb0e8bb18c --- /dev/null +++ b/Documentation/powerpc/kasan.txt @@ -0,0 +1,12 @@ +KASAN is supported on powerpc on 32-bit only. + +32 bit support +============== + +KASAN is supported on both hash and nohash MMUs on 32-bit. + +The shadow area sits at the top of the kernel virtual memory space above the +fixmap area and occupies one eighth of the total kernel virtual memory space. + +Instrumentation of the vmalloc area is optional, unless built with modules, +in which case it is required. 
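As a rough cross-check of the one-eighth layout described above, here is a small sketch of the generic KASAN address-to-shadow mapping using the ppc32 KASAN_SHADOW_OFFSET of 0xe0000000 that appears later in this series; the PAGE_OFFSET of 0xc0000000 and the scale of one shadow byte per 8 bytes are my assumptions, not part of the patch:

    #include <stdio.h>

    #define KASAN_SHADOW_SCALE_SHIFT 3
    #define KASAN_SHADOW_OFFSET 0xe0000000UL

    static unsigned long mem_to_shadow(unsigned long addr)
    {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    int main(void)
    {
            /* 0xf8000000: the shadow of PAGE_OFFSET, one eighth below the top */
            printf("%lx\n", mem_to_shadow(0xc0000000UL));
            /* 0xffffffff: the shadow of the last kernel address stays in range */
            printf("%lx\n", mem_to_shadow(0xffffffffUL));
            return 0;
    }

With these values, the 1 GB of kernel virtual space from PAGE_OFFSET up is shadowed by its top 128 MB, matching the "one eighth of the total kernel virtual memory space" statement in the documentation above.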
From patchwork Thu Jul 2 02:54:31 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11637777
From: Daniel Axtens
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kasan-dev@googlegroups.com, christophe.leroy@c-s.fr, aneesh.kumar@linux.ibm.com, bsingharora@gmail.com
Cc: Daniel Axtens
Subject: [PATCH v8 3/4] powerpc/mm/kasan: rename kasan_init_32.c to init_32.c
Date: Thu, 2 Jul 2020 12:54:31 +1000
Message-Id: <20200702025432.16912-4-dja@axtens.net>
In-Reply-To: <20200702025432.16912-1-dja@axtens.net>
References: <20200702025432.16912-1-dja@axtens.net>

kasan is already implied by the directory name, so we don't need to repeat it.
Suggested-by: Christophe Leroy Signed-off-by: Daniel Axtens --- arch/powerpc/mm/kasan/Makefile | 2 +- arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} | 0 2 files changed, 1 insertion(+), 1 deletion(-) rename arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} (100%) diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile index bb1a5408b86b..42fb628a44fd 100644 --- a/arch/powerpc/mm/kasan/Makefile +++ b/arch/powerpc/mm/kasan/Makefile @@ -2,6 +2,6 @@ KASAN_SANITIZE := n -obj-$(CONFIG_PPC32) += kasan_init_32.o +obj-$(CONFIG_PPC32) += init_32.o obj-$(CONFIG_PPC_8xx) += 8xx.o obj-$(CONFIG_PPC_BOOK3S_32) += book3s_32.o diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/init_32.c similarity index 100% rename from arch/powerpc/mm/kasan/kasan_init_32.c rename to arch/powerpc/mm/kasan/init_32.c From patchwork Thu Jul 2 02:54:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniel Axtens X-Patchwork-Id: 11637779 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D7B51913 for ; Thu, 2 Jul 2020 02:54:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6F953207F5 for ; Thu, 2 Jul 2020 02:54:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (1024-bit key) header.d=axtens.net header.i=@axtens.net header.b="YlNneNLi" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6F953207F5 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=axtens.net Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 98F5C8D0019; Wed, 1 Jul 2020 22:54:54 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 966D38D0001; Wed, 1 Jul 2020 22:54:54 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 856778D0019; Wed, 1 Jul 2020 22:54:54 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 6A40F8D0001 for ; Wed, 1 Jul 2020 22:54:54 -0400 (EDT) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 299DF180AD801 for ; Thu, 2 Jul 2020 02:54:54 +0000 (UTC) X-FDA: 76991618508.12.pig81_02011cb26e85 Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin12.hostedemail.com (Postfix) with ESMTP id 0513D1800D65A for ; Thu, 2 Jul 2020 02:54:54 +0000 (UTC) X-Spam-Summary: 
50,0,0,303c42d14fe54cea,d41d8cd98f00b204,dja@axtens.net,,RULES_HIT:69:146:327:355:379:541:901:960:967:973:988:989:1260:1263:1311:1314:1345:1359:1431:1437:1515:1605:1730:1747:1777:1792:1801:1981:2194:2198:2199:2200:2393:2525:2538:2565:2682:2685:2693:2736:2740:2859:2895:2898:2901:2902:2904:2915:2924:2925:2926:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3503:3504:3743:3834:3865:3866:3867:3868:3870:3871:3872:3873:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4250:4321:4560:4605:5007:6261:6630:6653:6691:7514:7875:7903:8603:8660:8784:8957:9010:9025:9036:9121:9388:10004:10559:11026:11232:11233:11320:11657:11854:11914:12043:12048:12198:12219:12291:12295:12296:12297:12438:12517:12519:12555:12663:12679:12683:12895:12986:13141:13148:13149:13161:13229:13230:13894:14394:21063:21067:21080:21324:21433:21444:21451:21524:21627:21740:21781:21810:21939:21990:30003:30012:30034:30054:30060:30067:30069:30070:30074:30075:30089:30090,0,RBL:209.85.21 5.196:@a X-HE-Tag: pig81_02011cb26e85 X-Filterd-Recvd-Size: 33212 Received: from mail-pg1-f196.google.com (mail-pg1-f196.google.com [209.85.215.196]) by imf24.hostedemail.com (Postfix) with ESMTP for ; Thu, 2 Jul 2020 02:54:53 +0000 (UTC) Received: by mail-pg1-f196.google.com with SMTP id j19so5864801pgm.11 for ; Wed, 01 Jul 2020 19:54:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=axtens.net; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Rr0gL/pGZvNJEluWvl6rU6rOb5JjmPPekUymGPqo6rs=; b=YlNneNLioNHR5EwT00sRJXkPw9PiLTPGql0CzIovQIzbdfJdahNDQ3tpZs6YpauUTk c/80DAdfnwjwfLmdEwIQtIWgyTAL4yhyj9EJvuffcD3wr4e2hI75fuI0BEjOBiS7/dS/ S9/xht3vNBoknAWPFozZA2GxsVi/xbszsaWgE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Rr0gL/pGZvNJEluWvl6rU6rOb5JjmPPekUymGPqo6rs=; b=gnM9ubx9FyQJ30DBaVBDIAfK5AAGiNThKsFOETVjhaATnnb0SEloJwMELnbUZHNJJR 9g4i22iHGo16rglFQsr5zHhmvESjcRRRw28IZ57SQOnRFe/gD1bct1MJReyXUawJ+T6G 7tRkPcNNI5J/shh0o6RpWfyMckpxzzm38EMzX4zxDenubPo0CKEUBgVPzPrOOpXp8jxN Aw1q55BqJC222miqM2R+NBDRBNnxq704OGn1Z2XOooLkF9JzsQMLsPStAmc5MtHnpiLk SSZDh0iKMvPxJykYek5/FhZ4j2SYi8ZbSHuGRmdQIB+wTYtcnaK4rojvurAT6j6WhRfi lYlQ== X-Gm-Message-State: AOAM5307TKfogMow7w+tWBlprg2DYeDjpGk2GRBpvao6/az+wflXZKjH jD/yXPhPy7d54gqw4KdSE15FGw== X-Google-Smtp-Source: ABdhPJzSEFYdDBYWtW1oaUSiUzW1wfRz17P4GZvAneLt7311A2GOlM6CRP+mZZKZeWlMJ7VfxVBNiA== X-Received: by 2002:a65:60ce:: with SMTP id r14mr22048368pgv.85.1593658492262; Wed, 01 Jul 2020 19:54:52 -0700 (PDT) Received: from localhost (2001-44b8-1113-6700-3c80-6152-10ca-83bc.static.ipv6.internode.on.net. 
[2001:44b8:1113:6700:3c80:6152:10ca:83bc]) by smtp.gmail.com with ESMTPSA id 21sm7101279pfu.124.2020.07.01.19.54.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 01 Jul 2020 19:54:51 -0700 (PDT) From: Daniel Axtens To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, kasan-dev@googlegroups.com, christophe.leroy@c-s.fr, aneesh.kumar@linux.ibm.com, bsingharora@gmail.com Cc: Daniel Axtens , Michael Ellerman Subject: [PATCH v8 4/4] powerpc: Book3S 64-bit "heavyweight" KASAN support Date: Thu, 2 Jul 2020 12:54:32 +1000 Message-Id: <20200702025432.16912-5-dja@axtens.net> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20200702025432.16912-1-dja@axtens.net> References: <20200702025432.16912-1-dja@axtens.net> MIME-Version: 1.0 X-Rspamd-Queue-Id: 0513D1800D65A X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Implement a limited form of KASAN for Book3S 64-bit machines running under the Radix MMU: - Set aside the last 1/8th of the first contiguous block of physical memory to provide writable shadow for the linear map. For annoying reasons documented below, the memory size must be specified at compile time. - Enable the compiler instrumentation to check addresses and maintain the shadow region. (This is the guts of KASAN which we can easily reuse.) - Require kasan-vmalloc support to handle modules and anything else in vmalloc space. - KASAN needs to be able to validate all pointer accesses, but we can't back all kernel addresses with real memory - only linear map and vmalloc. On boot, set up a single page of read-only shadow that marks all these accesses as valid. - Make our stack-walking code KASAN-safe by using READ_ONCE_NOCHECK - generic code, arm64, s390 and x86 all do this for similar sorts of reasons: when unwinding a stack, we might touch memory that KASAN has marked as being out-of-bounds. In our case we often get this when checking for an exception frame because we're checking an arbitrary offset into the stack frame. See commit 20955746320e ("s390/kasan: avoid false positives during stack unwind"), commit bcaf669b4bdb ("arm64: disable kasan when accessing frame->fp in unwind_frame"), commit 91e08ab0c851 ("x86/dumpstack: Prevent KASAN false positive warnings") and commit 6e22c8366416 ("tracing, kasan: Silence Kasan warning in check_stack of stack_tracer") - Document KASAN in both generic and powerpc docs. Background ---------- KASAN support on Book3S is a bit tricky to get right: - It would be good to support inline instrumentation so as to be able to catch stack issues that cannot be caught with outline mode. - Inline instrumentation requires a fixed offset. - Book3S runs code in real mode after booting. Most notably a lot of KVM runs in real mode, and it would be good to be able to instrument it. - Because code runs in real mode after boot, the offset has to point to valid memory both in and out of real mode. [ppc64 mm note: The kernel installs a linear mapping at effective address c000... onward. This is a one-to-one mapping with physical memory from 0000... onward. Because of how memory accesses work on powerpc 64-bit Book3S, a kernel pointer in the linear map accesses the same memory both with translations on (accessing as an 'effective address'), and with translations off (accessing as a 'real address'). This works in both guests and the hypervisor. 
For more details, see s5.7 of Book III of version 3 of the ISA, in particular the Storage Control Overview, s5.7.3, and s5.7.5 - noting that this KASAN implementation currently only supports Radix.] One approach is just to give up on inline instrumentation. This way all checks can be delayed until after everything set is up correctly, and the address-to-shadow calculations can be overridden. However, the features and speed boost provided by inline instrumentation are worth trying to do better. If _at compile time_ it is known how much contiguous physical memory a system has, the top 1/8th of the first block of physical memory can be set aside for the shadow. This is a big hammer and comes with 3 big consequences: - there's no nice way to handle physically discontiguous memory, so only the first physical memory block can be used. - kernels will simply fail to boot on machines with less memory than specified when compiling. - kernels running on machines with more memory than specified when compiling will simply ignore the extra memory. Despite the limitations, it can still find bugs, e.g. http://patchwork.ozlabs.org/patch/1103775/ At the moment, this physical memory limit must be set _even for outline mode_. This may be changed in a later series - a different implementation could be added for outline mode that dynamically allocates shadow at a fixed offset. For example, see https://patchwork.ozlabs.org/patch/795211/ Suggested-by: Michael Ellerman Cc: Balbir Singh # ppc64 out-of-line radix version Cc: Christophe Leroy # ppc32 version Reviewed-by: # focussed mainly on Documentation and things impacting PPC32 Signed-off-by: Daniel Axtens Reported-by: kernel test robot --- Documentation/dev-tools/kasan.rst | 9 +- Documentation/powerpc/kasan.txt | 112 ++++++++++++++++++- arch/powerpc/Kconfig | 3 +- arch/powerpc/Kconfig.debug | 23 +++- arch/powerpc/Makefile | 11 ++ arch/powerpc/include/asm/book3s/64/hash.h | 4 + arch/powerpc/include/asm/book3s/64/pgtable.h | 7 ++ arch/powerpc/include/asm/book3s/64/radix.h | 5 + arch/powerpc/include/asm/kasan.h | 11 +- arch/powerpc/kernel/Makefile | 2 + arch/powerpc/kernel/process.c | 16 +-- arch/powerpc/kernel/prom.c | 76 ++++++++++++- arch/powerpc/mm/kasan/Makefile | 1 + arch/powerpc/mm/kasan/init_book3s_64.c | 73 ++++++++++++ arch/powerpc/mm/ptdump/ptdump.c | 10 +- arch/powerpc/platforms/Kconfig.cputype | 1 + 16 files changed, 346 insertions(+), 18 deletions(-) create mode 100644 arch/powerpc/mm/kasan/init_book3s_64.c diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst index 554cbee1d240..5722de91ccce 100644 --- a/Documentation/dev-tools/kasan.rst +++ b/Documentation/dev-tools/kasan.rst @@ -22,8 +22,9 @@ global variables yet. Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later. Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and -riscv architectures. It is also supported on 32-bit powerpc kernels. Tag-based -KASAN is supported only on arm64. +riscv architectures. It is also supported on powerpc, for 32-bit kernels, and +for 64-bit kernels running under the Radix MMU. Tag-based KASAN is supported +only on arm64. Usage ----- @@ -257,8 +258,8 @@ CONFIG_KASAN_VMALLOC With ``CONFIG_KASAN_VMALLOC``, KASAN can cover vmalloc space at the cost of greater memory usage. Currently this supported on x86, s390 -and 32-bit powerpc. It is optional, except on 32-bit powerpc kernels -with module support, where it is required. +and powerpc. 
It is optional, except on 64-bit powerpc kernels, and on +32-bit powerpc kernels with module support, where it is required. This works by hooking into vmalloc and vmap, and dynamically allocating real shadow memory to back the mappings. diff --git a/Documentation/powerpc/kasan.txt b/Documentation/powerpc/kasan.txt index 26bb0e8bb18c..bf645a5cd486 100644 --- a/Documentation/powerpc/kasan.txt +++ b/Documentation/powerpc/kasan.txt @@ -1,4 +1,4 @@ -KASAN is supported on powerpc on 32-bit only. +KASAN is supported on powerpc on 32-bit and Radix 64-bit only. 32 bit support ============== @@ -10,3 +10,113 @@ fixmap area and occupies one eighth of the total kernel virtual memory space. Instrumentation of the vmalloc area is optional, unless built with modules, in which case it is required. + +64 bit support +============== + +Currently, only the radix MMU is supported. There have been versions for Book3E +processors floating around on the mailing list, but nothing has been merged. + +KASAN support on Book3S is a bit tricky to get right: + + - It would be good to support inline instrumentation so as to be able to catch + stack issues that cannot be caught with outline mode. + + - Inline instrumentation requires a fixed offset. + + - Book3S runs code in real mode after booting. Most notably a lot of KVM runs + in real mode, and it would be good to be able to instrument it. + + - Because code runs in real mode after boot, the offset has to point to + valid memory both in and out of real mode. + +One approach is just to give up on inline instrumentation. This way all checks +can be delayed until after everything set is up correctly, and the +address-to-shadow calculations can be overridden. However, the features and +speed boost provided by inline instrumentation are worth trying to do better. + +If _at compile time_ it is known how much contiguous physical memory a system +has, the top 1/8th of the first block of physical memory can be set aside for +the shadow. This is a big hammer and comes with 3 big consequences: + + - there's no nice way to handle physically discontiguous memory, so only the + first physical memory block can be used. + + - kernels will simply fail to boot on machines with less memory than specified + when compiling. + + - kernels running on machines with more memory than specified when compiling + will simply ignore the extra memory. + +At the moment, this physical memory limit must be set _even for outline mode_. +This may be changed in a future version - a different implementation could be +added for outline mode that dynamically allocates shadow at a fixed offset. +For example, see https://patchwork.ozlabs.org/patch/795211/ + +This value is configured in CONFIG_PHYS_MEM_SIZE_FOR_KASAN. + +Tips +---- + + - Compile with CONFIG_RELOCATABLE. + + In development, boot hangs were observed when building with ftrace and KUAP + on. These ended up being due to kernel bloat pushing prom_init calls to be + done via the PLT. Because the kernel was not relocatable, and the calls are + done very early, this caused execution to jump off into somewhere + invalid. Enabling relocation fixes this. + +NUMA/discontiguous physical memory +---------------------------------- + +Currently the code cannot really deal with discontiguous physical memory. Only +physical memory that is contiguous from physical address zero can be used. The +size of that memory, not total memory, must be specified when configuring the +kernel. 
+ +Discontiguous memory can occur on machines with memory spread across multiple +nodes. For example, on a Talos II with 64GB of RAM: + + - 32GB runs from 0x0 to 0x0000_0008_0000_0000, + - then there's a gap, + - then the final 32GB runs from 0x0000_2000_0000_0000 to 0x0000_2008_0000_0000 + +This can create _significant_ issues: + + - If the machine is treated as having 64GB of _contiguous_ RAM, the + instrumentation would assume that it ran from 0x0 to + 0x0000_0010_0000_0000. The last 1/8th - 0x0000_000e_0000_0000 to + 0x0000_0010_0000_0000 would be reserved as the shadow region. But when the + kernel tried to access any of that, it would be trying to access pages that + are not physically present. + + - If the shadow region size is based on the top address, then the shadow + region would be 0x2008_0000_0000 / 8 = 0x0401_0000_0000 bytes = 4100 GB of + memory, clearly more than the 64GB of RAM physically present. + +Therefore, the code currently is restricted to dealing with memory in the node +starting at 0x0. For this system, that's 32GB. If a contiguous physical memory +size greater than the size of the first contiguous region of memory is +specified, the system will be unable to boot or even print an error message. + +The layout of a system's memory can be observed in the messages that the Radix +MMU prints on boot. The Talos II discussed earlier has: + +radix-mmu: Mapped 0x0000000000000000-0x0000000040000000 with 1.00 GiB pages (exec) +radix-mmu: Mapped 0x0000000040000000-0x0000000800000000 with 1.00 GiB pages +radix-mmu: Mapped 0x0000200000000000-0x0000200800000000 with 1.00 GiB pages + +As discussed, this system would be configured for 32768 MB. + +Another system prints: + +radix-mmu: Mapped 0x0000000000000000-0x0000000040000000 with 1.00 GiB pages (exec) +radix-mmu: Mapped 0x0000000040000000-0x0000002000000000 with 1.00 GiB pages +radix-mmu: Mapped 0x0000200000000000-0x0000202000000000 with 1.00 GiB pages + +This machine has more memory: 0x0000_0040_0000_0000 total, but only +0x0000_0020_0000_0000 is physically contiguous from zero, so it would be +configured for 131072 MB of physically contiguous memory. + +This restriction currently also affects outline mode, but this could be +changed in future if an alternative outline implementation is added. diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 51abc59c3334..fe04704a05eb 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -172,7 +172,8 @@ config PPC select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 && PPC_RADIX_MMU select HAVE_ARCH_JUMP_LABEL select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14 - select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14 + select HAVE_ARCH_KASAN if PPC_BOOK3S_64 && PPC_RADIX_MMU + select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN select HAVE_ARCH_KGDB select HAVE_ARCH_MMAP_RND_BITS select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug index b88900f4832f..22cc80c22ff9 100644 --- a/arch/powerpc/Kconfig.debug +++ b/arch/powerpc/Kconfig.debug @@ -394,7 +394,28 @@ config PPC_FAST_ENDIAN_SWITCH help If you're unsure what this is, say N. +config PHYS_MEM_SIZE_FOR_KASAN + int "Contiguous physical memory size for KASAN (MB)" if KASAN && PPC_BOOK3S_64 + default 1024 + help + + To get inline instrumentation support for KASAN on 64-bit Book3S + machines, you need to know how much contiguous physical memory your + system has. 
A shadow offset will be calculated based on this figure, + which will be compiled in to the kernel. KASAN will use this offset + to access its shadow region, which is used to verify memory accesses. + + If you attempt to boot on a system with less memory than you specify + here, your system will fail to boot very early in the process. If you + boot on a system with more memory than you specify, the extra memory + will wasted - it will be reserved and not used. + + For systems with discontiguous blocks of physical memory, specify the + size of the block starting at 0x0. You can determine this by looking + at the memory layout info printed to dmesg by the radix MMU code + early in boot. See Documentation/powerpc/kasan.txt. + config KASAN_SHADOW_OFFSET hex - depends on KASAN + depends on KASAN && PPC32 default 0xe0000000 diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index 3e8da9cf2eb9..7a8e4739bec4 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -230,6 +230,17 @@ ifdef CONFIG_476FPE_ERR46 -T $(srctree)/arch/powerpc/platforms/44x/ppc476_modules.lds endif +ifdef CONFIG_PPC_BOOK3S_64 +# The KASAN shadow offset is such that linear map (0xc000...) is shadowed by +# the last 8th of linearly mapped physical memory. This way, if the code uses +# 0xc addresses throughout, accesses work both in in real mode (where the top +# bits are ignored) and outside of real mode. +# +# 0xc000000000000000 >> 3 = 0xa800000000000000 = 12105675798371893248 +KASAN_SHADOW_OFFSET = $(shell echo 7 \* 1024 \* 1024 \* $(CONFIG_PHYS_MEM_SIZE_FOR_KASAN) / 8 + 12105675798371893248 | bc) +KBUILD_CFLAGS += -DKASAN_SHADOW_OFFSET=$(KASAN_SHADOW_OFFSET)UL +endif + # No AltiVec or VSX instructions when building kernel KBUILD_CFLAGS += $(call cc-option,-mno-altivec) KBUILD_CFLAGS += $(call cc-option,-mno-vsx) diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h index 73ad038ed10b..105b90594a8a 100644 --- a/arch/powerpc/include/asm/book3s/64/hash.h +++ b/arch/powerpc/include/asm/book3s/64/hash.h @@ -18,6 +18,10 @@ #include #endif +#define H_PTRS_PER_PTE (1 << H_PTE_INDEX_SIZE) +#define H_PTRS_PER_PMD (1 << H_PMD_INDEX_SIZE) +#define H_PTRS_PER_PUD (1 << H_PUD_INDEX_SIZE) + /* Bits to set in a PMD/PUD/PGD entry valid bit*/ #define HASH_PMD_VAL_BITS (0x8000000000000000UL) #define HASH_PUD_VAL_BITS (0x8000000000000000UL) diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 25c3cb8272c0..42edd172178a 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -231,6 +231,13 @@ extern unsigned long __pmd_frag_size_shift; #define PTRS_PER_PUD (1 << PUD_INDEX_SIZE) #define PTRS_PER_PGD (1 << PGD_INDEX_SIZE) +#define MAX_PTRS_PER_PTE ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? \ + H_PTRS_PER_PTE : R_PTRS_PER_PTE) +#define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? \ + H_PTRS_PER_PMD : R_PTRS_PER_PMD) +#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? 
\ + H_PTRS_PER_PUD : R_PTRS_PER_PUD) + /* PMD_SHIFT determines what a second-level page table entry can map */ #define PMD_SHIFT (PAGE_SHIFT + PTE_INDEX_SIZE) #define PMD_SIZE (1UL << PMD_SHIFT) diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h index 0cba794c4fb8..8d41f4e657c2 100644 --- a/arch/powerpc/include/asm/book3s/64/radix.h +++ b/arch/powerpc/include/asm/book3s/64/radix.h @@ -35,6 +35,11 @@ #define RADIX_PMD_SHIFT (PAGE_SHIFT + RADIX_PTE_INDEX_SIZE) #define RADIX_PUD_SHIFT (RADIX_PMD_SHIFT + RADIX_PMD_INDEX_SIZE) #define RADIX_PGD_SHIFT (RADIX_PUD_SHIFT + RADIX_PUD_INDEX_SIZE) + +#define R_PTRS_PER_PTE (1 << RADIX_PTE_INDEX_SIZE) +#define R_PTRS_PER_PMD (1 << RADIX_PMD_INDEX_SIZE) +#define R_PTRS_PER_PUD (1 << RADIX_PUD_INDEX_SIZE) + /* * Size of EA range mapped by our pagetables. */ diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/include/asm/kasan.h index be85c7005fb1..3b57aee0d461 100644 --- a/arch/powerpc/include/asm/kasan.h +++ b/arch/powerpc/include/asm/kasan.h @@ -21,9 +21,18 @@ #define KASAN_SHADOW_START (KASAN_SHADOW_OFFSET + \ (PAGE_OFFSET >> KASAN_SHADOW_SCALE_SHIFT)) +#ifdef CONFIG_KASAN_SHADOW_OFFSET #define KASAN_SHADOW_OFFSET ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET) +#endif +#ifdef CONFIG_PPC32 #define KASAN_SHADOW_END (-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT)) +#endif + +#ifdef CONFIG_PPC_BOOK3S_64 +#define KASAN_SHADOW_END (KASAN_SHADOW_OFFSET + \ + (RADIX_VMEMMAP_END >> KASAN_SHADOW_SCALE_SHIFT)) +#endif #ifdef CONFIG_KASAN void kasan_early_init(void); @@ -38,5 +47,5 @@ void kasan_update_early_region(unsigned long k_start, unsigned long k_end, pte_t int kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end); int kasan_init_region(void *start, size_t size); -#endif /* __ASSEMBLY */ +#endif /* !__ASSEMBLY__ */ #endif diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile index 244542ae2a91..70a6e386c98d 100644 --- a/arch/powerpc/kernel/Makefile +++ b/arch/powerpc/kernel/Makefile @@ -32,6 +32,8 @@ KASAN_SANITIZE_early_32.o := n KASAN_SANITIZE_cputable.o := n KASAN_SANITIZE_prom_init.o := n KASAN_SANITIZE_btext.o := n +KASAN_SANITIZE_paca.o := n +KASAN_SANITIZE_setup_64.o := n ifdef CONFIG_KASAN CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 4650b9bb217f..a8b3690f41bf 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -2097,8 +2097,8 @@ void show_stack(struct task_struct *tsk, unsigned long *stack, break; stack = (unsigned long *) sp; - newsp = stack[0]; - ip = stack[STACK_FRAME_LR_SAVE]; + newsp = READ_ONCE_NOCHECK(stack[0]); + ip = READ_ONCE_NOCHECK(stack[STACK_FRAME_LR_SAVE]); if (!firstframe || ip != lr) { printk("%s["REG"] ["REG"] %pS", loglvl, sp, ip, (void *)ip); @@ -2118,14 +2118,16 @@ void show_stack(struct task_struct *tsk, unsigned long *stack, * See if this is an exception frame. * We look for the "regshere" marker in the current frame. 
*/ - if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE) - && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) { + if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE) && + (READ_ONCE_NOCHECK(stack[STACK_FRAME_MARKER]) == + STACK_FRAME_REGS_MARKER)) { struct pt_regs *regs = (struct pt_regs *) (sp + STACK_FRAME_OVERHEAD); - lr = regs->link; + lr = READ_ONCE_NOCHECK(regs->link); printk("%s--- interrupt: %lx at %pS\n LR = %pS\n", - loglvl, regs->trap, - (void *)regs->nip, (void *)lr); + loglvl, READ_ONCE_NOCHECK(regs->trap), + (void *)READ_ONCE_NOCHECK(regs->nip), + (void *)READ_ONCE_NOCHECK(lr)); firstframe = 1; } diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c index 9cc49f265c86..79662c5940de 100644 --- a/arch/powerpc/kernel/prom.c +++ b/arch/powerpc/kernel/prom.c @@ -72,6 +72,7 @@ unsigned long tce_alloc_start, tce_alloc_end; u64 ppc64_rma_size; #endif static phys_addr_t first_memblock_size; +static phys_addr_t top_phys_addr; static int __initdata boot_cpu_count; static int __init early_parse_mem(char *p) @@ -439,6 +440,36 @@ static int __init early_init_dt_scan_chosen_ppc(unsigned long node, return 1; } +/* + * KASAN memory limit checking for 64-bit Book3S + * + * Currently we place the KASAN shadow region at the last 1/8th of the memory + * region that runs from 0 to CONFIG_PHYS_MEM_SIZE_FOR_KASAN. + * + * To handle the NUMA/discontiguous memory case, don't allow a block to be + * added if it falls completely beyond the configured physical memory. Print an + * informational message. + * + * Frustratingly we also see discontiguous memory with qemu - it seems to split + * the specified memory into a number of smaller blocks. If this happens under + * qemu, it probably represents misconfiguration. So we want the message to be + * noticeable, but not shouty. + * + * See Documentation/powerpc/kasan.txt + */ + +static inline bool validate_kasan_mem_limit(u64 base, u64 size) +{ + if (IS_ENABLED(CONFIG_KASAN) && IS_ENABLED(CONFIG_PPC_BOOK3S_64) && + (base >= ((u64)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * SZ_1M))) { + pr_warn("KASAN: not adding memory block at %llx (size %llx)\n" + "This could be due to discontiguous memory or kernel misconfiguration.", + base, size); + return false; + } + return true; +} + /* * Compare the range against max mem limit and update * size if it cross the limit. 
@@ -449,6 +480,9 @@ static bool validate_mem_limit(u64 base, u64 *size) { u64 max_mem = 1UL << (MAX_PHYSMEM_BITS); + if (!validate_kasan_mem_limit(base, *size)) + return false; + if (base >= max_mem) return false; if ((base + *size) > max_mem) @@ -458,7 +492,7 @@ static bool validate_mem_limit(u64 base, u64 *size) #else static bool validate_mem_limit(u64 base, u64 *size) { - return true; + return validate_kasan_mem_limit(base, *size); } #endif @@ -577,8 +611,10 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size) /* Add the chunk to the MEMBLOCK list */ if (add_mem_to_memblock) { - if (validate_mem_limit(base, &size)) + if (validate_mem_limit(base, &size)) { memblock_add(base, size); + top_phys_addr = max(top_phys_addr, (phys_addr_t)(base + size)); + } } } @@ -618,6 +654,8 @@ static void __init early_reserve_mem_dt(void) static void __init early_reserve_mem(void) { __be64 *reserve_map; + phys_addr_t kasan_shadow_start; + phys_addr_t kasan_memory_size; reserve_map = (__be64 *)(((unsigned long)initial_boot_params) + fdt_off_mem_rsvmap(initial_boot_params)); @@ -656,6 +694,40 @@ static void __init early_reserve_mem(void) return; } #endif + + if (IS_ENABLED(CONFIG_KASAN) && IS_ENABLED(CONFIG_PPC_BOOK3S_64)) { + kasan_memory_size = + ((phys_addr_t)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * SZ_1M); + + if (top_phys_addr < kasan_memory_size) { + /* + * We are doomed. We shouldn't even be able to get this + * far, but we do in qemu. If we continue and turn + * relocations on, we'll take fatal page faults for + * memory that's not physically present. Instead, + * panic() here: it will be saved to __log_buf even if + * it doesn't get printed to the console. + */ + panic("Tried to boot a KASAN kernel configured for %u MB with only %llu MB! Aborting.", + CONFIG_PHYS_MEM_SIZE_FOR_KASAN, + (u64)(top_phys_addr * SZ_1M)); + } else if (top_phys_addr > kasan_memory_size) { + /* print a biiiig warning in hopes people notice */ + pr_err("===========================================\n" + "Physical memory exceeds compiled-in maximum!\n" + "This kernel was compiled for KASAN with %u MB physical memory.\n" + "The physical memory detected is at least %llu MB.\n" + "Memory above the compiled limit will not be used!\n" + "===========================================\n", + CONFIG_PHYS_MEM_SIZE_FOR_KASAN, + (u64)(top_phys_addr / SZ_1M)); + } + + kasan_shadow_start = ALIGN_DOWN(kasan_memory_size * 7 / 8, PAGE_SIZE); + DBG("reserving %llx -> %llx for KASAN", + kasan_shadow_start, top_phys_addr); + memblock_reserve(kasan_shadow_start, top_phys_addr - kasan_shadow_start); + } } #ifdef CONFIG_PPC_TRANSACTIONAL_MEM diff --git a/arch/powerpc/mm/kasan/Makefile b/arch/powerpc/mm/kasan/Makefile index 42fb628a44fd..33c92e7012af 100644 --- a/arch/powerpc/mm/kasan/Makefile +++ b/arch/powerpc/mm/kasan/Makefile @@ -5,3 +5,4 @@ KASAN_SANITIZE := n obj-$(CONFIG_PPC32) += init_32.o obj-$(CONFIG_PPC_8xx) += 8xx.o obj-$(CONFIG_PPC_BOOK3S_32) += book3s_32.o +obj-$(CONFIG_PPC_BOOK3S_64) += init_book3s_64.o diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c new file mode 100644 index 000000000000..1c95fe6495c7 --- /dev/null +++ b/arch/powerpc/mm/kasan/init_book3s_64.c @@ -0,0 +1,73 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KASAN for 64-bit Book3S powerpc + * + * Copyright (C) 2019 IBM Corporation + * Author: Daniel Axtens + */ + +#define DISABLE_BRANCH_PROFILING + +#include +#include +#include +#include + +void __init kasan_init(void) +{ + int i; + void *k_start = kasan_mem_to_shadow((void 
*)RADIX_KERN_VIRT_START); + void *k_end = kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END); + + pte_t pte = pte_mkpte(pfn_pte(virt_to_pfn(kasan_early_shadow_page), + PAGE_KERNEL)); + + if (!early_radix_enabled()) + panic("KASAN requires radix!"); + + for (i = 0; i < PTRS_PER_PTE; i++) + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page, + &kasan_early_shadow_pte[i], pte, 0); + + for (i = 0; i < PTRS_PER_PMD; i++) + pmd_populate_kernel(&init_mm, &kasan_early_shadow_pmd[i], + kasan_early_shadow_pte); + + for (i = 0; i < PTRS_PER_PUD; i++) + pud_populate(&init_mm, &kasan_early_shadow_pud[i], + kasan_early_shadow_pmd); + + memset((void *)KASAN_SHADOW_START, KASAN_SHADOW_INIT, + ((u64)CONFIG_PHYS_MEM_SIZE_FOR_KASAN * + SZ_1M >> KASAN_SHADOW_SCALE_SHIFT)); + + kasan_populate_early_shadow(kasan_mem_to_shadow((void *)RADIX_KERN_VIRT_START), + kasan_mem_to_shadow((void *)RADIX_VMALLOC_START)); + + /* leave a hole here for vmalloc */ + + kasan_populate_early_shadow( + kasan_mem_to_shadow((void *)RADIX_VMALLOC_END), + kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END)); + + flush_tlb_kernel_range((unsigned long)k_start, (unsigned long)k_end); + + /* mark early shadow region as RO and wipe */ + pte = pte_mkpte(pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL_RO)); + for (i = 0; i < PTRS_PER_PTE; i++) + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page, + &kasan_early_shadow_pte[i], pte, 0); + + /* + * clear_page relies on some cache info that hasn't been set up yet. + * It ends up looping ~forever and blows up other data. + * Use memset instead. + */ + memset(kasan_early_shadow_page, 0, PAGE_SIZE); + + /* Enable error messages */ + init_task.kasan_depth = 0; + pr_info("KASAN init done (64-bit Book3S heavyweight mode)\n"); +} + +void __init kasan_late_init(void) { } diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c index de6e05ef871c..60b8f61491b3 100644 --- a/arch/powerpc/mm/ptdump/ptdump.c +++ b/arch/powerpc/mm/ptdump/ptdump.c @@ -74,6 +74,10 @@ struct addr_marker { static struct addr_marker address_markers[] = { { 0, "Start of kernel VM" }, +#if defined(CONFIG_PPC64) && defined(CONFIG_KASAN) + { 0, "kasan shadow mem start" }, + { 0, "kasan shadow mem end" }, +#endif { 0, "vmalloc() Area" }, { 0, "vmalloc() End" }, #ifdef CONFIG_PPC64 @@ -93,10 +97,10 @@ static struct addr_marker address_markers[] = { #endif { 0, "Fixmap start" }, { 0, "Fixmap end" }, -#endif #ifdef CONFIG_KASAN { 0, "kasan shadow mem start" }, { 0, "kasan shadow mem end" }, +#endif #endif { -1, NULL }, }; @@ -349,6 +353,10 @@ static void populate_markers(void) int i = 0; address_markers[i++].start_address = PAGE_OFFSET; +#if defined(CONFIG_PPC64) && defined(CONFIG_KASAN) + address_markers[i++].start_address = KASAN_SHADOW_START; + address_markers[i++].start_address = KASAN_SHADOW_END; +#endif address_markers[i++].start_address = VMALLOC_START; address_markers[i++].start_address = VMALLOC_END; #ifdef CONFIG_PPC64 diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype index 87737ec86d39..b430e4314b56 100644 --- a/arch/powerpc/platforms/Kconfig.cputype +++ b/arch/powerpc/platforms/Kconfig.cputype @@ -99,6 +99,7 @@ config PPC_BOOK3S_64 select ARCH_SUPPORTS_NUMA_BALANCING select IRQ_WORK select PPC_MM_SLICES + select KASAN_VMALLOC if KASAN config PPC_BOOK3E_64 bool "Embedded processors"
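Finally, as a worked example of the shadow-offset arithmetic in the arch/powerpc/Makefile hunk and the reservation done in early_reserve_mem() above, here is a small standalone sketch evaluated for the default CONFIG_PHYS_MEM_SIZE_FOR_KASAN of 1024 MB; it only repeats the series' arithmetic in C and is illustrative, not kernel code:

    #include <stdio.h>

    int main(void)
    {
            unsigned long long size_mb = 1024;        /* CONFIG_PHYS_MEM_SIZE_FOR_KASAN */
            unsigned long long size = size_mb << 20;  /* bytes */

            /* Same formula as the Makefile:
             * 7 * 1024 * 1024 * SIZE / 8 + 0xa800000000000000 */
            unsigned long long offset = 7 * size / 8 + 0xa800000000000000ULL;

            /* Generic KASAN: shadow(addr) = (addr >> 3) + offset */
            unsigned long long shadow_start = (0xc000000000000000ULL >> 3) + offset;

            printf("KASAN_SHADOW_OFFSET = 0x%llx\n", offset);         /* 0xa800000038000000 */
            printf("shadow of linear map starts at 0x%llx\n", shadow_start); /* 0xc000000038000000 */

            /*
             * 0x38000000 is 896 MB, i.e. 7/8 of the configured 1024 MB, so the
             * shadow occupies the last eighth of the configured memory - the
             * same region that early_reserve_mem() reserves via memblock_reserve().
             */
            return 0;
    }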