From patchwork Tue Oct 29 04:20:54 2019
X-Patchwork-Submitter: Daniel Axtens <dja@axtens.net>
X-Patchwork-Id: 11216949
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
    aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
    linux-kernel@vger.kernel.org, mark.rutland@arm.com, dvyukov@google.com,
    christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com,
    Daniel Axtens <dja@axtens.net>
Subject: [PATCH v10 0/5] kasan: support backing vmalloc space with real shadow memory
Date: Tue, 29 Oct 2019 15:20:54 +1100
Message-Id: <20191029042059.28541-1-dja@axtens.net>
X-Mailer: git-send-email 2.20.1

Currently, vmalloc space is backed by the early shadow page. This means
that kasan is incompatible with VMAP_STACK.

This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's very
easy to wire up other architectures, and it appears that there is some
work-in-progress code to do this on arm64 and s390.

This has been discussed before in the context of VMAP_STACK:

 - https://bugzilla.kernel.org/show_bug.cgi?id=202009
 - https://lkml.org/lkml/2018/7/22/198
 - https://lkml.org/lkml/2019/7/19/822

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
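To make the shadow arithmetic concrete (illustration only; nothing here
is changed by this series): generic KASAN maps every
KASAN_SHADOW_SCALE_SIZE (8) bytes of memory to one shadow byte through
the existing kasan_mem_to_shadow() helper in include/linux/kasan.h, so
with 4 KiB pages a single shadow page covers 32 KiB of vmalloc address
space and a one-page vmalloc allocation needs only 512 bytes of shadow.

/*
 * Existing generic-KASAN translation (include/linux/kasan.h), shown
 * here only for illustration.  One shadow byte describes
 * KASAN_SHADOW_SCALE_SIZE (8) bytes of memory, so giving every mapping
 * its own shadow page would mean aligning mappings to 8 * PAGE_SIZE;
 * sharing shadow pages between mappings avoids both the waste and the
 * alignment requirement.
 */
static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}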
Instead, share backing space across multiple mappings. Allocate a
backing page when a mapping in vmalloc space uses a particular page of
the shadow region. This page can be shared by other vmalloc mappings
later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow
memory. A rough sketch of the allocation direction appears after the
diffstat below.

Daniel Axtens (5):
  kasan: support backing vmalloc space with real shadow memory
  kasan: add test for vmalloc
  fork: support VMAP_STACK with KASAN_VMALLOC
  x86/kasan: support KASAN_VMALLOC
  kasan debug: track pages allocated for vmalloc shadow

 Documentation/dev-tools/kasan.rst |  63 ++++++++
 arch/Kconfig                      |   9 +-
 arch/x86/Kconfig                  |   1 +
 arch/x86/mm/kasan_init_64.c       |  60 +++++++
 include/linux/kasan.h             |  31 ++++
 include/linux/moduleloader.h      |   2 +-
 include/linux/vmalloc.h           |  12 ++
 kernel/fork.c                     |   4 +
 lib/Kconfig.kasan                 |  16 ++
 lib/test_kasan.c                  |  26 +++
 mm/kasan/common.c                 | 254 ++++++++++++++++++++++++++++++
 mm/kasan/generic_report.c         |   3 +
 mm/kasan/kasan.h                  |   1 +
 mm/vmalloc.c                      |  53 ++++++-
 14 files changed, 522 insertions(+), 13 deletions(-)
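For readers who want a feel for the allocation direction, here is a
rough, self-contained sketch. The function names, locking and error
handling below are illustrative only (the real interface is defined in
patch 1 of this series): it walks the shadow range covering a new
vmalloc area and installs a zeroed backing page wherever the shadow is
not yet populated.

#include <linux/kasan.h>
#include <linux/mm.h>
#include <asm/pgtable.h>

/* Illustrative sketch only; not the interface added by this series. */
static int vmalloc_shadow_pte_populate(pte_t *ptep, unsigned long addr,
				       void *unused)
{
	struct page *page;

	if (!pte_none(*ptep))
		return 0;	/* this shadow page is already backed */

	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!page)
		return -ENOMEM;

	/* The real code also takes init_mm.page_table_lock and re-checks. */
	set_pte_at(&init_mm, addr, ptep, mk_pte(page, PAGE_KERNEL));
	return 0;
}

static int vmalloc_shadow_populate(unsigned long addr, unsigned long size)
{
	unsigned long start, end;

	/* Shadow bytes covering [addr, addr + size), rounded to whole pages. */
	start = (unsigned long)kasan_mem_to_shadow((void *)addr);
	end   = (unsigned long)kasan_mem_to_shadow((void *)(addr + size));
	start = round_down(start, PAGE_SIZE);
	end   = round_up(end, PAGE_SIZE);

	/* Back any shadow PTEs in the range that are still unpopulated. */
	return apply_to_page_range(&init_mm, start, end - start,
				   vmalloc_shadow_pte_populate, NULL);
}

The release direction is the mirror image: when the vmap layer purges
lazily freed areas, shadow pages that no longer describe any live
vmalloc mapping can be freed. That, too, lives in patch 1, and patch 4
adds the x86 wiring.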