From patchwork Tue Mar 29 12:39:30 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794751
Date: Tue, 29 Mar 2022 14:39:30 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-2-glider@google.com>
Subject: [PATCH v2 01/48] x86: add missing include to sparsemem.h
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
    Arnd Bergmann, Borislav Petkov, Christoph Hellwig, Christoph Lameter,
    David Rientjes, Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman,
    Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe, Joonsoo Kim,
    Kees Cook, Marco Elver, Mark Rutland, Matthew Wilcox,
    "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
    Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
    Vlastimil Babka, linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org

From: Dmitry Vyukov

sparsemem.h:34:32: error: unknown type name 'phys_addr_t'
 extern int phys_to_target_node(phys_addr_t start);
                                ^
sparsemem.h:36:39: error: unknown type name 'u64'
 extern int memory_add_physaddr_to_nid(u64 start);
                                       ^

Signed-off-by: Dmitry Vyukov
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ifae221ce85d870d8f8d17173bd44d5cf9be2950f
---
 arch/x86/include/asm/sparsemem.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 6a9ccc1b2be5d..64df897c0ee30 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_SPARSEMEM_H
 #define _ASM_X86_SPARSEMEM_H
 
+#include <linux/types.h>
+
 #ifdef CONFIG_SPARSEMEM
 /*
  * generic non-linear memory support:
From patchwork Tue Mar 29 12:39:31 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794752
Date: Tue, 29 Mar 2022 14:39:31 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-3-glider@google.com>
Subject: [PATCH v2 02/48] stackdepot: reserve 5 extra bits in depot_stack_handle_t
From: Alexander Potapenko <glider@google.com>
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: zigtt88gbop7zqtyp5yuiq86q7boua5w Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=XcHMPrIu; spf=pass (imf10.hostedemail.com: domain of 3Qf5CYgYKCGIGLIDERGOOGLE.COMLINUX-MMKVACK.ORG@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3Qf5CYgYKCGIGLIDERGOOGLE.COMLINUX-MMKVACK.ORG@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 87166C0005 X-HE-Tag: 1648557634-394313 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Some users (currently only KMSAN) may want to use spare bits in depot_stack_handle_t. Let them do so by adding @extra_bits to __stack_depot_save() to store arbitrary flags, and providing stack_depot_get_extra_bits() to retrieve those flags. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I0587f6c777667864768daf07821d594bce6d8ff9 --- include/linux/stackdepot.h | 8 ++++++++ lib/stackdepot.c | 29 ++++++++++++++++++++++++----- 2 files changed, 32 insertions(+), 5 deletions(-) diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h index 17f992fe6355b..fd641d266bead 100644 --- a/include/linux/stackdepot.h +++ b/include/linux/stackdepot.h @@ -14,9 +14,15 @@ #include typedef u32 depot_stack_handle_t; +/* + * Number of bits in the handle that stack depot doesn't use. Users may store + * information in them. + */ +#define STACK_DEPOT_EXTRA_BITS 5 depot_stack_handle_t __stack_depot_save(unsigned long *entries, unsigned int nr_entries, + unsigned int extra_bits, gfp_t gfp_flags, bool can_alloc); /* @@ -41,6 +47,8 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries, unsigned int stack_depot_fetch(depot_stack_handle_t handle, unsigned long **entries); +unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle); + int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size, int spaces); diff --git a/lib/stackdepot.c b/lib/stackdepot.c index bf5ba9af05009..6dc11a3b7b88e 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -42,7 +42,8 @@ #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \ STACK_ALLOC_ALIGN) #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \ - STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS) + STACK_ALLOC_NULL_PROTECTION_BITS - \ + STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS) #define STACK_ALLOC_SLABS_CAP 8192 #define STACK_ALLOC_MAX_SLABS \ (((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? 
@@ -55,6 +56,7 @@ union handle_parts {
                u32 slabindex : STACK_ALLOC_INDEX_BITS;
                u32 offset : STACK_ALLOC_OFFSET_BITS;
                u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
+               u32 extra : STACK_DEPOT_EXTRA_BITS;
        };
 };
 
@@ -73,6 +75,14 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
+{
+       union handle_parts parts = { .handle = handle };
+
+       return parts.extra;
+}
+EXPORT_SYMBOL(stack_depot_get_extra_bits);
+
 static bool init_stack_slab(void **prealloc)
 {
        if (!*prealloc)
@@ -136,6 +146,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
        stack->handle.slabindex = depot_index;
        stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
        stack->handle.valid = 1;
+       stack->handle.extra = 0;
        memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
        depot_offset += required_size;
 
@@ -320,6 +331,7 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 *
 * @entries:            Pointer to storage array
 * @nr_entries:         Size of the storage array
+* @extra_bits:         Flags to store in unused bits of depot_stack_handle_t
 * @alloc_flags:        Allocation gfp flags
 * @can_alloc:          Allocate stack slabs (increased chance of failure if false)
 *
@@ -331,6 +343,10 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 * If the stack trace in @entries is from an interrupt, only the portion up to
 * interrupt entry is saved.
 *
+* Additional opaque flags can be passed in @extra_bits, stored in the unused
+* bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
+* without calling stack_depot_fetch().
+*
 * Context: Any context, but setting @can_alloc to %false is required if
 *          alloc_pages() cannot be used from the current context. Currently
 *          this is the case from contexts where neither %GFP_ATOMIC nor
@@ -340,10 +356,11 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
                                         unsigned int nr_entries,
+                                        unsigned int extra_bits,
                                         gfp_t alloc_flags, bool can_alloc)
 {
        struct stack_record *found = NULL, **bucket;
-       depot_stack_handle_t retval = 0;
+       union handle_parts retval = { .handle = 0 };
        struct page *page = NULL;
        void *prealloc = NULL;
        unsigned long flags;
@@ -427,9 +444,11 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
                free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
        }
        if (found)
-               retval = found->handle.handle;
+               retval.handle = found->handle.handle;
 fast_exit:
-       return retval;
+       retval.extra = extra_bits;
+
+       return retval.handle;
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);
 
@@ -449,6 +468,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
                                       unsigned int nr_entries,
                                       gfp_t alloc_flags)
 {
-       return __stack_depot_save(entries, nr_entries, alloc_flags, true);
+       return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
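As an illustrative aside (not part of the patch): a sketch of how a caller
might use the new argument, assuming it wants to stash a small per-stack
"tag" in the spare handle bits. The tag encoding and the helper names
save_tagged_stack()/tag_from_handle() are invented for this example; only
__stack_depot_save(), stack_depot_get_extra_bits() and
STACK_DEPOT_EXTRA_BITS come from this patch:

  #include <linux/stackdepot.h>
  #include <linux/stacktrace.h>

  static depot_stack_handle_t save_tagged_stack(unsigned int tag, gfp_t flags)
  {
          unsigned long entries[16];
          unsigned int nr_entries;

          nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
          /* Pack the tag into the otherwise unused bits of the handle. */
          return __stack_depot_save(entries, nr_entries,
                                    tag & ((1u << STACK_DEPOT_EXTRA_BITS) - 1),
                                    flags, true);
  }

  static unsigned int tag_from_handle(depot_stack_handle_t handle)
  {
          /* The tag comes back without fetching the stack trace itself. */
          return stack_depot_get_extra_bits(handle);
  }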
From patchwork Tue Mar 29 12:39:32 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794753
Date: Tue, 29 Mar 2022 14:39:32 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-4-glider@google.com>
Subject: [PATCH v2 03/48] kasan: common: adapt to the new prototype of __stack_depot_save()
From: Alexander Potapenko <glider@google.com>
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="cHcpM//q"; spf=pass (imf24.hostedemail.com: domain of 3Q_5CYgYKCGQINKFGTIQQING.EQONKPWZ-OOMXCEM.QTI@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3Q_5CYgYKCGQINKFGTIQQING.EQONKPWZ-OOMXCEM.QTI@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 4911A180002 X-Stat-Signature: pkkhc8fwbf64aag8qrpy4cbpbhk5zd4d X-HE-Tag: 1648557637-301797 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Pass extra_bits=0, as KASAN does not intend to store additional information in the stack handle. No functional change. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I932d8f4f11a41b7483e0d57078744cc94697607a --- mm/kasan/common.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 92196562687b6..1182388ed3e0e 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -36,7 +36,7 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc) unsigned int nr_entries; nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0); - return __stack_depot_save(entries, nr_entries, flags, can_alloc); + return __stack_depot_save(entries, nr_entries, 0, flags, can_alloc); } void kasan_set_track(struct kasan_track *track, gfp_t flags) From patchwork Tue Mar 29 12:39:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794754 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7480BC433EF for ; Tue, 29 Mar 2022 12:40:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id F2E1D8D000A; Tue, 29 Mar 2022 08:40:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EDE148D0009; Tue, 29 Mar 2022 08:40:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D576A8D000A; Tue, 29 Mar 2022 08:40:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id C74798D0009 for ; Tue, 29 Mar 2022 08:40:40 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id A167961602 for ; Tue, 29 Mar 2022 12:40:40 +0000 (UTC) X-FDA: 79297382640.14.E2C75F8 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf28.hostedemail.com (Postfix) with ESMTP id 22171C0014 for ; Tue, 29 Mar 2022 12:40:39 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id gv17-20020a1709072bd100b006dfcc7f7962so8125024ejc.5 for ; Tue, 29 Mar 2022 05:40:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; 
From patchwork Tue Mar 29 12:39:33 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794754
Date: Tue, 29 Mar 2022 14:39:33 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-5-glider@google.com>
Subject: [PATCH v2 04/48] instrumented.h: allow instrumenting both sides of copy_from_user()
From: Alexander Potapenko <glider@google.com>

Introduce instrument_copy_from_user_before() and
instrument_copy_from_user_after() hooks to be invoked before and after
the call to copy_from_user(). KASAN and KCSAN will only use
instrument_copy_from_user_before(), but KMSAN also needs to insert code
after copy_from_user().
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I855034578f0b0f126734cbd734fb4ae1d3a6af99
---
 include/linux/instrumented.h | 21 +++++++++++++++++++--
 include/linux/uaccess.h      | 19 ++++++++++++++-----
 lib/iov_iter.c               |  9 ++++++---
 lib/usercopy.c               |  3 ++-
 4 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 42faebbaa202a..ee8f7d17d34f5 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -120,7 +120,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /**
- * instrument_copy_from_user - instrument writes of copy_from_user
+ * instrument_copy_from_user_before - add instrumentation before copy_from_user
 *
 * Instrument writes to kernel memory, that are due to copy_from_user (and
 * variants). The instrumentation should be inserted before the accesses.
@@ -130,10 +130,27 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 * @n number of bytes to copy
 */
 static __always_inline void
-instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+instrument_copy_from_user_before(const void *to, const void __user *from, unsigned long n)
 {
        kasan_check_write(to, n);
        kcsan_check_write(to, n);
 }
 
+/**
+ * instrument_copy_from_user_after - add instrumentation after copy_from_user
+ *
+ * Instrument writes to kernel memory, that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted after the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ * @left number of bytes not copied (as returned by copy_from_user)
+ */
+static __always_inline void
+instrument_copy_from_user_after(const void *to, const void __user *from,
+                                unsigned long n, unsigned long left)
+{
+}
+
 #endif /* _LINUX_INSTRUMENTED_H */
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d4..8dadd8642afbb 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -98,20 +98,28 @@ static inline void force_uaccess_end(mm_segment_t oldfs)
 static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
-       instrument_copy_from_user(to, from, n);
+       unsigned long res;
+
+       instrument_copy_from_user_before(to, from, n);
        check_object_size(to, n, false);
-       return raw_copy_from_user(to, from, n);
+       res = raw_copy_from_user(to, from, n);
+       instrument_copy_from_user_after(to, from, n, res);
+       return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+       unsigned long res;
+
        might_fault();
+       instrument_copy_from_user_before(to, from, n);
        if (should_fail_usercopy())
                return n;
-       instrument_copy_from_user(to, from, n);
        check_object_size(to, n, false);
-       return raw_copy_from_user(to, from, n);
+       res = raw_copy_from_user(to, from, n);
+       instrument_copy_from_user_after(to, from, n, res);
+       return res;
 }
 
 /**
@@ -155,8 +163,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
        unsigned long res = n;
        might_fault();
        if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-               instrument_copy_from_user(to, from, n);
+               instrument_copy_from_user_before(to, from, n);
                res = raw_copy_from_user(to, from, n);
+               instrument_copy_from_user_after(to, from, n, res);
        }
        if (unlikely(res))
                memset(to + (n - res), 0, res);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 6dd5330f7a995..fb19401c29c4f 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -159,13 +159,16 @@ static int copyout(void __user *to, const void *from, size_t n)
 
 static int copyin(void *to, const void __user *from, size_t n)
 {
+       size_t res = n;
+
        if (should_fail_usercopy())
                return n;
        if (access_ok(from, n)) {
-               instrument_copy_from_user(to, from, n);
-               n = raw_copy_from_user(to, from, n);
+               instrument_copy_from_user_before(to, from, n);
+               res = raw_copy_from_user(to, from, n);
+               instrument_copy_from_user_after(to, from, n, res);
        }
-       return n;
+       return res;
 }
 
 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516e..1505a52f23a01 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,8 +12,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
        unsigned long res = n;
        might_fault();
        if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-               instrument_copy_from_user(to, from, n);
+               instrument_copy_from_user_before(to, from, n);
                res = raw_copy_from_user(to, from, n);
+               instrument_copy_from_user_after(to, from, n, res);
        }
        if (unlikely(res))
                memset(to + (n - res), 0, res);
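As an aside, the intent of the new @left argument can be illustrated with a
sketch of how a tool might implement the hook. This is not the KMSAN
implementation added later in this series; kmsan_unpoison_memory() is used
here only as a stand-in for whatever primitive a tool provides to mark bytes
as initialized:

  /* Sketch only: mark as initialized exactly the bytes that were copied. */
  static __always_inline void
  instrument_copy_from_user_after(const void *to, const void __user *from,
                                  unsigned long n, unsigned long left)
  {
          /* raw_copy_from_user() wrote only the first (n - left) bytes. */
          if (left <= n)
                  kmsan_unpoison_memory(to, n - left);
  }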
From patchwork Tue Mar 29 12:39:34 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794755
Date: Tue, 29 Mar 2022 14:39:34 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-6-glider@google.com>
Subject: [PATCH v2 05/48] x86: asm: instrument usercopy in get_user() and __put_user_size()
From: Alexander Potapenko <glider@google.com>

Use hooks from instrumented.h to notify bug detection tools about
usercopy events in get_user() and __put_user_size().

It is still unclear how to instrument put_user(), which assumes that
instrumentation code does not clobber RAX.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ia9f12bfe5832623250e20f1859fdf5cc485a2fce
---
 arch/x86/include/asm/uaccess.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index ac96f9b2d64b3..e6abe6f27ae99 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -5,6 +5,7 @@
  * User space memory access functions
  */
 #include <linux/compiler.h>
+#include <linux/instrumented.h>
 #include <linux/kasan-check.h>
 #include <linux/string.h>
 #include <asm/asm.h>
@@ -126,11 +127,13 @@ extern int __get_user_bad(void);
        int __ret_gu;                                                   \
        register __inttype(*(ptr)) __val_gu asm("%"_ASM_DX);           \
        __chk_user_ptr(ptr);                                            \
+       instrument_copy_from_user_before((void *)&(x), ptr, sizeof(*(ptr))); \
        asm volatile("call __" #fn "_%P4"                               \
                     : "=a" (__ret_gu), "=r" (__val_gu),                \
                       ASM_CALL_CONSTRAINT                              \
                     : "0" (ptr), "i" (sizeof(*(ptr))));                \
        (x) = (__force __typeof__(*(ptr))) __val_gu;                    \
+       instrument_copy_from_user_after((void *)&(x), ptr, sizeof(*(ptr)), 0); \
        __builtin_expect(__ret_gu, 0);                                  \
 })
 
@@ -275,7 +278,9 @@ extern void __put_user_nocheck_8(void);
 
 #define __put_user_size(x, ptr, size, label)                           \
 do {                                                                   \
+       __typeof__(*(ptr)) __pus_val = x;                               \
        __chk_user_ptr(ptr);                                            \
+       instrument_copy_to_user(ptr, &(__pus_val), size);               \
        switch (size) {                                                 \
        case 1:                                                         \
                __put_user_goto(x, ptr, "b", "iq", label);              \
@@ -313,6 +318,7 @@ do {                                                                   \
 #define __get_user_size(x, ptr, size, label)                           \
 do {                                                                   \
        __chk_user_ptr(ptr);                                            \
+       instrument_copy_from_user_before((void *)&(x), ptr, size);      \
        switch (size) {                                                 \
        case 1: {                                                       \
                unsigned char x_u8__;                                   \
@@ -332,6 +338,7 @@ do {                                                                   \
        default:                                                        \
                (x) = __get_user_bad();                                 \
        }                                                               \
+       instrument_copy_from_user_after((void *)&(x), ptr, size, 0);    \
 } while (0)
 
 #define __get_user_asm(x, addr, itype, ltype, label)                   \
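Nothing changes for callers; the hooks fire inside the macros themselves.
As a hedged illustration (the function below is invented for this example,
not taken from the kernel), a plain get_user() user now has its destination
reported to the instrumentation both before and after the fetch:

  #include <linux/uaccess.h>

  static int read_user_flags(const int __user *uptr, int *flags)
  {
          int val;

          /*
           * get_user() now calls instrument_copy_from_user_before() on &val,
           * performs the fetch, then calls instrument_copy_from_user_after(),
           * so tools such as KMSAN treat val as initialized on success.
           */
          if (get_user(val, uptr))
                  return -EFAULT;

          *flags = val;
          return 0;
  }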
From patchwork Tue Mar 29 12:39:35 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794756
Date: Tue, 29 Mar 2022 14:39:35 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-7-glider@google.com>
Subject: [PATCH v2 06/48] asm-generic: instrument usercopy in cacheflush.h
From: Alexander Potapenko <glider@google.com>

Notify memory tools about usercopy events in copy_to_user_page() and
copy_from_user_page().
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ic1ee8da1886325f46ad67f52176f48c2c836c48f
---
 include/asm-generic/cacheflush.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 4f07afacbc239..0f63eb325025f 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_GENERIC_CACHEFLUSH_H
 #define _ASM_GENERIC_CACHEFLUSH_H
 
+#include <linux/instrumented.h>
+
 struct mm_struct;
 struct vm_area_struct;
 struct page;
@@ -105,6 +107,7 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 #ifndef copy_to_user_page
 #define copy_to_user_page(vma, page, vaddr, dst, src, len)     \
        do {                                                    \
+               instrument_copy_to_user(dst, src, len);         \
                memcpy(dst, src, len);                          \
                flush_icache_user_page(vma, page, vaddr, len);  \
        } while (0)
@@ -112,7 +115,11 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 
 #ifndef copy_from_user_page
 #define copy_from_user_page(vma, page, vaddr, dst, src, len)           \
-       memcpy(dst, src, len)
+       do {                                                            \
+               instrument_copy_from_user_before(dst, src, len);        \
+               memcpy(dst, src, len);                                  \
+               instrument_copy_from_user_after(dst, src, len, 0);      \
+       } while (0)
 #endif
 
 #endif /* _ASM_GENERIC_CACHEFLUSH_H */
From patchwork Tue Mar 29 12:39:36 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12794757
Date: Tue, 29 Mar 2022 14:39:36 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-8-glider@google.com>
Subject: [PATCH v2 07/48] kmsan: add ReST documentation
From: Alexander Potapenko <glider@google.com>

Add Documentation/dev-tools/kmsan.rst and reference it in the dev-tools
index.
Signed-off-by: Alexander Potapenko
---
v2:
 -- added a note that KMSAN is not intended for production use

Link: https://linux-review.googlesource.com/id/I751586f79418b95550a83c6035c650b5b01567cc
---
 Documentation/dev-tools/index.rst |   1 +
 Documentation/dev-tools/kmsan.rst | 414 ++++++++++++++++++++++++++++++
 2 files changed, 415 insertions(+)
 create mode 100644 Documentation/dev-tools/kmsan.rst

diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 4621eac290f46..6b0663075dc04 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -24,6 +24,7 @@ Documentation/dev-tools/testing-overview.rst
    kcov
    gcov
    kasan
+   kmsan
    ubsan
    kmemleak
    kcsan
diff --git a/Documentation/dev-tools/kmsan.rst b/Documentation/dev-tools/kmsan.rst
new file mode 100644
index 0000000000000..e116889da79d5
--- /dev/null
+++ b/Documentation/dev-tools/kmsan.rst
@@ -0,0 +1,414 @@
+=============================
+KernelMemorySanitizer (KMSAN)
+=============================
+
+KMSAN is a dynamic error detector aimed at finding uses of uninitialized
+values. It is based on compiler instrumentation, and is quite similar to the
+userspace `MemorySanitizer tool`_.
+
+An important note is that KMSAN is not intended for production use, because it
+drastically increases kernel memory footprint and slows the whole system down.
+
+Example report
+==============
+
+Here is an example of a KMSAN report::
+
+ =====================================================
+ BUG: KMSAN: uninit-value in test_uninit_kmsan_check_memory+0x1be/0x380 [kmsan_test]
+  test_uninit_kmsan_check_memory+0x1be/0x380 mm/kmsan/kmsan_test.c:273
+  kunit_run_case_internal lib/kunit/test.c:333
+  kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374
+  kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28
+  kthread+0x721/0x850 kernel/kthread.c:327
+  ret_from_fork+0x1f/0x30 ??:?
+
+ Uninit was stored to memory at:
+  do_uninit_local_array+0xfa/0x110 mm/kmsan/kmsan_test.c:260
+  test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271
+  kunit_run_case_internal lib/kunit/test.c:333
+  kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374
+  kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28
+  kthread+0x721/0x850 kernel/kthread.c:327
+  ret_from_fork+0x1f/0x30 ??:?
+
+ Local variable uninit created at:
+  do_uninit_local_array+0x4a/0x110 mm/kmsan/kmsan_test.c:256
+  test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271
+
+ Bytes 4-7 of 8 are uninitialized
+ Memory access of size 8 starts at ffff888083fe3da0
+
+ CPU: 0 PID: 6731 Comm: kunit_try_catch Tainted: G B E 5.16.0-rc3+ #104
+ Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
+ =====================================================
+
+
+The report says that the local variable ``uninit`` was created uninitialized in
+``do_uninit_local_array()``. The lower stack trace corresponds to the place
+where this variable was created.
+
+The upper stack shows where the uninit value was used - in
+``test_uninit_kmsan_check_memory()``. The tool shows the bytes which were left
+uninitialized in the local variable, as well as the stack where the value was
+copied to another memory location before use.
+
+Please note that KMSAN only reports an error when an uninitialized value is
+actually used (e.g. in a condition or pointer dereference). A lot of
+uninitialized values in the kernel are never used, and reporting them would
+result in too many false positives.
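The shape of code that produces such a report can be sketched as follows.
This aside is modeled on the test referenced in the report, but the snippet
itself is illustrative only and assumes the kmsan_check_memory() helper
declared in linux/kmsan-checks.h later in this series:

  #include <linux/string.h>
  #include <linux/kmsan-checks.h>

  static noinline void do_uninit_local_array(char *to, size_t init, size_t uninit)
  {
          char local[8];  /* poisoned by KMSAN on function entry */

          memset(local, 0, init);           /* bytes [0, init) become initialized */
          memcpy(to, local, init + uninit); /* copies the poisoned bytes too */
  }

  static void example(void)
  {
          char dst[8];

          do_uninit_local_array(dst, 4, 4);
          /* Forces a check: bytes 4-7 of dst still carry uninitialized shadow. */
          kmsan_check_memory(dst, 8);
  }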
+
+KMSAN and Clang
+===============
+
+In order for KMSAN to work the kernel must be built with Clang, which so far is
+the only compiler that has KMSAN support. The kernel instrumentation pass is
+based on the userspace `MemorySanitizer tool`_.
+
+How to build
+============
+
+In order to build a kernel with KMSAN you will need a fresh Clang (14.0.0+).
+Please refer to `LLVM documentation`_ for the instructions on how to build Clang.
+
+Now configure and build the kernel with CONFIG_KMSAN enabled.
+
+How KMSAN works
+===============
+
+KMSAN shadow memory
+-------------------
+
+KMSAN associates a metadata byte (also called shadow byte) with every byte of
+kernel memory. A bit in the shadow byte is set iff the corresponding bit of the
+kernel memory byte is uninitialized. Marking the memory uninitialized (i.e.
+setting its shadow bytes to ``0xff``) is called poisoning, marking it
+initialized (setting the shadow bytes to ``0x00``) is called unpoisoning.
+
+When a new variable is allocated on the stack, it is poisoned by default by
+instrumentation code inserted by the compiler (unless it is a stack variable
+that is immediately initialized). Any new heap allocation done without
+``__GFP_ZERO`` is also poisoned.
+
+Compiler instrumentation also tracks the shadow values with the help from the
+runtime library in ``mm/kmsan/``.
+
+The shadow value of a basic or compound type is an array of bytes of the same
+length. When a constant value is written into memory, that memory is unpoisoned.
+When a value is read from memory, its shadow memory is also obtained and
+propagated into all the operations which use that value. For every instruction
+that takes one or more values the compiler generates code that calculates the
+shadow of the result depending on those values and their shadows.
+
+Example::
+
+ int a = 0xff; // i.e. 0x000000ff
+ int b;
+ int c = a | b;
+
+In this case the shadow of ``a`` is ``0``, shadow of ``b`` is ``0xffffffff``,
+shadow of ``c`` is ``0xffffff00``. This means that the upper three bytes of
+``c`` are uninitialized, while the lower byte is initialized.
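One way to see why the shadow of ``c`` comes out as ``0xffffff00`` is to write
a propagation rule down explicitly. The user-space sketch below is only an
approximation consistent with the example above, not the compiler's actual
per-instruction propagation code:

  #include <stdint.h>
  #include <stdio.h>

  /* A set bit in the shadow means "this bit is uninitialized". */
  static uint32_t shadow_or(uint32_t a, uint32_t sa, uint32_t b, uint32_t sb)
  {
          /*
           * A result bit is initialized if some operand has that bit both
           * initialized and equal to 1 (it forces the result to 1), or if
           * both operands have it initialized.
           */
          uint32_t known_one = (~sa & a) | (~sb & b);

          return (sa | sb) & ~known_one;
  }

  int main(void)
  {
          uint32_t a = 0xff, sa = 0x00000000; /* int a = 0xff; fully initialized */
          uint32_t b = 0, sb = 0xffffffff;    /* int b; fully uninitialized */

          /* Prints 0xffffff00: the upper three bytes of c stay uninitialized. */
          printf("shadow(c) = 0x%08x\n", shadow_or(a, sa, b, sb));
          return 0;
  }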
+
+Origin tracking
+---------------
+
+Every four bytes of kernel memory also have a so-called origin assigned to
+them. This origin describes the point in program execution at which the
+uninitialized value was created. Every origin is associated with either the
+full allocation stack (for heap-allocated memory), or the function containing
+the uninitialized variable (for locals).
+
+When an uninitialized variable is allocated on stack or heap, a new origin
+value is created, and that variable's origin is filled with that value.
+When a value is read from memory, its origin is also read and kept together
+with the shadow. For every instruction that takes one or more values the origin
+of the result is one of the origins corresponding to any of the uninitialized
+inputs. If a poisoned value is written into memory, its origin is written to the
+corresponding storage as well.
+
+Example 1::
+
+ int a = 42;
+ int b;
+ int c = a + b;
+
+In this case the origin of ``b`` is generated upon function entry, and is
+stored to the origin of ``c`` right before the addition result is written into
+memory.
+
+Several variables may share the same origin address, if they are stored in the
+same four-byte chunk. In this case every write to either variable updates the
+origin for all of them. We have to sacrifice precision in this case, because
+storing origins for individual bits (and even bytes) would be too costly.
+
+Example 2::
+
+ int combine(short a, short b) {
+   union ret_t {
+     int i;
+     short s[2];
+   } ret;
+   ret.s[0] = a;
+   ret.s[1] = b;
+   return ret.i;
+ }
+
+If ``a`` is initialized and ``b`` is not, the shadow of the result would be
+0xffff0000, and the origin of the result would be the origin of ``b``.
+``ret.s[0]`` would have the same origin, but it will be never used, because
+that variable is initialized.
+
+If both function arguments are uninitialized, only the origin of the second
+argument is preserved.
+
+Origin chaining
+~~~~~~~~~~~~~~~
+
+To ease debugging, KMSAN creates a new origin for every store of an
+uninitialized value to memory. The new origin references both its creation stack
+and the previous origin the value had. This may cause increased memory
+consumption, so we limit the length of origin chains in the runtime.
+
+Clang instrumentation API
+-------------------------
+
+Clang instrumentation pass inserts calls to functions defined in
+``mm/kmsan/instrumentation.c`` into the kernel code.
+
+Shadow manipulation
+~~~~~~~~~~~~~~~~~~~
+
+For every memory access the compiler emits a call to a function that returns a
+pair of pointers to the shadow and origin addresses of the given memory::
+
+ typedef struct {
+   void *shadow, *origin;
+ } shadow_origin_ptr_t
+
+ shadow_origin_ptr_t __msan_metadata_ptr_for_load_{1,2,4,8}(void *addr)
+ shadow_origin_ptr_t __msan_metadata_ptr_for_store_{1,2,4,8}(void *addr)
+ shadow_origin_ptr_t __msan_metadata_ptr_for_load_n(void *addr, uintptr_t size)
+ shadow_origin_ptr_t __msan_metadata_ptr_for_store_n(void *addr, uintptr_t size)
+
+The function name depends on the memory access size.
+
+The compiler makes sure that for every loaded value its shadow and origin
+values are read from memory. When a value is stored to memory, its shadow and
+origin are also stored using the metadata pointers.
+
+Origin tracking
+~~~~~~~~~~~~~~~
+
+A special function is used to create a new origin value for a local variable and
+set the origin of that variable to that value::
+
+ void __msan_poison_alloca(void *addr, uintptr_t size, char *descr)
+
+Access to per-task data
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At the beginning of every instrumented function KMSAN inserts a call to
+``__msan_get_context_state()``::
+
+ kmsan_context_state *__msan_get_context_state(void)
+
+``kmsan_context_state`` is declared in ``include/linux/kmsan.h``::
+
+ struct kmsan_context_state {
+   char param_tls[KMSAN_PARAM_SIZE];
+   char retval_tls[KMSAN_RETVAL_SIZE];
+   char va_arg_tls[KMSAN_PARAM_SIZE];
+   char va_arg_origin_tls[KMSAN_PARAM_SIZE];
+   u64 va_arg_overflow_size_tls;
+   char param_origin_tls[KMSAN_PARAM_SIZE];
+   depot_stack_handle_t retval_origin_tls;
+ };
+
+This structure is used by KMSAN to pass parameter shadows and origins between
+instrumented functions.
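Putting the pieces above together, the code the compiler conceptually emits for
a plain 4-byte copy looks roughly like the sketch below. This is a hand-written
approximation rather than real compiler output; the declarations simply mirror
the API described in this section, and the real instrumentation pass operates
on LLVM IR, not C:

  typedef struct {
          void *shadow, *origin;
  } shadow_origin_ptr_t;

  /* Provided by mm/kmsan/instrumentation.c in the real kernel. */
  shadow_origin_ptr_t __msan_metadata_ptr_for_load_4(void *addr);
  shadow_origin_ptr_t __msan_metadata_ptr_for_store_4(void *addr);

  void copy_int(int *dst, int *src)
  {
          /* Load: fetch the value together with its shadow and origin. */
          shadow_origin_ptr_t s = __msan_metadata_ptr_for_load_4(src);
          int val = *src;
          u32 val_shadow = *(u32 *)s.shadow;
          u32 val_origin = *(u32 *)s.origin;

          /* Store: write the value and propagate its metadata alongside. */
          shadow_origin_ptr_t d = __msan_metadata_ptr_for_store_4(dst);

          *dst = val;
          *(u32 *)d.shadow = val_shadow;
          *(u32 *)d.origin = val_origin;
  }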
+
+Runtime library
+---------------
+
+The code is located in ``mm/kmsan/``.
+
+Per-task KMSAN state
+~~~~~~~~~~~~~~~~~~~~
+
+Every ``task_struct`` has an associated KMSAN task state that holds the KMSAN
+context (see above) and a per-task flag disallowing KMSAN reports::
+
+ struct kmsan_ctx {
+   ...
+   bool allow_reporting;
+   struct kmsan_context_state cstate;
+   ...
+ }
+
+ struct task_struct {
+   ...
+   struct kmsan_ctx kmsan_ctx;
+   ...
+ }
+
+
+KMSAN contexts
+~~~~~~~~~~~~~~
+
+When running in a kernel task context, KMSAN uses ``current->kmsan_ctx.cstate``
+to hold the metadata for function parameters and return values.
+
+But when the kernel is running in interrupt, softirq or NMI context, where
+``current`` is unavailable, KMSAN switches to the per-CPU interrupt state::
+
+ DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx);
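+
+A simplified sketch of how the runtime picks the active context; the helper
+name below is illustrative, see ``mm/kmsan/`` for the real code::
+
+ static struct kmsan_ctx *kmsan_get_context(void)
+ {
+   /* In task context use the per-task state, otherwise the per-CPU one. */
+   return in_task() ? &current->kmsan_ctx :
+                      raw_cpu_ptr(&kmsan_percpu_ctx);
+ }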
+
+Metadata allocation
+~~~~~~~~~~~~~~~~~~~
+
+There are several places in the kernel for which the metadata is stored.
+
+1. Each ``struct page`` instance contains two pointers to its shadow and
+origin pages::
+
+ struct page {
+   ...
+   struct page *shadow, *origin;
+   ...
+ };
+
+At boot-time, the kernel allocates shadow and origin pages for every available
+kernel page. This is done quite late, when the kernel address space is already
+fragmented, so normal data pages may arbitrarily interleave with the metadata
+pages.
+
+This means that in general for two contiguous memory pages their shadow/origin
+pages may not be contiguous. So, if a memory access crosses the boundary
+of a memory block, accesses to shadow/origin memory may potentially corrupt
+other pages or read incorrect values from them.
+
+In practice, contiguous memory pages returned by the same ``alloc_pages()``
+call will have contiguous metadata, whereas if these pages belong to two
+different allocations their metadata pages can be fragmented.
+
+For the kernel data (``.data``, ``.bss`` etc.) and percpu memory regions
+there are also no guarantees on metadata contiguity.
+
+If ``__msan_metadata_ptr_for_XXX_YYY()`` hits the boundary between two pages
+with non-contiguous metadata, it returns pointers to fake shadow/origin regions::
+
+ char dummy_load_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+ char dummy_store_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+
+``dummy_load_page`` is zero-initialized, so reads from it always yield zeroes.
+All stores to ``dummy_store_page`` are ignored.
+
+2. For vmalloc memory and modules, there is a direct mapping between the memory
+range, its shadow and origin. KMSAN reduces the vmalloc area by 3/4, making only
+the first quarter available to ``vmalloc()``. The second quarter of the vmalloc
+area contains shadow memory for the first quarter, the third one holds the
+origins. A small part of the fourth quarter contains shadow and origins for the
+kernel modules. Please refer to ``arch/x86/include/asm/pgtable_64_types.h`` for
+more details.
+
+When an array of pages is mapped into a contiguous virtual memory space, their
+shadow and origin pages are similarly mapped into contiguous regions.
+
+3. For the CPU entry area there are separate per-CPU arrays that hold its
+metadata::
+
+ DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow);
+ DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin);
+
+When calculating shadow and origin addresses for a given memory address, KMSAN
+checks whether the address belongs to the physical page range, the virtual page
+range or the CPU entry area.
+
+Handling ``pt_regs``
+~~~~~~~~~~~~~~~~~~~~
+
+Many functions receive a ``struct pt_regs`` holding the register state at a
+certain point. Registers do not have (easily calculable) shadow or origin
+associated with them, so we assume they are always initialized.
+
+References
+==========
+
+E. Stepanov, K. Serebryany. `MemorySanitizer: fast detector of uninitialized
+memory use in C++
+`_.
+In Proceedings of CGO 2015.
+
+.. _MemorySanitizer tool: https://clang.llvm.org/docs/MemorySanitizer.html
+..
_LLVM documentation: https://llvm.org/docs/GettingStarted.html From patchwork Tue Mar 29 12:39:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794758 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6AA2AC433EF for ; Tue, 29 Mar 2022 12:40:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E42168D0012; Tue, 29 Mar 2022 08:40:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DF2728D0011; Tue, 29 Mar 2022 08:40:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BF7DB8D0012; Tue, 29 Mar 2022 08:40:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id AD58F8D0011 for ; Tue, 29 Mar 2022 08:40:51 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 758C422E20 for ; Tue, 29 Mar 2022 12:40:51 +0000 (UTC) X-FDA: 79297383102.02.E7906B4 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf21.hostedemail.com (Postfix) with ESMTP id E87DC1C0011 for ; Tue, 29 Mar 2022 12:40:50 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id u17-20020a05640207d100b00418f00014f8so5707281edy.18 for ; Tue, 29 Mar 2022 05:40:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=UaSjyZbuX4mRK40KL0zvcfvrCfayHvnW+1lMpNenzMM=; b=pT67FL+iJPznO4FlucflrliCBhKvTTg3KzQEhmg/BoDC0PIplw3vRbgledaaV+22QV PJ/2UpcagqAHZUoUmS6KMEOgg3Z7MJ6MkNWm1ZXIgZy3JMhpaXGLkAOvgIFYB1qAJ0KF I6NW2lJcnoBn97EYtu4eiCl6ODsGpg3JBpDnaoBDHnydn7JC8QLk7eLmg2KpVSgBLImy ia3dXD2aOt8PinRoXSw+HiAUjr2cNNWGiXvsfmcUXKj+k0Ec0UTSj4Yj90p9F6jv5nLE 8q9nAN/NikHW+3ZZmHM89/rK8iQoXaim6KSGFGoalxm66Xa0wGT7M2cU/uJ7ZdLT9o7f Az0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=UaSjyZbuX4mRK40KL0zvcfvrCfayHvnW+1lMpNenzMM=; b=rE6VoLmYXaSLOctN95Dtqs4s+iwT7H99Vt+Fa+/m0nNk5cRD+s/070NsIfF7FQ2nfx BX1Bji2l4JdeC9NJdQ+6wnf8hJiukGg2lSBlr1AxFmIXj0qkBWXLt5lO8iAENDF6Om0z Daaiokf3DTT1UUgD1KbVCJ8jwr5ikcLcZ7ai+ru4sHIzSJldG3yGrIMarqI887fZMInz fC226yuU6lrA7CahkoR50DYXX5hxyxPKvNq6Fw83SYTG9mu1bNDSI/bBaDg+cIQqfp9d fq6vqt7SpyyhEm3hY0sCGQ9VJu2SQFDsIJoSuwIq5cSlJltKCJzWudi01JMR388CaTUd /3uA== X-Gm-Message-State: AOAM533ce8PlahiLfRnMCMHoKkZRT4kSguc1IW2j3cik+Rkwk6C8lmaV cwFW0e3qjzzJWJcbhu+ZUAL6ZHzVzC0= X-Google-Smtp-Source: ABdhPJzc7uQQqv9/qIuIbMIwaZGD04hNlnYkXly8TbXNnnJOwrMOtsjceNSNZRkURUbxFzT9/MpcuG+W9FY= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:1293:b0:418:fe9d:99c3 with SMTP id w19-20020a056402129300b00418fe9d99c3mr4374732edv.146.1648557649372; Tue, 29 Mar 2022 05:40:49 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:37 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-9-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 
2.35.1.1021.g381101b075-goog Subject: [PATCH v2 08/48] kmsan: introduce __no_sanitize_memory and __no_kmsan_checks From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Stat-Signature: wbqeaj5xpafsfy5si7n6nunjjrme3ii4 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=pT67FL+i; spf=pass (imf21.hostedemail.com: domain of 3Uf5CYgYKCHIWbYTUhWeeWbU.SecbYdkn-ccalQSa.ehW@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3Uf5CYgYKCHIWbYTUhWeeWbU.SecbYdkn-ccalQSa.ehW@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: E87DC1C0011 X-HE-Tag: 1648557650-341320 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: __no_sanitize_memory is a function attribute that instructs KMSAN to skip a function during instrumentation. This is needed to e.g. implement the noinstr functions. __no_kmsan_checks is a function attribute that makes KMSAN ignore the uninitialized values coming from the function's inputs, and initialize the function's outputs. Functions marked with this attribute can't be inlined into functions not marked with it, and vice versa. This behavior is overridden by __always_inline. __SANITIZE_MEMORY__ is a macro that's defined iff the file is instrumented with KMSAN. This is not the same as CONFIG_KMSAN, which is defined for every file. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I004ff0360c918d3cd8b18767ddd1381c6d3281be --- include/linux/compiler-clang.h | 23 +++++++++++++++++++++++ include/linux/compiler-gcc.h | 6 ++++++ 2 files changed, 29 insertions(+) diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index 3c4de9b6c6e3e..5f11a6f269e28 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -51,6 +51,29 @@ #define __no_sanitize_undefined #endif +#if __has_feature(memory_sanitizer) +#define __SANITIZE_MEMORY__ +/* + * Unlike other sanitizers, KMSAN still inserts code into functions marked with + * no_sanitize("kernel-memory"). Using disable_sanitizer_instrumentation + * provides the behavior consistent with other __no_sanitize_ attributes, + * guaranteeing that __no_sanitize_memory functions remain uninstrumented. + */ +#define __no_sanitize_memory __disable_sanitizer_instrumentation + +/* + * The __no_kmsan_checks attribute ensures that a function does not produce + * false positive reports by: + * - initializing all local variables and memory stores in this function; + * - skipping all shadow checks; + * - passing initialized arguments to this function's callees. 
+ */ +#define __no_kmsan_checks __attribute__((no_sanitize("kernel-memory"))) +#else +#define __no_sanitize_memory +#define __no_kmsan_checks +#endif + /* * Support for __has_feature(coverage_sanitizer) was added in Clang 13 together * with no_sanitize("coverage"). Prior versions of Clang support coverage diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index ccbbd31b3aae5..f6e69387aad05 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -129,6 +129,12 @@ #define __SANITIZE_ADDRESS__ #endif +/* + * GCC does not support KMSAN. + */ +#define __no_sanitize_memory +#define __no_kmsan_checks + /* * Turn individual warnings and errors on and off locally, depending * on version. From patchwork Tue Mar 29 12:39:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794759 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D7246C433EF for ; Tue, 29 Mar 2022 12:40:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6559C8D0014; Tue, 29 Mar 2022 08:40:54 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6038E8D0013; Tue, 29 Mar 2022 08:40:54 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4F3EF8D0014; Tue, 29 Mar 2022 08:40:54 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 3DD468D0013 for ; Tue, 29 Mar 2022 08:40:54 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id 0C8CF81673 for ; Tue, 29 Mar 2022 12:40:54 +0000 (UTC) X-FDA: 79297383228.01.C934E91 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf26.hostedemail.com (Postfix) with ESMTP id B53F0140011 for ; Tue, 29 Mar 2022 12:40:53 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id i4-20020aa7c9c4000000b00419c542270dso5229330edt.8 for ; Tue, 29 Mar 2022 05:40:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=O7QheLFrb7/gpanG0WgUH3CDvOfcWzGQKyw+meqIa2c=; b=nwdrOPNgrlRK/p217vtYWLPWJHANewfRLQpav+XSHbkAVHEnW3oP60FbQ/XxJx5Wey 8DRycD6Z8aEu0MC/3X0fisGnjf5imwmv/pW9pycKVeSdzaHTJ5Rg895wxW0ZrqOGPo10 IfXG+qAD/A/UF9Bi9etjM3pLZn0NbG7nPno2kSpwLlV8g6SDRLw6y8Wk3qoccWfMZ8be drDAk5gciM2kZ8tcU4aiCTs8FvlgiztSYF5pfb76GBFYTSFhMc7u/3/M4f77WEnkaJxj KHDu5r2kXFtX3OnAiGQmLBocYjFXf2eOhSXJmKa/Qj5ogXrOHpI3AL+qNsyyhKmK9AoV 8PsQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=O7QheLFrb7/gpanG0WgUH3CDvOfcWzGQKyw+meqIa2c=; b=mU+1uSothD/yJM8+4HsKblQ2kqONvWcYysWR8Eq/LALebsm7Vt5r8Z0SRzMUKnfFRx e2GvfTn/0EiuXceI+uoPyFsav+jB7A79B4RlBLKB/ednkhdUwDTr8vGvC5KP8FPhxRfG oI2l5ehwMeL8gNKGIXVoFDLHP6+h7vvuwc28UGw0Cq3XVL9/ZmF2eUxFGm0Itg1tdR73 5vMiJCfaLXoG5ILve0zMwZyj7b7OnFgqgKbayyD+Ehmqvbmvzl628S3fiurSUNCs7YBg RzyJudAiSA3eFuUEoVmssSqMbxy1DA+UrrqEPZDzo5B2fiRnVrqNVsrJZS1zXaePE7Hj N7MQ== X-Gm-Message-State: AOAM530QAj3YHHnMAXqy/sL6/Dx+q+NfZxizluEjzYfuI+bw61eEw+QA 
Sch4ToTscaVFy+PqT8w9zZ8qNU0n83w= X-Google-Smtp-Source: ABdhPJw72M5zTQlc12l/JxfWqejUITTKjW22Nisg7zg2gWpioCaXSJXtz0/OVqyNYeeR7potvImif+IfrBc= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:60cc:b0:6e0:dab3:ca76 with SMTP id hv12-20020a17090760cc00b006e0dab3ca76mr19088650ejc.154.1648557652185; Tue, 29 Mar 2022 05:40:52 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:38 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-10-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 09/48] kmsan: mark noinstr as __no_sanitize_memory From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: B53F0140011 X-Stat-Signature: pa4yq7e8w5dkay9j5k4r4hbmk68hx75f X-Rspam-User: Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=nwdrOPNg; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf26.hostedemail.com: domain of 3VP5CYgYKCHUZebWXkZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3VP5CYgYKCHUZebWXkZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--glider.bounces.google.com X-HE-Tag: 1648557653-513786 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: noinstr functions should never be instrumented, so make KMSAN skip them by applying the __no_sanitize_memory attribute. 
Signed-off-by: Alexander Potapenko --- v2: -- moved this patch earlier in the series per Mark Rutland's request Link: https://linux-review.googlesource.com/id/I3c9abe860b97b49bc0c8026918b17a50448dec0d --- include/linux/compiler_types.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index 3c1795fdb5686..286675559cbba 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -221,7 +221,8 @@ struct ftrace_likely_data { /* Section for code which can't be instrumented at all */ #define noinstr \ noinline notrace __attribute((__section__(".noinstr.text"))) \ - __no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage + __no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage \ + __no_sanitize_memory #endif /* __KERNEL__ */ From patchwork Tue Mar 29 12:39:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794760 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7FB7C433F5 for ; Tue, 29 Mar 2022 12:40:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 36F008D0016; Tue, 29 Mar 2022 08:40:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2F6998D0015; Tue, 29 Mar 2022 08:40:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 16FD88D0016; Tue, 29 Mar 2022 08:40:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0112.hostedemail.com [216.40.44.112]) by kanga.kvack.org (Postfix) with ESMTP id 05D188D0015 for ; Tue, 29 Mar 2022 08:40:58 -0400 (EDT) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id BD9381828A446 for ; Tue, 29 Mar 2022 12:40:57 +0000 (UTC) X-FDA: 79297383354.23.8C26860 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf02.hostedemail.com (Postfix) with ESMTP id 311AA8003A for ; Tue, 29 Mar 2022 12:40:56 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id s9-20020a50d489000000b00418d556edbdso10959631edi.4 for ; Tue, 29 Mar 2022 05:40:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=6/sayBfyjsH2Y3AfJUr4VjD2mcwzYPMTZLT8of1IREg=; b=gPCDqH0qCHmbSNoYXOMeBTfYfmSmbXUFr3zh1f5ZPo0Pb3cPpOC3T9Li2Ba/s3wfmz gQ2bEXLw+QpWVfUvoSRzIkAV0HGDefVdPLjvrmeyUVW1CwUFY/o/0iTuMpy00qfMhXYD S7Ny7FKu+E+vuPhMe33YJ17U98elZnTICW7Rf7enJip6PmyGcM/IKGLtw9meI2L8r4T/ 1B+1Mwp5rxGR/FQmQuUKPOmSqvjvULvO7aaLABE+y2HIRD0KCdYz/+lXqsDIyraIiLZx Zx5IqS0hdEQnvzCuaM2/3M5V3KZQK9XBZNOZyH5gNz855mbOM3DEDHGoM3IbScEoxq6l Nn2g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=6/sayBfyjsH2Y3AfJUr4VjD2mcwzYPMTZLT8of1IREg=; b=OxITBAPn6972oY26bhPMgmKoOQfQYfR26NtXqUzGHcj0SR3c3NoaT4E8KUUPcbny5h PHH/2LcqvsL96aKHk4iZNrt1mK7FlfM1z2dK8/yXlDrEHnlZSTV/KzeXW1iEzX1ROUAC j4r76XMUdYXkeBOwPkpDQ6RBpDX+CK3WO3QZuCk8x4VNKeobXouBQb/uS1XU0kDcJM+N M8YqJ0OnGnyywb9si7lXvJFvMPMd56M5twJBaVlmFDXpzA0z8cBhRT7s5vVOJmyJ2Uvt 
WyhHuhAai2dmrCV/nYvz3mBEXm7oBqZcZR6+5R4KDGgJ4QfWPre2C9xCcQ/OggOpO10N 72gA== X-Gm-Message-State: AOAM5324LeJlTG9GGNZEC7AZZZFmDehrPqALVxVcHZNAufZdpZ0X+cvz N8D+YUO5ZWnpoCiqVheXVA0eMDF+Rf8= X-Google-Smtp-Source: ABdhPJz19CPV3rmf1QsGMB2W43cqP02JXhPWr4JS53hrOaKJGQqZA3fyKoTvfV9B2lQS0qWbgSPFKMg0F4A= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:40da:b0:6ce:51b:a593 with SMTP id a26-20020a17090640da00b006ce051ba593mr34588142ejk.604.1648557654844; Tue, 29 Mar 2022 05:40:54 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:39 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-11-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 10/48] x86: kmsan: pgtable: reduce vmalloc space From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 311AA8003A X-Rspam-User: Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=gPCDqH0q; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf02.hostedemail.com: domain of 3Vv5CYgYKCHcbgdYZmbjjbgZ.Xjhgdips-hhfqVXf.jmb@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3Vv5CYgYKCHcbgdYZmbjjbgZ.Xjhgdips-hhfqVXf.jmb@flex--glider.bounces.google.com X-Stat-Signature: tsb7z3d5fnthotni3its8byaoyow5ds6 X-HE-Tag: 1648557656-44519 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is going to use 3/4 of existing vmalloc space to hold the metadata, therefore we lower VMALLOC_END to make sure vmalloc() doesn't allocate past the first 1/4. Signed-off-by: Alexander Potapenko --- v2: -- added x86: to the title Link: https://linux-review.googlesource.com/id/I9d8b7f0a88a639f1263bc693cbd5c136626f7efd --- arch/x86/include/asm/pgtable_64_types.h | 41 ++++++++++++++++++++++++- arch/x86/mm/init_64.c | 2 +- 2 files changed, 41 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 91ac106545703..7f15d43754a34 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -139,7 +139,46 @@ extern unsigned int ptrs_per_p4d; # define VMEMMAP_START __VMEMMAP_BASE_L4 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */ -#define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) +#define VMEMORY_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) + +#ifndef CONFIG_KMSAN +#define VMALLOC_END VMEMORY_END +#else +/* + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4 + * are used to keep the metadata for virtual pages. 
The memory formerly + * belonging to vmalloc area is now laid out as follows: + * + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to + * VMALLOC_END+KMSAN_VMALLOC_SHADOW_OFFSET - vmalloc area shadow + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to + * VMALLOC_END+KMSAN_VMALLOC_ORIGIN_OFFSET - vmalloc area origins + * 4th quarter: KMSAN_MODULES_SHADOW_START to KMSAN_MODULES_ORIGIN_START + * - shadow for modules, + * KMSAN_MODULES_ORIGIN_START to + * KMSAN_MODULES_ORIGIN_START + MODULES_LEN - origins for modules. + */ +#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2) +#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1) + +/* + * vmalloc metadata addresses are calculated by adding shadow/origin offsets + * to vmalloc address. + */ +#define KMSAN_VMALLOC_SHADOW_OFFSET VMALLOC_QUARTER_SIZE +#define KMSAN_VMALLOC_ORIGIN_OFFSET (VMALLOC_QUARTER_SIZE << 1) + +#define KMSAN_VMALLOC_SHADOW_START (VMALLOC_START + KMSAN_VMALLOC_SHADOW_OFFSET) +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_START + KMSAN_VMALLOC_ORIGIN_OFFSET) + +/* + * The shadow/origin for modules are placed one by one in the last 1/4 of + * vmalloc space. + */ +#define KMSAN_MODULES_SHADOW_START (VMALLOC_END + KMSAN_VMALLOC_ORIGIN_OFFSET + 1) +#define KMSAN_MODULES_ORIGIN_START (KMSAN_MODULES_SHADOW_START + MODULES_LEN) +#endif /* CONFIG_KMSAN */ #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) /* The module sections ends with the start of the fixmap */ diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 96d34ebb20a9e..fcea37beb3911 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1287,7 +1287,7 @@ static void __init preallocate_vmalloc_pages(void) unsigned long addr; const char *lvl; - for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { + for (addr = VMALLOC_START; addr <= VMEMORY_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { pgd_t *pgd = pgd_offset_k(addr); p4d_t *p4d; pud_t *pud; From patchwork Tue Mar 29 12:39:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794761 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E564CC433EF for ; Tue, 29 Mar 2022 12:41:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7B4088D0003; Tue, 29 Mar 2022 08:41:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7639F8D0001; Tue, 29 Mar 2022 08:41:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 62C188D0003; Tue, 29 Mar 2022 08:41:03 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 539128D0001 for ; Tue, 29 Mar 2022 08:41:03 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 24358236F8 for ; Tue, 29 Mar 2022 12:41:03 +0000 (UTC) X-FDA: 79297383606.02.A279C5E Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf14.hostedemail.com (Postfix) with ESMTP id ABD8C10000C for ; Tue, 29 Mar 2022 12:41:02 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id 
ml20-20020a170906cc1400b006df8c9357efso8110522ejb.21 for ; Tue, 29 Mar 2022 05:41:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=km5owwejhlOXuN0CdtXT+rH3PwGCxkXBDIlOmz8F8mM=; b=BPBN+Gp/HybRIj+tr7lmAPtZ/gaSTg17YfV70uHe6dI4D03YCmDJ28J1xxu73jh2dk lunM+tJfRxIWWCTitc1zqeGcNO6ui2olt/ToR74zTeetb2t645zaZRfbXnEBWBeufQcz UZFI/a7dadkwv9KhIIRq4n83fNxSMacV6CuDKWLbiQmxkp72N4gUGhDLhrX/ZY/tzqb+ wwmGDuW3C5VDusIt/9kLL08QT9f400t0JOILiCvI+CWdGaKJ8/8ZTha18LD969Vanzph EmE7KmmpEFFeSjxO/yybRJOhoovSm59NPOFmQeJCjBwZk6WuGl9hl0wb88xwAHT3OeLk 9zLA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=km5owwejhlOXuN0CdtXT+rH3PwGCxkXBDIlOmz8F8mM=; b=cRZDDqDLI27TUkULe6luY0JoXIi3IuHq5bowUxrt9nunBi5eCj3Opvs6RcAdwrc15h 1tgInjtvuqCqGYjH3wJlBHhLzL29tFZHejBKS0RLCKD/sk6JWjFNDlhkkmgD8wsAYVub u4OLy/vJ47FpZz3yNh4sHdIk1a19wuBnUV5hwNcdZ09YZDf8jYSXGSGMXzp/Y7sN6129 TaJ59btkig101kmN5vgXlVCClgHLD+uX07klYvaURIIwUTX3ZSZR4cPNKR8G6ZgLPu4O bLPYKCvPgZTy/i3ngw8SEyDxSE4dgIcSKx2CDrSmdftTfcMSpUMyL1UtIE7KA5J/WYVL yAAw== X-Gm-Message-State: AOAM5306oZRWJtRqj4oxzu4TNhXTp9Wr5UckpC7QOoS1BW5n2Q0t62CB XCeN4hN56WDHeKYPV9IEWM47YKFP5+k= X-Google-Smtp-Source: ABdhPJxTFdKgNMHXnOGxM/ndXx2ll4nIIoJnFlW5u5FSX6vDALAQ8Bv8xnOw9ZMBkOj50HuqwBsroT31ous= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:27c7:b0:41b:51ca:f542 with SMTP id c7-20020a05640227c700b0041b51caf542mr3174378ede.149.1648557657545; Tue, 29 Mar 2022 05:40:57 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:40 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-12-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 11/48] libnvdimm/pfn_dev: increase MAX_STRUCT_PAGE_SIZE From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: 8h6zzpe7u4k67ykhu5sgftsb91r5q44p X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: ABD8C10000C Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="BPBN+Gp/"; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf14.hostedemail.com: domain of 3Wf5CYgYKCHoejgbcpemmejc.amkjglsv-kkitYai.mpe@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3Wf5CYgYKCHoejgbcpemmejc.amkjglsv-kkitYai.mpe@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1648557662-535802 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN adds extra metadata fields to struct page, so it does not fit into 64 bytes anymore. 
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I353796acc6a850bfd7bb342aa1b63e616fc614f1 --- drivers/nvdimm/nd.h | 2 +- drivers/nvdimm/pfn_devs.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h index 6f8ce114032d0..b50aecd1dd423 100644 --- a/drivers/nvdimm/nd.h +++ b/drivers/nvdimm/nd.h @@ -663,7 +663,7 @@ void devm_namespace_disable(struct device *dev, struct nd_namespace_common *ndns); #if IS_ENABLED(CONFIG_ND_CLAIM) /* max struct page size independent of kernel config */ -#define MAX_STRUCT_PAGE_SIZE 64 +#define MAX_STRUCT_PAGE_SIZE 128 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap); #else static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c index 58eda16f5c534..07a539195cc8b 100644 --- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -785,7 +785,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) * when populating the vmemmap. This *should* be equal to * PMD_SIZE for most architectures. * - * Also make sure size of struct page is less than 64. We + * Also make sure size of struct page is less than 128. We * want to make sure we use large enough size here so that * we don't have a dynamic reserve space depending on * struct page size. But we also want to make sure we notice From patchwork Tue Mar 29 12:39:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794762 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 57246C433F5 for ; Tue, 29 Mar 2022 12:41:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B3A488D0008; Tue, 29 Mar 2022 08:41:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A6FFA8D0007; Tue, 29 Mar 2022 08:41:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 93A3F8D0008; Tue, 29 Mar 2022 08:41:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0112.hostedemail.com [216.40.44.112]) by kanga.kvack.org (Postfix) with ESMTP id 7CB9F8D0007 for ; Tue, 29 Mar 2022 08:41:04 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 36B0BA3241 for ; Tue, 29 Mar 2022 12:41:04 +0000 (UTC) X-FDA: 79297383648.20.5BBEE09 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf24.hostedemail.com (Postfix) with ESMTP id 9C08918000C for ; Tue, 29 Mar 2022 12:41:03 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id o20-20020aa7dd54000000b00413bc19ad08so10968337edw.7 for ; Tue, 29 Mar 2022 05:41:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=cXkdgZbGJwU6k8snEczoh0k/WkFNvdUgCw+ayF9WLuQ=; b=kJ3CEmpah2i9qEH7T+C5Ga2iXnj2Y+3RwMf8YqsmELN6LOISLHU5lU3es8dFXLXr4G RwvPPr+lXFQJJVB8eC5SfjBqsSq8LtpCHYf3AswzsXu/1NhBVq2D9gsP4OH2L4CTMT4M htWXUw29Be33vwaebXorL3fSnDwOI8IYefJyHWs3/4YxoYcA+PWRfpq7JHQ6XEzLzzbD FlQgRQzQ9K3/oO1RgYlOuEtyVvqXUpOZFhIl55G6hZ3TouP4qi7NusxxsF5cTKn2krfS 
X+VguAz0cQlQcgvkoFLT3U0fU9ShmiXto/lOzeXy+6A0JzZKD80LdngdHu/dWikUglw0 6J5w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=cXkdgZbGJwU6k8snEczoh0k/WkFNvdUgCw+ayF9WLuQ=; b=jflW9+Wmnyh7reEOsozZMChKoXP8agA3jPVOgYrRyhgPYJtRMrfWdYZo1AB1YV9iYY VdQX/QTinsH6jr3wbjK6uVC5ydn6FlF31xrR6xjXE1JcklpiB84/NRghQn+cY1YYCAf2 U71nG1pz8sMYzbV2ovN1b5GbXgJtjCLoyBiLBTHBtC9goPgeC29GGZko49R3lYUFX5I+ hYR32EluG42IbDLnk8GJI7MoObGHddCQs3GPnFHTb3GWeP3YrMexjObq/YbrZ2LtgRjT EMcNTOVnkFGvyd7usubiwWWcw5ZSz/crNFz48walkvA+UGIkOifbe4u+KSecszBPK6hR 6a7g== X-Gm-Message-State: AOAM532LO6l1EWuO5D/8mgfeIntEhBVPtV/PASPKOp/kUdwNx0ItCOMk dpjCGcT4TWd5+dBhAsIRBEQxsTKAhoA= X-Google-Smtp-Source: ABdhPJywI+kpf/WbEulTkI+WWQEh96+s7tqreK3VONJIUmDgWPb5bSFILSkC6rnkhNtKnHdOQAMR0bfb/+4= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:9743:b0:6d8:632a:a42d with SMTP id o3-20020a170906974300b006d8632aa42dmr34840939ejy.157.1648557662237; Tue, 29 Mar 2022 05:41:02 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:41 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-13-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 12/48] kcsan: clang: retire CONFIG_KCSAN_KCOV_BROKEN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=kJ3CEmpa; spf=pass (imf24.hostedemail.com: domain of 3Xv5CYgYKCH8jolghujrrjoh.frpolqx0-ppnydfn.ruj@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3Xv5CYgYKCH8jolghujrrjoh.frpolqx0-ppnydfn.ruj@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 9C08918000C X-Stat-Signature: dztx79anpc3m7nwrjt17dy61bnnsyuqi X-HE-Tag: 1648557663-212924 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kcov used to be broken prior to Clang 11, but right now that version is already the minimum required to build with KCSAN, because no prior compiler has "-tsan-distinguish-volatile=1". Therefore KCSAN_KCOV_BROKEN is not needed anymore. 
Suggested-by: Marco Elver Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ida287421577f37de337139b5b5b9e977e4a6fee2 --- lib/Kconfig.kcsan | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan index 63b70b8c55519..de022445fbba5 100644 --- a/lib/Kconfig.kcsan +++ b/lib/Kconfig.kcsan @@ -10,21 +10,10 @@ config HAVE_KCSAN_COMPILER For the list of compilers that support KCSAN, please see . -config KCSAN_KCOV_BROKEN - def_bool KCOV && CC_HAS_SANCOV_TRACE_PC - depends on CC_IS_CLANG - depends on !$(cc-option,-Werror=unused-command-line-argument -fsanitize=thread -fsanitize-coverage=trace-pc) - help - Some versions of clang support either KCSAN and KCOV but not the - combination of the two. - See https://bugs.llvm.org/show_bug.cgi?id=45831 for the status - in newer releases. - menuconfig KCSAN bool "KCSAN: dynamic data race detector" depends on HAVE_ARCH_KCSAN && HAVE_KCSAN_COMPILER depends on DEBUG_KERNEL && !KASAN - depends on !KCSAN_KCOV_BROKEN select STACKTRACE help The Kernel Concurrency Sanitizer (KCSAN) is a dynamic From patchwork Tue Mar 29 12:39:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794763 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6ECEC433EF for ; Tue, 29 Mar 2022 12:41:07 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 67B648D0007; Tue, 29 Mar 2022 08:41:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 62B2C8D0002; Tue, 29 Mar 2022 08:41:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 36C738D0007; Tue, 29 Mar 2022 08:41:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 21A958D0002 for ; Tue, 29 Mar 2022 08:41:07 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id E3D5320D55 for ; Tue, 29 Mar 2022 12:41:06 +0000 (UTC) X-FDA: 79297383732.10.37274D1 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf28.hostedemail.com (Postfix) with ESMTP id 4E3CAC0016 for ; Tue, 29 Mar 2022 12:41:06 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id c31-20020a509fa2000000b004190d43d28fso10860547edf.9 for ; Tue, 29 Mar 2022 05:41:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc:content-transfer-encoding; bh=biz+rIgT55uFrE6N/OSPE++HUy2NNspyJjcc8tnQ5F0=; b=oLTFqHECxdHQo2riVa6e2vk4JMzdrRa1nobyCYGqlnhiDE0UdJp+Oi1maxK3KKIeds yrNdR+/3nItl2kUXDWqESD1KAFY7oRRLfv54KjcYGz1FAemAYnBfHY4Srr5o+6tgXhy5 03Ldzu2oqeMoT6MQqtm4NLJouUQ1R7CtmlegonxogZiS0UCbzacZjcCNuMYDT7Kl8FJr bW8ZlFrsKssY1jjdDaliFGdSPlZIiVPBlbg4STStv5Gc3GqttGj2v1lMeFAhbUdzbx5L kPr1MSXcksO1CR8KyIAYZ7gGv5T0A9REaJe/mR20u0Yevzkjxxpiw5T8I9qH/+5FqIA9 ATPA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc:content-transfer-encoding; 
bh=biz+rIgT55uFrE6N/OSPE++HUy2NNspyJjcc8tnQ5F0=; b=IzGKpeosMseYe563M+4AABuhW3FEDAnVleAKtqBNsxysd9w9RPPighhE8T5myw2wFF hYflMk9oqZY2lVPZdY7i3lchw+yZV87B8YWExQ13XKWCP56DmZSVeBNz+PdbZyFe6ab1 c6eeX5Xk7NeACvRS8sG278IjqQh874oBojWZNOyGrMt/DXSyujbUKiUyGNdAy8NIDtoa SgnVC+vMz+LnNruNst/+ymwSYFXwPWiQefGIkLb7EkGyEmDr0jAuIG8MBwufPlmgpm8R dqI4WhVYO1cpsxbHNWq6YkWfefZYtF5nppjWUcvVrG8q106cdaxXuat70NK7uEWJnfXT 2ULQ== X-Gm-Message-State: AOAM533fh3RlNvvDOxF9bFHIE7JsBiZ+0SRU/gRbYxJm1rp5ERUy4TYf joDJ0ncXtNP2BpcGwKF3WtghjvYtzRU= X-Google-Smtp-Source: ABdhPJxzkkrniU6QgqwuIr0F+dsVSYX1ecitHrX465QTJKPfG1GcAjlTfnc+Fpa5BJDcv5//XO+1tVMTk9U= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:86a8:b0:6db:6c1c:d9c4 with SMTP id qa40-20020a17090786a800b006db6c1cd9c4mr33198729ejc.640.1648557664969; Tue, 29 Mar 2022 05:41:04 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:42 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-14-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 13/48] kmsan: add KMSAN runtime core From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 4E3CAC0016 X-Rspam-User: Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=oLTFqHEC; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf28.hostedemail.com: domain of 3YP5CYgYKCIElqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3YP5CYgYKCIElqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com X-Stat-Signature: z5cwqxudo98xrsfqr8miea191r8ejaok X-HE-Tag: 1648557666-733094 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: For each memory location KernelMemorySanitizer maintains two types of metadata: 1. The so-called shadow of that location - а byte:byte mapping describing whether or not individual bits of memory are initialized (shadow is 0) or not (shadow is 1). 2. The origins of that location - а 4-byte:4-byte mapping containing 4-byte IDs of the stack traces where uninitialized values were created. Each struct page now contains pointers to two struct pages holding KMSAN metadata (shadow and origins) for the original struct page. Utility routines in mm/kmsan/core.c and mm/kmsan/shadow.c handle the metadata creation, addressing, copying and checking. mm/kmsan/report.c performs error reporting in the cases an uninitialized value is used in a way that leads to undefined behavior. KMSAN compiler instrumentation is responsible for tracking the metadata along with the kernel memory. 
mm/kmsan/instrumentation.c provides the implementation for instrumentation hooks that are called from files compiled with -fsanitize=kernel-memory. To aid parameter passing (also done at instrumentation level), each task_struct now contains a struct kmsan_task_state used to track the metadata of function parameters and return values for that task. Finally, this patch provides CONFIG_KMSAN that enables KMSAN, and declares CFLAGS_KMSAN, which are applied to files compiled with KMSAN. The KMSAN_SANITIZE:=n Makefile directive can be used to completely disable KMSAN instrumentation for certain files. Similarly, KMSAN_ENABLE_CHECKS:=n disables KMSAN checks and makes newly created stack memory initialized. Users can also use functions from include/linux/kmsan-checks.h to mark certain memory regions as uninitialized or initialized (this is called "poisoning" and "unpoisoning") or check that a particular region is initialized. Signed-off-by: Alexander Potapenko --- v2: -- as requested by Greg K-H, moved hooks for different subsystems to respective patches, rewrote the patch description; -- addressed comments by Dmitry Vyukov; -- added a note about KMSAN being not intended for production use. -- fix case of unaligned dst in kmsan_internal_memmove_metadata() Link: https://linux-review.googlesource.com/id/I9b71bfe3425466c97159f9de0062e5e8e4fec866 --- Makefile | 1 + include/linux/kmsan-checks.h | 64 +++++ include/linux/kmsan.h | 47 ++++ include/linux/mm_types.h | 12 + include/linux/sched.h | 5 + lib/Kconfig.debug | 1 + lib/Kconfig.kmsan | 23 ++ mm/Makefile | 1 + mm/kmsan/Makefile | 18 ++ mm/kmsan/core.c | 453 +++++++++++++++++++++++++++++++++++ mm/kmsan/hooks.c | 66 +++++ mm/kmsan/instrumentation.c | 267 +++++++++++++++++++++ mm/kmsan/kmsan.h | 183 ++++++++++++++ mm/kmsan/report.c | 211 ++++++++++++++++ mm/kmsan/shadow.c | 186 ++++++++++++++ scripts/Makefile.kmsan | 1 + scripts/Makefile.lib | 9 + 17 files changed, 1548 insertions(+) create mode 100644 include/linux/kmsan-checks.h create mode 100644 include/linux/kmsan.h create mode 100644 lib/Kconfig.kmsan create mode 100644 mm/kmsan/Makefile create mode 100644 mm/kmsan/core.c create mode 100644 mm/kmsan/hooks.c create mode 100644 mm/kmsan/instrumentation.c create mode 100644 mm/kmsan/kmsan.h create mode 100644 mm/kmsan/report.c create mode 100644 mm/kmsan/shadow.c create mode 100644 scripts/Makefile.kmsan diff --git a/Makefile b/Makefile index 7214f075e1f06..251fa89068ed1 100644 --- a/Makefile +++ b/Makefile @@ -1006,6 +1006,7 @@ include-y := scripts/Makefile.extrawarn include-$(CONFIG_DEBUG_INFO) += scripts/Makefile.debug include-$(CONFIG_KASAN) += scripts/Makefile.kasan include-$(CONFIG_KCSAN) += scripts/Makefile.kcsan +include-$(CONFIG_KMSAN) += scripts/Makefile.kmsan include-$(CONFIG_UBSAN) += scripts/Makefile.ubsan include-$(CONFIG_KCOV) += scripts/Makefile.kcov include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h new file mode 100644 index 0000000000000..a6522a0c28df9 --- /dev/null +++ b/include/linux/kmsan-checks.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN checks to be used for one-off annotations in subsystems. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef _LINUX_KMSAN_CHECKS_H +#define _LINUX_KMSAN_CHECKS_H + +#include + +#ifdef CONFIG_KMSAN + +/** + * kmsan_poison_memory() - Mark the memory range as uninitialized. + * @address: address to start with. 
+ * @size: size of buffer to poison. + * @flags: GFP flags for allocations done by this function. + * + * Until other data is written to this range, KMSAN will treat it as + * uninitialized. Error reports for this memory will reference the call site of + * kmsan_poison_memory() as origin. + */ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags); + +/** + * kmsan_unpoison_memory() - Mark the memory range as initialized. + * @address: address to start with. + * @size: size of buffer to unpoison. + * + * Until other data is written to this range, KMSAN will treat it as + * initialized. + */ +void kmsan_unpoison_memory(const void *address, size_t size); + +/** + * kmsan_check_memory() - Check the memory range for being initialized. + * @address: address to start with. + * @size: size of buffer to check. + * + * If any piece of the given range is marked as uninitialized, KMSAN will report + * an error. + */ +void kmsan_check_memory(const void *address, size_t size); + +#else + +static inline void kmsan_poison_memory(const void *address, size_t size, + gfp_t flags) +{ +} +static inline void kmsan_unpoison_memory(const void *address, size_t size) +{ +} +static inline void kmsan_check_memory(const void *address, size_t size) +{ +} + +#endif + +#endif /* _LINUX_KMSAN_CHECKS_H */ diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h new file mode 100644 index 0000000000000..4e35f43eceaa9 --- /dev/null +++ b/include/linux/kmsan.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN API for subsystems. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ +#ifndef _LINUX_KMSAN_H +#define _LINUX_KMSAN_H + +#include +#include +#include +#include +#include + +struct page; + +#ifdef CONFIG_KMSAN + +/* These constants are defined in the MSan LLVM instrumentation pass. */ +#define KMSAN_RETVAL_SIZE 800 +#define KMSAN_PARAM_SIZE 800 + +struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[KMSAN_RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + char param_origin_tls[KMSAN_PARAM_SIZE]; + depot_stack_handle_t retval_origin_tls; +}; + +#undef KMSAN_PARAM_SIZE +#undef KMSAN_RETVAL_SIZE + +struct kmsan_ctx { + struct kmsan_context_state cstate; + int kmsan_in_runtime; + bool allow_reporting; +}; + +#endif + +#endif /* _LINUX_KMSAN_H */ diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 0f549870da6a0..eace8b4ec083c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -206,6 +206,18 @@ struct page { not kmapped, ie. highmem) */ #endif /* WANT_PAGE_VIRTUAL */ +#ifdef CONFIG_KMSAN + /* + * KMSAN metadata for this page: + * - shadow page: every bit indicates whether the corresponding + * bit of the original page is initialized (0) or not (1); + * - origin page: every 4 bytes contain an id of the stack trace + * where the uninitialized value was created. 
+ */ + struct page *kmsan_shadow; + struct page *kmsan_origin; +#endif + #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS int _last_cpupid; #endif diff --git a/include/linux/sched.h b/include/linux/sched.h index 75ba8aa60248b..d95305372b7a1 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -1349,6 +1350,10 @@ struct task_struct { #endif #endif +#ifdef CONFIG_KMSAN + struct kmsan_ctx kmsan_ctx; +#endif + #if IS_ENABLED(CONFIG_KUNIT) struct kunit *kunit_test; #endif diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 14b89aa37c5c9..d6dc915f26c76 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -968,6 +968,7 @@ config DEBUG_STACKOVERFLOW source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" +source "lib/Kconfig.kmsan" endmenu # "Memory Debugging" diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan new file mode 100644 index 0000000000000..199f79d031f94 --- /dev/null +++ b/lib/Kconfig.kmsan @@ -0,0 +1,23 @@ +config HAVE_ARCH_KMSAN + bool + +config HAVE_KMSAN_COMPILER + def_bool (CC_IS_CLANG && $(cc-option,-fsanitize=kernel-memory -mllvm -msan-disable-checks=1)) + +config KMSAN + bool "KMSAN: detector of uninitialized values use" + depends on HAVE_ARCH_KMSAN && HAVE_KMSAN_COMPILER + depends on SLUB && DEBUG_KERNEL && !KASAN && !KCSAN + depends on CC_IS_CLANG && CLANG_VERSION >= 140000 + select STACKDEPOT + select STACKDEPOT_ALWAYS_INIT + help + KernelMemorySanitizer (KMSAN) is a dynamic detector of uses of + uninitialized values in the kernel. It is based on compiler + instrumentation provided by Clang and thus requires Clang to build. + + An important note is that KMSAN is not intended for production use, + because it drastically increases kernel memory footprint and slows + the whole system down. + + See for more details. diff --git a/mm/Makefile b/mm/Makefile index 70d4309c9ce33..b04d9e8c8d4e0 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -89,6 +89,7 @@ obj-$(CONFIG_SLAB) += slab.o obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_KFENCE) += kfence/ +obj-$(CONFIG_KMSAN) += kmsan/ obj-$(CONFIG_FAILSLAB) += failslab.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile new file mode 100644 index 0000000000000..a80dde1de7048 --- /dev/null +++ b/mm/kmsan/Makefile @@ -0,0 +1,18 @@ +obj-y := core.o instrumentation.o hooks.o report.o shadow.o + +KMSAN_SANITIZE := n +KCOV_INSTRUMENT := n +UBSAN_SANITIZE := n + +# Disable instrumentation of KMSAN runtime with other tools. +CC_FLAGS_KMSAN_RUNTIME := -fno-stack-protector +CC_FLAGS_KMSAN_RUNTIME += $(call cc-option,-fno-conserve-stack) +CC_FLAGS_KMSAN_RUNTIME += -DDISABLE_BRANCH_PROFILING + +CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) + +CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c new file mode 100644 index 0000000000000..f4196f274e754 --- /dev/null +++ b/mm/kmsan/core.c @@ -0,0 +1,453 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN runtime library. 
+ * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +/* + * Avoid creating too long origin chains, these are unlikely to participate in + * real reports. + */ +#define MAX_CHAIN_DEPTH 7 +#define NUM_SKIPPED_TO_WARN 10000 + +bool kmsan_enabled __read_mostly; + +/* + * Per-CPU KMSAN context to be used in interrupts, where current->kmsan is + * unavaliable. + */ +DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags) +{ + u32 extra_bits = + kmsan_extra_bits(/*depth*/ 0, poison_flags & KMSAN_POISON_FREE); + bool checked = poison_flags & KMSAN_POISON_CHECK; + depot_stack_handle_t handle; + + handle = kmsan_save_stack_with_flags(flags, extra_bits); + kmsan_internal_set_shadow_origin(address, size, -1, handle, checked); +} + +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked) +{ + kmsan_internal_set_shadow_origin(address, size, 0, 0, checked); +} + +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra) +{ + unsigned long entries[KMSAN_STACK_DEPTH]; + unsigned int nr_entries; + + nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0); + nr_entries = filter_irq_stacks(entries, nr_entries); + + /* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */ + flags &= ~__GFP_DIRECT_RECLAIM; + + return __stack_depot_save(entries, nr_entries, extra, flags, true); +} + +/* Copy the metadata following the memmove() behavior. */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n) +{ + depot_stack_handle_t old_origin = 0, new_origin = 0; + int src_slots, dst_slots, i, iter, step, skip_bits; + depot_stack_handle_t *origin_src, *origin_dst; + void *shadow_src, *shadow_dst; + u32 *align_shadow_src, shadow; + bool backwards; + + shadow_dst = kmsan_get_metadata(dst, KMSAN_META_SHADOW); + if (!shadow_dst) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(dst, n)); + + shadow_src = kmsan_get_metadata(src, KMSAN_META_SHADOW); + if (!shadow_src) { + /* + * |src| is untracked: zero out destination shadow, ignore the + * origins, we're done. + */ + __memset(shadow_dst, 0, n); + return; + } + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(src, n)); + + __memmove(shadow_dst, shadow_src, n); + + origin_dst = kmsan_get_metadata(dst, KMSAN_META_ORIGIN); + origin_src = kmsan_get_metadata(src, KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin_dst || !origin_src); + src_slots = (ALIGN((u64)src + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)src, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + dst_slots = (ALIGN((u64)dst + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)dst, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + KMSAN_WARN_ON((src_slots < 1) || (dst_slots < 1)); + KMSAN_WARN_ON((src_slots - dst_slots > 1) || + (dst_slots - src_slots < -1)); + + backwards = dst > src; + i = backwards ? min(src_slots, dst_slots) - 1 : 0; + iter = backwards ? 
-1 : 1; + + align_shadow_src = + (u32 *)ALIGN_DOWN((u64)shadow_src, KMSAN_ORIGIN_SIZE); + for (step = 0; step < min(src_slots, dst_slots); step++, i += iter) { + KMSAN_WARN_ON(i < 0); + shadow = align_shadow_src[i]; + if (i == 0) { + /* + * If |src| isn't aligned on KMSAN_ORIGIN_SIZE, don't + * look at the first |src % KMSAN_ORIGIN_SIZE| bytes + * of the first shadow slot. + */ + skip_bits = ((u64)src % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + } + if (i == src_slots - 1) { + /* + * If |src + n| isn't aligned on + * KMSAN_ORIGIN_SIZE, don't look at the last + * |(src + n) % KMSAN_ORIGIN_SIZE| bytes of the + * last shadow slot. + */ + skip_bits = (((u64)src + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + } + /* + * Overwrite the origin only if the corresponding + * shadow is nonempty. + */ + if (origin_src[i] && (origin_src[i] != old_origin) && shadow) { + old_origin = origin_src[i]; + new_origin = kmsan_internal_chain_origin(old_origin); + /* + * kmsan_internal_chain_origin() may return + * NULL, but we don't want to lose the previous + * origin value. + */ + if (!new_origin) + new_origin = old_origin; + } + if (shadow) + origin_dst[i] = new_origin; + else + origin_dst[i] = 0; + } + /* + * If dst_slots is greater than src_slots (i.e. + * dst_slots == src_slots + 1), there is an extra origin slot at the + * beginning or end of the destination buffer, for which we take the + * origin from the previous slot. + * This is only done if the part of the source shadow corresponding to + * slot is non-zero. + * + * E.g. if we copy 8 aligned bytes that are marked as uninitialized + * and have origins o111 and o222, to an unaligned buffer with offset 1, + * these two origins are copied to three origin slots, so one of then + * needs to be duplicated, depending on the copy direction (@backwards) + * + * src shadow: |uuuu|uuuu|....| + * src origin: |o111|o222|....| + * + * backwards = 0: + * dst shadow: |.uuu|uuuu|u...| + * dst origin: |....|o111|o222| - fill the empty slot with o111 + * backwards = 1: + * dst shadow: |.uuu|uuuu|u...| + * dst origin: |o111|o222|....| - fill the empty slot with o222 + */ + if (src_slots < dst_slots) { + if (backwards) { + shadow = align_shadow_src[src_slots - 1]; + skip_bits = (((u64)dst + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + if (shadow) + /* src_slots > 0, therefore dst_slots is at least 2 */ + origin_dst[dst_slots - 1] = origin_dst[dst_slots - 2]; + } else { + shadow = align_shadow_src[0]; + skip_bits = ((u64)dst % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + if (shadow) + origin_dst[0] = origin_dst[1]; + } + } +} + +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id) +{ + unsigned long entries[3]; + u32 extra_bits; + int depth; + bool uaf; + + if (!id) + return id; + /* + * Make sure we have enough spare bits in |id| to hold the UAF bit and + * the chain depth. 
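 *
 * (Worked example, assuming STACK_DEPOT_EXTRA_BITS is 5 as in
 * <linux/stackdepot.h>: the check below compares 1 << 5 == 32 against
 * MAX_CHAIN_DEPTH << 1 == 14, so the chain depth plus the UAF bit always
 * fit into the spare bits of a depot handle.)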
+ */ + BUILD_BUG_ON((1 << STACK_DEPOT_EXTRA_BITS) <= (MAX_CHAIN_DEPTH << 1)); + + extra_bits = stack_depot_get_extra_bits(id); + depth = kmsan_depth_from_eb(extra_bits); + uaf = kmsan_uaf_from_eb(extra_bits); + + if (depth >= MAX_CHAIN_DEPTH) { + static atomic_long_t kmsan_skipped_origins; + long skipped = atomic_long_inc_return(&kmsan_skipped_origins); + + if (skipped % NUM_SKIPPED_TO_WARN == 0) { + pr_warn("not chained %ld origins\n", skipped); + dump_stack(); + kmsan_print_origin(id); + } + return id; + } + depth++; + extra_bits = kmsan_extra_bits(depth, uaf); + + entries[0] = KMSAN_CHAIN_MAGIC_ORIGIN; + entries[1] = kmsan_save_stack_with_flags(GFP_ATOMIC, 0); + entries[2] = id; + return __stack_depot_save(entries, ARRAY_SIZE(entries), extra_bits, + GFP_ATOMIC, true); +} + +void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b, + u32 origin, bool checked) +{ + u64 address = (u64)addr; + void *shadow_start; + u32 *origin_start; + size_t pad = 0; + int i; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + shadow_start = kmsan_get_metadata(addr, KMSAN_META_SHADOW); + if (!shadow_start) { + /* + * kmsan_metadata_is_contiguous() is true, so either all shadow + * and origin pages are NULL, or all are non-NULL. + */ + if (checked) { + pr_err("%s: not memsetting %ld bytes starting at %px, because the shadow is NULL\n", + __func__, size, addr); + BUG(); + } + return; + } + __memset(shadow_start, b, size); + + if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) { + pad = address % KMSAN_ORIGIN_SIZE; + address -= pad; + size += pad; + } + size = ALIGN(size, KMSAN_ORIGIN_SIZE); + origin_start = + (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN); + + for (i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) + origin_start[i] = origin; +} + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr) +{ + struct page *page; + + if (!kmsan_internal_is_vmalloc_addr(vaddr) && + !kmsan_internal_is_module_addr(vaddr)) + return NULL; + page = vmalloc_to_page(vaddr); + if (pfn_valid(page_to_pfn(page))) + return page; + else + return NULL; +} + +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason) +{ + depot_stack_handle_t cur_origin = 0, new_origin = 0; + unsigned long addr64 = (unsigned long)addr; + depot_stack_handle_t *origin = NULL; + unsigned char *shadow = NULL; + int cur_off_start = -1; + int i, chunk_size; + size_t pos = 0; + + if (!size) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + while (pos < size) { + chunk_size = min(size - pos, + PAGE_SIZE - ((addr64 + pos) % PAGE_SIZE)); + shadow = kmsan_get_metadata((void *)(addr64 + pos), + KMSAN_META_SHADOW); + if (!shadow) { + /* + * This page is untracked. If there were uninitialized + * bytes before, report them. + */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos - 1, user_addr, + reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + pos += chunk_size; + continue; + } + for (i = 0; i < chunk_size; i++) { + if (!shadow[i]) { + /* + * This byte is unpoisoned. If there were + * poisoned bytes before, report them. 
+ */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + continue; + } + origin = kmsan_get_metadata((void *)(addr64 + pos + i), + KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin); + new_origin = *origin; + /* + * Encountered new origin - report the previous + * uninitialized range. + */ + if (cur_origin != new_origin) { + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = new_origin; + cur_off_start = pos + i; + } + } + pos += chunk_size; + } + KMSAN_WARN_ON(pos != size); + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, cur_off_start, pos - 1, + user_addr, reason); + kmsan_leave_runtime(); + } +} + +bool kmsan_metadata_is_contiguous(void *addr, size_t size) +{ + char *cur_shadow = NULL, *next_shadow = NULL, *cur_origin = NULL, + *next_origin = NULL; + u64 cur_addr = (u64)addr, next_addr = cur_addr + PAGE_SIZE; + depot_stack_handle_t *origin_p; + bool all_untracked = false; + + if (!size) + return true; + + /* The whole range belongs to the same page. */ + if (ALIGN_DOWN(cur_addr + size - 1, PAGE_SIZE) == + ALIGN_DOWN(cur_addr, PAGE_SIZE)) + return true; + + cur_shadow = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ false); + if (!cur_shadow) + all_untracked = true; + cur_origin = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ true); + if (all_untracked && cur_origin) + goto report; + + for (; next_addr < (u64)addr + size; + cur_addr = next_addr, cur_shadow = next_shadow, + cur_origin = next_origin, next_addr += PAGE_SIZE) { + next_shadow = kmsan_get_metadata((void *)next_addr, false); + next_origin = kmsan_get_metadata((void *)next_addr, true); + if (all_untracked) { + if (next_shadow || next_origin) + goto report; + if (!next_shadow && !next_origin) + continue; + } + if (((u64)cur_shadow == ((u64)next_shadow - PAGE_SIZE)) && + ((u64)cur_origin == ((u64)next_origin - PAGE_SIZE))) + continue; + goto report; + } + return true; + +report: + pr_err("%s: attempting to access two shadow page ranges.\n", __func__); + pr_err("Access of size %ld at %px.\n", size, addr); + pr_err("Addresses belonging to different ranges: %px and %px\n", + (void *)cur_addr, (void *)next_addr); + pr_err("page[0].shadow: %px, page[1].shadow: %px\n", cur_shadow, + next_shadow); + pr_err("page[0].origin: %px, page[1].origin: %px\n", cur_origin, + next_origin); + origin_p = kmsan_get_metadata(addr, KMSAN_META_ORIGIN); + if (origin_p) { + pr_err("Origin: %08x\n", *origin_p); + kmsan_print_origin(*origin_p); + } else { + pr_err("Origin: unavailable\n"); + } + return false; +} + +bool kmsan_internal_is_module_addr(void *vaddr) +{ + return ((u64)vaddr >= MODULES_VADDR) && ((u64)vaddr < MODULES_END); +} + +bool kmsan_internal_is_vmalloc_addr(void *addr) +{ + return ((u64)addr >= VMALLOC_START) && ((u64)addr < VMALLOC_END); +} diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c new file mode 100644 index 0000000000000..4ac62fa67a02a --- /dev/null +++ b/mm/kmsan/hooks.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for kernel subsystems. + * + * These functions handle creation of KMSAN metadata for memory allocations. 
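 *
 * A minimal usage sketch of the public helpers declared in
 * <linux/kmsan-checks.h> (buffer and length names are illustrative only):
 *
 *   kmsan_unpoison_memory(buf, len);            // mark buf as initialized
 *   kmsan_check_memory(buf, len);               // report any still-poisoned bytes
 *   kmsan_poison_memory(buf, len, GFP_KERNEL);  // mark buf uninitialized again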
+ * + * Copyright (C) 2018-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include + +#include "../internal.h" +#include "../slab.h" +#include "kmsan.h" + +/* + * Instrumented functions shouldn't be called under + * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to + * skipping effects of functions like memset() inside instrumented code. + */ + +/* Functions from kmsan-checks.h follow. */ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_poison_memory((void *)address, size, flags, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_poison_memory); + +void kmsan_unpoison_memory(const void *address, size_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_unpoison_memory((void *)address, size, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_unpoison_memory); + +void kmsan_check_memory(const void *addr, size_t size) +{ + if (!kmsan_enabled) + return; + return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); +} +EXPORT_SYMBOL(kmsan_check_memory); diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c new file mode 100644 index 0000000000000..fe062d123a76f --- /dev/null +++ b/mm/kmsan/instrumentation.c @@ -0,0 +1,267 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN compiler API. + * + * This file implements __msan_XXX hooks that Clang inserts into the code + * compiled with -fsanitize=kernel-memory. + * See Documentation/dev-tools/kmsan.rst for more information on how KMSAN + * instrumentation works. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" +#include +#include +#include + +static inline bool is_bad_asm_addr(void *addr, uintptr_t size, bool is_store) +{ + if ((u64)addr < TASK_SIZE) + return true; + if (!kmsan_get_metadata(addr, KMSAN_META_SHADOW)) + return true; + return false; +} + +static inline struct shadow_origin_ptr +get_shadow_origin_ptr(void *addr, u64 size, bool store) +{ + unsigned long ua_flags = user_access_save(); + struct shadow_origin_ptr ret; + + ret = kmsan_get_shadow_origin_ptr(addr, size, store); + user_access_restore(ua_flags); + return ret; +} + +/* Get shadow and origin pointers for a memory load with non-standard size. */ +struct shadow_origin_ptr __msan_metadata_ptr_for_load_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ false); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_load_n); + +/* Get shadow and origin pointers for a memory store with non-standard size. */ +struct shadow_origin_ptr __msan_metadata_ptr_for_store_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ true); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_store_n); + +/* + * Declare functions that obtain shadow/origin pointers for loads and stores + * with fixed size. 
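 *
 * As an illustration (simplified, not literal compiler output): for a
 * 4-byte instrumented load of *p, Clang emits roughly
 *
 *   struct shadow_origin_ptr sop = __msan_metadata_ptr_for_load_4(p);
 *
 * and then propagates the shadow stored at sop.shadow, with sop.origin
 * recording where an uninitialized value, if any, came from.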
+ */ +#define DECLARE_METADATA_PTR_GETTER(size) \ + struct shadow_origin_ptr __msan_metadata_ptr_for_load_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ false); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_load_##size); \ + struct shadow_origin_ptr __msan_metadata_ptr_for_store_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ true); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_store_##size) + +DECLARE_METADATA_PTR_GETTER(1); +DECLARE_METADATA_PTR_GETTER(2); +DECLARE_METADATA_PTR_GETTER(4); +DECLARE_METADATA_PTR_GETTER(8); + +/* + * Handle a memory store performed by inline assembly. KMSAN conservatively + * attempts to unpoison the outputs of asm() directives to prevent false + * positives caused by missed stores. + */ +void __msan_instrument_asm_store(void *addr, uintptr_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + /* + * Most of the accesses are below 32 bytes. The two exceptions so far + * are clwb() (64 bytes) and FPU state (512 bytes). + * It's unlikely that the assembly will touch more than 512 bytes. + */ + if (size > 512) { + WARN_ONCE(1, "assembly store size too big: %ld\n", size); + size = 8; + } + if (is_bad_asm_addr(addr, size, /*is_store*/ true)) { + user_access_restore(ua_flags); + return; + } + kmsan_enter_runtime(); + /* Unpoisoning the memory on best effort. */ + kmsan_internal_unpoison_memory(addr, size, /*checked*/ false); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_instrument_asm_store); + +/* Handle llvm.memmove intrinsic. */ +void *__msan_memmove(void *dst, const void *src, uintptr_t n) +{ + void *result; + + result = __memmove(dst, src, n); + if (!n) + /* Some people call memmove() with zero length. */ + return result; + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_internal_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memmove); + +/* Handle llvm.memcpy intrinsic. */ +void *__msan_memcpy(void *dst, const void *src, uintptr_t n) +{ + void *result; + + result = __memcpy(dst, src, n); + if (!n) + /* Some people call memcpy() with zero length. */ + return result; + + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + /* Using memmove instead of memcpy doesn't affect correctness. */ + kmsan_internal_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memcpy); + +/* Handle llvm.memset intrinsic. */ +void *__msan_memset(void *dst, int c, uintptr_t n) +{ + void *result; + + result = __memset(dst, c, n); + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_enter_runtime(); + /* + * Clang doesn't pass parameter metadata here, so it is impossible to + * use shadow of @c to set up the shadow for @dst. + */ + kmsan_internal_unpoison_memory(dst, n, /*checked*/ false); + kmsan_leave_runtime(); + + return result; +} +EXPORT_SYMBOL(__msan_memset); + +/* + * Create a new origin from an old one. This is done when storing an + * uninitialized value to memory. When reporting an error, KMSAN unrolls and + * prints the whole chain of stores that preceded the use of this value. + */ +depot_stack_handle_t __msan_chain_origin(depot_stack_handle_t origin) +{ + depot_stack_handle_t ret = 0; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return ret; + + ua_flags = user_access_save(); + + /* Creating new origins may allocate memory. 
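 * (Entering the runtime below keeps instrumented code reached from the
 * stack depot allocation from recursing back into these hooks; see
 * kmsan_in_runtime() in kmsan.h.)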
*/ + kmsan_enter_runtime(); + ret = kmsan_internal_chain_origin(origin); + kmsan_leave_runtime(); + user_access_restore(ua_flags); + return ret; +} +EXPORT_SYMBOL(__msan_chain_origin); + +/* Poison a local variable when entering a function. */ +void __msan_poison_alloca(void *address, uintptr_t size, char *descr) +{ + depot_stack_handle_t handle; + unsigned long entries[4]; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + entries[0] = KMSAN_ALLOCA_MAGIC_ORIGIN; + entries[1] = (u64)descr; + entries[2] = (u64)__builtin_return_address(0); + /* + * With frame pointers enabled, it is possible to quickly fetch the + * second frame of the caller stack without calling the unwinder. + * Without them, simply do not bother. + */ + if (IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER)) + entries[3] = (u64)__builtin_return_address(1); + else + entries[3] = 0; + + /* stack_depot_save() may allocate memory. */ + kmsan_enter_runtime(); + handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + kmsan_leave_runtime(); + + kmsan_internal_set_shadow_origin(address, size, -1, handle, + /*checked*/ true); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_poison_alloca); + +/* Unpoison a local variable. */ +void __msan_unpoison_alloca(void *address, uintptr_t size) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + kmsan_enter_runtime(); + kmsan_internal_unpoison_memory(address, size, /*checked*/ true); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_unpoison_alloca); + +/* + * Report that an uninitialized value with the given origin was used in a way + * that constituted undefined behavior. + */ +void __msan_warning(u32 origin) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_report(origin, /*address*/ 0, /*size*/ 0, + /*off_first*/ 0, /*off_last*/ 0, /*user_addr*/ 0, + REASON_ANY); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_warning); + +/* + * At the beginning of an instrumented function, obtain the pointer to + * `struct kmsan_context_state` holding the metadata for function parameters. + */ +struct kmsan_context_state *__msan_get_context_state(void) +{ + return &kmsan_get_context()->cstate; +} +EXPORT_SYMBOL(__msan_get_context_state); diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h new file mode 100644 index 0000000000000..bfe38789950a6 --- /dev/null +++ b/mm/kmsan/kmsan.h @@ -0,0 +1,183 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Functions used by the KMSAN runtime. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef __MM_KMSAN_KMSAN_H +#define __MM_KMSAN_KMSAN_H + +#include +#include +#include +#include +#include +#include +#include +#include + +#define KMSAN_ALLOCA_MAGIC_ORIGIN 0xabcd0100 +#define KMSAN_CHAIN_MAGIC_ORIGIN 0xabcd0200 + +#define KMSAN_POISON_NOCHECK 0x0 +#define KMSAN_POISON_CHECK 0x1 +#define KMSAN_POISON_FREE 0x2 + +#define KMSAN_ORIGIN_SIZE 4 + +#define KMSAN_STACK_DEPTH 64 + +#define KMSAN_META_SHADOW (false) +#define KMSAN_META_ORIGIN (true) + +extern bool kmsan_enabled; +extern int panic_on_kmsan; + +/* + * KMSAN performs a lot of consistency checks that are currently enabled by + * default. BUG_ON is normally discouraged in the kernel, unless used for + * debugging, but KMSAN itself is a debugging tool, so it makes little sense to + * recover if something goes wrong. 
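 *
 * Typical use mirrors WARN_ON(), e.g. (as used elsewhere in the runtime):
 *
 *   KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size));
 *
 * except that a failed check also disables KMSAN entirely, and BUGs if
 * kmsan.panic is set.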
+ */ +#define KMSAN_WARN_ON(cond) \ + ({ \ + const bool __cond = WARN_ON(cond); \ + if (unlikely(__cond)) { \ + WRITE_ONCE(kmsan_enabled, false); \ + if (panic_on_kmsan) { \ + /* Can't call panic() here because */ \ + /* of uaccess checks.*/ \ + BUG(); \ + } \ + } \ + __cond; \ + }) + +/* + * A pair of metadata pointers to be returned by the instrumentation functions. + */ +struct shadow_origin_ptr { + void *shadow, *origin; +}; + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, + bool store); +void *kmsan_get_metadata(void *addr, bool is_origin); + +enum kmsan_bug_reason { + REASON_ANY, + REASON_COPY_TO_USER, + REASON_SUBMIT_URB, +}; + +void kmsan_print_origin(depot_stack_handle_t origin); + +/** + * kmsan_report() - Report a use of uninitialized value. + * @origin: Stack ID of the uninitialized value. + * @address: Address at which the memory access happens. + * @size: Memory access size. + * @off_first: Offset (from @address) of the first byte to be reported. + * @off_last: Offset (from @address) of the last byte to be reported. + * @user_addr: When non-NULL, denotes the userspace address to which the kernel + * is leaking data. + * @reason: Error type from enum kmsan_bug_reason. + * + * kmsan_report() prints an error message for a consequent group of bytes + * sharing the same origin. If an uninitialized value is used in a comparison, + * this function is called once without specifying the addresses. When checking + * a memory range, KMSAN may call kmsan_report() multiple times with the same + * @address, @size, @user_addr and @reason, but different @off_first and + * @off_last corresponding to different @origin values. + */ +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason); + +DECLARE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +static __always_inline struct kmsan_ctx *kmsan_get_context(void) +{ + return in_task() ? ¤t->kmsan_ctx : raw_cpu_ptr(&kmsan_percpu_ctx); +} + +/* + * When a compiler hook is invoked, it may make a call to instrumented code + * and eventually call itself recursively. To avoid that, we protect the + * runtime entry points with kmsan_enter_runtime()/kmsan_leave_runtime() and + * exit the hook if kmsan_in_runtime() is true. + */ + +static __always_inline bool kmsan_in_runtime(void) +{ + if ((hardirq_count() >> HARDIRQ_SHIFT) > 1) + return true; + return kmsan_get_context()->kmsan_in_runtime; +} + +static __always_inline void kmsan_enter_runtime(void) +{ + struct kmsan_ctx *ctx; + + ctx = kmsan_get_context(); + KMSAN_WARN_ON(ctx->kmsan_in_runtime++); +} + +static __always_inline void kmsan_leave_runtime(void) +{ + struct kmsan_ctx *ctx = kmsan_get_context(); + + KMSAN_WARN_ON(--ctx->kmsan_in_runtime); +} + +depot_stack_handle_t kmsan_save_stack(void); +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra_bits); + +/* + * Pack and unpack the origin chain depth and UAF flag to/from the extra bits + * provided by the stack depot. + * The UAF flag is stored in the lowest bit, followed by the depth in the upper + * bits. + * set_dsh_extra_bits() is responsible for clamping the value. 
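 *
 * Worked example (values chosen for illustration): an origin chained three
 * times from a use-after-free access packs as kmsan_extra_bits(3, true) ==
 * (3 << 1) | 1 == 7, and the inverse helpers recover
 * kmsan_depth_from_eb(7) == 3 and kmsan_uaf_from_eb(7) == true.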
+ */ +static __always_inline unsigned int kmsan_extra_bits(unsigned int depth, + bool uaf) +{ + return (depth << 1) | uaf; +} + +static __always_inline bool kmsan_uaf_from_eb(unsigned int extra_bits) +{ + return extra_bits & 1; +} + +static __always_inline unsigned int kmsan_depth_from_eb(unsigned int extra_bits) +{ + return extra_bits >> 1; +} + +/* + * kmsan_internal_ functions are supposed to be very simple and not require the + * kmsan_in_runtime() checks. + */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n); +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags); +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked); +void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, + u32 origin, bool checked); +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); + +bool kmsan_metadata_is_contiguous(void *addr, size_t size); +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason); +bool kmsan_internal_is_module_addr(void *vaddr); +bool kmsan_internal_is_vmalloc_addr(void *addr); + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); + +#endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c new file mode 100644 index 0000000000000..59c9e7b979423 --- /dev/null +++ b/mm/kmsan/report.c @@ -0,0 +1,211 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN error reporting routines. + * + * Copyright (C) 2019-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include + +#include "kmsan.h" + +static DEFINE_SPINLOCK(kmsan_report_lock); +#define DESCR_SIZE 128 +/* Protected by kmsan_report_lock */ +static char report_local_descr[DESCR_SIZE]; +int panic_on_kmsan __read_mostly; + +#ifdef MODULE_PARAM_PREFIX +#undef MODULE_PARAM_PREFIX +#endif +#define MODULE_PARAM_PREFIX "kmsan." +module_param_named(panic, panic_on_kmsan, int, 0); + +/* + * Skip internal KMSAN frames. + */ +static int get_stack_skipnr(const unsigned long stack_entries[], + int num_entries) +{ + int len, skip; + char buf[64]; + + for (skip = 0; skip < num_entries; ++skip) { + len = scnprintf(buf, sizeof(buf), "%ps", + (void *)stack_entries[skip]); + + /* Never show __msan_* or kmsan_* functions. */ + if ((strnstr(buf, "__msan_", len) == buf) || + (strnstr(buf, "kmsan_", len) == buf)) + continue; + + /* + * No match for runtime functions -- @skip entries to skip to + * get to first frame of interest. + */ + break; + } + + return skip; +} + +/* + * Currently the descriptions of locals generated by Clang look as follows: + * ----local_name@function_name + * We want to print only the name of the local, as other information in that + * description can be confusing. + * The meaningful part of the description is copied to a global buffer to avoid + * allocating memory. 
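 *
 * For example, a (hypothetical) descriptor "----buf@foo_handler" is
 * printed as just "buf".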
+ */ +static char *pretty_descr(char *descr) +{ + int i, pos = 0, len = strlen(descr); + + for (i = 0; i < len; i++) { + if (descr[i] == '@') + break; + if (descr[i] == '-') + continue; + report_local_descr[pos] = descr[i]; + if (pos + 1 == DESCR_SIZE) + break; + pos++; + } + report_local_descr[pos] = 0; + return report_local_descr; +} + +void kmsan_print_origin(depot_stack_handle_t origin) +{ + unsigned long *entries = NULL, *chained_entries = NULL; + unsigned int nr_entries, chained_nr_entries, skipnr; + void *pc1 = NULL, *pc2 = NULL; + depot_stack_handle_t head; + unsigned long magic; + char *descr = NULL; + + if (!origin) + return; + + while (true) { + nr_entries = stack_depot_fetch(origin, &entries); + magic = nr_entries ? entries[0] : 0; + if ((nr_entries == 4) && (magic == KMSAN_ALLOCA_MAGIC_ORIGIN)) { + descr = (char *)entries[1]; + pc1 = (void *)entries[2]; + pc2 = (void *)entries[3]; + pr_err("Local variable %s created at:\n", + pretty_descr(descr)); + if (pc1) + pr_err(" %pS\n", pc1); + if (pc2) + pr_err(" %pS\n", pc2); + break; + } + if ((nr_entries == 3) && (magic == KMSAN_CHAIN_MAGIC_ORIGIN)) { + head = entries[1]; + origin = entries[2]; + pr_err("Uninit was stored to memory at:\n"); + chained_nr_entries = + stack_depot_fetch(head, &chained_entries); + kmsan_internal_unpoison_memory( + chained_entries, + chained_nr_entries * sizeof(*chained_entries), + /*checked*/ false); + skipnr = get_stack_skipnr(chained_entries, + chained_nr_entries); + stack_trace_print(chained_entries + skipnr, + chained_nr_entries - skipnr, 0); + pr_err("\n"); + continue; + } + pr_err("Uninit was created at:\n"); + if (nr_entries) { + skipnr = get_stack_skipnr(entries, nr_entries); + stack_trace_print(entries + skipnr, nr_entries - skipnr, + 0); + } else { + pr_err("(stack is not available)\n"); + } + break; + } +} + +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason) +{ + unsigned long stack_entries[KMSAN_STACK_DEPTH]; + int num_stack_entries, skipnr; + char *bug_type = NULL; + unsigned long flags, ua_flags; + bool is_uaf; + + if (!kmsan_enabled) + return; + if (!current->kmsan_ctx.allow_reporting) + return; + if (!origin) + return; + + current->kmsan_ctx.allow_reporting = false; + ua_flags = user_access_save(); + spin_lock_irqsave(&kmsan_report_lock, flags); + pr_err("=====================================================\n"); + is_uaf = kmsan_uaf_from_eb(stack_depot_get_extra_bits(origin)); + switch (reason) { + case REASON_ANY: + bug_type = is_uaf ? "use-after-free" : "uninit-value"; + break; + case REASON_COPY_TO_USER: + bug_type = is_uaf ? "kernel-infoleak-after-free" : + "kernel-infoleak"; + break; + case REASON_SUBMIT_URB: + bug_type = is_uaf ? 
"kernel-usb-infoleak-after-free" : + "kernel-usb-infoleak"; + break; + } + + num_stack_entries = + stack_trace_save(stack_entries, KMSAN_STACK_DEPTH, 1); + skipnr = get_stack_skipnr(stack_entries, num_stack_entries); + + pr_err("BUG: KMSAN: %s in %pS\n", + bug_type, (void *)stack_entries[skipnr]); + stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, + 0); + pr_err("\n"); + + kmsan_print_origin(origin); + + if (size) { + pr_err("\n"); + if (off_first == off_last) + pr_err("Byte %d of %d is uninitialized\n", off_first, + size); + else + pr_err("Bytes %d-%d of %d are uninitialized\n", + off_first, off_last, size); + } + if (address) + pr_err("Memory access of size %d starts at %px\n", size, + address); + if (user_addr && reason == REASON_COPY_TO_USER) + pr_err("Data copied to user address %px\n", user_addr); + pr_err("\n"); + dump_stack_print_info(KERN_ERR); + pr_err("=====================================================\n"); + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + spin_unlock_irqrestore(&kmsan_report_lock, flags); + if (panic_on_kmsan) + panic("kmsan.panic set ...\n"); + user_access_restore(ua_flags); + current->kmsan_ctx.allow_reporting = true; +} diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c new file mode 100644 index 0000000000000..de58cfbc55b9d --- /dev/null +++ b/mm/kmsan/shadow.c @@ -0,0 +1,186 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN shadow implementation. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../internal.h" +#include "kmsan.h" + +#define shadow_page_for(page) ((page)->kmsan_shadow) + +#define origin_page_for(page) ((page)->kmsan_origin) + +static void *shadow_ptr_for(struct page *page) +{ + return page_address(shadow_page_for(page)); +} + +static void *origin_ptr_for(struct page *page) +{ + return page_address(origin_page_for(page)); +} + +static bool page_has_metadata(struct page *page) +{ + return shadow_page_for(page) && origin_page_for(page); +} + +static void set_no_shadow_origin_page(struct page *page) +{ + shadow_page_for(page) = NULL; + origin_page_for(page) = NULL; +} + +/* + * Dummy load and store pages to be used when the real metadata is unavailable. + * There are separate pages for loads and stores, so that every load returns a + * zero, and every store doesn't affect other loads. + */ +static char dummy_load_page[PAGE_SIZE] __aligned(PAGE_SIZE); +static char dummy_store_page[PAGE_SIZE] __aligned(PAGE_SIZE); + +/* + * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented version. + */ +static int kmsan_phys_addr_valid(unsigned long addr) +{ + if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT)) + return !(addr >> boot_cpu_data.x86_phys_bits); + else + return 1; +} + +/* + * Taken from arch/x86/mm/physaddr.c to avoid using an instrumented version. 
+ */ +static bool kmsan_virt_addr_valid(void *addr) +{ + unsigned long x = (unsigned long)addr; + unsigned long y = x - __START_KERNEL_map; + + /* use the carry flag to determine if x was < __START_KERNEL_map */ + if (unlikely(x > y)) { + x = y + phys_base; + + if (y >= KERNEL_IMAGE_SIZE) + return false; + } else { + x = y + (__START_KERNEL_map - PAGE_OFFSET); + + /* carry flag will be set if starting x was >= PAGE_OFFSET */ + if ((x > y) || !kmsan_phys_addr_valid(x)) + return false; + } + + return pfn_valid(x >> PAGE_SHIFT); +} + +static unsigned long vmalloc_meta(void *addr, bool is_origin) +{ + unsigned long addr64 = (unsigned long)addr, off; + + KMSAN_WARN_ON(is_origin && !IS_ALIGNED(addr64, KMSAN_ORIGIN_SIZE)); + if (kmsan_internal_is_vmalloc_addr(addr)) { + off = addr64 - VMALLOC_START; + return off + (is_origin ? KMSAN_VMALLOC_ORIGIN_START : + KMSAN_VMALLOC_SHADOW_START); + } + if (kmsan_internal_is_module_addr(addr)) { + off = addr64 - MODULES_VADDR; + return off + (is_origin ? KMSAN_MODULES_ORIGIN_START : + KMSAN_MODULES_SHADOW_START); + } + return 0; +} + +static struct page *virt_to_page_or_null(void *vaddr) +{ + if (kmsan_virt_addr_valid(vaddr)) + return virt_to_page(vaddr); + else + return NULL; +} + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *address, u64 size, + bool store) +{ + struct shadow_origin_ptr ret; + void *shadow; + + /* + * Even if we redirect this memory access to the dummy page, it will + * go out of bounds. + */ + KMSAN_WARN_ON(size > PAGE_SIZE); + + if (!kmsan_enabled || kmsan_in_runtime()) + goto return_dummy; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(address, size)); + shadow = kmsan_get_metadata(address, KMSAN_META_SHADOW); + if (!shadow) + goto return_dummy; + + ret.shadow = shadow; + ret.origin = kmsan_get_metadata(address, KMSAN_META_ORIGIN); + return ret; + +return_dummy: + if (store) { + /* Ignore this store. */ + ret.shadow = dummy_store_page; + ret.origin = dummy_store_page; + } else { + /* This load will return zero. */ + ret.shadow = dummy_load_page; + ret.origin = dummy_load_page; + } + return ret; +} + +/* + * Obtain the shadow or origin pointer for the given address, or NULL if there's + * none. The caller must check the return value for being non-NULL if needed. + * The return value of this function should not depend on whether we're in the + * runtime or not. + */ +void *kmsan_get_metadata(void *address, bool is_origin) +{ + u64 addr = (u64)address, pad, off; + struct page *page; + void *ret; + + if (is_origin && !IS_ALIGNED(addr, KMSAN_ORIGIN_SIZE)) { + pad = addr % KMSAN_ORIGIN_SIZE; + addr -= pad; + } + address = (void *)addr; + if (kmsan_internal_is_vmalloc_addr(address) || + kmsan_internal_is_module_addr(address)) + return (void *)vmalloc_meta(address, is_origin); + + page = virt_to_page_or_null(address); + if (!page) + return NULL; + if (!page_has_metadata(page)) + return NULL; + off = addr % PAGE_SIZE; + + ret = (is_origin ? 
origin_ptr_for(page) : shadow_ptr_for(page)) + off; + return ret; +} diff --git a/scripts/Makefile.kmsan b/scripts/Makefile.kmsan new file mode 100644 index 0000000000000..9793591f9855c --- /dev/null +++ b/scripts/Makefile.kmsan @@ -0,0 +1 @@ +export CFLAGS_KMSAN := -fsanitize=kernel-memory diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index 79be57fdd32a4..a96cf2a5d1c09 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -162,6 +162,15 @@ _c_flags += $(if $(patsubst n%,, \ endif endif +ifeq ($(CONFIG_KMSAN),y) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_SANITIZE_$(basetarget).o)$(KMSAN_SANITIZE)y), \ + $(CFLAGS_KMSAN)) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_ENABLE_CHECKS_$(basetarget).o)$(KMSAN_ENABLE_CHECKS)y), \ + , -mllvm -msan-disable-checks=1) +endif + ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ $(UBSAN_SANITIZE_$(basetarget).o)$(UBSAN_SANITIZE)$(CONFIG_UBSAN_SANITIZE_ALL)), \ From patchwork Tue Mar 29 12:39:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794764 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7606EC433F5 for ; Tue, 29 Mar 2022 12:41:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 850D08D0017; Tue, 29 Mar 2022 08:41:09 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8269B8D0002; Tue, 29 Mar 2022 08:41:09 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6F07C8D0017; Tue, 29 Mar 2022 08:41:09 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 5F7CF8D0002 for ; Tue, 29 Mar 2022 08:41:09 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id 3D2BA81531 for ; Tue, 29 Mar 2022 12:41:09 +0000 (UTC) X-FDA: 79297383858.06.6989EE4 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf20.hostedemail.com (Postfix) with ESMTP id C20F71C000C for ; Tue, 29 Mar 2022 12:41:08 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id hg13-20020a1709072ccd00b006dfa484ec23so8155750ejc.8 for ; Tue, 29 Mar 2022 05:41:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=FHzHVDPYZMTi95ptCkFXfySj60BESZp+qG5LjWpEp7c=; b=sWW5tRocZxxBnyH5c+5bB9SrN8yBbtlo2fkEMJuym0RIhB42RkiLDf+Mt3MIvJwiyJ 7j0vQ9s012H9UhxvQGmO+OVLPDT4DOl4vdbuAKldQJWeBJMxzqghX+s69XDhPtPnFmSZ ln18qWHYMmGUWOUIknOM0Bl7/4fn3Ax931TwAMeVd4qkvriskK/u0kVEIOnyYuiKjd5B kjzHcwWRbWcIxcxK6/78moEPo1Rs5jsyltN2n8NJo34SDw3L3nYlRBVlVJYFqaAjGTWX /iZyyw30rrol8h6QI3lmyhTL/EJEuNjLY2dUKHqXYZa6vP163p1YbzZ9JXRJ4sZt3h97 4xOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=FHzHVDPYZMTi95ptCkFXfySj60BESZp+qG5LjWpEp7c=; b=x/9bwEHyFMgP91qG+jLBYBpINXTJiB51a3ZN6pVqjD1DJbKnn9b8aaiIyNYrO3gZjL GZyLThc+RK+WY4f5UbSvhUkLa2S6UIhr5T2xoyqdtmqQ9ai7zuv5s37x7PFiADh6eMkN HwOQ6OEEdbRL3p5sLWXdXVla33ZeZLMD3GyicMm2n0heo/sbHE5dpfOCYNrGsdJTVvRk 
VdvjgW0QID8hZfwmwLdCnttXR0in7EDohEIebqIltw962Trdx/bsWlXhn8mN4ABDyX5a bFo7wEczUOyJkdjR5SmqUTV7Ei9215KJbd78usxhQbQXYbjQ/jL4aRMnUJbrHaCssDkp atjA== X-Gm-Message-State: AOAM533zpcDM5DpDickm7nhSYj78HadpBAkQ0gdtYw3ctieuaGgzum6Z KNbUYoPID9KAlDxGpJp1aqCQF9nhg3A= X-Google-Smtp-Source: ABdhPJxIjE0+jxZBSo1pLmOTWSGay+gWDn7tl9m8Nl4sQMVHZ9JOsMlLrJDRqSA80Wdi3w3BwjBTXdlkc28= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:a1d3:b0:6d0:80ea:2fde with SMTP id bx19-20020a170906a1d300b006d080ea2fdemr33544446ejb.344.1648557667490; Tue, 29 Mar 2022 05:41:07 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:43 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-15-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 14/48] kmsan: implement kmsan_init(), initialize READ_ONCE_NOCHECK() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: C20F71C000C X-Rspam-User: Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=sWW5tRoc; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf20.hostedemail.com: domain of 3Y_5CYgYKCIQotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3Y_5CYgYKCIQotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com X-Stat-Signature: atpubf4ax96mnd5zjshpgqfadxpg8iee X-HE-Tag: 1648557668-15993 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmsan_init() is a macro that takes a possibly uninitialized value and returns an initialized value of the same type. It can be used e.g. in cases when a value comes from non-instrumented code to avoid false positive reports. In particular, we use kmsan_init() in READ_ONCE_NOCHECK() so that it returns initialized values. This helps defeat false positives e.g. from leftover stack contents accessed by stack unwinders. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Icd1260073666f944922f031bfb6762379ba1fa38 --- include/asm-generic/rwonce.h | 5 +++-- include/linux/kmsan-checks.h | 40 ++++++++++++++++++++++++++++++++++++ mm/kmsan/Makefile | 5 ++++- mm/kmsan/annotations.c | 28 +++++++++++++++++++++++++ 4 files changed, 75 insertions(+), 3 deletions(-) create mode 100644 mm/kmsan/annotations.c diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h index 8d0a6280e9824..7cf993af8e1ea 100644 --- a/include/asm-generic/rwonce.h +++ b/include/asm-generic/rwonce.h @@ -25,6 +25,7 @@ #include #include #include +#include /* * Yes, this permits 64-bit accesses on 32-bit architectures. 
These will @@ -69,14 +70,14 @@ unsigned long __read_once_word_nocheck(const void *addr) /* * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a - * word from memory atomically but without telling KASAN/KCSAN. This is + * word from memory atomically but without telling KASAN/KCSAN/KMSAN. This is * usually used by unwinding code when walking the stack of a running process. */ #define READ_ONCE_NOCHECK(x) \ ({ \ compiletime_assert(sizeof(x) == sizeof(unsigned long), \ "Unsupported access size for READ_ONCE_NOCHECK()."); \ - (typeof(x))__read_once_word_nocheck(&(x)); \ + kmsan_init((typeof(x))__read_once_word_nocheck(&(x))); \ }) static __no_kasan_or_inline diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h index a6522a0c28df9..ecd8336190fc0 100644 --- a/include/linux/kmsan-checks.h +++ b/include/linux/kmsan-checks.h @@ -14,6 +14,44 @@ #ifdef CONFIG_KMSAN +/* + * Helper functions that mark the return value initialized. + * See mm/kmsan/annotations.c. + */ +u8 kmsan_init_1(u8 value); +u16 kmsan_init_2(u16 value); +u32 kmsan_init_4(u32 value); +u64 kmsan_init_8(u64 value); + +static inline void *kmsan_init_ptr(void *ptr) +{ + return (void *)kmsan_init_8((u64)ptr); +} + +static inline char kmsan_init_char(char value) +{ + return (u8)kmsan_init_1((u8)value); +} + +#define __decl_kmsan_init_type(type, fn) unsigned type : fn, signed type : fn + +/** + * kmsan_init - Make the value initialized. + * @val: 1-, 2-, 4- or 8-byte integer that may be treated as uninitialized by + * KMSAN. + * + * Return: value of @val that KMSAN treats as initialized. + */ +#define kmsan_init(val) \ + ( \ + (typeof(val))(_Generic((val), \ + __decl_kmsan_init_type(char, kmsan_init_1), \ + __decl_kmsan_init_type(short, kmsan_init_2), \ + __decl_kmsan_init_type(int, kmsan_init_4), \ + __decl_kmsan_init_type(long, kmsan_init_8), \ + char : kmsan_init_char, \ + void * : kmsan_init_ptr)(val))) + /** * kmsan_poison_memory() - Mark the memory range as uninitialized. * @address: address to start with. @@ -48,6 +86,8 @@ void kmsan_check_memory(const void *address, size_t size); #else +#define kmsan_init(value) (value) + static inline void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index a80dde1de7048..73b705cbf75b9 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -1,9 +1,11 @@ -obj-y := core.o instrumentation.o hooks.o report.o shadow.o +obj-y := core.o instrumentation.o hooks.o report.o shadow.o annotations.o KMSAN_SANITIZE := n KCOV_INSTRUMENT := n UBSAN_SANITIZE := n +KMSAN_SANITIZE_kmsan_annotations.o := y + # Disable instrumentation of KMSAN runtime with other tools. CC_FLAGS_KMSAN_RUNTIME := -fno-stack-protector CC_FLAGS_KMSAN_RUNTIME += $(call cc-option,-fno-conserve-stack) @@ -11,6 +13,7 @@ CC_FLAGS_KMSAN_RUNTIME += -DDISABLE_BRANCH_PROFILING CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) +CFLAGS_annotations.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/annotations.c b/mm/kmsan/annotations.c new file mode 100644 index 0000000000000..8ccde90bcd12b --- /dev/null +++ b/mm/kmsan/annotations.c @@ -0,0 +1,28 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN annotations. + * + * The kmsan_init_SIZE functions reside in a separate translation unit to + * prevent inlining them. 
Clang may inline functions marked with + * __no_sanitize_memory attribute into functions without it, which effectively + * results in ignoring the attribute. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include + +#define DECLARE_KMSAN_INIT(size, t) \ + __no_sanitize_memory t kmsan_init_##size(t value) \ + { \ + return value; \ + } \ + EXPORT_SYMBOL(kmsan_init_##size) + +DECLARE_KMSAN_INIT(1, u8); +DECLARE_KMSAN_INIT(2, u16); +DECLARE_KMSAN_INIT(4, u32); +DECLARE_KMSAN_INIT(8, u64); From patchwork Tue Mar 29 12:39:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794765 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7254C433EF for ; Tue, 29 Mar 2022 12:41:12 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4814C8D0019; Tue, 29 Mar 2022 08:41:12 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 407498D0018; Tue, 29 Mar 2022 08:41:12 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 233AD8D0019; Tue, 29 Mar 2022 08:41:12 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0028.hostedemail.com [216.40.44.28]) by kanga.kvack.org (Postfix) with ESMTP id 127AD8D0018 for ; Tue, 29 Mar 2022 08:41:12 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B590FA2B01 for ; Tue, 29 Mar 2022 12:41:11 +0000 (UTC) X-FDA: 79297383942.29.79AD964 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf09.hostedemail.com (Postfix) with ESMTP id 37306140016 for ; Tue, 29 Mar 2022 12:41:11 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id z22-20020a50cd16000000b0041960ea8555so10961235edi.2 for ; Tue, 29 Mar 2022 05:41:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=sAExT7SSSN10XTsfQtAC8lmzod9gG+fvZFiYZO+nEYE=; b=D6F1SQpsqCx29Z1hyC0VZ9QPHGSIXqGIXoKTX1t41Ec2/BI7aZiEaJshNZSDnAZ7nS N0yqCJQ1PeJoGgtRjd8nBmpux8LP0+eBYvj27cf5rvJF2ttWc9OSvKWGhvOFaPjbxaN2 iLVvCLMm9/YCzNogYHden+UNUQUg7Gu8PKy6sCZn5PGrBo+vg4vbm5DyaR/Hu6lfTDxe bBZIR0eLro+O+0mwMGrZMyVEi8Yrq+V0gNiY+vIntbaE/O8ubQGR276mculXvjUvzEt3 F9sCA4kDsGFBoicnwuxq29jXZYbDMu4MZup8guVtOlSWws6VucaX5kmprRui40j9NH5o h+ug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=sAExT7SSSN10XTsfQtAC8lmzod9gG+fvZFiYZO+nEYE=; b=FSjETNxAqo1mSrhR6/iDImtl8fDQLPnJLmkLeOv21fTXRbWGgSTaDQ78SVt0W0+7hr R34SrTWDVMrmaAGEPae068f/efy+SXpGQ+BmEzyW241vetFWX7i6ylK0EHANohuupNMp nue4JQns28UOwkVbeToLJfeuZwNcVOluKpHxE6KQj6+OBlZFat1AESiBtUw0Irs4ch3g J9dqFlRqq37o9dzLL6Pv/RdG4XFXKFyVzLcdzAlBwl+PZGDyjIbBl2fjiMZ8AiwDm3tu rFg0l+rNT/I0PHq7eFNwG5utC9LnDgSMJcZ+XtvGXewyzVc0E4MrJ8TRkSraGyua9q1L 0HGw== X-Gm-Message-State: AOAM5317I1lJbhwuFDBu9tXoT8AkCQSGYkgv/A7sAuelAPTm/CTd/iHi BiHpTObv4gj/bnOVlbzhvM2wfL49e2w= X-Google-Smtp-Source: ABdhPJxiJ9oKFDmYGNh7Br69DkPJpOkn+XAOKfovDuKDd9aMeedhy/6xJEB6s/tleVFOHEB0WVg/Meo2NMo= X-Received: from glider.muc.corp.google.com 
([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:d69:b0:418:f7bd:b076 with SMTP id ec41-20020a0564020d6900b00418f7bdb076mr4306755edb.268.1648557669972; Tue, 29 Mar 2022 05:41:09 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:44 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-16-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 15/48] kmsan: disable instrumentation of unsupported common kernel code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: 6e1ahuoxjgdezhn8hk9fs81cdoezcayb X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 37306140016 Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=D6F1SQps; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf09.hostedemail.com: domain of 3Zf5CYgYKCIYqvsno1qyyqvo.mywvsx47-wwu5kmu.y1q@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3Zf5CYgYKCIYqvsno1qyyqvo.mywvsx47-wwu5kmu.y1q@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1648557671-981817 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: EFI stub cannot be linked with KMSAN runtime, so we disable instrumentation for it. Instrumenting kcov, stackdepot or lockdep leads to infinite recursion caused by instrumentation hooks calling instrumented code again. This patch was previously part of "kmsan: disable KMSAN instrumentation for certain kernel parts", but was split away per Mark Rutland's request. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I41ae706bd3474f074f6a870bfc3f0f90e9c720f7 --- drivers/firmware/efi/libstub/Makefile | 1 + kernel/Makefile | 1 + kernel/locking/Makefile | 3 ++- lib/Makefile | 1 + 4 files changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index d0537573501e9..81432d0c904b1 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -46,6 +46,7 @@ GCOV_PROFILE := n # Sanitizer runtimes are unavailable and cannot be linked here. 
KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n UBSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/kernel/Makefile b/kernel/Makefile index 56f4ee97f3284..80f6cfb60c020 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -39,6 +39,7 @@ KCOV_INSTRUMENT_kcov.o := n KASAN_SANITIZE_kcov.o := n KCSAN_SANITIZE_kcov.o := n UBSAN_SANITIZE_kcov.o := n +KMSAN_SANITIZE_kcov.o := n CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector # Don't instrument error handlers diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile index d51cabf28f382..ea925731fa40f 100644 --- a/kernel/locking/Makefile +++ b/kernel/locking/Makefile @@ -5,8 +5,9 @@ KCOV_INSTRUMENT := n obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o -# Avoid recursion lockdep -> KCSAN -> ... -> lockdep. +# Avoid recursion lockdep -> sanitizer -> ... -> lockdep. KCSAN_SANITIZE_lockdep.o := n +KMSAN_SANITIZE_lockdep.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE) diff --git a/lib/Makefile b/lib/Makefile index 300f569c626b0..0ac9b38ec172e 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -269,6 +269,7 @@ obj-$(CONFIG_IRQ_POLL) += irq_poll.o CFLAGS_stackdepot.o += -fno-builtin obj-$(CONFIG_STACKDEPOT) += stackdepot.o KASAN_SANITIZE_stackdepot.o := n +KMSAN_SANITIZE_stackdepot.o := n KCOV_INSTRUMENT_stackdepot.o := n obj-$(CONFIG_REF_TRACKER) += ref_tracker.o From patchwork Tue Mar 29 12:39:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794766 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4097EC433F5 for ; Tue, 29 Mar 2022 12:41:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id CBC368D0006; Tue, 29 Mar 2022 08:41:14 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C6DA98D0005; Tue, 29 Mar 2022 08:41:14 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id ABE398D0006; Tue, 29 Mar 2022 08:41:14 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 9BDA18D0005 for ; Tue, 29 Mar 2022 08:41:14 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 70FA5AC0 for ; Tue, 29 Mar 2022 12:41:14 +0000 (UTC) X-FDA: 79297384068.14.6F23C5E Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf04.hostedemail.com (Postfix) with ESMTP id DFFDF4001D for ; Tue, 29 Mar 2022 12:41:13 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id og28-20020a1709071ddc00b006dfb92d8e3fso8109630ejc.14 for ; Tue, 29 Mar 2022 05:41:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=17K480cFHZmjgZnEKs0kvaSf6Wt6+7fFqZhB6I4ek4M=; b=RSDNW7nZXrX14ECEyw7iYD8PmPG9kTQkYLbWQ7qMATA0/b6xXjIDXSHO98HP2EtZKs lhAoCw7gzmWs/rvYGDw/eTGXSRrJlJh6DUibcUhZyQhiQH6E63/DD0cNXhwbf8sHDm1W k8DlOA7/CM9UnHu1KQJYWBvxHNHY5VbXChFuhjmADE4ZsvBWU5u5hHrXpd+CNyWFkine L34oSIh5HCj7mFNxyr5ujACe/1SMjwIg4y0zKxsN8e+BwlG9OCcKTrKsNaEfTRS+Hsmn 
uKtH1oVpOBVYoMma60yhuXibPqsneRA9ZLArVfhZgEkVmZVR70UWp2VL4AZQRC8m6TIs gR4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=17K480cFHZmjgZnEKs0kvaSf6Wt6+7fFqZhB6I4ek4M=; b=Zstws8rtQffzpxkBxkaWjtxktI/eBxX3pOK4238Mi7UKNb/3vUTLoER8fxJXomzoZt RqeyZ70Y1KVTosl/CnR8umUsZT/sCxfUDVXyC/Un6CenPnn3TqkCGYE7/xf+i9hJKgNY 7uEa2QwZ4dGnvddsdnn44o77ST56LPsyKNtd8dhVt5zGnf+3Pf+r9xcsk/028KodJg9j rtvfpgNFk6eAkQxw6yMfxM6WtqY8GI3mwh+Rd61YU7lYInMnRXt4WxzYmAWFOHg1uAC5 Lk4uBIs2MWAxL+gtKYYh1oShww0Vizs0Y43oNzO3xBDrG/lmvRJiKFabaH0ZG2l0s3T5 RcJQ== X-Gm-Message-State: AOAM532DWvIqSeto+kbR6g2jz3RamBxBQi4GoIfIi3Xey+iSnmZSX3Qy uH8asjGx8XUUnzZ7MCC2ZnM6bdYY+S8= X-Google-Smtp-Source: ABdhPJw7IcXW/4bj+dfFYbcSuDH6uP+LYtDkdrlnTYVetr1ku4F+UEshV4bZLv/czt62R1ghVW3NtkuMjeM= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a50:baa1:0:b0:418:849a:c66a with SMTP id x30-20020a50baa1000000b00418849ac66amr4402991ede.234.1648557672349; Tue, 29 Mar 2022 05:41:12 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:45 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-17-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 16/48] MAINTAINERS: add entry for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=RSDNW7nZ; spf=pass (imf04.hostedemail.com: domain of 3aP5CYgYKCIktyvqr4t11tyr.p1zyv07A-zzx8npx.14t@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3aP5CYgYKCIktyvqr4t11tyr.p1zyv07A-zzx8npx.14t@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: efzxtd7uppo1rrye94od73zcm3xn3khy X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: DFFDF4001D X-HE-Tag: 1648557673-182237 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add entry for KMSAN maintainers/reviewers. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ic5836c2bceb6b63f71a60d3327d18af3aa3dab77 --- MAINTAINERS | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index cd0f68d4a34a6..4053523a1e890 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10721,6 +10721,18 @@ F: kernel/kmod.c F: lib/test_kmod.c F: tools/testing/selftests/kmod/ +KMSAN +M: Alexander Potapenko +R: Marco Elver +R: Dmitry Vyukov +L: kasan-dev@googlegroups.com +S: Maintained +F: Documentation/dev-tools/kmsan.rst +F: include/linux/kmsan*.h +F: lib/Kconfig.kmsan +F: mm/kmsan/ +F: scripts/Makefile.kmsan + KPROBES M: Naveen N. 
Rao M: Anil S Keshavamurthy From patchwork Tue Mar 29 12:39:46 2022 X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794767 Date: Tue, 29 Mar 2022 14:39:46 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-18-glider@google.com> References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject:
[PATCH v2 17/48] kmsan: mm: maintain KMSAN metadata for page operations From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: itwp1prhrb9kzesiah9k6abecw5qotzr Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=aLGX3+bt; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf14.hostedemail.com: domain of 3a_5CYgYKCIww1ytu7w44w1u.s421y3AD-220Bqs0.47w@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3a_5CYgYKCIww1ytu7w44w1u.s421y3AD-220Bqs0.47w@flex--glider.bounces.google.com X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 927C910000C X-HE-Tag: 1648557676-439696 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Insert KMSAN hooks that make the necessary bookkeeping changes: - poison page shadow and origins in alloc_pages()/free_page(); - clear page shadow and origins in clear_page(), copy_user_highpage(); - copy page metadata in copy_highpage(), wp_page_copy(); - handle vmap()/vunmap()/iounmap(); Signed-off-by: Alexander Potapenko --- v2: -- move page metadata hooks implementation here -- remove call to kmsan_memblock_free_pages() Link: https://linux-review.googlesource.com/id/I6d4f53a0e7eab46fa29f0348f3095d9f2e326850 --- arch/x86/include/asm/page_64.h | 13 ++++ arch/x86/mm/ioremap.c | 3 + include/linux/highmem.h | 3 + include/linux/kmsan.h | 123 +++++++++++++++++++++++++++++++++ mm/internal.h | 6 ++ mm/kmsan/hooks.c | 87 +++++++++++++++++++++++ mm/kmsan/shadow.c | 114 ++++++++++++++++++++++++++++++ mm/memory.c | 2 + mm/page_alloc.c | 11 +++ mm/vmalloc.c | 20 +++++- 10 files changed, 380 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index e9c86299b8351..36e270a8ea9a4 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -45,14 +45,27 @@ void clear_page_orig(void *page); void clear_page_rep(void *page); void clear_page_erms(void *page); +/* This is an assembly header, avoid including too much of kmsan.h */ +#ifdef CONFIG_KMSAN +void kmsan_unpoison_memory(const void *addr, size_t size); +#endif +__no_sanitize_memory static inline void clear_page(void *page) { +#ifdef CONFIG_KMSAN + /* alternative_call_2() changes @page. */ + void *page_copy = page; +#endif alternative_call_2(clear_page_orig, clear_page_rep, X86_FEATURE_REP_GOOD, clear_page_erms, X86_FEATURE_ERMS, "=D" (page), "0" (page) : "cc", "memory", "rax", "rcx"); +#ifdef CONFIG_KMSAN + /* Clear KMSAN shadow for the pages that have it. 
*/ + kmsan_unpoison_memory(page_copy, PAGE_SIZE); +#endif } void copy_page(void *to, void *from); diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 17a492c273069..0da8608778221 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -474,6 +475,8 @@ void iounmap(volatile void __iomem *addr) return; } + kmsan_iounmap_page_range((unsigned long)addr, + (unsigned long)addr + get_vm_area_size(p)); memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p)); /* Finally remove it */ diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 39bb9b47fa9cd..3e1898a44d7e3 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -277,6 +278,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from, vfrom = kmap_local_page(from); vto = kmap_local_page(to); copy_user_page(vto, vfrom, vaddr, to); + kmsan_unpoison_memory(page_address(to), PAGE_SIZE); kunmap_local(vto); kunmap_local(vfrom); } @@ -292,6 +294,7 @@ static inline void copy_highpage(struct page *to, struct page *from) vfrom = kmap_local_page(from); vto = kmap_local_page(to); copy_page(vto, vfrom); + kmsan_copy_page_meta(to, from); kunmap_local(vto); kunmap_local(vfrom); } diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 4e35f43eceaa9..da41850b46cbd 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -42,6 +42,129 @@ struct kmsan_ctx { bool allow_reporting; }; +/** + * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. + * @page: struct page pointer returned by alloc_pages(). + * @order: order of allocated struct page. + * @flags: GFP flags used by alloc_pages() + * + * KMSAN marks 1<<@order pages starting at @page as uninitialized, unless + * @flags contain __GFP_ZERO. + */ +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags); + +/** + * kmsan_free_page() - Notify KMSAN about a free_pages() call. + * @page: struct page pointer passed to free_pages(). + * @order: order of deallocated struct page. + * + * KMSAN marks freed memory as uninitialized. + */ +void kmsan_free_page(struct page *page, unsigned int order); + +/** + * kmsan_copy_page_meta() - Copy KMSAN metadata between two pages. + * @dst: destination page. + * @src: source page. + * + * KMSAN copies the contents of metadata pages for @src into the metadata pages + * for @dst. If @dst has no associated metadata pages, nothing happens. + * If @src has no associated metadata pages, @dst metadata pages are unpoisoned. + */ +void kmsan_copy_page_meta(struct page *dst, struct page *src); + +/** + * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. + * @start: start of vmapped range. + * @end: end of vmapped range. + * @prot: page protection flags used for vmap. + * @pages: array of pages. + * @page_shift: page_shift passed to vmap_range_noflush(). + * + * KMSAN maps shadow and origin pages of @pages into contiguous ranges in + * vmalloc metadata address range. + */ +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + +/** + * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap. + * @start: start of vunmapped range. + * @end: end of vunmapped range. + * + * KMSAN unmaps the contiguous metadata ranges created by + * kmsan_map_kernel_range_noflush(). 
+ */ +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end); + +/** + * kmsan_ioremap_page_range() - Notify KMSAN about a ioremap_page_range() call. + * @addr: range start. + * @end: range end. + * @phys_addr: physical range start. + * @prot: page protection flags used for ioremap_page_range(). + * @page_shift: page_shift argument passed to vmap_range_noflush(). + * + * KMSAN creates new metadata pages for the physical pages mapped into the + * virtual memory. + */ +void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift); + +/** + * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. + * @start: range start. + * @end: range end. + * + * KMSAN unmaps the metadata pages for the given range and, unlike for + * vunmap_page_range(), also deallocates them. + */ +void kmsan_iounmap_page_range(unsigned long start, unsigned long end); + +#else + +static inline int kmsan_alloc_page(struct page *page, unsigned int order, + gfp_t flags) +{ + return 0; +} + +static inline void kmsan_free_page(struct page *page, unsigned int order) +{ +} + +static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ +} + +static inline void kmsan_vmap_pages_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages, + unsigned int page_shift) +{ +} + +static inline void kmsan_vunmap_range_noflush(unsigned long start, + unsigned long end) +{ +} + +static inline void kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, + pgprot_t prot, + unsigned int page_shift) +{ +} + +static inline void kmsan_iounmap_page_range(unsigned long start, + unsigned long end) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/mm/internal.h b/mm/internal.h index d80300392a194..2d8ca0ae4774f 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -713,8 +713,14 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end, } #endif +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + void vunmap_range_noflush(unsigned long start, unsigned long end); +void __vunmap_range_noflush(unsigned long start, unsigned long end); + int numa_migrate_prep(struct page *page, struct vm_area_struct *vma, unsigned long addr, int page_nid, int *flags); diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 4ac62fa67a02a..5d886df57adca 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -26,6 +26,93 @@ * skipping effects of functions like memset() inside instrumented code. */ +static unsigned long vmalloc_shadow(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_SHADOW); +} + +static unsigned long vmalloc_origin(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_ORIGIN); +} + +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end) +{ + __vunmap_range_noflush(vmalloc_shadow(start), vmalloc_shadow(end)); + __vunmap_range_noflush(vmalloc_origin(start), vmalloc_origin(end)); + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); +} +EXPORT_SYMBOL(kmsan_vunmap_range_noflush); + +/* + * This function creates new shadow/origin pages for the physical pages mapped + * into the virtual memory. If those physical pages already had shadow/origin, + * those are ignored. 
+ */ +void kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift) +{ + gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO; + struct page *shadow, *origin; + unsigned long off = 0; + int i, nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + for (i = 0; i < nr; i++, off += PAGE_SIZE) { + shadow = alloc_pages(gfp_mask, 1); + origin = alloc_pages(gfp_mask, 1); + __vmap_pages_range_noflush( + vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow, + page_shift); + __vmap_pages_range_noflush( + vmalloc_origin(start + off), + vmalloc_origin(start + off + PAGE_SIZE), prot, &origin, + page_shift); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_ioremap_page_range); + +void kmsan_iounmap_page_range(unsigned long start, unsigned long end) +{ + unsigned long v_shadow, v_origin; + struct page *shadow, *origin; + int i, nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + v_shadow = (unsigned long)vmalloc_shadow(start); + v_origin = (unsigned long)vmalloc_origin(start); + for (i = 0; i < nr; i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) { + shadow = kmsan_vmalloc_to_page_or_null((void *)v_shadow); + origin = kmsan_vmalloc_to_page_or_null((void *)v_origin); + __vunmap_range_noflush(v_shadow, vmalloc_shadow(end)); + __vunmap_range_noflush(v_origin, vmalloc_origin(end)); + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_iounmap_page_range); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index de58cfbc55b9d..8fe6a5ed05e67 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -184,3 +184,117 @@ void *kmsan_get_metadata(void *address, bool is_origin) ret = (is_origin ? origin_ptr_for(page) : shadow_ptr_for(page)) + off; return ret; } + +void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + if (!dst || !page_has_metadata(dst)) + return; + if (!src || !page_has_metadata(src)) { + kmsan_internal_unpoison_memory(page_address(dst), PAGE_SIZE, + /*checked*/ false); + return; + } + + kmsan_enter_runtime(); + __memcpy(shadow_ptr_for(dst), shadow_ptr_for(src), PAGE_SIZE); + __memcpy(origin_ptr_for(dst), origin_ptr_for(src), PAGE_SIZE); + kmsan_leave_runtime(); +} + +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) +{ + bool initialized = (flags & __GFP_ZERO) || !kmsan_enabled; + struct page *shadow, *origin; + depot_stack_handle_t handle; + int pages = 1 << order; + int i; + + if (!page) + return; + + shadow = shadow_page_for(page); + origin = origin_page_for(page); + + if (initialized) { + __memset(page_address(shadow), 0, PAGE_SIZE * pages); + __memset(page_address(origin), 0, PAGE_SIZE * pages); + return; + } + + /* Zero pages allocated by the runtime should also be initialized. 
*/ + if (kmsan_in_runtime()) + return; + + __memset(page_address(shadow), -1, PAGE_SIZE * pages); + kmsan_enter_runtime(); + handle = kmsan_save_stack_with_flags(flags, /*extra_bits*/ 0); + kmsan_leave_runtime(); + /* + * Addresses are page-aligned, pages are contiguous, so it's ok + * to just fill the origin pages with |handle|. + */ + for (i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++) + ((depot_stack_handle_t *)page_address(origin))[i] = handle; +} + +void kmsan_free_page(struct page *page, unsigned int order) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(page_address(page), + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} + +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift) +{ + unsigned long shadow_start, origin_start, shadow_end, origin_end; + struct page **s_pages, **o_pages; + int nr, i, mapped; + + if (!kmsan_enabled) + return; + + shadow_start = vmalloc_meta((void *)start, KMSAN_META_SHADOW); + shadow_end = vmalloc_meta((void *)end, KMSAN_META_SHADOW); + if (!shadow_start) + return; + + nr = (end - start) / PAGE_SIZE; + s_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + o_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + if (!s_pages || !o_pages) + goto ret; + for (i = 0; i < nr; i++) { + s_pages[i] = shadow_page_for(pages[i]); + o_pages[i] = origin_page_for(pages[i]); + } + prot = __pgprot(pgprot_val(prot) | _PAGE_NX); + prot = PAGE_KERNEL; + + origin_start = vmalloc_meta((void *)start, KMSAN_META_ORIGIN); + origin_end = vmalloc_meta((void *)end, KMSAN_META_ORIGIN); + kmsan_enter_runtime(); + mapped = __vmap_pages_range_noflush(shadow_start, shadow_end, prot, + s_pages, page_shift); + KMSAN_WARN_ON(mapped); + mapped = __vmap_pages_range_noflush(origin_start, origin_end, prot, + o_pages, page_shift); + KMSAN_WARN_ON(mapped); + kmsan_leave_runtime(); + flush_tlb_kernel_range(shadow_start, shadow_end); + flush_tlb_kernel_range(origin_start, origin_end); + flush_cache_vmap(shadow_start, shadow_end); + flush_cache_vmap(origin_start, origin_end); + +ret: + kfree(s_pages); + kfree(o_pages); +} diff --git a/mm/memory.c b/mm/memory.c index c125c4969913a..7465eb43e6d3e 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -52,6 +52,7 @@ #include #include #include +#include #include #include #include @@ -3026,6 +3027,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) put_page(old_page); return 0; } + kmsan_copy_page_meta(new_page, old_page); } if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL)) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 3589febc6d319..98a066c0a9f63 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -1301,6 +1302,7 @@ static __always_inline bool free_pages_prepare(struct page *page, VM_BUG_ON_PAGE(PageTail(page), page); trace_mm_page_free(page, order); + kmsan_free_page(page, order); if (unlikely(PageHWPoison(page)) && !order) { /* @@ -3679,6 +3681,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone, /* * Allocate a page from the given zone. Use pcplists for order-0 allocations. */ + +/* + * Do not instrument rmqueue() with KMSAN. This function may call + * __msan_poison_alloca() through a call to set_pfnblock_flags_mask(). 
+ * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it + * may call rmqueue() again, which will result in a deadlock. + */ +__no_sanitize_memory static inline struct page *rmqueue(struct zone *preferred_zone, struct zone *zone, unsigned int order, @@ -5409,6 +5419,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, } trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype); + kmsan_alloc_page(page, order, alloc_gfp); return page; } diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 4165304d35471..7bcbf7a08597a 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -321,6 +321,9 @@ int ioremap_page_range(unsigned long addr, unsigned long end, err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot), ioremap_max_page_shift); flush_cache_vmap(addr, end); + if (!err) + kmsan_ioremap_page_range(addr, end, phys_addr, prot, + ioremap_max_page_shift); return err; } @@ -420,7 +423,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, * * This is an internal function only. Do not use outside mm/. */ -void vunmap_range_noflush(unsigned long start, unsigned long end) +void __vunmap_range_noflush(unsigned long start, unsigned long end) { unsigned long next; pgd_t *pgd; @@ -442,6 +445,12 @@ void vunmap_range_noflush(unsigned long start, unsigned long end) arch_sync_kernel_mappings(start, end); } +void vunmap_range_noflush(unsigned long start, unsigned long end) +{ + kmsan_vunmap_range_noflush(start, end); + __vunmap_range_noflush(start, end); +} + /** * vunmap_range - unmap kernel virtual addresses * @addr: start of the VM area to unmap @@ -576,7 +585,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end, * * This is an internal function only. Do not use outside mm/. 
*/ -int vmap_pages_range_noflush(unsigned long addr, unsigned long end, +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, pgprot_t prot, struct page **pages, unsigned int page_shift) { unsigned int i, nr = (end - addr) >> PAGE_SHIFT; @@ -602,6 +611,13 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end, return 0; } +int vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift); + return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift); +} + /** * vmap_pages_range - map pages to a kernel virtual address * @addr: start of the VM area to map From patchwork Tue Mar 29 12:39:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794768 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2155C433FE for ; Tue, 29 Mar 2022 12:41:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2F94D8D000B; Tue, 29 Mar 2022 08:41:20 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 280DE8D0001; Tue, 29 Mar 2022 08:41:20 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 123308D000B; Tue, 29 Mar 2022 08:41:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0189.hostedemail.com [216.40.44.189]) by kanga.kvack.org (Postfix) with ESMTP id F3C898D0001 for ; Tue, 29 Mar 2022 08:41:19 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id A525C8249980 for ; Tue, 29 Mar 2022 12:41:19 +0000 (UTC) X-FDA: 79297384278.19.4B5ED04 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf19.hostedemail.com (Postfix) with ESMTP id 3189D1A0009 for ; Tue, 29 Mar 2022 12:41:19 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id n4-20020a5099c4000000b00418ed58d92fso10976197edb.0 for ; Tue, 29 Mar 2022 05:41:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=eLTwq5fXtDdDgS/orJshG1NPMtxpnPMwecHlrlYjY5k=; b=YeLodpZfduGO15yqSmZY77L+R71ULPMnJ6yUkyXtKqSlSSA54mtn2FcyNxxJVc4T8n Ps9eMetHiUKmh6e62QxJb+qy1wCTBSB9PsJGwttP10YPFhLiqtO3KCvAT2VGmKImTn5h FOXZ/dkZtREgpO/E7GOkbwTdi+os2zsItYWb7iH5IdjoGExdSbceax2P1ziSXdr+hgpK OMqfkCr+biZ8QakcknP/V/sQQP0lnJKG1x7AXoh5aOPYwyaJWOwVekm8k6WsAVaSt6u9 +09yHcZhlZUPQeEMReQIf2GWG6+2oppKnWRZv+PgK+qDgEndnY4xeZLppveUBoHHTq28 oreA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=eLTwq5fXtDdDgS/orJshG1NPMtxpnPMwecHlrlYjY5k=; b=s0+TMPYu/f6wO874iHg+jK4j9hUXRefvmtxgMta/g6BYf/OubU5kIEv3dcNSRXMMb1 yo6wjX99lgK4rgbOjm5EdUERlkwD69mTKswumaZWc0HsCvdFfLHX0qzIp3+La+demRxm Mtdzb2VvyW3Yht5tqCEar0KYez2sIatStAWgd9gM53Ljl0CXDmhm/bII9rJbVdC7O0YV 4b0zMt/D7ba/vata6ABrjR6R69QToDPMG4iNyj14tEdTDk/EYGZVPYzWWcmQqPT5f5k+ QoN5qutr/Ws4p419vbZIGqL32d129FqhwtMVy0P9+oHX7mO+E+lagOhZyn3fkXu6Lmw2 hqjg== X-Gm-Message-State: 
AOAM533Bp+fGRFKg5XRltu9ToBOmu4NznvzkdqbPZ8iHFQ1uK3/H8NNX rfNlf+SbtcPUP8dWNtSjVwYtVdJEp5c= X-Google-Smtp-Source: ABdhPJzioc26w+W1I/VvZu1bf1qX6b/mh6UAssfjndgHHlyk+n1AtDxpbTRF9xgagMeR4K1GA1wf6PNlQSQ= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:1c10:b0:6da:6316:d009 with SMTP id nc16-20020a1709071c1000b006da6316d009mr34133251ejc.621.1648557677849; Tue, 29 Mar 2022 05:41:17 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:47 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-19-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 18/48] kmsan: mm: call KMSAN hooks from SLUB code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: xdupbar4joasryx79pa6hakuzgqywh46 Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=YeLodpZf; spf=pass (imf19.hostedemail.com: domain of 3bf5CYgYKCI4y30vw9y66y3w.u64305CF-442Dsu2.69y@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3bf5CYgYKCI4y30vw9y66y3w.u64305CF-442Dsu2.69y@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 3189D1A0009 X-HE-Tag: 1648557679-825930 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In order to report uninitialized memory coming from heap allocations KMSAN has to poison them unless they're created with __GFP_ZERO. It's handy that we need KMSAN hooks in the places where init_on_alloc/init_on_free initialization is performed. Signed-off-by: Alexander Potapenko --- v2: -- move the implementation of SLUB hooks here Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e --- include/linux/kmsan.h | 57 ++++++++++++++++++++++++++++++ mm/kmsan/hooks.c | 80 +++++++++++++++++++++++++++++++++++++++++++ mm/slab.h | 1 + mm/slub.c | 21 ++++++++++-- 4 files changed, 157 insertions(+), 2 deletions(-) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index da41850b46cbd..ed3630068e2ef 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -16,6 +16,7 @@ #include struct page; +struct kmem_cache; #ifdef CONFIG_KMSAN @@ -73,6 +74,44 @@ void kmsan_free_page(struct page *page, unsigned int order); */ void kmsan_copy_page_meta(struct page *dst, struct page *src); +/** + * kmsan_slab_alloc() - Notify KMSAN about a slab allocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * @flags: GFP flags passed to the allocator. + * + * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the + * newly created object, marking it as initialized or uninitialized. 
+ */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); + +/** + * kmsan_slab_free() - Notify KMSAN about a slab deallocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * + * KMSAN marks the freed object as uninitialized. + */ +void kmsan_slab_free(struct kmem_cache *s, void *object); + +/** + * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation. + * @ptr: object pointer. + * @size: object size. + * @flags: GFP flags passed to the allocator. + * + * Similar to kmsan_slab_alloc(), but for large allocations. + */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); + +/** + * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation. + * @ptr: object pointer. + * + * Similar to kmsan_slab_free(), but for large allocations. + */ +void kmsan_kfree_large(const void *ptr); + /** * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. * @start: start of vmapped range. @@ -139,6 +178,24 @@ static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) { } +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) +{ +} + +static inline void kmsan_slab_free(struct kmem_cache *s, void *object) +{ +} + +static inline void kmsan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags) +{ +} + +static inline void kmsan_kfree_large(const void *ptr) +{ +} + static inline void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, pgprot_t prot, diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 5d886df57adca..e7c3ff48ed5cd 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -26,6 +26,86 @@ * skipping effects of functions like memset() inside instrumented code. */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) +{ + if (unlikely(object == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * There's a ctor or this is an RCU cache - do nothing. The memory + * status hasn't changed since last use. + */ + if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU)) + return; + + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory(object, s->object_size, + KMSAN_POISON_CHECK); + else + kmsan_internal_poison_memory(object, s->object_size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_slab_alloc); + +void kmsan_slab_free(struct kmem_cache *s, void *object) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + /* RCU slabs could be legally used after free within the RCU period */ + if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))) + return; + /* + * If there's a constructor, freed memory must remain in the same state + * until the next allocation. We cannot save its state to detect + * use-after-free bugs, instead we just keep it unpoisoned. 
+ */ + if (s->ctor) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_slab_free); + +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + if (unlikely(ptr == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory((void *)ptr, size, + /*checked*/ true); + else + kmsan_internal_poison_memory((void *)ptr, size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_kmalloc_large); + +void kmsan_kfree_large(const void *ptr) +{ + struct page *page; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + page = virt_to_head_page((void *)ptr); + KMSAN_WARN_ON(ptr != page_address(page)); + kmsan_internal_poison_memory((void *)ptr, + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_kfree_large); + static unsigned long vmalloc_shadow(unsigned long addr) { return (unsigned long)kmsan_get_metadata((void *)addr, diff --git a/mm/slab.h b/mm/slab.h index c7f2abc2b154c..c2538d856ec45 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -734,6 +734,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, memset(p[i], 0, s->object_size); kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, flags); + kmsan_slab_alloc(s, p[i], flags); } memcg_slab_post_alloc_hook(s, objcg, flags, size, p); diff --git a/mm/slub.c b/mm/slub.c index 261474092e43e..9b266f6b384b9 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -357,18 +358,28 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object) prefetchw(object + s->offset); } +/* + * When running under KMSAN, get_freepointer_safe() may return an uninitialized + * pointer value in the case the current thread loses the race for the next + * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in + * slab_alloc_node() will fail, so the uninitialized value won't be used, but + * KMSAN will still check all arguments of cmpxchg because of imperfect + * handling of inline assembly. + * To work around this problem, use kmsan_init() to force initialize the + * return value of get_freepointer_safe(). + */ static inline void *get_freepointer_safe(struct kmem_cache *s, void *object) { unsigned long freepointer_addr; void *p; if (!debug_pagealloc_enabled_static()) - return get_freepointer(s, object); + return kmsan_init(get_freepointer(s, object)); object = kasan_reset_tag(object); freepointer_addr = (unsigned long)object + s->offset; copy_from_kernel_nofault(&p, (void **)freepointer_addr, sizeof(p)); - return freelist_ptr(s, p, freepointer_addr); + return kmsan_init(freelist_ptr(s, p, freepointer_addr)); } static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp) @@ -1683,6 +1694,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) ptr = kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. 
*/ kmemleak_alloc(ptr, size, 1, flags); + kmsan_kmalloc_large(ptr, size, flags); return ptr; } @@ -1690,12 +1702,14 @@ static __always_inline void kfree_hook(void *x) { kmemleak_free(x); kasan_kfree_large(x); + kmsan_kfree_large(x); } static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x, bool init) { kmemleak_free_recursive(x, s->flags); + kmsan_slab_free(s, x); debug_check_no_locks_freed(x, s->object_size); @@ -3729,6 +3743,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, */ slab_post_alloc_hook(s, objcg, flags, size, p, slab_want_init_on_alloc(flags, s)); + return i; error: slub_put_cpu_ptr(s->cpu_slab); @@ -5910,6 +5925,7 @@ static char *create_unique_id(struct kmem_cache *s) p += sprintf(p, "%07u", s->size); BUG_ON(p > name + ID_STR_LENGTH - 1); + kmsan_unpoison_memory(name, p - name); return name; } @@ -6011,6 +6027,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name) al->name = name; al->next = alias_list; alias_list = al; + kmsan_unpoison_memory(al, sizeof(struct saved_alias)); return 0; } From patchwork Tue Mar 29 12:39:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794769 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B10EC433F5 for ; Tue, 29 Mar 2022 12:41:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A4B8A8D000A; Tue, 29 Mar 2022 08:41:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9FAEF8D0009; Tue, 29 Mar 2022 08:41:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8C2968D000A; Tue, 29 Mar 2022 08:41:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0036.hostedemail.com [216.40.44.36]) by kanga.kvack.org (Postfix) with ESMTP id 7E2618D0009 for ; Tue, 29 Mar 2022 08:41:23 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 362121828A44C for ; Tue, 29 Mar 2022 12:41:23 +0000 (UTC) X-FDA: 79297384446.30.7C40F3B Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf01.hostedemail.com (Postfix) with ESMTP id B95B940005 for ; Tue, 29 Mar 2022 12:41:22 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id v9-20020a509549000000b00418d7c2f62aso10947384eda.15 for ; Tue, 29 Mar 2022 05:41:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=KExSZOK6Ak71zPKr5IrSRQ27+lVgS6k5uhCxlphiTtg=; b=tC2AC8qcVBGGwy5YubyQq84+DV0rcKjoONhQrYq3TKU20eKqh0tvDqhPRf7gRqM4PT dJmEH+z3DLtHBAC40BR8tQ/MaqO0WsxWK2QMgj40Hw2S7T7BAU6mtiu18LqGVLCo5/Qo jIKc2qIgz1E97QzbvGKvXcUu6cxEFeypgm1jOEzMJ2+zTDI7QZ7lhF0Ldt76B+O9/+/p 4pJKclD9Afk4fMQo1aDljx1Ws5Se+KObPjS8eyzNIzamBK0W0f/R7pNdSE7W8atZml4H uCyfBqUv6W9GA2MHL83z28IXWM2iPHYU3cZnXPjc+0QxA/mGQCWJHKnuELSvEXq12m1S 7Z0A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=KExSZOK6Ak71zPKr5IrSRQ27+lVgS6k5uhCxlphiTtg=; b=WIaOH7k7Qkeik2x2d7Dee79kwsp5v3+cM+uBX/5Orgrfp8/5NvWFa3KFoml6ntlipr 
kJheyBllvIHQLvSGtBpD3IjPQhdEhGPaJdhExSvjN62FJFObcGcQMWgg5NcJTX3Ln4Kv JnXmTcl3h7kfhtaNPu3SB5f9LZBlo+hjHAc3rGXGRZ9u3oFHzstQGwx+/CBjnC1RG2DE 418chLZyoqCOiFAzpQHPuoDcYQxK+ada1+M77Q948UGQy4iITnqligbBCpkJ2o1EuyqT uhTLBM0eHZ/YYpMe03NNfaV616U1PGNT1lp5DT9kfkazUmLm+driZvzFTl9yOYvg4jj/ HyYg== X-Gm-Message-State: AOAM530xWjAnuZij6NT9X9JrzDNZycfPkES12ZRqdRXE30M6DXHOXsOa YWsnfV2skaCr/mjT4go90xSnL72Weqw= X-Google-Smtp-Source: ABdhPJx++M+AfNEUSZZQiWR+2EsjU8nZXkRMWuKEK5h5tMt4uYSC+sCaPadi/60UsvXveKH0DEZLOUoc2uE= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:c092:b0:6cd:f3a1:a11e with SMTP id f18-20020a170906c09200b006cdf3a1a11emr33573510ejz.185.1648557680652; Tue, 29 Mar 2022 05:41:20 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:48 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-20-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 19/48] kmsan: handle task creation and exiting From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=tC2AC8qc; spf=pass (imf01.hostedemail.com: domain of 3cP5CYgYKCJE163yzC19916z.x97638FI-775Gvx5.9C1@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3cP5CYgYKCJE163yzC19916z.x97638FI-775Gvx5.9C1@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: po6xwmu8zigtdmcfpe8f8pduobe8ea7a X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: B95B940005 X-HE-Tag: 1648557682-697681 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Tell KMSAN that a new task is created, so the tool creates a backing metadata structure for that task. Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_task_create() and kmsan_task_exit() here Link: https://linux-review.googlesource.com/id/I0f41c3a1c7d66f7e14aabcfdfc7c69addb945805 --- include/linux/kmsan.h | 17 +++++++++++++++++ kernel/exit.c | 2 ++ kernel/fork.c | 2 ++ mm/kmsan/core.c | 10 ++++++++++ mm/kmsan/hooks.c | 19 +++++++++++++++++++ mm/kmsan/kmsan.h | 2 ++ 6 files changed, 52 insertions(+) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index ed3630068e2ef..dca42e0e91991 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -17,6 +17,7 @@ struct page; struct kmem_cache; +struct task_struct; #ifdef CONFIG_KMSAN @@ -43,6 +44,14 @@ struct kmsan_ctx { bool allow_reporting; }; +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish. 
+ */ +void kmsan_task_exit(struct task_struct *task); + /** * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. * @page: struct page pointer returned by alloc_pages(). @@ -164,6 +173,14 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); #else +static inline void kmsan_task_create(struct task_struct *task) +{ +} + +static inline void kmsan_task_exit(struct task_struct *task) +{ +} + static inline int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) { diff --git a/kernel/exit.c b/kernel/exit.c index b00a25bb4ab93..15e1bf7fe1fa1 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -59,6 +59,7 @@ #include #include #include +#include #include #include #include @@ -752,6 +753,7 @@ void __noreturn do_exit(long code) force_uaccess_begin(); kcov_task_exit(tsk); + kmsan_task_exit(tsk); coredump_task_exit(tsk); ptrace_event(PTRACE_EVENT_EXIT, code); diff --git a/kernel/fork.c b/kernel/fork.c index f1e89007f2288..f62c51d9cbfb1 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include #include @@ -956,6 +957,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) account_kernel_stack(tsk, 1); kcov_task_init(tsk); + kmsan_task_create(tsk); kmap_local_fork(tsk); #ifdef CONFIG_FAULT_INJECTION diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c index f4196f274e754..8e594361332c6 100644 --- a/mm/kmsan/core.c +++ b/mm/kmsan/core.c @@ -44,6 +44,16 @@ bool kmsan_enabled __read_mostly; */ DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); +void kmsan_internal_task_create(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + + __memset(ctx, 0, sizeof(struct kmsan_ctx)); + ctx->allow_reporting = true; + kmsan_internal_unpoison_memory(current_thread_info(), + sizeof(struct thread_info), false); +} + void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, unsigned int poison_flags) { diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index e7c3ff48ed5cd..a13e15ef2bfd5 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -26,6 +26,25 @@ * skipping effects of functions like memset() inside instrumented code. 
*/ +void kmsan_task_create(struct task_struct *task) +{ + kmsan_enter_runtime(); + kmsan_internal_task_create(task); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_task_create); + +void kmsan_task_exit(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ctx->allow_reporting = false; +} +EXPORT_SYMBOL(kmsan_task_exit); + void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) { if (unlikely(object == NULL)) diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h index bfe38789950a6..a1b5900ffd97b 100644 --- a/mm/kmsan/kmsan.h +++ b/mm/kmsan/kmsan.h @@ -172,6 +172,8 @@ void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, u32 origin, bool checked); depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); +void kmsan_internal_task_create(struct task_struct *task); + bool kmsan_metadata_is_contiguous(void *addr, size_t size); void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, int reason); From patchwork Tue Mar 29 12:39:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794770 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46393C433F5 for ; Tue, 29 Mar 2022 12:41:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D0CB48D000D; Tue, 29 Mar 2022 08:41:25 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C1DB88D000C; Tue, 29 Mar 2022 08:41:25 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AE6738D000E; Tue, 29 Mar 2022 08:41:25 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 9FEF98D000C for ; Tue, 29 Mar 2022 08:41:25 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 6B03B22E84 for ; Tue, 29 Mar 2022 12:41:25 +0000 (UTC) X-FDA: 79297384530.01.68104F6 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf22.hostedemail.com (Postfix) with ESMTP id 072DEC0005 for ; Tue, 29 Mar 2022 12:41:24 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id n4-20020a5099c4000000b00418ed58d92fso10976328edb.0 for ; Tue, 29 Mar 2022 05:41:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=77upEedV9MVWrhKieM+rh+8U4NcJYbcZy6sBqJQFlQA=; b=bnq8hQDOyCgftdNzwXDG/d91U07TnwCybE7fbjRK9l+Km9ccpsbPgpL6A5FRXtFi3h DAZlXoCRZE9ATLpK4lsLef8LJ+7L0QtfFMGiySRhOosaJSPL9aQlMN6TUArQSAlU/IZH X/GEA37JcLkYxodxNzOfq2hhriU8zPP4mQLJIr9rJz3Pgs6TJEbUfKgLtf+9xHvxkToL YGGTqE1B6aNUn6uXtMv9XW7u/kkVLc3AN5E3T7s3MDGrzrHTRuEyq/SdICW+CHxNTwyW QeTG6sbkosyQSzTdvIsKd+EadhvOS0HmjhvbsNcCDuon5xitTIyvmh3aXJ6iYAEciXLJ PgdA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=77upEedV9MVWrhKieM+rh+8U4NcJYbcZy6sBqJQFlQA=; b=JJpOKqLp7gjD5X5TclbZxGmALlh389q2uLWaA38eunW6aFZ0GmwvzbPCC3PSACTOIi 
QVzmiMtrUxWNx0RVx+Azlxd8KFgCvFggLLG1SKY8IaTPJS9RVHLpbUEqi8ehozDR1cLV 6147tGIRTQloAmBNQ5Ah+xix5N2LvDhA2bW8LaKDsgs2BltHNiDvSkrLPdnsjmvP1UGv /qUBKpR5va/mZfFATpNrKX39jhNbqr9PeiJuj1H08693i3FINbrRHuPRwHiGaw9EIir9 9WEUmNHn/veGe2SZRVb4xInBtiovKxEPNfCwsjga+IHppijvMo36j59YVBxI+iqnRT/0 e3UA== X-Gm-Message-State: AOAM5335IQd/ro54MRMAaAprhVUr4ImCNCueKzQOysgeqd7ubH1o31kW L36zh3PKmQqCnlulZgNWzfmD6oALpGg= X-Google-Smtp-Source: ABdhPJxfRHTi2f4Xo91+bXHbsM1wH3q6jI2ZFHeAyuCAPY6LAlGBC/wE0GczFznCZuQjg0wOJP1fGQHeEA8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:16ac:b0:6e0:1646:9121 with SMTP id hc44-20020a17090716ac00b006e016469121mr35199102ejc.194.1648557683611; Tue, 29 Mar 2022 05:41:23 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:49 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-21-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 20/48] kmsan: init: call KMSAN initialization routines From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=bnq8hQDO; spf=pass (imf22.hostedemail.com: domain of 3c_5CYgYKCJQ49612F4CC492.0CA96BIL-AA8Jy08.CF4@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3c_5CYgYKCJQ49612F4CC492.0CA96BIL-AA8Jy08.CF4@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: x1o9k5u8uiybajzg9dfth4se1t9jf88w X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 072DEC0005 X-HE-Tag: 1648557684-77530 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmsan_init_shadow() scans the mappings created at boot time and creates metadata pages for those mappings. When the memblock allocator returns pages to pagealloc, we reserve 2/3 of those pages and use them as metadata for the remaining 1/3. Once KMSAN starts, every page allocated by pagealloc has its associated shadow and origin pages. kmsan_initialize() initializes the bookkeeping for init_task and enables KMSAN. 
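Schematically, the hold-back logic is a small per-order state machine: the first block of a given order freed by memblock is kept as a future shadow block, the second as a future origin block, and the third block has both attached to it and is released to the page allocator. The stand-alone sketch below is an illustration only, with simplified types (the real code is kmsan_memblock_free_pages()/kmsan_setup_meta() in mm/kmsan/init.c); it shows why exactly one block in three reaches the page allocator:

/*
 * Illustration only: simulate the per-order hold-back bookkeeping in
 * userspace. "struct block" stands in for struct page.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NR_ORDERS 16	/* arbitrary bound, enough for the sketch */

struct block { int id; };

static struct { struct block *shadow, *origin; } held_back[NR_ORDERS];

/* Mirrors kmsan_memblock_free_pages(): true means "give @page to pagealloc". */
static bool sketch_memblock_free_pages(struct block *page, unsigned int order)
{
	if (!held_back[order].shadow) {		/* 1st block: future shadow */
		held_back[order].shadow = page;
		return false;
	}
	if (!held_back[order].origin) {		/* 2nd block: future origin */
		held_back[order].origin = page;
		return false;
	}
	/* 3rd block: attach the two held-back blocks as its metadata. */
	printf("block %d gets shadow %d, origin %d\n", page->id,
	       held_back[order].shadow->id, held_back[order].origin->id);
	held_back[order].shadow = NULL;
	held_back[order].origin = NULL;
	return true;
}

int main(void)
{
	struct block b[6] = { {0}, {1}, {2}, {3}, {4}, {5} };

	for (int i = 0; i < 6; i++)
		printf("block %d -> pagealloc: %s\n", b[i].id,
		       sketch_memblock_free_pages(&b[i], 0) ? "yes" : "no");
	return 0;
}

Running it shows blocks 2 and 5 reaching the allocator, each carrying two earlier blocks as metadata; that is, 1/3 of the memory is returned, matching the description above.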
Signed-off-by: Alexander Potapenko --- v2: -- move mm/kmsan/init.c and kmsan_memblock_free_pages() to this patch -- print a warning that KMSAN is a debugging tool (per Greg K-H's request) Link: https://linux-review.googlesource.com/id/I7bc53706141275914326df2345881ffe0cdd16bd --- include/linux/kmsan.h | 48 +++++++++ init/main.c | 3 + mm/kmsan/Makefile | 3 +- mm/kmsan/init.c | 240 ++++++++++++++++++++++++++++++++++++++++++ mm/kmsan/kmsan.h | 3 + mm/kmsan/shadow.c | 36 +++++++ mm/page_alloc.c | 3 + 7 files changed, 335 insertions(+), 1 deletion(-) create mode 100644 mm/kmsan/init.c diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index dca42e0e91991..a5767c728a46b 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -52,6 +52,40 @@ void kmsan_task_create(struct task_struct *task); */ void kmsan_task_exit(struct task_struct *task); +/** + * kmsan_init_shadow() - Initialize KMSAN shadow at boot time. + * + * Allocate and initialize KMSAN metadata for early allocations. + */ +void __init kmsan_init_shadow(void); + +/** + * kmsan_init_runtime() - Initialize KMSAN state and enable KMSAN. + */ +void __init kmsan_init_runtime(void); + +/** + * kmsan_memblock_free_pages() - handle freeing of memblock pages. + * @page: struct page to free. + * @order: order of @page. + * + * Freed pages are either returned to buddy allocator or held back to be used + * as metadata pages. + */ +bool __init kmsan_memblock_free_pages(struct page *page, unsigned int order); + +/** + * kmsan_task_create() - Initialize KMSAN state for the task. + * @task: task to initialize. + */ +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish. + */ +void kmsan_task_exit(struct task_struct *task); + /** * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. * @page: struct page pointer returned by alloc_pages(). @@ -173,6 +207,20 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); #else +static inline void kmsan_init_shadow(void) +{ +} + +static inline void kmsan_init_runtime(void) +{ +} + +static inline bool kmsan_memblock_free_pages(struct page *page, + unsigned int order) +{ + return true; +} + static inline void kmsan_task_create(struct task_struct *task) { } diff --git a/init/main.c b/init/main.c index 65fa2e41a9c09..88be337b54298 100644 --- a/init/main.c +++ b/init/main.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -834,6 +835,7 @@ static void __init mm_init(void) init_mem_debugging_and_hardening(); kfence_alloc_pool(); report_meminit(); + kmsan_init_shadow(); stack_depot_early_init(); mem_init(); mem_init_print_info(); @@ -851,6 +853,7 @@ static void __init mm_init(void) init_espfix_bsp(); /* Should be run after espfix64 is set up. 
*/ pti_init(); + kmsan_init_runtime(); } #ifdef CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index 73b705cbf75b9..f57a956cb1c8b 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -1,4 +1,4 @@ -obj-y := core.o instrumentation.o hooks.o report.o shadow.o annotations.o +obj-y := core.o instrumentation.o init.o hooks.o report.o shadow.o annotations.o KMSAN_SANITIZE := n KCOV_INSTRUMENT := n @@ -16,6 +16,7 @@ CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) CFLAGS_annotations.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c new file mode 100644 index 0000000000000..45757d1390402 --- /dev/null +++ b/mm/kmsan/init.c @@ -0,0 +1,240 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN initialization routines. + * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" + +#include +#include +#include + +#include "../internal.h" + +#define NUM_FUTURE_RANGES 128 +struct start_end_pair { + u64 start, end; +}; + +static struct start_end_pair start_end_pairs[NUM_FUTURE_RANGES] __initdata; +static int future_index __initdata; + +/* + * Record a range of memory for which the metadata pages will be created once + * the page allocator becomes available. + */ +static void __init kmsan_record_future_shadow_range(void *start, void *end) +{ + u64 nstart = (u64)start, nend = (u64)end, cstart, cend; + bool merged = false; + int i; + + KMSAN_WARN_ON(future_index == NUM_FUTURE_RANGES); + KMSAN_WARN_ON((nstart >= nend) || !nstart || !nend); + nstart = ALIGN_DOWN(nstart, PAGE_SIZE); + nend = ALIGN(nend, PAGE_SIZE); + + /* + * Scan the existing ranges to see if any of them overlaps with + * [start, end). In that case, merge the two ranges instead of + * creating a new one. + * The number of ranges is less than 20, so there is no need to organize + * them into a more intelligent data structure. + */ + for (i = 0; i < future_index; i++) { + cstart = start_end_pairs[i].start; + cend = start_end_pairs[i].end; + if ((cstart < nstart && cend < nstart) || + (cstart > nend && cend > nend)) + /* ranges are disjoint - do not merge */ + continue; + start_end_pairs[i].start = min(nstart, cstart); + start_end_pairs[i].end = max(nend, cend); + merged = true; + break; + } + if (merged) + return; + start_end_pairs[future_index].start = nstart; + start_end_pairs[future_index].end = nend; + future_index++; +} + +/* + * Initialize the shadow for existing mappings during kernel initialization. + * These include kernel text/data sections, NODE_DATA and future ranges + * registered while creating other data (e.g. percpu). + * + * Allocations via memblock can be only done before slab is initialized. 
+ */ +void __init kmsan_init_shadow(void) +{ + const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE); + phys_addr_t p_start, p_end; + int nid; + u64 i; + + for_each_reserved_mem_range(i, &p_start, &p_end) + kmsan_record_future_shadow_range(phys_to_virt(p_start), + phys_to_virt(p_end)); + /* Allocate shadow for .data */ + kmsan_record_future_shadow_range(_sdata, _edata); + + for_each_online_node(nid) + kmsan_record_future_shadow_range( + NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size); + + for (i = 0; i < future_index; i++) + kmsan_init_alloc_meta_for_range( + (void *)start_end_pairs[i].start, + (void *)start_end_pairs[i].end); +} +EXPORT_SYMBOL(kmsan_init_shadow); + +struct page_pair { + struct page *shadow, *origin; +}; +static struct page_pair held_back[MAX_ORDER] __initdata; + +/* + * Eager metadata allocation. When the memblock allocator is freeing pages to + * pagealloc, we use 2/3 of them as metadata for the remaining 1/3. + * We store the pointers to the returned blocks of pages in held_back[] grouped + * by their order: when kmsan_memblock_free_pages() is called for the first + * time with a certain order, it is reserved as a shadow block, for the second + * time - as an origin block. On the third time the incoming block receives its + * shadow and origin ranges from the previously saved shadow and origin blocks, + * after which held_back[order] can be used again. + * + * At the very end there may be leftover blocks in held_back[]. They are + * collected later by kmsan_memblock_discard(). + */ +bool kmsan_memblock_free_pages(struct page *page, unsigned int order) +{ + struct page *shadow, *origin; + + if (!held_back[order].shadow) { + held_back[order].shadow = page; + return false; + } + if (!held_back[order].origin) { + held_back[order].origin = page; + return false; + } + shadow = held_back[order].shadow; + origin = held_back[order].origin; + kmsan_setup_meta(page, shadow, origin, order); + + held_back[order].shadow = NULL; + held_back[order].origin = NULL; + return true; +} + +#define MAX_BLOCKS 8 +struct smallstack { + struct page *items[MAX_BLOCKS]; + int index; + int order; +}; + +struct smallstack collect = { + .index = 0, + .order = MAX_ORDER, +}; + +static void smallstack_push(struct smallstack *stack, struct page *pages) +{ + KMSAN_WARN_ON(stack->index == MAX_BLOCKS); + stack->items[stack->index] = pages; + stack->index++; +} +#undef MAX_BLOCKS + +static struct page *smallstack_pop(struct smallstack *stack) +{ + struct page *ret; + + KMSAN_WARN_ON(stack->index == 0); + stack->index--; + ret = stack->items[stack->index]; + stack->items[stack->index] = NULL; + return ret; +} + +static void do_collection(void) +{ + struct page *page, *shadow, *origin; + + while (collect.index >= 3) { + page = smallstack_pop(&collect); + shadow = smallstack_pop(&collect); + origin = smallstack_pop(&collect); + kmsan_setup_meta(page, shadow, origin, collect.order); + __free_pages_core(page, collect.order); + } +} + +static void collect_split(void) +{ + struct smallstack tmp = { + .order = collect.order - 1, + .index = 0, + }; + struct page *page; + + if (!collect.order) + return; + while (collect.index) { + page = smallstack_pop(&collect); + smallstack_push(&tmp, &page[0]); + smallstack_push(&tmp, &page[1 << tmp.order]); + } + __memcpy(&collect, &tmp, sizeof(struct smallstack)); +} + +/* + * Memblock is about to go away. Split the page blocks left over in held_back[] + * and return 1/3 of that memory to the system. 
+ */ +static void kmsan_memblock_discard(void) +{ + int i; + + /* + * For each order=N: + * - push held_back[N].shadow and .origin to |collect|; + * - while there are >= 3 elements in |collect|, do garbage collection: + * - pop 3 ranges from |collect|; + * - use two of them as shadow and origin for the third one; + * - repeat; + * - split each remaining element from |collect| into 2 ranges of + * order=N-1, + * - repeat. + */ + collect.order = MAX_ORDER - 1; + for (i = MAX_ORDER - 1; i >= 0; i--) { + if (held_back[i].shadow) + smallstack_push(&collect, held_back[i].shadow); + if (held_back[i].origin) + smallstack_push(&collect, held_back[i].origin); + held_back[i].shadow = NULL; + held_back[i].origin = NULL; + do_collection(); + collect_split(); + } +} + +void __init kmsan_init_runtime(void) +{ + /* Assuming current is init_task */ + kmsan_internal_task_create(current); + kmsan_memblock_discard(); + pr_info("Starting KernelMemorySanitizer\n"); + pr_info("ATTENTION: KMSAN is a debugging tool! Do not use it on production machines!\n"); + kmsan_enabled = true; +} +EXPORT_SYMBOL(kmsan_init_runtime); diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h index a1b5900ffd97b..059f21c39ec1b 100644 --- a/mm/kmsan/kmsan.h +++ b/mm/kmsan/kmsan.h @@ -66,6 +66,7 @@ struct shadow_origin_ptr { struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, bool store); void *kmsan_get_metadata(void *addr, bool is_origin); +void __init kmsan_init_alloc_meta_for_range(void *start, void *end); enum kmsan_bug_reason { REASON_ANY, @@ -181,5 +182,7 @@ bool kmsan_internal_is_module_addr(void *vaddr); bool kmsan_internal_is_vmalloc_addr(void *addr); struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order); #endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index 8fe6a5ed05e67..99cb9436eddc6 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -298,3 +298,39 @@ void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, kfree(s_pages); kfree(o_pages); } + +/* Allocate metadata for pages allocated at boot time. 
*/ +void __init kmsan_init_alloc_meta_for_range(void *start, void *end) +{ + struct page *shadow_p, *origin_p; + void *shadow, *origin; + struct page *page; + u64 addr, size; + + start = (void *)ALIGN_DOWN((u64)start, PAGE_SIZE); + size = ALIGN((u64)end - (u64)start, PAGE_SIZE); + shadow = memblock_alloc(size, PAGE_SIZE); + origin = memblock_alloc(size, PAGE_SIZE); + for (addr = 0; addr < size; addr += PAGE_SIZE) { + page = virt_to_page_or_null((char *)start + addr); + shadow_p = virt_to_page_or_null((char *)shadow + addr); + set_no_shadow_origin_page(shadow_p); + shadow_page_for(page) = shadow_p; + origin_p = virt_to_page_or_null((char *)origin + addr); + set_no_shadow_origin_page(origin_p); + origin_page_for(page) = origin_p; + } +} + +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order) +{ + int i; + + for (i = 0; i < (1 << order); i++) { + set_no_shadow_origin_page(&shadow[i]); + set_no_shadow_origin_page(&origin[i]); + shadow_page_for(&page[i]) = &shadow[i]; + origin_page_for(&page[i]) = &origin[i]; + } +} diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 98a066c0a9f63..4237b7290e619 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1751,6 +1751,9 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn, { if (early_page_uninitialised(pfn)) return; + if (!kmsan_memblock_free_pages(page, order)) + /* KMSAN will take care of these pages. */ + return; __free_pages_core(page, order); } From patchwork Tue Mar 29 12:39:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794771 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 91390C433FE for ; Tue, 29 Mar 2022 12:41:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 25BF68D0006; Tue, 29 Mar 2022 08:41:28 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2437F8D0005; Tue, 29 Mar 2022 08:41:28 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0AD008D0006; Tue, 29 Mar 2022 08:41:28 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0208.hostedemail.com [216.40.44.208]) by kanga.kvack.org (Postfix) with ESMTP id EEC658D0005 for ; Tue, 29 Mar 2022 08:41:27 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id B43E81828B303 for ; Tue, 29 Mar 2022 12:41:27 +0000 (UTC) X-FDA: 79297384614.17.A099854 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf22.hostedemail.com (Postfix) with ESMTP id 5400AC0003 for ; Tue, 29 Mar 2022 12:41:27 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id w11-20020a170907270b00b006df8927010eso8132698ejk.0 for ; Tue, 29 Mar 2022 05:41:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uZfE98ax1//CK0YNbt0DpMbtwkdW56rBwmCC8QtRUVQ=; b=rze2RjOlA/9+bEmOt3GgL06iHO9tVctaBkBSNuYthH10DOkJoKc11FE48K74+k0ivC Y0nWiNQ4G1OYqS8rf/bOrZUnKPL4+s3Qg8Otbm5Qlrkc06XPathZWsOTt7qxe2+1gGu5 FQenUx3Cf6aNrVRr+SdeQSwMQ5LzNFQJT8QbStroSWij6LrjHD/nihi+tJAX5W+RJUrr 
HxLXz3xj47nfDdtYPis5g1jTn7KD+ivWymw+OHkntILtKy17lMhGhOuhUrTAddrGgeSQ 8yAh5HjKFp4PVxBIxkWGlL+Kip5Ab0jkaFPz+ndh/En9oJK7jpmUZwOcf0YBR3XglVyQ wshg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uZfE98ax1//CK0YNbt0DpMbtwkdW56rBwmCC8QtRUVQ=; b=Vnn7Bhyh1MYeEEXkKXIo5GQXuIxfYMWvZGgeGk436yaKBPvHwuZlIpqA8k+fck1rfe QyAlC38ts3Ma2QZ16c7xHEu8XoKPOynxowAB6kCzyxH3FhhfZd5wj4TpV679hvuAJrxN i+lKD9FJGQK2oyft1FCRoZOeF5lJVMgAeyIHT2JiS+m1ZaOHUY7Su2EuFUmmATabR/Tt hyF0k9ef/bz00JpZ7CDT/AGIcgppu2ER1xA/u/5Tn3wNaND54yi24a07wZJLd+b6OnTS 7ZAk3I08hvDTXq+jXbf/x3s6bgy3TFwk7DLvM0SrHMdKpmQi9fUdCcRNI3hisMHPIK76 AQmQ== X-Gm-Message-State: AOAM530TAaD+3Xzkh6sEfQrQw6iMzsO9E1UJ7tXEfAkojQ2h6YEyx0Rr kjtOaL49ITNIibpsyNmXdhPUn+W+940= X-Google-Smtp-Source: ABdhPJzxZmXUGeX7ZhTgf8n/xUXjeDK/oOkNktPicPG81/7+K/3c1esguj6n+m5UzDsEoEAM1ZAmk0LXhJs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:657:b0:418:d875:bf12 with SMTP id u23-20020a056402065700b00418d875bf12mr4238058edx.89.1648557686159; Tue, 29 Mar 2022 05:41:26 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:50 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-22-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 21/48] instrumented.h: add KMSAN support From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: yeg1jjef55b9cjzx1ii7odxc7be8eboh Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=rze2RjOl; spf=pass (imf22.hostedemail.com: domain of 3dv5CYgYKCJc7C945I7FF7C5.3FDC9ELO-DDBM13B.FI7@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3dv5CYgYKCJc7C945I7FF7C5.3FDC9ELO-DDBM13B.FI7@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 5400AC0003 X-HE-Tag: 1648557687-603705 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid false positives, KMSAN needs to unpoison the data copied from the userspace. To detect infoleaks - check the memory buffer passed to copy_to_user(). 
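[Editorial illustration, not part of the patch; struct foo_info and foo_get_info() are made-up names.] The kind of infoleak the copy_to_user() check is meant to catch is a driver copying a partially initialized structure to userspace. Because copy_to_user() goes through instrument_copy_to_user(), KMSAN checks every byte actually copied and would report the uninitialized field in a sketch like this:

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical structure: 'reserved' is never written before the copy. */
struct foo_info {
	u32 version;
	u32 reserved;
};

static long foo_get_info(void __user *arg)
{
	struct foo_info info;	/* stack memory starts out poisoned */

	info.version = 1;
	/* info.reserved intentionally left uninitialized */

	if (copy_to_user(arg, &info, sizeof(info)))
		return -EFAULT;
	return 0;	/* KMSAN reports the 4 uninitialized bytes copied out */
}
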
Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_copy_to_user() here Link: https://linux-review.googlesource.com/id/I43e93b9c02709e6be8d222342f1b044ac8bdbaaf --- include/linux/instrumented.h | 5 ++++- include/linux/kmsan-checks.h | 19 ++++++++++++++++++ mm/kmsan/hooks.c | 38 ++++++++++++++++++++++++++++++++++++ 3 files changed, 61 insertions(+), 1 deletion(-) diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h index ee8f7d17d34f5..c73c1b19e9227 100644 --- a/include/linux/instrumented.h +++ b/include/linux/instrumented.h @@ -2,7 +2,7 @@ /* * This header provides generic wrappers for memory access instrumentation that - * the compiler cannot emit for: KASAN, KCSAN. + * the compiler cannot emit for: KASAN, KCSAN, KMSAN. */ #ifndef _LINUX_INSTRUMENTED_H #define _LINUX_INSTRUMENTED_H @@ -10,6 +10,7 @@ #include #include #include +#include #include /** @@ -117,6 +118,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n) { kasan_check_read(from, n); kcsan_check_read(from, n); + kmsan_copy_to_user(to, from, n, 0); } /** @@ -151,6 +153,7 @@ static __always_inline void instrument_copy_from_user_after(const void *to, const void __user *from, unsigned long n, unsigned long left) { + kmsan_unpoison_memory(to, n - left); } #endif /* _LINUX_INSTRUMENTED_H */ diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h index ecd8336190fc0..aabaf1ba7c251 100644 --- a/include/linux/kmsan-checks.h +++ b/include/linux/kmsan-checks.h @@ -84,6 +84,21 @@ void kmsan_unpoison_memory(const void *address, size_t size); */ void kmsan_check_memory(const void *address, size_t size); +/** + * kmsan_copy_to_user() - Notify KMSAN about a data transfer to userspace. + * @to: destination address in the userspace. + * @from: source address in the kernel. + * @to_copy: number of bytes to copy. + * @left: number of bytes not copied. + * + * If this is a real userspace data transfer, KMSAN checks the bytes that were + * actually copied to ensure there was no information leak. If @to belongs to + * the kernel space (which is possible for compat syscalls), KMSAN just copies + * the metadata. + */ +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, + size_t left); + #else #define kmsan_init(value) (value) @@ -98,6 +113,10 @@ static inline void kmsan_unpoison_memory(const void *address, size_t size) static inline void kmsan_check_memory(const void *address, size_t size) { } +static inline void kmsan_copy_to_user(void __user *to, const void *from, + size_t to_copy, size_t left) +{ +} #endif diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index a13e15ef2bfd5..365eedcb08953 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -212,6 +212,44 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(kmsan_iounmap_page_range); +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, + size_t left) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * At this point we've copied the memory already. It's hard to check it + * before copying, as the size of actually copied buffer is unknown. + */ + + /* copy_to_user() may copy zero bytes. No need to check. */ + if (!to_copy) + return; + /* Or maybe copy_to_user() failed to copy anything. */ + if (to_copy <= left) + return; + + ua_flags = user_access_save(); + if ((u64)to < TASK_SIZE) { + /* This is a user memory access, check it. 
*/ + kmsan_internal_check_memory((void *)from, to_copy - left, to, + REASON_COPY_TO_USER); + user_access_restore(ua_flags); + return; + } + /* Otherwise this is a kernel memory access. This happens when a compat + * syscall passes an argument allocated on the kernel stack to a real + * syscall. + * Don't check anything, just copy the shadow of the copied bytes. + */ + kmsan_internal_memmove_metadata((void *)to, (void *)from, + to_copy - left); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_copy_to_user); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { From patchwork Tue Mar 29 12:39:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794772 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 024ADC433F5 for ; Tue, 29 Mar 2022 12:41:31 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7B37A8D0008; Tue, 29 Mar 2022 08:41:31 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 762898D0003; Tue, 29 Mar 2022 08:41:31 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 604B88D0008; Tue, 29 Mar 2022 08:41:31 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 4DE8B8D0003 for ; Tue, 29 Mar 2022 08:41:31 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 1BB3320D55 for ; Tue, 29 Mar 2022 12:41:31 +0000 (UTC) X-FDA: 79297384782.05.FA5F702 Received: from mail-lj1-f201.google.com (mail-lj1-f201.google.com [209.85.208.201]) by imf20.hostedemail.com (Postfix) with ESMTP id 94B761C0002 for ; Tue, 29 Mar 2022 12:41:30 +0000 (UTC) Received: by mail-lj1-f201.google.com with SMTP id v8-20020a2e7a08000000b002498273eb20so7402953ljc.7 for ; Tue, 29 Mar 2022 05:41:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=BAzeUERAw6yLQmtyQnYKjc6/ZWWpRBj0mGa3aEgUqwY=; b=ZXTRoZrgutkv7II3RgLRe7BIf7v6I4j8hJJZqYQ4g8PulMF1nWo8ubAU21o9zCo6og 7XBKKviqr7NMytXoYSl85SE+bb3+8jznUOcFKa7H4EfRJL1oCEVy4UXGzDb0BQ4KlMwY EBXkG4Uwv5g0U86kh/wZsWl0ZxVsJugX71jrS1cq2SiiKeYxM2S9e9CzF0yqfyAB7oMi T4oGblkIF/ayb7nfMcBDRVAt1sbzaznoywLIxzm1ieHGWctjb5MdaLEykaspo2SdhHvF 4nzwX4TkXzi+AOfJ+V1Cf2u4NZt6lo6PUhaz3zGGMcsWgjPIXLv2JnNVaViWv1sI8XVk WNew== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=BAzeUERAw6yLQmtyQnYKjc6/ZWWpRBj0mGa3aEgUqwY=; b=UySvUJQhic5VMjqnsd5ZL2fcH2APmXX3POuW3++zQsgsxyrtXiblWdhO8CjfWe+jK0 OKKmzC2OwEfssbbLvqEh+zJTV9D2vtjayEf/kVFyFOJ1fmleIuFzE43LZTCyq+vgiLpF r6wh3ZE4xfut9cnwIVD0l0b4Cz2PhG6U/Ps93u4qS8Y3ApUUJejRrLXfgpOIiV4XPt79 rxHvVDr+1u/uRkidLUt1EhVijZ4DIAwTiQYw/aLmrVWb3OG+pTJVjcCs/zQRR6e96XTq JffxphTR/Jttqtvearea0MAwR5iCrxQ9JLXYcSo6m1Ej7HDxWSLCrkH3inFH4GMRU+Lt dlKg== X-Gm-Message-State: AOAM530Yqpfo5+Uz1ZWvH6ZmrfCNPrbcfQz43eed8tIyzTk0Ru0zDrlz FG0clDC0LH/Z/hUiPg1oizoVCqtLFUQ= X-Google-Smtp-Source: 
ABdhPJx1CLANrsI9lz+hySXN8o14iMay2HDGrWuM+l9vsgDilsmJEoz0jnKi7yedCbB7BcBmZTRdVovSVm0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a19:e05c:0:b0:44a:15b9:68b9 with SMTP id g28-20020a19e05c000000b0044a15b968b9mr2466190lfj.575.1648557688677; Tue, 29 Mar 2022 05:41:28 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:51 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-23-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 22/48] kmsan: unpoison @tlb in arch_tlb_gather_mmu() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 94B761C0002 X-Stat-Signature: gbj6kkydrkj8o4dhj3fm3yh178jwpf8h Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ZXTRoZrg; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf20.hostedemail.com: domain of 3eP5CYgYKCJk9EB67K9HH9E7.5HFEBGNQ-FFDO35D.HK9@flex--glider.bounces.google.com designates 209.85.208.201 as permitted sender) smtp.mailfrom=3eP5CYgYKCJk9EB67K9HH9E7.5HFEBGNQ-FFDO35D.HK9@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1648557690-36180 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is a hack to reduce stackdepot pressure. struct mmu_gather contains 7 1-bit fields packed into a 32-bit unsigned int value. The remaining 25 bits remain uninitialized and are never used, but KMSAN updates the origin for them in zap_pXX_range() in mm/memory.c, thus creating very long origin chains. This is technically correct, but consumes too much memory. Unpoisoning the whole structure will prevent creating such chains. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I76abee411b8323acfdbc29bc3a60dca8cff2de77 --- mm/mmu_gather.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index afb7185ffdc45..2f3821268b311 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -1,6 +1,7 @@ #include #include #include +#include #include #include #include @@ -253,6 +254,15 @@ void tlb_flush_mmu(struct mmu_gather *tlb) static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) { + /* + * struct mmu_gather contains 7 1-bit fields packed into a 32-bit + * unsigned int value. The remaining 25 bits remain uninitialized + * and are never used, but KMSAN updates the origin for them in + * zap_pXX_range() in mm/memory.c, thus creating very long origin + * chains. This is technically correct, but consumes too much memory. + * Unpoisoning the whole structure will prevent creating such chains. 
+ */ + kmsan_unpoison_memory(tlb, sizeof(*tlb)); tlb->mm = mm; tlb->fullmm = fullmm; From patchwork Tue Mar 29 12:39:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794773 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CA19FC433EF for ; Tue, 29 Mar 2022 12:41:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5DD768D000F; Tue, 29 Mar 2022 08:41:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 566A68D000E; Tue, 29 Mar 2022 08:41:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 455BA8D000F; Tue, 29 Mar 2022 08:41:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0049.hostedemail.com [216.40.44.49]) by kanga.kvack.org (Postfix) with ESMTP id 3967E8D000E for ; Tue, 29 Mar 2022 08:41:33 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id EDB74A4DC7 for ; Tue, 29 Mar 2022 12:41:32 +0000 (UTC) X-FDA: 79297384866.22.5BC4FB1 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf08.hostedemail.com (Postfix) with ESMTP id 6ED0C160005 for ; Tue, 29 Mar 2022 12:41:32 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id d28-20020a50f69c000000b00419125c67f4so10952874edn.17 for ; Tue, 29 Mar 2022 05:41:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=1qZ36eD+kaWsJnQ8auEpiZ1rW+bfAeskKwd1x/lbm8E=; b=Rcnu0ftYaviimZCkKchXhLzv4Y9UohEdLtNeE87+i4g2lUAFIdAHcrYb6VunYkRO45 0OlJa6TS8CgDxoAQ27qSa4/PMQF86Knx0tCOfbCLtEU24OihmG2+K8/Xo6wOyhahHFhr JmmVtHN25QPgPBe0WNM/XDHAv3tqmwVnno70IommFUD2T5T0L84175cKoN/YlYtKFc+X aXelqCm+GznSbMRU3j6C+wZrte+7TkDPvPe87ccCSj4gMUczWPnDSQWiPrlld8xpNWhg l9rwkEHT/9p/eJIqOK330aaMwuIxjN+oh9bbey1ho0lgCeKyd6Q+1EnhAOJQA5Oa789Z LNHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=1qZ36eD+kaWsJnQ8auEpiZ1rW+bfAeskKwd1x/lbm8E=; b=vRxPGfOB7GXxZGuuuccHea5eTewqeeZj9mIFJWSjLOyVE0E9ATm5gIwcAClFvmMBOB vvdolZ3+Wl/Vu6l39idJCs2uhpY4Dhbr64i8nioJsScLSXqEcOwyU93dqE5A+5I34br5 0gjSoAcn+WjOAPp4nnLxcvvRUv213BL1W0s6C3wN7w/n5FucPcn5gyIoLThNSoZPZ7VN 8uji33Yz2otpMm5xhdKq+8FdB9vX30z0taOt3vBCKOgb2PZBZOfDylJwaoTE3yZpBIAT W/3zJhLgAGllQYCsBouxrwd6x3k0Z1FGgqKh6MFs+GV1x+KE/dlkISPuvFmo5S9HfUKf TrJg== X-Gm-Message-State: AOAM532bFAS2OFr9+6SuFdKBoel83rNgLX1qUY0QTD4XTA9x/G7bw+bo dCLt83VY2ZOEs4H/DIw3pbMRX6AsyZw= X-Google-Smtp-Source: ABdhPJwlWxNKietnPeU8wgYIdm3etjxbDw8r9KenX80watwB3MiI5sKIPJR8qvY4K4Ud6K1EwuvorSt75Fw= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:6c2:b0:419:1a14:8fb2 with SMTP id n2-20020a05640206c200b004191a148fb2mr4326522edy.415.1648557691238; Tue, 29 Mar 2022 05:41:31 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:52 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-24-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> 
X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 23/48] kmsan: add iomap support From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 6ED0C160005 X-Rspam-User: Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Rcnu0ftY; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf08.hostedemail.com: domain of 3e_5CYgYKCJwCHE9ANCKKCHA.8KIHEJQT-IIGR68G.KNC@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3e_5CYgYKCJwCHE9ANCKKCHA.8KIHEJQT-IIGR68G.KNC@flex--glider.bounces.google.com X-Stat-Signature: oqm9u95j4syxr6x8zknfr7u6aqyzdai4 X-HE-Tag: 1648557692-34851 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Functions from lib/iomap.c interact with hardware, so KMSAN must ensure that: - every read function returns an initialized value - every write function checks values before sending them to hardware. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I45527599f09090aca046dfe1a26df453adab100d --- lib/iomap.c | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/lib/iomap.c b/lib/iomap.c index fbaa3e8f19d6c..bdda1a42771b2 100644 --- a/lib/iomap.c +++ b/lib/iomap.c @@ -6,6 +6,7 @@ */ #include #include +#include #include @@ -70,26 +71,31 @@ static void bad_io_access(unsigned long port, const char *access) #define mmio_read64be(addr) swab64(readq(addr)) #endif +__no_sanitize_memory unsigned int ioread8(const void __iomem *addr) { IO_COND(addr, return inb(port), return readb(addr)); return 0xff; } +__no_sanitize_memory unsigned int ioread16(const void __iomem *addr) { IO_COND(addr, return inw(port), return readw(addr)); return 0xffff; } +__no_sanitize_memory unsigned int ioread16be(const void __iomem *addr) { IO_COND(addr, return pio_read16be(port), return mmio_read16be(addr)); return 0xffff; } +__no_sanitize_memory unsigned int ioread32(const void __iomem *addr) { IO_COND(addr, return inl(port), return readl(addr)); return 0xffffffff; } +__no_sanitize_memory unsigned int ioread32be(const void __iomem *addr) { IO_COND(addr, return pio_read32be(port), return mmio_read32be(addr)); @@ -142,18 +148,21 @@ static u64 pio_read64be_hi_lo(unsigned long port) return lo | (hi << 32); } +__no_sanitize_memory u64 ioread64_lo_hi(const void __iomem *addr) { IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64_hi_lo(const void __iomem *addr) { IO_COND(addr, return pio_read64_hi_lo(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64be_lo_hi(const void __iomem *addr) { IO_COND(addr, return pio_read64be_lo_hi(port), @@ -161,6 +170,7 @@ u64 ioread64be_lo_hi(const void __iomem *addr) 
return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64be_hi_lo(const void __iomem *addr) { IO_COND(addr, return pio_read64be_hi_lo(port), @@ -188,22 +198,32 @@ EXPORT_SYMBOL(ioread64be_hi_lo); void iowrite8(u8 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outb(val,port), writeb(val, addr)); } void iowrite16(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outw(val,port), writew(val, addr)); } void iowrite16be(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write16be(val,port), mmio_write16be(val, addr)); } void iowrite32(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outl(val,port), writel(val, addr)); } void iowrite32be(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write32be(val,port), mmio_write32be(val, addr)); } EXPORT_SYMBOL(iowrite8); @@ -239,24 +259,32 @@ static void pio_write64be_hi_lo(u64 val, unsigned long port) void iowrite64_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_lo_hi(val, port), writeq(val, addr)); } void iowrite64_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_hi_lo(val, port), writeq(val, addr)); } void iowrite64be_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_lo_hi(val, port), mmio_write64be(val, addr)); } void iowrite64be_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_hi_lo(val, port), mmio_write64be(val, addr)); } @@ -328,14 +356,20 @@ static inline void mmio_outsl(void __iomem *addr, const u32 *src, int count) void ioread8_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insb(port,dst,count), mmio_insb(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count); } void ioread16_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insw(port,dst,count), mmio_insw(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count * 2); } void ioread32_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insl(port,dst,count), mmio_insl(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count * 4); } EXPORT_SYMBOL(ioread8_rep); EXPORT_SYMBOL(ioread16_rep); @@ -343,14 +377,20 @@ EXPORT_SYMBOL(ioread32_rep); void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. 
*/ + kmsan_check_memory(src, count); IO_COND(addr, outsb(port, src, count), mmio_outsb(addr, src, count)); } void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 2); IO_COND(addr, outsw(port, src, count), mmio_outsw(addr, src, count)); } void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 4); IO_COND(addr, outsl(port, src,count), mmio_outsl(addr, src, count)); } EXPORT_SYMBOL(iowrite8_rep); From patchwork Tue Mar 29 12:39:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794774 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B4B2C433EF for ; Tue, 29 Mar 2022 12:41:36 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C463D8D0011; Tue, 29 Mar 2022 08:41:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C1D0C8D0010; Tue, 29 Mar 2022 08:41:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B0F668D0011; Tue, 29 Mar 2022 08:41:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0223.hostedemail.com [216.40.44.223]) by kanga.kvack.org (Postfix) with ESMTP id 9AB4D8D0010 for ; Tue, 29 Mar 2022 08:41:35 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 58FD78249980 for ; Tue, 29 Mar 2022 12:41:35 +0000 (UTC) X-FDA: 79297384950.17.3878194 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf29.hostedemail.com (Postfix) with ESMTP id 8D038120008 for ; Tue, 29 Mar 2022 12:41:35 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id x5-20020a50ba85000000b00418e8ce90ffso10939000ede.14 for ; Tue, 29 Mar 2022 05:41:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=lsB3lK0aPu+8HWqdJ6/yM1q11PTSMtec/DSdWJ9paXQ=; b=Y2VTXLEBSbXPLu5uhbEKKqB5AO9gIHu9jp/jSny35LuwmGZ+c+hEVtLUWlhSpPVc22 TxOvlY9Yoi7apyfs9Ow6Y9SD8BcLDmBd8zCFsJWqvD1r8+3P/s4FNpxe3VpbUFeGmI1r M2s3smZ8WKAQfLh/zIYUs4yveEnHKHAWaMCHAYZM2EsQtnTEE9sa+vHBJ2c8G01CB99w h+wWzCDc56ZhTweuBlI3h1rdzZJaGibE55vpeLcLzYyMpMH4Nsc8X53WytkodgNs2hJR 4GEA20pYgJcw96lcVe50mTK7DAkc1u2xxRnNcQ3uEduO2cpHhhWmbgpFaG8L9ULcZ04m Xtfw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=lsB3lK0aPu+8HWqdJ6/yM1q11PTSMtec/DSdWJ9paXQ=; b=CvVyRqNG4tc5gTV7aVfGQRMJrV3ATi218/pv2TBvYafATj8LOurchp1p7A8uRU6tTq CHznkMCmIquOATulefkqT83q+akmjQtATvQlzFLrM6rvtOmz9/NKO0XKFplU/7LOC1EP dor2/+gCPfKFw4pD/TQ73yre6o+iia3JnNmpszT/w6lrQPoFxJI0uu58eIAycpeFcn5D 78pIFeV4ZbCmhU6807xyh8Yxb1FuhTOwzuEkEmF28lX80DDSJWDSeSiEu7u+e1Mh6DJ6 1tbtu+VPhMq0AktbmFNHu1bf6NAgJp6KkMjVOR6Gz3+ZznoOPBJhPwaxCLRqQml8QuI2 cpTw== X-Gm-Message-State: AOAM5331gGOfnM80qI1nW8mhUTnEMkOOutUzZbNQ6YzDEcSJRDkaaJIt BcscWwNW0e5IqsqxqAwo1UxHcfSLK80= X-Google-Smtp-Source: 
ABdhPJxMT7rO1P57N71ZmYbIOwRuf4arJokbiW9dyI3uwrKnuskidOOGFTc8VrM1ts4i4n0XHHumgcOFJA8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:1ec3:b0:6cf:d118:59e2 with SMTP id m3-20020a1709061ec300b006cfd11859e2mr34473282ejj.767.1648557693731; Tue, 29 Mar 2022 05:41:33 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:53 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-25-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 24/48] Input: libps2: mark data received in __ps2_command() as initialized From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: ycdh4zdhfo35m6zaga4nhknqmxqfz3f7 Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Y2VTXLEB; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf29.hostedemail.com: domain of 3ff5CYgYKCJ4EJGBCPEMMEJC.AMKJGLSV-KKIT8AI.MPE@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3ff5CYgYKCJ4EJGBCPEMMEJC.AMKJGLSV-KKIT8AI.MPE@flex--glider.bounces.google.com X-Rspamd-Queue-Id: 8D038120008 X-HE-Tag: 1648557695-120215 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN does not know that the device initializes certain bytes in ps2dev->cmdbuf. Call kmsan_unpoison_memory() to explicitly mark them as initialized. 
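[Editorial illustration, not part of the patch; foo_dev and foo_read_reply() are hypothetical names.] The same pattern applies to any buffer filled by an interrupt handler or other code KMSAN cannot see through: unpoison exactly the bytes the hardware produced before they are used elsewhere. A minimal sketch:

#include <linux/kmsan-checks.h>
#include <linux/types.h>

/* Hypothetical device state; 'replybuf' is written from an IRQ handler,
 * so KMSAN still considers its contents uninitialized. */
struct foo_dev {
	u8 replybuf[8];
};

static void foo_read_reply(struct foo_dev *dev, u8 *out, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		out[i] = dev->replybuf[i];
	/*
	 * The bytes really come from hardware: mark the copied reply as
	 * initialized so later reads of 'out' do not trigger false reports.
	 */
	kmsan_unpoison_memory(out, n);
}
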
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I2d26f6baa45271d37320d3f4a528c39cb7e545f0 --- drivers/input/serio/libps2.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/input/serio/libps2.c b/drivers/input/serio/libps2.c index 250e213cc80c6..3e19344eda93c 100644 --- a/drivers/input/serio/libps2.c +++ b/drivers/input/serio/libps2.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -294,9 +295,11 @@ int __ps2_command(struct ps2dev *ps2dev, u8 *param, unsigned int command) serio_pause_rx(ps2dev->serio); - if (param) + if (param) { for (i = 0; i < receive; i++) param[i] = ps2dev->cmdbuf[(receive - 1) - i]; + kmsan_unpoison_memory(param, receive); + } if (ps2dev->cmdcnt && (command != PS2_CMD_RESET_BAT || ps2dev->cmdcnt != 1)) { From patchwork Tue Mar 29 12:39:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794775 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C275AC433F5 for ; Tue, 29 Mar 2022 12:41:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 58FEB8D0013; Tue, 29 Mar 2022 08:41:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 542918D0012; Tue, 29 Mar 2022 08:41:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 40A118D0013; Tue, 29 Mar 2022 08:41:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 26DF38D0012 for ; Tue, 29 Mar 2022 08:41:38 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 009B46095B for ; Tue, 29 Mar 2022 12:41:37 +0000 (UTC) X-FDA: 79297385076.10.D7464DE Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf25.hostedemail.com (Postfix) with ESMTP id 8C62CA0003 for ; Tue, 29 Mar 2022 12:41:37 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id l24-20020a056402231800b00410f19a3103so10967130eda.5 for ; Tue, 29 Mar 2022 05:41:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=kipmwjlnp7WROf90XDDYlpUL7HaVlKXhrZy7k38OjBk=; b=swBA+8/qBkvXXmxorMDqi6vUFeeZi3bN3raFZeA6IFlfLjInfXV+4iwKXj0XrLGgoC JQ7bqpVR4Aq6pLKUjKxxUVfZRxlPvshj9+PPkWZx9bHUxFIXGZ9Mq+01KZGeGQ12fXDH hkgtDJsBgTYCvyqeM4bo+uyBanKqRZjXekJyVVND/ULHYZSnwRSRhS0uDUYE2S32ihmM dc66EcjcYlTDsHsqTVtcqfQaGQIfQQjOUGnIkCiRunD8AdG9BsQx+9zQnqfMk6znpSBZ VdRLWG/DUzMLxudxXPvP6Z5TI95jhTKxcLr2iFonvlWLmVR44+5uS324vfekljJlpbJr pDqA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=kipmwjlnp7WROf90XDDYlpUL7HaVlKXhrZy7k38OjBk=; b=uk4MsNDmlMyrEili7Chvxc/+lMXbmpKdj27taZn4B8jXAniV0U1WLcQoq2OqQfVUhj FzWvPmqEGQCqrSPhGsa5rgr0AVGyVlNzqVS0yZZkfbbktR3RRQALhoLiTYM4FhI4s1PU dE04nDTc+g4CgPbI/Q+3DQzQY5LMhgciP5IQs2Staokp2Bwwx5lUqWIV4FfV70PKpGIU XcC6DcHZ/I/tlK8P0Ld4U3ZGNXwITJGTpt4wTlflKKyGyHgj1wGcygGNX2HHSWiyPgqc 
s0ONJOXSP8tRh9MUW09czU+2EQbSY0Jy25wR/q9Eti3uzeqpplWgpfJ0oQWWGiiKw6ey y82Q== X-Gm-Message-State: AOAM5326kRyHOAMLAP+B/UUwvNI/t9BhdqrCmOLU+d/ElF+Fs/zwT8nB YPlYQq3HTK/wqEWkeDa1aNm+EBIrmYk= X-Google-Smtp-Source: ABdhPJxtsqcpedBaD+pU00yaPAgOV5o7+L5K3nWzYJ9Q8ykrvoNlGaLEfr4A4ubrOrFSZ9FsAwRvka1gJuQ= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:1623:b0:6df:c9da:a6a8 with SMTP id hb35-20020a170907162300b006dfc9daa6a8mr33823933ejc.303.1648557696191; Tue, 29 Mar 2022 05:41:36 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:54 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-26-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 25/48] kmsan: dma: unpoison DMA mappings From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="swBA+8/q"; spf=pass (imf25.hostedemail.com: domain of 3gP5CYgYKCKEHMJEFSHPPHMF.DPNMJOVY-NNLWBDL.PSH@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3gP5CYgYKCKEHMJEFSHPPHMF.DPNMJOVY-NNLWBDL.PSH@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: zs3rz11ds7611dxejja6nb8zfps8d5qk X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 8C62CA0003 X-HE-Tag: 1648557697-751141 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't know about DMA memory writes performed by devices. We unpoison such memory when it's mapped to avoid false positive reports. Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_handle_dma() and kmsan_handle_dma_sg() here Link: https://linux-review.googlesource.com/id/Ia162dc4c5a92e74d4686c1be32a4dfeffc5c32cd --- include/linux/kmsan.h | 41 +++++++++++++++++++++++++++++ kernel/dma/mapping.c | 9 ++++--- mm/kmsan/hooks.c | 61 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 108 insertions(+), 3 deletions(-) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index a5767c728a46b..d8667161a10c8 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -9,6 +9,7 @@ #ifndef _LINUX_KMSAN_H #define _LINUX_KMSAN_H +#include #include #include #include @@ -18,6 +19,7 @@ struct page; struct kmem_cache; struct task_struct; +struct scatterlist; #ifdef CONFIG_KMSAN @@ -205,6 +207,35 @@ void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, */ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); +/** + * kmsan_handle_dma() - Handle a DMA data transfer. + * @page: first page of the buffer. + * @offset: offset of the buffer within the first page. + * @size: buffer size. 
+ * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffer, if it is copied to device; + * * initializes the buffer, if it is copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir); + +/** + * kmsan_handle_dma_sg() - Handle a DMA transfer using scatterlist. + * @sg: scatterlist holding DMA buffers. + * @nents: number of scatterlist entries. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffers in the scatterlist, if they are copied to device; + * * initializes the buffers, if they are copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir); + #else static inline void kmsan_init_shadow(void) @@ -287,6 +318,16 @@ static inline void kmsan_iounmap_page_range(unsigned long start, { } +static inline void kmsan_handle_dma(struct page *page, size_t offset, + size_t size, enum dma_data_direction dir) +{ +} + +static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 9478eccd1c8e6..0560080813761 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -156,6 +156,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); else addr = ops->map_page(dev, page, offset, size, dir, attrs); + kmsan_handle_dma(page, offset, size, dir); debug_dma_map_page(dev, page, offset, size, dir, addr, attrs); return addr; @@ -194,11 +195,13 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, else ents = ops->map_sg(dev, sg, nents, dir, attrs); - if (ents > 0) + if (ents > 0) { + kmsan_handle_dma_sg(sg, nents, dir); debug_dma_map_sg(dev, sg, nents, ents, dir, attrs); - else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && - ents != -EIO)) + } else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && + ents != -EIO)) { return -EIO; + } return ents; } diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 365eedcb08953..cc3465bd69754 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -10,9 +10,11 @@ */ #include +#include #include #include #include +#include #include #include @@ -250,6 +252,65 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, } EXPORT_SYMBOL(kmsan_copy_to_user); +static void kmsan_handle_dma_page(const void *addr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_BIDIRECTIONAL: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_TO_DEVICE: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + break; + case DMA_FROM_DEVICE: + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_NONE: + break; + } +} + +/* Helper function to handle DMA data transfers. 
*/ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir) +{ + u64 page_offset, to_go, addr; + + if (PageHighMem(page)) + return; + addr = (u64)page_address(page) + offset; + /* + * The kernel may occasionally give us adjacent DMA pages not belonging + * to the same allocation. Process them separately to avoid triggering + * internal KMSAN checks. + */ + while (size > 0) { + page_offset = addr % PAGE_SIZE; + to_go = min(PAGE_SIZE - page_offset, (u64)size); + kmsan_handle_dma_page((void *)addr, to_go, dir); + addr += to_go; + size -= to_go; + } +} +EXPORT_SYMBOL(kmsan_handle_dma); + +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ + struct scatterlist *item; + int i; + + for_each_sg(sg, item, nents, i) + kmsan_handle_dma(sg_page(item), item->offset, item->length, + dir); +} +EXPORT_SYMBOL(kmsan_handle_dma_sg); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { From patchwork Tue Mar 29 12:39:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794776 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7402DC433F5 for ; Tue, 29 Mar 2022 12:41:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0FCDF8D0014; Tue, 29 Mar 2022 08:41:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0ABAD8D0002; Tue, 29 Mar 2022 08:41:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EB8A78D0014; Tue, 29 Mar 2022 08:41:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id DC9C78D0002 for ; Tue, 29 Mar 2022 08:41:40 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id AC3F320B38 for ; Tue, 29 Mar 2022 12:41:40 +0000 (UTC) X-FDA: 79297385160.06.A529DCD Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf23.hostedemail.com (Postfix) with ESMTP id 4A40D140003 for ; Tue, 29 Mar 2022 12:41:40 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id m21-20020a50d7d5000000b00418c7e4c2bbso10963353edj.6 for ; Tue, 29 Mar 2022 05:41:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Eb/Abij6XQ3iqbkwUGDaEHK0fSpQBifUF5HLZH6JcRc=; b=Krkj/S1jh5/H05+jCH+MSNkam2il8xetzlLh3mofzSw3z1EUZVLwTavHzuR07R13in fg9//+1CujBKky+Yg98dxF0i9pubcLdyg1zZweQ1XjNQleJiDN/XoHbConhYKPEsItrW Hqor3bLi18aLxBGMha4/9ZF65JCN2E/xs869FWNbVFPNZlCCTsmL0+sn19SYqkD+nycO 8yokw7MAHVM6QSu96rl3jBfgqKH9wHHao396wVZYsEKnozljUrrFpvS6mVwzvzPs2PCk rZme07+nb4Ir4EhmUhbog2UgvqaC31UAfTNt+DsOKVK0oofUTAwJkiObvGLB48vzpwDI J6HA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Eb/Abij6XQ3iqbkwUGDaEHK0fSpQBifUF5HLZH6JcRc=; b=20H7W4v/ysKfbpEg6CB0HxRrdowdHJ4y1Ftd/GoVwcuFZu1FpbWg0+MNPE3HR+X3s0 
HCvh8OL5UhbNjzOFNNZtRMXxia6MdL3Y2zt0g7b2sG+TyjJxtIchijLzPEeR6TjBry3T OX7tOZS7YBV/iL5QXW5Hk2CX81L01h7AGRyHk69JIw/iPfUNl8B8568bfR/pwnWBydzh sTPF9OVfI3ZSpSifbQ5iEiW2NnSXt3Yat+SdvTvNV0EqaiqKy7E8WRYpoK0nLysytPOu gg/zUtGCEwpgJUDY4mHRCphC89i7ggSavBwLN1O4WB8WLRtKak8uQ9oK/ciO6uidyOwI 9dZg== X-Gm-Message-State: AOAM532pkQGPhGbcN1l3rdEKR5M4Eis5lD/lkZbc4US4wVjGUxx8SlOE asdFEIZt0SKu2P0d40EHjuz12A61XKY= X-Google-Smtp-Source: ABdhPJxL1GeaEMAohn+FBay8BfgBp3WhpguNB4XCKO3e3e1LP1mlhUbZL/tt/mlN/CXlhJVrae+cgzEdiVU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:7f09:b0:6e0:395d:cc88 with SMTP id qf9-20020a1709077f0900b006e0395dcc88mr34433043ejc.566.1648557698741; Tue, 29 Mar 2022 05:41:38 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:55 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-27-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 26/48] kmsan: virtio: check/unpoison scatterlist in vring_map_one_sg() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: crq31beg1dqfjxyhpcd6e8adc6ir5z9r X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 4A40D140003 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="Krkj/S1j"; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf23.hostedemail.com: domain of 3gv5CYgYKCKMJOLGHUJRRJOH.FRPOLQXa-PPNYDFN.RUJ@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3gv5CYgYKCKMJOLGHUJRRJOH.FRPOLQXa-PPNYDFN.RUJ@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1648557700-525240 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If vring doesn't use the DMA API, KMSAN is unable to tell whether the memory is initialized by hardware. Explicitly call kmsan_handle_dma() from vring_map_one_sg() in this case to prevent false positives. Signed-off-by: Alexander Potapenko Acked-by: Michael S. Tsirkin --- Link: https://linux-review.googlesource.com/id/I211533ecb86a66624e151551f83ddd749536b3af --- drivers/virtio/virtio_ring.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 962f1477b1fab..461e08f7f0a0f 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -331,8 +332,15 @@ static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg, enum dma_data_direction direction) { - if (!vq->use_dma_api) + if (!vq->use_dma_api) { + /* + * If DMA is not used, KMSAN doesn't know that the scatterlist + * is initialized by the hardware. 
Explicitly check/unpoison it + * depending on the direction. + */ + kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction); return (dma_addr_t)sg_phys(sg); + } /* * We can't use dma_map_sg, because we don't use scatterlists in From patchwork Tue Mar 29 12:39:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794777 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 770ABC4332F for ; Tue, 29 Mar 2022 12:41:44 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0D53B8D000B; Tue, 29 Mar 2022 08:41:44 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 088528D0001; Tue, 29 Mar 2022 08:41:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DCB6A8D000B; Tue, 29 Mar 2022 08:41:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id CE4DC8D0001 for ; Tue, 29 Mar 2022 08:41:43 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 9F33E549 for ; Tue, 29 Mar 2022 12:41:43 +0000 (UTC) X-FDA: 79297385286.06.1994AA3 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by imf02.hostedemail.com (Postfix) with ESMTP id 22BED80008 for ; Tue, 29 Mar 2022 12:41:42 +0000 (UTC) Received: by mail-wr1-f73.google.com with SMTP id 71-20020adf82cd000000b00203dc43d216so5032048wrc.22 for ; Tue, 29 Mar 2022 05:41:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=fD7vTHTdDGWZj4L/ZpXEU7Cya0QLQJr2pYgffDh+E8w=; b=N5UUNqK7Dai6IC1Y08iiwp58OrTPktY0M/5uk3b7VTaXlaZiBP0iSzQIB5Iqcuot/B gEdH6p8/PLZmopCfTne4pFI4BmtruzrBSebrmitvqAQQE668M5KU4xRexTBPV9uC7ymZ +zOyylf/ah4aCgFKkajbof9fP8R1DcGP/XfHg7RJHfBw7th9QUxFB2hzyRO/1i8jAYJw 4EiSRL5zIyd2VX+QXlGoJXv/JlIO0ZiPYaFKrDKxaDNsgIzCjmznnE0eBKYCbEdPOLhZ W2SH91MCEC/gi2icPHSfjByyfpBxvyMEggdLnFEEIDpgnzzwJUdxKAXvo4DKQ2rPKqaU 4f6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=fD7vTHTdDGWZj4L/ZpXEU7Cya0QLQJr2pYgffDh+E8w=; b=5yImtpVScUQrJ1Ux41gt3hxS9IqNlCJN1xJRk6eTjE0Jykl404y4bYivGzIIzJKuhh ux4pI6RnhM9JY347DeqwYqHv3J2flo75XMbHYr7bzyhvJkfdNEVkrLy4EFBM4JrjBlAe 4Ny5o0eOy7E1DBcwmB+zYegFT07t2Moy38lU/Y/QUNCv0cMglx89r8ODHvmLhSXJy2B4 Lg+fE4MptZyI6uHRTqDZKzhtC8rDexNvZUbUfSJrJKrIm0VINYnVZORJQlhjFsx2Yw3T lpUQWYW/m0d8Efgulk9/+ppxOROWunJwIcMMU19LiiuUBsCNCwI9m9n6uWpAqeSlwW/O /h+g== X-Gm-Message-State: AOAM533v6bH89N+rS+bdItaZC2yA//Mz6D5M3NEGkmCzBVkFmeFHuB8I wOlgzK4N7yt3Y1u/e9h45YMwThR4Vnw= X-Google-Smtp-Source: ABdhPJxIDBdkZcX7+d/MV/S8nH/sLZTjHyeXwqLiz2xe0oEWISYLlV51D9JqwEaJHLYksvw574hlzkx8kmA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a5d:6b4c:0:b0:1e6:8ece:62e8 with SMTP id x12-20020a5d6b4c000000b001e68ece62e8mr30978626wrw.201.1648557701775; Tue, 29 Mar 2022 05:41:41 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:56 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> 
Message-Id: <20220329124017.737571-28-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 27/48] kmsan: handle memory sent to/from USB From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: f98agnsda1riwcuko1uw7dc44ahe763h X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 22BED80008 Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=N5UUNqK7; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf02.hostedemail.com: domain of 3hf5CYgYKCKYMROJKXMUUMRK.IUSROTad-SSQbGIQ.UXM@flex--glider.bounces.google.com designates 209.85.221.73 as permitted sender) smtp.mailfrom=3hf5CYgYKCKYMROJKXMUUMRK.IUSROTad-SSQbGIQ.UXM@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1648557702-401198 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Depending on the value of is_out kmsan_handle_urb() KMSAN either marks the data copied to the kernel from a USB device as initialized, or checks the data sent to the device for being initialized. Signed-off-by: Alexander Potapenko --- v2: -- move kmsan_handle_urb() implementation to this patch Link: https://linux-review.googlesource.com/id/Ifa67fb72015d4de14c30e971556f99fc8b2ee506 --- drivers/usb/core/urb.c | 2 ++ include/linux/kmsan.h | 15 +++++++++++++++ mm/kmsan/hooks.c | 17 +++++++++++++++++ 3 files changed, 34 insertions(+) diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c index 33d62d7e3929f..1fe3f23205624 100644 --- a/drivers/usb/core/urb.c +++ b/drivers/usb/core/urb.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -426,6 +427,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags) URB_SETUP_MAP_SINGLE | URB_SETUP_MAP_LOCAL | URB_DMA_SG_COMBINED); urb->transfer_flags |= (is_out ? URB_DIR_OUT : URB_DIR_IN); + kmsan_handle_urb(urb, is_out); if (xfertype != USB_ENDPOINT_XFER_CONTROL && dev->state < USB_STATE_CONFIGURED) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index d8667161a10c8..55f976b721566 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -20,6 +20,7 @@ struct page; struct kmem_cache; struct task_struct; struct scatterlist; +struct urb; #ifdef CONFIG_KMSAN @@ -236,6 +237,16 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size, void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, enum dma_data_direction dir); +/** + * kmsan_handle_urb() - Handle a USB data transfer. + * @urb: struct urb pointer. + * @is_out: data transfer direction (true means output to hardware). + * + * If @is_out is true, KMSAN checks the transfer buffer of @urb. Otherwise, + * KMSAN initializes the transfer buffer. 
+ */ +void kmsan_handle_urb(const struct urb *urb, bool is_out); + #else static inline void kmsan_init_shadow(void) @@ -328,6 +339,10 @@ static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, { } +static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index cc3465bd69754..d95fd16a4b1dc 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -17,6 +17,7 @@ #include #include #include +#include #include "../internal.h" #include "../slab.h" @@ -252,6 +253,22 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, } EXPORT_SYMBOL(kmsan_copy_to_user); +/* Helper function to check an URB. */ +void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ + if (!urb) + return; + if (is_out) + kmsan_internal_check_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*user_addr*/ 0, REASON_SUBMIT_URB); + else + kmsan_internal_unpoison_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*checked*/ false); +} +EXPORT_SYMBOL(kmsan_handle_urb); + static void kmsan_handle_dma_page(const void *addr, size_t size, enum dma_data_direction dir) { From patchwork Tue Mar 29 12:39:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794778 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 07876C433FE for ; Tue, 29 Mar 2022 12:41:47 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 911AA8D0007; Tue, 29 Mar 2022 08:41:46 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8C2AE8D0001; Tue, 29 Mar 2022 08:41:46 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 763B08D0007; Tue, 29 Mar 2022 08:41:46 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0055.hostedemail.com [216.40.44.55]) by kanga.kvack.org (Postfix) with ESMTP id 675478D0001 for ; Tue, 29 Mar 2022 08:41:46 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 262AA9D66C for ; Tue, 29 Mar 2022 12:41:46 +0000 (UTC) X-FDA: 79297385412.22.B549591 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf16.hostedemail.com (Postfix) with ESMTP id 913CB180025 for ; Tue, 29 Mar 2022 12:41:45 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id i4-20020aa7c9c4000000b00419c542270dso5230416edt.8 for ; Tue, 29 Mar 2022 05:41:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=NcXyKO/GU5wv+AINtBvHToR0v2IpKVpreq+8SU9KtO4=; b=DIWfKg7bP7QnSfYaoX5/svSrQlbraANJFz4ZkJPOWATm87bCAU2GbOOvwevJcEHoKi wbdwLM4FSdQAgqD+aGltFF1nNlgaZOezyHxrai+rBf8lAExLt9TsIObDqEZCzGpIi7nH xwWzJXVyfWrhAFtE1DDgdB13tMX7M2TNnBpf5xo5AKOhlt6Ra/sgjz/MvMdpU1BtHlRO AokxmkFDUGancBTS6WUAyS0jZ1/25Z/5/NnX5iVI36FKkApjAbgSShp4qxN4eRySlGQN LaDPdH7+3zFsETzZOVpB1jP5H2P3l6+9CkkA76YAQaezI3vMYZPtpAZ47ee1hp/xomfj t6zg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version 
:references:subject:from:to:cc; bh=NcXyKO/GU5wv+AINtBvHToR0v2IpKVpreq+8SU9KtO4=; b=TdCxXvvJM7AABPoAtiTZokwfyT5Hxn1k/WkT6vFsZTUCbYo4+3iS+WI5u+t+2MTXEH LPm0sF9fzaveaEiu9a7VlUxA0eyUb1OcyPb88oUlJffebdAXydcvRi5W+OOfCfPCavrb n8MW5c/CY5C3ZQNU7rRmS+lEEEYgIQDM++iiySxnxr9wwIavujQsujLTvs5k2PPPmRrZ OVQusr6vA4/dEYh6ecxJk+PROK8wu1mwVe5eeBJHkraY7eZwFkLxMIH4Sfq7jPYzuZe8 vJ+0vliqQGWMpXMNdIzjUvm8jMLX8mtp2SMM3nrWaNz6BOfUhJ/i3aLIX+6ifnVHNrnc XotA== X-Gm-Message-State: AOAM530/Ee2njBzf1/k7woo4G3aozAN1RiwJGAt474SfqTBudI/1Nz+n KEM3k6kGQ333BX5VaSlGuTbusexa3Y4= X-Google-Smtp-Source: ABdhPJwPdBpHIJ0mASGI8QRZZRZI3D7LOk/yhFizlkS2OYABeZsKpvCixcNsVmSaozXNqIJyvElCXIKXLNA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:1dc8:b0:6df:f5fc:f4f9 with SMTP id og8-20020a1709071dc800b006dff5fcf4f9mr34096038ejc.739.1648557704420; Tue, 29 Mar 2022 05:41:44 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:57 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-29-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 28/48] kmsan: instrumentation.h: add instrumentation_begin_with_regs() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Stat-Signature: bmsfpf86o3mqrae18cr36kpopej9yn8q Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=DIWfKg7b; spf=pass (imf16.hostedemail.com: domain of 3iP5CYgYKCKkPURMNaPXXPUN.LXVURWdg-VVTeJLT.XaP@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3iP5CYgYKCKkPURMNaPXXPUN.LXVURWdg-VVTeJLT.XaP@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 913CB180025 X-HE-Tag: 1648557705-156552 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When calling KMSAN-instrumented functions from non-instrumented functions, function parameters may not be initialized properly, leading to false positive reports. In particular, this happens all the time when calling interrupt handlers from `noinstr` IDT entries. We introduce instrumentation_begin_with_regs(), which calls instrumentation_begin() and notifies KMSAN about the beginning of the potentially instrumented region by calling kmsan_instrumentation_begin(), which: - wipes the current KMSAN state at the beginning of the region, ensuring that the first call of an instrumented function receives initialized parameters (this is a pretty good approximation of having all other instrumented functions receive initialized parameters); - unpoisons the `struct pt_regs` set up by the non-instrumented assembly code. 
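For illustration, a typical noinstr entry point would use the new helper roughly as follows. This is a sketch only, not part of the patch: the entry point and handler names are made up, while instrumentation_begin_with_regs(), instrumentation_end() and struct pt_regs are the interfaces actually involved.

/*
 * Usage sketch (hypothetical entry point and handler names).
 * We arrive here from non-instrumented assembly, so KMSAN has no valid
 * metadata for the function parameters or for *regs yet.
 */
noinstr void example_idt_entry(struct pt_regs *regs)
{
	instrumentation_begin_with_regs(regs);
	/*
	 * At this point the KMSAN context state has been wiped and *regs
	 * has been unpoisoned, so the instrumented handler below sees
	 * initialized arguments.
	 */
	example_handle_interrupt(regs);
	instrumentation_end();
}

Patch 29 of this series converts kernel/entry/common.c to exactly this pattern.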
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I0f5e3372e00bd5fe25ddbf286f7260aae9011858 --- include/linux/instrumentation.h | 6 ++++++ include/linux/kmsan.h | 11 +++++++++++ mm/kmsan/hooks.c | 16 ++++++++++++++++ 3 files changed, 33 insertions(+) diff --git a/include/linux/instrumentation.h b/include/linux/instrumentation.h index 24359b4a96053..3bbce9d556381 100644 --- a/include/linux/instrumentation.h +++ b/include/linux/instrumentation.h @@ -15,6 +15,11 @@ }) #define instrumentation_begin() __instrumentation_begin(__COUNTER__) +#define instrumentation_begin_with_regs(regs) do { \ + __instrumentation_begin(__COUNTER__); \ + kmsan_instrumentation_begin(regs); \ +} while (0) + /* * Because instrumentation_{begin,end}() can nest, objtool validation considers * _begin() a +1 and _end() a -1 and computes a sum over the instructions. @@ -55,6 +60,7 @@ #define instrumentation_end() __instrumentation_end(__COUNTER__) #else # define instrumentation_begin() do { } while(0) +# define instrumentation_begin_with_regs(regs) kmsan_instrumentation_begin(regs) # define instrumentation_end() do { } while(0) #endif diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 55f976b721566..209a5a2192e22 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -247,6 +247,13 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, */ void kmsan_handle_urb(const struct urb *urb, bool is_out); +/** + * kmsan_instrumentation_begin() - handle instrumentation_begin(). + * @regs: pointer to struct pt_regs that non-instrumented code passes to + * instrumented code. + */ +void kmsan_instrumentation_begin(struct pt_regs *regs); + #else static inline void kmsan_init_shadow(void) @@ -343,6 +350,10 @@ static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) { } +static inline void kmsan_instrumentation_begin(struct pt_regs *regs) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index d95fd16a4b1dc..6b133533ff7d9 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -366,3 +366,19 @@ void kmsan_check_memory(const void *addr, size_t size) REASON_ANY); } EXPORT_SYMBOL(kmsan_check_memory); + +void kmsan_instrumentation_begin(struct pt_regs *regs) +{ + struct kmsan_context_state *state = &kmsan_get_context()->cstate; + + if (state) + __memset(state, 0, sizeof(struct kmsan_context_state)); + if (!kmsan_enabled || !regs) + return; + /* + * @regs may reside in cpu_entry_area, for which KMSAN does not allocate + * metadata. Do not force an error in that case. 
+ */ + kmsan_internal_unpoison_memory(regs, sizeof(*regs), /*checked*/ false); +} +EXPORT_SYMBOL(kmsan_instrumentation_begin); From patchwork Tue Mar 29 12:39:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794779 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C247CC433F5 for ; Tue, 29 Mar 2022 12:41:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 569DA8D0003; Tue, 29 Mar 2022 08:41:49 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4F1268D0001; Tue, 29 Mar 2022 08:41:49 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 36ADB8D0003; Tue, 29 Mar 2022 08:41:49 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 28D6D8D0001 for ; Tue, 29 Mar 2022 08:41:49 -0400 (EDT) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id E0554A00FB for ; Tue, 29 Mar 2022 12:41:48 +0000 (UTC) X-FDA: 79297385496.28.451E7A1 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf15.hostedemail.com (Postfix) with ESMTP id 7A862A0007 for ; Tue, 29 Mar 2022 12:41:48 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id b71-20020a509f4d000000b00418d658e9d1so10843874edf.19 for ; Tue, 29 Mar 2022 05:41:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=O02VuhTFNs3OTl+6EdHhE8hyAxR47CEkJ1c0ak5Vr4s=; b=C7gGD19m/XveB8Ws/Mrjzquo4UNw5iFhMF9IPlHcdW4Pk7ozQxZ0PNkEpCOMh/tb00 yCiWkUdHR91r9//CRvGweQPLnnFSzPbvrVzmonGkPHKHJKpSLL7cZLAlDCFYeFIGL0Ss /IHCNCWokewlL1qO8a1mb2PQPEKWlCRk5OzGhzGAqj2hGFDN9YdHtSVskka/fzfTrsMa j1HMSwQQR8sw9adaQ+zA9d1GRw1pKIsT10NfugO2m9bXeTnFmGN1T9OiXau82BOJCc7K C/JGIJ/LSEbzSTmqkiIN+J1LwkC97hJDp0NnsDzTO1d3X/gZnjcZCBRAdS59i777oAr4 GMUA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=O02VuhTFNs3OTl+6EdHhE8hyAxR47CEkJ1c0ak5Vr4s=; b=MVWXKQmYihoFmpkXYk+7pCAaUa2zYeJQMcW0MvIqoJhgSJbH9Fcw6A4sLEqz7pR8Bo XG0yHUG9eBDl/BM8p6oZgqztiRsmaiNyzoTV3Np7aO+d+Ok40gVplp5JXQ0M4YCaOWio FJPcAlDcikjzKweZVCGoW7D5P5AU1zOpiShnq2DOgXbCfXvPVL1/0X3bAJ7DE2nc+W5k rkx3tA/yYLJ6Q/9hp4U1lb1MPfrN0nv6LkbS2Rx2UOuE+oDqeH+BzNlKxoesIwkoTny0 x90qXWPEO7V2dUG03tbV0jhKnGvl3OX1GZznZVQpWDEqRuGgXz+qPF59QfKoJtl0MBPh z/vA== X-Gm-Message-State: AOAM533Px2PvuvjDzuhbFs0cJuJMIc5OS6cFNCaVceiXQJMdwU8SUrzu LBJYw1hIjRNdcfwUswJczh+6BcmVzSg= X-Google-Smtp-Source: ABdhPJyB/bcFcN+qpi65yKy1mIxhG89Sd6SxBmOpeaEdxDCuTuOOvV2uj+2kKbIDqHQp3uDGpI9M/qBVr2s= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:4786:b0:6e0:c7b:d267 with SMTP id cw6-20020a170906478600b006e00c7bd267mr34808708ejc.115.1648557707096; Tue, 29 Mar 2022 05:41:47 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:58 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-30-glider@google.com> Mime-Version: 1.0 References: 
<20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 29/48] kmsan: entry: handle register passing from uninstrumented code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 7A862A0007 X-Rspam-User: Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=C7gGD19m; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf15.hostedemail.com: domain of 3i_5CYgYKCKwSXUPQdSaaSXQ.OaYXUZgj-YYWhMOW.adS@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3i_5CYgYKCKwSXUPQdSaaSXQ.OaYXUZgj-YYWhMOW.adS@flex--glider.bounces.google.com X-Stat-Signature: xkmobin3hqmazu91iogyy5mb316wwcjs X-HE-Tag: 1648557708-738340 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Replace instrumentation_begin() with instrumentation_begin_with_regs() to let KMSAN handle the non-instrumented code and unpoison pt_regs passed from the instrumented part. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I7f0a9809b66bd85faae43142971d0095771b7a42 --- kernel/entry/common.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/kernel/entry/common.c b/kernel/entry/common.c index bad713684c2e3..dcf91ab14512a 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -21,7 +21,7 @@ static __always_inline void __enter_from_user_mode(struct pt_regs *regs) CT_WARN_ON(ct_state() != CONTEXT_USER); user_exit_irqoff(); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); trace_hardirqs_off_finish(); instrumentation_end(); } @@ -103,7 +103,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall) __enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); local_irq_enable(); ret = __syscall_enter_from_user_work(regs, syscall); instrumentation_end(); @@ -114,7 +114,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall) noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs) { __enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); local_irq_enable(); instrumentation_end(); } @@ -296,7 +296,7 @@ void syscall_exit_to_user_mode_work(struct pt_regs *regs) __visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); __syscall_exit_to_user_mode_work(regs); instrumentation_end(); __exit_to_user_mode(); @@ -309,7 +309,7 @@ noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs) noinstr void irqentry_exit_to_user_mode(struct pt_regs *regs) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); exit_to_user_mode_prepare(regs); 
instrumentation_end(); __exit_to_user_mode(); @@ -357,7 +357,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) */ lockdep_hardirqs_off(CALLER_ADDR0); rcu_irq_enter(); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); trace_hardirqs_off_finish(); instrumentation_end(); @@ -372,7 +372,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) * in having another one here. */ lockdep_hardirqs_off(CALLER_ADDR0); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); rcu_irq_enter_check_tick(); trace_hardirqs_off_finish(); instrumentation_end(); @@ -409,7 +409,7 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state) * and RCU as the return to user mode path. */ if (state.exit_rcu) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); /* Tell the tracer that IRET will enable interrupts */ trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(CALLER_ADDR0); @@ -419,7 +419,7 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state) return; } - instrumentation_begin(); + instrumentation_begin_with_regs(regs); if (IS_ENABLED(CONFIG_PREEMPTION)) { #ifdef CONFIG_PREEMPT_DYNAMIC static_call(irqentry_exit_cond_resched)(); @@ -451,7 +451,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs) lockdep_hardirq_enter(); rcu_nmi_enter(); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); trace_hardirqs_off_finish(); ftrace_nmi_enter(); instrumentation_end(); @@ -461,7 +461,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs) void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); ftrace_nmi_exit(); if (irq_state.lockdep) { trace_hardirqs_on_prepare(); From patchwork Tue Mar 29 12:39:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794780 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17A4FC433EF for ; Tue, 29 Mar 2022 12:41:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A23308D0009; Tue, 29 Mar 2022 08:41:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9D2DE8D0008; Tue, 29 Mar 2022 08:41:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 84D118D0009; Tue, 29 Mar 2022 08:41:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 756118D0008 for ; Tue, 29 Mar 2022 08:41:52 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 4778E61A71 for ; Tue, 29 Mar 2022 12:41:52 +0000 (UTC) X-FDA: 79297385664.07.29CD60E Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf04.hostedemail.com (Postfix) with ESMTP id A935F4000F for ; Tue, 29 Mar 2022 12:41:51 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id c22-20020a50f616000000b004196649d144so10957388edn.10 for ; Tue, 29 Mar 2022 05:41:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ORrNB40QDu3vcG+RZ93rBZX+e6sP+7+NX9TJISDi1qU=; b=oUfuMQe4TQ+YUVO04o+/5f035up8s48zJgn2wOB/vGVKcysfaPQ3rpHsQsQnxAlk2K O+mWJ0G9+ApPZSexcv6yXSJQoAKL86AfFqHN+FQxYZUtsm9vcZJt2gzIoYs97OoNBD50 umhg213hsSqiC935IB+uL6VLQdbGU3Ddo76zvvBGgNg1Kmv2tT1jmJwsyuxQpZ0C1lko HLeRJP/YxjgQf957b46rdio72dqlF51FUpXKUhih35es6QcqUbPB070b8fwciMoDAzGl 0edohXBEVCzP8AUEcgY81RWhPpdiDspDAI83AE5JKZFXGiFj/H15DHThcw7C+hs7HWLk /EkA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ORrNB40QDu3vcG+RZ93rBZX+e6sP+7+NX9TJISDi1qU=; b=vGYoSq/1Z6MG2wyav/aF9WqbUtDmESAXJznaNMRciRXj9nCrBoWE3vur8hbGSB9EKl hTd6g30Yui/KSoLOlUfFnOrVK166enyyx+81nJIDRc/16MH4UM3b/Ag9slRy/0dNIvnz 8Vnf4UtLVc7uKdi+GLD8xwbIz7UIGQrmXKtrbLnc8HK7D2gDmmZxqMGQm0hK69Ohiwa8 PSVG2T50SHo1OK5YkrLxuPG5YFaSF3VQB0wm2o42MguKWDPNrKGk82zjRMF/92UziRIS ZMzztJAmherJHiRmXVkuYBXWKMSP6rEPTd89lM+kh39DbFTqZn6Rze2oTr1mf71m8UXu Xi+Q== X-Gm-Message-State: AOAM531JJHaV8R3Usqi95uxmhqzbvZOQjjMhdFxT/N3OxPs3dJ5NLQDG hrtiIbTn0g2c6hQ6yFhwtMkBrOl+/cE= X-Google-Smtp-Source: ABdhPJwKLhapJB5OnJLqm/L8rpOj5PpjmP7ckM5yXynxdh1052FVxkjBNwGwfBfb35psEULLalcUgLvNcDs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:5186:b0:419:49af:428f with SMTP id q6-20020a056402518600b0041949af428fmr4323040edd.177.1648557709877; Tue, 29 Mar 2022 05:41:49 -0700 (PDT) Date: Tue, 29 Mar 2022 14:39:59 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-31-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 30/48] kmsan: add tests for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: oksk95cqybuzoptupkxya6tsh1ag3q8o Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=oUfuMQe4; spf=pass (imf04.hostedemail.com: domain of 3jf5CYgYKCK4UZWRSfUccUZS.QcaZWbil-aaYjOQY.cfU@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3jf5CYgYKCK4UZWRSfUccUZS.QcaZWbil-aaYjOQY.cfU@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: A935F4000F X-HE-Tag: 1648557711-845922 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The testing module triggers KMSAN warnings in different cases and checks that the errors are properly reported, using console probes to capture the tool's output. 
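For reference, a minimal config fragment for exercising this suite might look as follows. This is only a sketch, assuming a clang/x86_64 build with the KMSAN support added earlier in this series; CONFIG_KMSAN_KUNIT_TEST additionally depends on TRACEPOINTS, which is normally selected by the common tracing options.

# Sketch of a config fragment enabling the KMSAN KUnit tests.
CONFIG_KUNIT=y
CONFIG_KMSAN=y
CONFIG_KMSAN_KUNIT_TEST=y

Setting CONFIG_KMSAN_KUNIT_TEST=m instead builds the tests as a module, as described in the Kconfig help text below.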
Signed-off-by: Alexander Potapenko --- v2: -- add memcpy tests Link: https://linux-review.googlesource.com/id/I49c3f59014cc37fd13541c80beb0b75a75244650 --- lib/Kconfig.kmsan | 16 ++ mm/kmsan/Makefile | 4 + mm/kmsan/kmsan_test.c | 536 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 556 insertions(+) create mode 100644 mm/kmsan/kmsan_test.c diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan index 199f79d031f94..a68fdb5ed5d92 100644 --- a/lib/Kconfig.kmsan +++ b/lib/Kconfig.kmsan @@ -21,3 +21,19 @@ config KMSAN the whole system down. See for more details. + +if KMSAN + +config KMSAN_KUNIT_TEST + tristate "KMSAN integration test suite" if !KUNIT_ALL_TESTS + default KUNIT_ALL_TESTS + depends on TRACEPOINTS && KUNIT + help + Test suite for KMSAN, testing various error detection scenarios, + and checking that reports are correctly output to console. + + Say Y here if you want the test to be built into the kernel and run + during boot; say M if you want the test to build as a module; say N + if you are unsure. + +endif diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index f57a956cb1c8b..7be6a7e92394f 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -20,3 +20,7 @@ CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) + +obj-$(CONFIG_KMSAN_KUNIT_TEST) += kmsan_test.o +KMSAN_SANITIZE_kmsan_test.o := y +CFLAGS_kmsan_test.o += $(call cc-disable-warning, uninitialized) diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c new file mode 100644 index 0000000000000..44bb2e0f87d81 --- /dev/null +++ b/mm/kmsan/kmsan_test.c @@ -0,0 +1,536 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test cases for KMSAN. + * For each test case checks the presence (or absence) of generated reports. + * Relies on 'console' tracepoint to capture reports as they appear in the + * kernel log. + * + * Copyright (C) 2021-2022, Google LLC. + * Author: Alexander Potapenko + * + */ + +#include +#include "kmsan.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static DEFINE_PER_CPU(int, per_cpu_var); + +/* Report as observed from console. */ +static struct { + spinlock_t lock; + bool available; + bool ignore; /* Stop console output collection. */ + char header[256]; +} observed = { + .lock = __SPIN_LOCK_UNLOCKED(observed.lock), +}; + +/* Probe for console output: obtains observed lines of interest. */ +static void probe_console(void *ignore, const char *buf, size_t len) +{ + unsigned long flags; + + if (observed.ignore) + return; + spin_lock_irqsave(&observed.lock, flags); + + if (strnstr(buf, "BUG: KMSAN: ", len)) { + /* + * KMSAN report and related to the test. + * + * The provided @buf is not NUL-terminated; copy no more than + * @len bytes and let strscpy() add the missing NUL-terminator. + */ + strscpy(observed.header, buf, + min(len + 1, sizeof(observed.header))); + WRITE_ONCE(observed.available, true); + observed.ignore = true; + } + spin_unlock_irqrestore(&observed.lock, flags); +} + +/* Check if a report related to the test exists. */ +static bool report_available(void) +{ + return READ_ONCE(observed.available); +} + +/* Information we expect in a report. */ +struct expect_report { + const char *error_type; /* Error type. */ + /* + * Kernel symbol from the error header, or NULL if no report is + * expected. + */ + const char *symbol; +}; + +/* Check observed report matches information in @r. 
*/ +static bool report_matches(const struct expect_report *r) +{ + typeof(observed.header) expected_header; + unsigned long flags; + bool ret = false; + const char *end; + char *cur; + + /* Doubled-checked locking. */ + if (!report_available() || !r->symbol) + return (!report_available() && !r->symbol); + + /* Generate expected report contents. */ + + /* Title */ + cur = expected_header; + end = &expected_header[sizeof(expected_header) - 1]; + + cur += scnprintf(cur, end - cur, "BUG: KMSAN: %s", r->error_type); + + scnprintf(cur, end - cur, " in %s", r->symbol); + /* The exact offset won't match, remove it; also strip module name. */ + cur = strchr(expected_header, '+'); + if (cur) + *cur = '\0'; + + spin_lock_irqsave(&observed.lock, flags); + if (!report_available()) + goto out; /* A new report is being captured. */ + + /* Finally match expected output to what we actually observed. */ + ret = strstr(observed.header, expected_header); +out: + spin_unlock_irqrestore(&observed.lock, flags); + + return ret; +} + +/* ===== Test cases ===== */ + +/* Prevent replacing branch with select in LLVM. */ +static noinline void check_true(char *arg) +{ + pr_info("%s is true\n", arg); +} + +static noinline void check_false(char *arg) +{ + pr_info("%s is false\n", arg); +} + +#define USE(x) \ + do { \ + if (x) \ + check_true(#x); \ + else \ + check_false(#x); \ + } while (0) + +#define EXPECTATION_ETYPE_FN(e, reason, fn) \ + struct expect_report e = { \ + .error_type = reason, \ + .symbol = fn, \ + } + +#define EXPECTATION_NO_REPORT(e) EXPECTATION_ETYPE_FN(e, NULL, NULL) +#define EXPECTATION_UNINIT_VALUE_FN(e, fn) \ + EXPECTATION_ETYPE_FN(e, "uninit-value", fn) +#define EXPECTATION_UNINIT_VALUE(e) EXPECTATION_UNINIT_VALUE_FN(e, __func__) +#define EXPECTATION_USE_AFTER_FREE(e) \ + EXPECTATION_ETYPE_FN(e, "use-after-free", __func__) + +/* Test case: ensure that kmalloc() returns uninitialized memory. */ +static void test_uninit_kmalloc(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + int *ptr; + + kunit_info(test, "uninitialized kmalloc test (UMR report)\n"); + ptr = kmalloc(sizeof(int), GFP_KERNEL); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that kmalloc'ed memory becomes initialized after memset(). + */ +static void test_init_kmalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int *ptr; + + kunit_info(test, "initialized kmalloc test (no reports)\n"); + ptr = kmalloc(sizeof(int), GFP_KERNEL); + memset(ptr, 0, sizeof(int)); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that kzalloc() returns initialized memory. */ +static void test_init_kzalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int *ptr; + + kunit_info(test, "initialized kzalloc test (no reports)\n"); + ptr = kzalloc(sizeof(int), GFP_KERNEL); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that local variables are uninitialized by default. */ +static void test_uninit_stack_var(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile int cond; + + kunit_info(test, "uninitialized stack variable (UMR report)\n"); + USE(cond); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that local variables with initializers are initialized. 
*/ +static void test_init_stack_var(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + volatile int cond = 1; + + kunit_info(test, "initialized stack variable (no reports)\n"); + USE(cond); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static noinline void two_param_fn_2(int arg1, int arg2) +{ + USE(arg1); + USE(arg2); +} + +static noinline void one_param_fn(int arg) +{ + two_param_fn_2(arg, arg); + USE(arg); +} + +static noinline void two_param_fn(int arg1, int arg2) +{ + int init = 0; + + one_param_fn(init); + USE(arg1); + USE(arg2); +} + +static void test_params(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "two_param_fn"); + volatile int uninit, init = 1; + + kunit_info(test, + "uninit passed through a function parameter (UMR report)\n"); + two_param_fn(uninit, init); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static int signed_sum3(int a, int b, int c) +{ + return a + b + c; +} + +/* + * Test case: ensure that uninitialized values are tracked through function + * arguments. + */ +static void test_uninit_multiple_params(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile char b = 3, c; + volatile int a; + + kunit_info(test, "uninitialized local passed to fn (UMR report)\n"); + USE(signed_sum3(a, b, c)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Helper function to make an array uninitialized. */ +static noinline void do_uninit_local_array(char *array, int start, int stop) +{ + volatile char uninit; + int i; + + for (i = start; i < stop; i++) + array[i] = uninit; +} + +/* + * Test case: ensure kmsan_check_memory() reports an error when checking + * uninitialized memory. + */ +static void test_uninit_kmsan_check_memory(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_uninit_kmsan_check_memory"); + volatile char local_array[8]; + + kunit_info( + test, + "kmsan_check_memory() called on uninit local (UMR report)\n"); + do_uninit_local_array((char *)local_array, 5, 7); + + kmsan_check_memory((char *)local_array, 8); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: check that a virtual memory range created with vmap() from + * initialized pages is still considered as initialized. + */ +static void test_init_kmsan_vmap_vunmap(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + const int npages = 2; + struct page **pages; + void *vbuf; + int i; + + kunit_info(test, "pages initialized via vmap (no reports)\n"); + + pages = kmalloc_array(npages, sizeof(struct page), GFP_KERNEL); + for (i = 0; i < npages; i++) + pages[i] = alloc_page(GFP_KERNEL); + vbuf = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + memset(vbuf, 0xfe, npages * PAGE_SIZE); + for (i = 0; i < npages; i++) + kmsan_check_memory(page_address(pages[i]), PAGE_SIZE); + + if (vbuf) + vunmap(vbuf); + for (i = 0; i < npages; i++) + if (pages[i]) + __free_page(pages[i]); + kfree(pages); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memset() can initialize a buffer allocated via + * vmalloc(). 
+ */ +static void test_init_vmalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int npages = 8, i; + char *buf; + + kunit_info(test, "vmalloc buffer can be initialized (no reports)\n"); + buf = vmalloc(PAGE_SIZE * npages); + buf[0] = 1; + memset(buf, 0xfe, PAGE_SIZE * npages); + USE(buf[0]); + for (i = 0; i < npages; i++) + kmsan_check_memory(&buf[PAGE_SIZE * i], PAGE_SIZE); + vfree(buf); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that use-after-free reporting works. */ +static void test_uaf(struct kunit *test) +{ + EXPECTATION_USE_AFTER_FREE(expect); + volatile int value; + volatile int *var; + + kunit_info(test, "use-after-free in kmalloc-ed buffer (UMR report)\n"); + var = kmalloc(80, GFP_KERNEL); + var[3] = 0xfeedface; + kfree((int *)var); + /* Copy the invalid value before checking it. */ + value = var[3]; + USE(value); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that uninitialized values are propagated through per-CPU + * memory. + */ +static void test_percpu_propagate(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile int uninit, check; + + kunit_info(test, + "uninit local stored to per_cpu memory (UMR report)\n"); + + this_cpu_write(per_cpu_var, uninit); + check = this_cpu_read(per_cpu_var); + USE(check); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that passing uninitialized values to printk() leads to an + * error report. + */ +static void test_printk(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "number"); + volatile int uninit; + + kunit_info(test, "uninit local passed to pr_info() (UMR report)\n"); + pr_info("%px contains %d\n", &uninit, uninit); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and `dst`. + */ +static void test_memcpy_aligned_to_aligned(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_aligned"); + volatile int uninit_src; + volatile int dst = 0; + + kunit_info(test, "memcpy()ing aligned uninit src to aligned dst (UMR report)\n"); + memcpy((void *)&dst, (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)&dst, sizeof(dst)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and unaligned `dst`. + * + * Copying aligned 4-byte value to an unaligned one leads to touching two + * aligned 4-byte values. This test case checks that KMSAN correctly reports an + * error on the first of the two values. + */ +static void test_memcpy_aligned_to_unaligned(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_unaligned"); + volatile int uninit_src; + volatile char dst[8] = {0}; + + kunit_info(test, "memcpy()ing aligned uninit src to unaligned dst (UMR report)\n"); + memcpy((void *)&dst[1], (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)dst, 4); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and unaligned `dst`. + * + * Copying aligned 4-byte value to an unaligned one leads to touching two + * aligned 4-byte values. This test case checks that KMSAN correctly reports an + * error on the second of the two values. 
+ */ +static void test_memcpy_aligned_to_unaligned2(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_unaligned2"); + volatile int uninit_src; + volatile char dst[8] = {0}; + + kunit_info(test, "memcpy()ing aligned uninit src to unaligned dst - part 2 (UMR report)\n"); + memcpy((void *)&dst[1], (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)&dst[4], sizeof(uninit_src)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static struct kunit_case kmsan_test_cases[] = { + KUNIT_CASE(test_uninit_kmalloc), + KUNIT_CASE(test_init_kmalloc), + KUNIT_CASE(test_init_kzalloc), + KUNIT_CASE(test_uninit_stack_var), + KUNIT_CASE(test_init_stack_var), + KUNIT_CASE(test_params), + KUNIT_CASE(test_uninit_multiple_params), + KUNIT_CASE(test_uninit_kmsan_check_memory), + KUNIT_CASE(test_init_kmsan_vmap_vunmap), + KUNIT_CASE(test_init_vmalloc), + KUNIT_CASE(test_uaf), + KUNIT_CASE(test_percpu_propagate), + KUNIT_CASE(test_printk), + KUNIT_CASE(test_memcpy_aligned_to_aligned), + KUNIT_CASE(test_memcpy_aligned_to_unaligned), + KUNIT_CASE(test_memcpy_aligned_to_unaligned2), + {}, +}; + +/* ===== End test cases ===== */ + +static int test_init(struct kunit *test) +{ + unsigned long flags; + + spin_lock_irqsave(&observed.lock, flags); + observed.header[0] = '\0'; + observed.ignore = false; + observed.available = false; + spin_unlock_irqrestore(&observed.lock, flags); + + return 0; +} + +static void test_exit(struct kunit *test) +{ +} + +static struct kunit_suite kmsan_test_suite = { + .name = "kmsan", + .test_cases = kmsan_test_cases, + .init = test_init, + .exit = test_exit, +}; +static struct kunit_suite *kmsan_test_suites[] = { &kmsan_test_suite, NULL }; + +static void register_tracepoints(struct tracepoint *tp, void *ignore) +{ + check_trace_callback_type_console(probe_console); + if (!strcmp(tp->name, "console")) + WARN_ON(tracepoint_probe_register(tp, probe_console, NULL)); +} + +static void unregister_tracepoints(struct tracepoint *tp, void *ignore) +{ + if (!strcmp(tp->name, "console")) + tracepoint_probe_unregister(tp, probe_console, NULL); +} + +/* + * We only want to do tracepoints setup and teardown once, therefore we have to + * customize the init and exit functions and cannot rely on kunit_test_suite(). + */ +static int __init kmsan_test_init(void) +{ + /* + * Because we want to be able to build the test as a module, we need to + * iterate through all known tracepoints, since the static registration + * won't work here. 
+ */ + for_each_kernel_tracepoint(register_tracepoints, NULL); + return __kunit_test_suites_init(kmsan_test_suites); +} + +static void kmsan_test_exit(void) +{ + __kunit_test_suites_exit(kmsan_test_suites); + for_each_kernel_tracepoint(unregister_tracepoints, NULL); + tracepoint_synchronize_unregister(); +} + +late_initcall_sync(kmsan_test_init); +module_exit(kmsan_test_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Alexander Potapenko "); From patchwork Tue Mar 29 12:40:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794781 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 32D86C433F5 for ; Tue, 29 Mar 2022 12:41:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C20528D000E; Tue, 29 Mar 2022 08:41:54 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B83328D000A; Tue, 29 Mar 2022 08:41:54 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9FBC98D000E; Tue, 29 Mar 2022 08:41:54 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0086.hostedemail.com [216.40.44.86]) by kanga.kvack.org (Postfix) with ESMTP id 920748D000A for ; Tue, 29 Mar 2022 08:41:54 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 502F5A4A77 for ; Tue, 29 Mar 2022 12:41:54 +0000 (UTC) X-FDA: 79297385748.16.CE9C799 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf27.hostedemail.com (Postfix) with ESMTP id D9D7040009 for ; Tue, 29 Mar 2022 12:41:53 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id c22-20020a50f616000000b004196649d144so10957446edn.10 for ; Tue, 29 Mar 2022 05:41:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=enpw47tW5AlKxiO7mK8zQaDRtxatFy2+pOtakd8BlZk=; b=mCwjmjg5IEjrVFruDI8qNciYEOnA2jiLkX9pLqN6BnRcekJoE0XFuHwS9ksz9BZuKY n2Vq+3ICaYrPhLsEY2JcrP3sAWEO8412GRb/cLfojFWc/Jv/zfHf6VmhfoHQDyibrhRx 8IC9UaRnp3e1xpCjw97idSW8aarWoqZh0n2KfPjUQhED20Y5dJyYXlGP9d7TDcAtLJjJ tGVXH5XRlRg71IARaSa3IaKomYbSYugJMCjnDyryrStOIVf7B/EHB5TBnzADPTie1FVi apknmdO2z63KmAh6CYf0AHjMa0AeJ4eU4IOL9/Ql/UgZ21OFy1M4gXnTW1xPdquE8NbE svEA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=enpw47tW5AlKxiO7mK8zQaDRtxatFy2+pOtakd8BlZk=; b=ys8do+tO28NEeOFLsE0W+3ZQuRSEQBpINJkMnqE0HqnofROuRqa30gyt8efj156Akj 4rw1a5JfJNRRXh+nmcDuGgkxqXQtk3OVqtlk1geaVZRBTWTpAiUxp2iIF50bwRUmGRWX pOFqqcQmu+LuAcekZ8pxwH+RYqSbG8+QzYq8HmYMraeh4mxrmLxvcprXbwWAkgCJad7b 1OjDUrpLdKromgs3FOJjPwGebeEnqJmJQfOIujJnwyLTy9wQ5BlcvzrnP8YSTBoghjJL JO5ooHE3Nka14rRaXYFoIomciLBjfWWV4RVEM2Fn6/hcQeTrzDj9StE4SPXUQKLOcBFw 4eAQ== X-Gm-Message-State: AOAM532DzAuOxu85TTEXlLbNXPQ8p6RAy+RFEqstoLbc0WXsq2DQoSXc ihfgZdloXaGWEuyUjpSn9JhA3vtRvjI= X-Google-Smtp-Source: ABdhPJyvopsmMnmkArFS+YVZS01vYvUWgOoKNLhFelQAgUTgMKFm4yxx2n/LM5t8wnbOjIRaXG8/wiLoWwo= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 
2002:a17:907:6297:b0:6da:6388:dc58 with SMTP id nd23-20020a170907629700b006da6388dc58mr35414945ejc.472.1648557712510; Tue, 29 Mar 2022 05:41:52 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:00 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-32-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 31/48] kernel: kmsan: don't instrument stacktrace.c From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=mCwjmjg5; spf=pass (imf27.hostedemail.com: domain of 3kP5CYgYKCLEXcZUViXffXcV.TfdcZelo-ddbmRTb.fiX@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3kP5CYgYKCLEXcZUViXffXcV.TfdcZelo-ddbmRTb.fiX@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: D9D7040009 X-Stat-Signature: g3yhzfrqthysesdxqahrbwgdwynzz9h4 X-HE-Tag: 1648557713-38835 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When unwinding stack traces, the kernel may pick uninitialized data from the stack. To avoid false reports on that data, we do not instrument stacktrace.c Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iadb72036ff6868b1d7c9f1ed6630a66be6c57a42 --- kernel/Makefile | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/kernel/Makefile b/kernel/Makefile index 80f6cfb60c020..1147f0bd6e022 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -40,6 +40,11 @@ KASAN_SANITIZE_kcov.o := n KCSAN_SANITIZE_kcov.o := n UBSAN_SANITIZE_kcov.o := n KMSAN_SANITIZE_kcov.o := n + +# Code in stactrace.c may branch on random values taken from the stack. +# Prevent KMSAN false positives by not instrumenting this file. 
+KMSAN_SANITIZE_stacktrace.o := n + CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector # Don't instrument error handlers From patchwork Tue Mar 29 12:40:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794782 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6040C433EF for ; Tue, 29 Mar 2022 12:41:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5B67B8D0010; Tue, 29 Mar 2022 08:41:57 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5663C8D000F; Tue, 29 Mar 2022 08:41:57 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 45A2E8D0010; Tue, 29 Mar 2022 08:41:57 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 307AA8D000F for ; Tue, 29 Mar 2022 08:41:57 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 13FC4606CD for ; Tue, 29 Mar 2022 12:41:57 +0000 (UTC) X-FDA: 79297385874.10.471CFDD Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf25.hostedemail.com (Postfix) with ESMTP id 89EE9A0009 for ; Tue, 29 Mar 2022 12:41:56 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id i4-20020aa7c9c4000000b00419c542270dso5230674edt.8 for ; Tue, 29 Mar 2022 05:41:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=RDAmLqPngdYXKHnjNo9GDMAFOLr0Hdlw9RyMH9Gzg2s=; b=FNcO9gUSy3+PyH4e35W5RF6+lKn5SBE99oNAhVPz9pbYEq9m/C8L6ZgtW5ZmHXHd88 hQHJyPhLbOckmp0EqPMhP2HFzsxaGQxiGxliYPsshxLsXZvRvNDQ3UiI+w2rcId/Pty2 F64ZcwqpBt31uCBn0TdlaFTImU9JlBg1RYV2VHIqSgThEHMQahV/qTUZD+t5uPqd7ue6 fH/tn196F1PiJHLimD7A+gQDUww4aLBtiNt+XEamDrdfD269XRiydzui+vDM+yTJPgEp rF70JXIzeqHaym/T57YbYOQSbgbJ7VSdFIY7t07gue+zb/IWNfau8QfmJWCRW+7p6THF iMvw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=RDAmLqPngdYXKHnjNo9GDMAFOLr0Hdlw9RyMH9Gzg2s=; b=mbJ2rMssxiWQLOD1bqQq8p5mMRlFWJzgax6nYvLYgX/e8+hTbI7gIx2I5BW4Ua+pVy 6AjtJov7hpiY+JE2r22qHM9ThkDRcK6jSQYGqiJVuZzG19L8K8I4Hk79hkSq3pgNlOZz BJN/3Y9w1wgl+nLVA2X606Isk6CWV9yediM7uQWqMCSENxf9ShA5kwd0e38ZU0GG6rJK /LYaf/UDlKxXvVO19hFLoLa037mxYPZPaK4lzGfFrzh0ZflZBAcBS+0KWl3H3MJ+0cf+ x+3j0KlboKRjcB0iwaKyBI9k4Bq7RaD3rpM0znQuDE9411EgcAz/Eizs+IG58QIKBLCZ 5fsg== X-Gm-Message-State: AOAM531CIWAcyoi4bvMeZIgXHMshb4twfxSKnXppd9TTyYGCheRAtdwb d2y+DJ7OHPaslbxLYF776peEx3Tg1Ho= X-Google-Smtp-Source: ABdhPJzvRP1sTqhryqj6ueAWd65Bk5TF0Xd1dq7WPSkbBowXyk0DvSiadnoCaN6qak+2HIZ7cYi2Zk2fLDo= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:af6b:b0:6df:83a9:67db with SMTP id os11-20020a170906af6b00b006df83a967dbmr34685899ejb.222.1648557715258; Tue, 29 Mar 2022 05:41:55 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:01 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-33-glider@google.com> Mime-Version: 1.0 References: 
<20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 32/48] kmsan: disable strscpy() optimization under KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam09 X-Rspam-User: X-Stat-Signature: mx7yms1pjpfxr7omturooiz8tqqfiub9 Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=FNcO9gUS; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf25.hostedemail.com: domain of 3k_5CYgYKCLQafcXYlaiiafY.Wigfchor-ggepUWe.ila@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3k_5CYgYKCLQafcXYlaiiafY.Wigfchor-ggepUWe.ila@flex--glider.bounces.google.com X-Rspamd-Queue-Id: 89EE9A0009 X-HE-Tag: 1648557716-462174 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Disable the efficient 8-byte reading under KMSAN to avoid false positives. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iffd8336965e88fce915db2e6a9d6524422975f69 --- lib/string.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/lib/string.c b/lib/string.c index 485777c9da832..4ece4c7e7831b 100644 --- a/lib/string.c +++ b/lib/string.c @@ -197,6 +197,14 @@ ssize_t strscpy(char *dest, const char *src, size_t count) max = 0; #endif + /* + * read_word_at_a_time() below may read uninitialized bytes after the + * trailing zero and use them in comparisons. Disable this optimization + * under KMSAN to prevent false positive reports. 
+ */ + if (IS_ENABLED(CONFIG_KMSAN)) + max = 0; + while (max >= sizeof(unsigned long)) { unsigned long c, data; From patchwork Tue Mar 29 12:40:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794783 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E4E5FC433F5 for ; Tue, 29 Mar 2022 12:42:00 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 766698D0015; Tue, 29 Mar 2022 08:42:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 714878D0011; Tue, 29 Mar 2022 08:42:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5DC5B8D0015; Tue, 29 Mar 2022 08:42:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 4EFEA8D0011 for ; Tue, 29 Mar 2022 08:42:00 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 1970523E67 for ; Tue, 29 Mar 2022 12:42:00 +0000 (UTC) X-FDA: 79297386000.13.33F8C06 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf16.hostedemail.com (Postfix) with ESMTP id 8F508180006 for ; Tue, 29 Mar 2022 12:41:59 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id f2-20020a50d542000000b00418ed3d95d8so10961151edj.11 for ; Tue, 29 Mar 2022 05:41:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=C2KQWOZCabgVYfvOHTlGiLbP5VJcDGz40QZF+vn2Rc0=; b=QTYt9ALOL2uekj5EhavmcqsWsXm1i9tTG8lVqsixbMUAwldjnzPa1iDzQOX0A70yz8 rZ2AFCEZilfJ0aeA6qTNHyg3VcyIHeTwWo7kOmqAXeGPwO3fWdVa0C8Xa6VXQAlD4+Y6 4we8BlhM172eyBuCv/H43tzwj5DooXSoeJ1OQOT6wty0KgfRUGrTKTMg7sjoOZbr+GZQ h162PLcOdfLvVAUtf7cw9VNk8uejOkBnNekjdkyuuQjepaVKYqpAZA9/UhqLrHLGAIKG W46UQq3sCAo4Wono9Zi4fmnwXyNRv6konSpuKDMIK9u0VMX0KqUi5loVTdk36IKS+vKO HExQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=C2KQWOZCabgVYfvOHTlGiLbP5VJcDGz40QZF+vn2Rc0=; b=ngxh9Wtv3Os+iSlQTRq9Mmx94j+rYktCikvBgWi1JjodxRxgdtVhuyk+s444WiSotP gIry9kIb1mdv0B/pW9FYAMG033639BZWNDTJz/sODZFFbLX5IkwoGQQiIQCXcx9Gg0Ba zKE4IAhFHQB8OYln7I5rgYmhY2U8UqiGNzso6SeKC+xayMqpnIyOIpfv5/BPY36zcB3n uw9AUm6l+YnXFR895kGIVHA01wZoahvwfOGXSv/Ho9P/cniC2gRZ+/inzlMUf9PUjZ38 y2xM37ZeLg5bseYHTBVr4mm5iRrDXLs50pZnxJlL6WEfcoC8JOPkprrMF/FAX4VTkaOB iGMQ== X-Gm-Message-State: AOAM533Jrc5ZI499sTmeb1hAPl1r1MTRP3QUUBQurg+WyDkPxN5K6TBd ojQrjFosk65qAw8Am721tu5ff4kvNO4= X-Google-Smtp-Source: ABdhPJzqBcloD2KBNwEOa+2Cq88rT9VWS0jiDv0M/QAzLmjAZQp9Ior/Bcdzgfid36cN5BimbzeHjNpxRI0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:2513:b0:6e1:2c10:2ada with SMTP id y19-20020a170907251300b006e12c102adamr5699618ejl.211.1648557717959; Tue, 29 Mar 2022 05:41:57 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:02 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-34-glider@google.com> Mime-Version: 1.0 References: 
<20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 33/48] crypto: kmsan: disable accelerated configs under KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: dx3tfntodgdqgy451f1uenor7ypo395b Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=QTYt9ALO; spf=pass (imf16.hostedemail.com: domain of 3lf5CYgYKCLYcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3lf5CYgYKCLYcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 8F508180006 X-HE-Tag: 1648557719-657614 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is unable to understand when initialized values come from assembly. Disable accelerated configs in KMSAN builds to prevent false positive reports. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Idb2334bf3a1b68b31b399709baefaa763038cc50 --- crypto/Kconfig | 30 ++++++++++++++++++++++++++++++ drivers/net/Kconfig | 1 + 2 files changed, 31 insertions(+) diff --git a/crypto/Kconfig b/crypto/Kconfig index 442765219c375..c5e1c86043bbf 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -290,6 +290,7 @@ config CRYPTO_CURVE25519 config CRYPTO_CURVE25519_X86 tristate "x86_64 accelerated Curve25519 scalar multiplication library" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_CURVE25519_GENERIC select CRYPTO_ARCH_HAVE_LIB_CURVE25519 @@ -338,11 +339,13 @@ config CRYPTO_AEGIS128 config CRYPTO_AEGIS128_SIMD bool "Support SIMD acceleration for AEGIS-128" depends on CRYPTO_AEGIS128 && ((ARM || ARM64) && KERNEL_MODE_NEON) + depends on !KMSAN # avoid false positives from assembly default y config CRYPTO_AEGIS128_AESNI_SSE2 tristate "AEGIS-128 AEAD algorithm (x86_64 AESNI+SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_SIMD help @@ -478,6 +481,7 @@ config CRYPTO_NHPOLY1305 config CRYPTO_NHPOLY1305_SSE2 tristate "NHPoly1305 hash function (x86_64 SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help SSE2 optimized implementation of the hash function used by the @@ -486,6 +490,7 @@ config CRYPTO_NHPOLY1305_SSE2 config CRYPTO_NHPOLY1305_AVX2 tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help AVX2 optimized implementation of the hash function used by the @@ -599,6 +604,7 @@ config CRYPTO_CRC32C config 
CRYPTO_CRC32C_INTEL tristate "CRC32c INTEL hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help In Intel processor with SSE4.2 supported, the processor will @@ -639,6 +645,7 @@ config CRYPTO_CRC32 config CRYPTO_CRC32_PCLMUL tristate "CRC32 PCLMULQDQ hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH select CRC32 help @@ -704,6 +711,7 @@ config CRYPTO_BLAKE2S config CRYPTO_BLAKE2S_X86 tristate "BLAKE2s digest algorithm (x86 accelerated version)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_BLAKE2S_GENERIC select CRYPTO_ARCH_HAVE_LIB_BLAKE2S @@ -718,6 +726,7 @@ config CRYPTO_CRCT10DIF config CRYPTO_CRCT10DIF_PCLMUL tristate "CRCT10DIF PCLMULQDQ hardware acceleration" depends on X86 && 64BIT && CRC_T10DIF + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help For x86_64 processors with SSE4.2 and PCLMULQDQ supported, @@ -765,6 +774,7 @@ config CRYPTO_POLY1305 config CRYPTO_POLY1305_X86_64 tristate "Poly1305 authenticator algorithm (x86_64/SSE2/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_POLY1305_GENERIC select CRYPTO_ARCH_HAVE_LIB_POLY1305 help @@ -853,6 +863,7 @@ config CRYPTO_SHA1 config CRYPTO_SHA1_SSSE3 tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA1 select CRYPTO_HASH help @@ -864,6 +875,7 @@ config CRYPTO_SHA1_SSSE3 config CRYPTO_SHA256_SSSE3 tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA256 select CRYPTO_HASH help @@ -876,6 +888,7 @@ config CRYPTO_SHA256_SSSE3 config CRYPTO_SHA512_SSSE3 tristate "SHA512 digest algorithm (SSSE3/AVX/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA512 select CRYPTO_HASH help @@ -1034,6 +1047,7 @@ config CRYPTO_WP512 config CRYPTO_GHASH_CLMUL_NI_INTEL tristate "GHASH hash function (CLMUL-NI accelerated)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CRYPTD help This is the x86_64 CLMUL-NI accelerated implementation of @@ -1084,6 +1098,7 @@ config CRYPTO_AES_TI config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_LIB_AES select CRYPTO_ALGAPI @@ -1208,6 +1223,7 @@ config CRYPTO_BLOWFISH_COMMON config CRYPTO_BLOWFISH_X86_64 tristate "Blowfish cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_BLOWFISH_COMMON imply CRYPTO_CTR @@ -1238,6 +1254,7 @@ config CRYPTO_CAMELLIA config CRYPTO_CAMELLIA_X86_64 tristate "Camellia cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER imply CRYPTO_CTR help @@ -1254,6 +1271,7 @@ config CRYPTO_CAMELLIA_X86_64 config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAMELLIA_X86_64 select CRYPTO_SIMD @@ -1272,6 +1290,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 config 
CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CAMELLIA_AESNI_AVX_X86_64 help Camellia cipher algorithm module (x86_64/AES-NI/AVX2). @@ -1317,6 +1336,7 @@ config CRYPTO_CAST5 config CRYPTO_CAST5_AVX_X86_64 tristate "CAST5 (CAST-128) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST5 select CRYPTO_CAST_COMMON @@ -1340,6 +1360,7 @@ config CRYPTO_CAST6 config CRYPTO_CAST6_AVX_X86_64 tristate "CAST6 (CAST-256) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST6 select CRYPTO_CAST_COMMON @@ -1373,6 +1394,7 @@ config CRYPTO_DES_SPARC64 config CRYPTO_DES3_EDE_X86_64 tristate "Triple DES EDE cipher algorithm (x86-64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_DES imply CRYPTO_CTR @@ -1430,6 +1452,7 @@ config CRYPTO_CHACHA20 config CRYPTO_CHACHA20_X86_64 tristate "ChaCha stream cipher algorithms (x86_64/SSSE3/AVX2/AVX-512VL)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_CHACHA_GENERIC select CRYPTO_ARCH_HAVE_LIB_CHACHA @@ -1473,6 +1496,7 @@ config CRYPTO_SERPENT config CRYPTO_SERPENT_SSE2_X86_64 tristate "Serpent cipher algorithm (x86_64/SSE2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1492,6 +1516,7 @@ config CRYPTO_SERPENT_SSE2_X86_64 config CRYPTO_SERPENT_SSE2_586 tristate "Serpent cipher algorithm (i586/SSE2)" depends on X86 && !64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1511,6 +1536,7 @@ config CRYPTO_SERPENT_SSE2_586 config CRYPTO_SERPENT_AVX_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1531,6 +1557,7 @@ config CRYPTO_SERPENT_AVX_X86_64 config CRYPTO_SERPENT_AVX2_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SERPENT_AVX_X86_64 help Serpent cipher algorithm, by Anderson, Biham & Knudsen. 
@@ -1672,6 +1699,7 @@ config CRYPTO_TWOFISH_586 config CRYPTO_TWOFISH_X86_64 tristate "Twofish cipher algorithm (x86_64)" depends on (X86 || UML_X86) && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_ALGAPI select CRYPTO_TWOFISH_COMMON imply CRYPTO_CTR @@ -1689,6 +1717,7 @@ config CRYPTO_TWOFISH_X86_64 config CRYPTO_TWOFISH_X86_64_3WAY tristate "Twofish cipher algorithm (x86_64, 3-way parallel)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_TWOFISH_COMMON select CRYPTO_TWOFISH_X86_64 @@ -1709,6 +1738,7 @@ config CRYPTO_TWOFISH_X86_64_3WAY config CRYPTO_TWOFISH_AVX_X86_64 tristate "Twofish cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SIMD select CRYPTO_TWOFISH_COMMON diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index b2a4f998c180e..fed89b6981759 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -76,6 +76,7 @@ config WIREGUARD tristate "WireGuard secure network tunnel" depends on NET && INET depends on IPV6 || !IPV6 + depends on !KMSAN # KMSAN doesn't support the crypto configs below select NET_UDP_TUNNEL select DST_CACHE select CRYPTO From patchwork Tue Mar 29 12:40:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794784 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56646C433FE for ; Tue, 29 Mar 2022 12:42:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DCD848D0017; Tue, 29 Mar 2022 08:42:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D565E8D0016; Tue, 29 Mar 2022 08:42:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C1E2E8D0017; Tue, 29 Mar 2022 08:42:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0090.hostedemail.com [216.40.44.90]) by kanga.kvack.org (Postfix) with ESMTP id B21418D0016 for ; Tue, 29 Mar 2022 08:42:02 -0400 (EDT) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 7122F1828A44C for ; Tue, 29 Mar 2022 12:42:02 +0000 (UTC) X-FDA: 79297386084.23.D34027C Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf10.hostedemail.com (Postfix) with ESMTP id EA86CC000B for ; Tue, 29 Mar 2022 12:42:01 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id x2-20020a1709065ac200b006d9b316257fso8150218ejs.12 for ; Tue, 29 Mar 2022 05:42:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=g4O/g3yGtjFTnRq6Tym+8EhZXJHjgp/RAJrohmerxSE=; b=qOFgBTIRaBAgytLmSPVdMFIKWup6qEgqCTL9zlLnfBr+1qisKGAs7mZQzS5fF1N0/w 75BSqMadVv+ZwN4R/Y1880VwCu1sYPXecdfh1+jci1QJBCu4bN2ZpoxrkGg5fNAeuw4/ uqe7fS8vFdcz0MK3boRrEmswcahRoIs30mkNDrxgpAlwqh22khuALzbXXL7Bb57cf0og bNlPI0uZOKQd1VXKE1IEuIH6YQWnxA8v5sTwzz+4zJH90L3gpVOfXta50Pkpjz1H1DCt 0Gsp0BrH+su3tpLOFauLfvXBA1RwcNoUCkd+jh5ebsHf0jr1Db/zRDrg1iivqcmqkSXF Zf4g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=g4O/g3yGtjFTnRq6Tym+8EhZXJHjgp/RAJrohmerxSE=; b=EXPtw7VUeOU6ju1WUnPRWtbg1OI+EySmaxxj3gKb8xRXZmNNC6oEfvBeF5DXl4YuwS cnZ/ulXG6KX1lJ+OvnvxbZusacIXU1EPZNBjSqzs2HKbJK+7G7nLXEwR5+A2VtHW7rO+ KlD2cHZQ1xI1sASufB4DcMqjfT7/jtiXJwMYsuUCb0Z79+I9a3JqDoQQrs/n+uCgrvxu j1SgSQCOoCC62uxs/OgDTsS1D+iMnXdLnLZ+KtVU9TpqjQwM62zMX6dvWDozcr40AzbU +omJuXTEZlPVrMd3fXvscbGJGpK293ZrPl7owUkSp10SYuGsoVbxOsyfLWaqrj46PvB8 6ZlQ== X-Gm-Message-State: AOAM532Dg3ZIBMUAbz2CAC3WfvqvIgisUs/gWhB5t9FAr9+/yNYTnzV8 24/3gfORmDi301vc+x2N+T7Z1QtZKBo= X-Google-Smtp-Source: ABdhPJxPAPS1bh/Xpqng2jeIO39gOYiCpSBLkakMLcyM6oP3/B0k9R3pYWtakO+RdreM9L/Qpt7ZGcmIJWk= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:da6:b0:6e0:c59:f3ad with SMTP id go38-20020a1709070da600b006e00c59f3admr34366333ejc.85.1648557720656; Tue, 29 Mar 2022 05:42:00 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:03 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-35-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 34/48] kmsan: disable physical page merging in biovec From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: mnx9hfptee4xpgdirpb8j3fy1uu3bicb Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=qOFgBTIR; spf=pass (imf10.hostedemail.com: domain of 3mP5CYgYKCLkfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3mP5CYgYKCLkfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: EA86CC000B X-HE-Tag: 1648557721-439362 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN metadata for adjacent physical pages may not be adjacent, therefore accessing such pages together may lead to metadata corruption. We disable merging pages in biovec to prevent such corruptions. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iece16041be5ee47904fbc98121b105e5be5fea5c --- block/blk.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/block/blk.h b/block/blk.h index 8bd43b3ad33d5..eb349916ac116 100644 --- a/block/blk.h +++ b/block/blk.h @@ -93,6 +93,13 @@ static inline bool biovec_phys_mergeable(struct request_queue *q, phys_addr_t addr1 = page_to_phys(vec1->bv_page) + vec1->bv_offset; phys_addr_t addr2 = page_to_phys(vec2->bv_page) + vec2->bv_offset; + /* + * Merging adjacent physical pages may not work correctly under KMSAN + * if their metadata pages aren't adjacent. 
Just disable merging. + */ + if (IS_ENABLED(CONFIG_KMSAN)) + return false; + if (addr1 + vec1->bv_len != addr2) return false; if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page)) From patchwork Tue Mar 29 12:40:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794785 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E740BC433EF for ; Tue, 29 Mar 2022 12:42:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 803858D0014; Tue, 29 Mar 2022 08:42:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7B27E8D0002; Tue, 29 Mar 2022 08:42:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6515E8D0014; Tue, 29 Mar 2022 08:42:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0174.hostedemail.com [216.40.44.174]) by kanga.kvack.org (Postfix) with ESMTP id 58D2C8D0002 for ; Tue, 29 Mar 2022 08:42:05 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 1C7848248D52 for ; Tue, 29 Mar 2022 12:42:05 +0000 (UTC) X-FDA: 79297386210.29.30F4472 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf03.hostedemail.com (Postfix) with ESMTP id 8E9792000C for ; Tue, 29 Mar 2022 12:42:04 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id de52-20020a1709069bf400b006dffb966922so8125421ejc.2 for ; Tue, 29 Mar 2022 05:42:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=/KmrLQz2B9LUGG5T0s5DB1smaxsYg6k7avQ3Zjj3eGk=; b=nf15GMhub2RWppTwCYZIrWIlhESGQRfc/W+NIYyqHWkrAY5ZTdWS9tNg5R+WFNEwDR ws+bWlr0jAPjLQgTU8HBYWK/xBDV/+Z4gS8/E0xP18HFUtngLfocVvFSvgaAAKlkp4H3 kXATs/dpgLrmitCYFBWHPxmJvSrUCCPUNzG/KXTkCq2AyohIUvlifgXGdxppP4apBE5A LAliLdtfENt0IXvOlwHuX/jsHQwOwjSPAiUrAj6DAH7RdyI9h/hc7xkf9IhqtKQUhdA7 KWoj2cicXsXsWzrnFpLIgQlZXdwXJns/c+61ORU+3rxi9tyOWn8w7u20i5xnvlynULyv A3iw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=/KmrLQz2B9LUGG5T0s5DB1smaxsYg6k7avQ3Zjj3eGk=; b=0D5N7MoR7k3XDvIajRgddh+a/qZ35JcDNcv1049UoYN/QAPffntguHWz8VtIB0rKfW 17Ys+6JywnCp5cLcaR2WEl+YAIllK++H4VqBvhiknIqYz+5lJmxvlAMOoml669hpyxrq aZADszHPoRqhU91eXOq6WuhxIjOYcfM5iE1uGVKBitGejmccFxBpmd5pNjK21G1b93hx yjooLabli3hlIXNN/pweB2aCI++eXZzCtR0QbgaaS7S+bcnyq5GBljr9WroEMuWOLGfP 2hr+y8yeeSh+vbRvfhy9UvJNFj06V/Vy9OoCE9McBmCm6sX5xBLZBV/+xZutyds4dlu0 952A== X-Gm-Message-State: AOAM531Kc3YfVV7Img/SQRONeIT4Ym07lvpl8aJSF6Y3TRKFnxgw3BRm YLsiXpPJUqxmr+BEUggM7YNsKVK2DTo= X-Google-Smtp-Source: ABdhPJxVb0eTOD5/UioVaLlWvIUvUDI3ZO47Rl6mwf+HN91ZLWrwoGAWiWD7v0UD4gilbb/v2zNMV7AgqQo= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:3583:b0:6d1:c07:fac0 with SMTP id o3-20020a170906358300b006d10c07fac0mr34312763ejb.749.1648557723241; Tue, 29 Mar 2022 05:42:03 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:04 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: 
<20220329124017.737571-36-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 35/48] kmsan: block: skip bio block merging logic for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Biggers X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 8E9792000C X-Stat-Signature: 3do8sxtogfdph4hpoa6xoxz4e1t1hpj5 X-Rspam-User: Authentication-Results: imf03.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=nf15GMhu; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf03.hostedemail.com: domain of 3m_5CYgYKCLwinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3m_5CYgYKCLwinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com X-HE-Tag: 1648557724-754779 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't allow treating adjacent memory pages as such, if they were allocated by different alloc_pages() calls. The block layer however does so: adjacent pages end up being used together. To prevent this, make page_is_mergeable() return false under KMSAN. 
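A minimal sketch of the compile-time guard that this patch and the preceding biovec patch both rely on (illustration only, not code from the patch; sketch_can_merge() is a hypothetical name): IS_ENABLED(CONFIG_KMSAN) folds to a constant, so the early return is compiled out of non-KMSAN builds and the merge fast path is untouched.

#include <linux/kconfig.h>
#include <linux/types.h>

/* Hypothetical helper mirroring the guards added to biovec_phys_mergeable()
 * and page_is_mergeable(): refuse to merge under KMSAN, because the KMSAN
 * metadata for two physically adjacent pages may itself not be adjacent. */
static bool sketch_can_merge(phys_addr_t end_of_first, phys_addr_t start_of_second)
{
	if (IS_ENABLED(CONFIG_KMSAN))
		return false;
	return end_of_first == start_of_second;
}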
Suggested-by: Eric Biggers Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ie29cc2464c70032347c32ab2a22e1e7a0b37b905 --- block/bio.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/bio.c b/block/bio.c index 4312a8085396b..3c8806bbe3a81 100644 --- a/block/bio.c +++ b/block/bio.c @@ -808,6 +808,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, return false; *same_page = ((vec_end_addr & PAGE_MASK) == page_addr); + if (!*same_page && IS_ENABLED(CONFIG_KMSAN)) + return false; if (*same_page) return true; return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE); From patchwork Tue Mar 29 12:40:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794786 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 46DA0C433EF for ; Tue, 29 Mar 2022 12:42:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DCCA98D000E; Tue, 29 Mar 2022 08:42:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D7BAB8D000A; Tue, 29 Mar 2022 08:42:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C95128D000E; Tue, 29 Mar 2022 08:42:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id B97068D000A for ; Tue, 29 Mar 2022 08:42:07 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 9809460C90 for ; Tue, 29 Mar 2022 12:42:07 +0000 (UTC) X-FDA: 79297386294.03.1674AB3 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf05.hostedemail.com (Postfix) with ESMTP id 32E2D100004 for ; Tue, 29 Mar 2022 12:42:07 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id 13-20020a170906328d00b006982d0888a4so8050005ejw.9 for ; Tue, 29 Mar 2022 05:42:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=XAH4acQx+nIWehlAydXzel6kNYbNIXLv2CmwLsxzisk=; b=YpPQVEPURN2ajRY1n46PwqwFtWp7RlcVqb0MoMfwIeS/m0CwFkAlmK9veRToXX/Nh0 b6E4uZScKbMj2h/RFOMhq1XJYUrJq4RW8RMzRp9Z6yBNz7KYsvHe+FzobtEv1SE+z8z0 bBRHxkEnq08sMogy1ImUi3Oswt0oie9fh2FqU81kZAPoJIolXyiL4DuviryqlAYyJ0cl fP2//D93tGjpn2gvBqCfUGSh263yJnEsQm+Ik3DPQuyteuBuA1cheprPsGBkm0p9BeA1 Z+owbpU+GHDWSgeR7eKuS6SEv9VgACxZntokNlAC2HOBzVv0eGiDKuMeIWei48sQH+i1 OfdQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=XAH4acQx+nIWehlAydXzel6kNYbNIXLv2CmwLsxzisk=; b=PyqIDb/FvWtjJ8Afmo46EbxFaHcsolBXwr2LA+lhL2suGQuw3YI7Dyz8JvY9z+skji YwVEr6eCJJU3xKtN+3/QRqb2v3vCiruMtCx5CFEBnuVAG2JzZffAw0LqyONVVilPNYYd 06GUByFBoi6eGel0dFd0XArFCaqJOrAUbY1hdPVvJs1TxnqXmz0bMKHo5daQTVOJ7aMG 25vA5ggGNcXvL5FtcVJH/PJLXhNZPhvp0YTly8SdbtPGxBQ+kRNK8YVnA9eKQ4/8559N bLrphQeuZfz0XQvFWLPpyn0sgZh1pvr+tp6/8T3BdfyGzPIX7TLf9SwAioktZ3XCzz64 aO0g== X-Gm-Message-State: AOAM531RUGzxfNA3eWiA3M6hwnCzInKVfkFZir5WAMcMkzs/6es5gBk1 duZWyuQeQ/HVsvSnizcXKuEPIFMVglI= X-Google-Smtp-Source: 
ABdhPJw8wyFM62Ut2Sf7rG+vUT6fWVoAt6oG05coCAtugWeflqaBDOLOCgDn81VaIvvOcjBAOP8lwoc0QHY= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:4313:b0:6b8:b3e5:a46 with SMTP id j19-20020a170906431300b006b8b3e50a46mr33446870ejm.417.1648557725791; Tue, 29 Mar 2022 05:42:05 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:05 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-37-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 36/48] kmsan: kcov: unpoison area->list in kcov_remote_area_put() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: np3zmuxb5r86npa7wpsg5h78ecwqw4mo Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=YpPQVEPU; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf05.hostedemail.com: domain of 3nf5CYgYKCL4kpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3nf5CYgYKCL4kpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 32E2D100004 X-HE-Tag: 1648557727-215411 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN does not instrument kernel/kcov.c for performance reasons (with CONFIG_KCOV=y virtually every place in the kernel invokes kcov instrumentation). Therefore the tool may miss writes from kcov.c that initialize memory. When CONFIG_DEBUG_LIST is enabled, list pointers from kernel/kcov.c are passed to instrumented helpers in lib/list_debug.c, resulting in false positives. To work around these reports, we unpoison the contents of area->list after initializing it. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ie17f2ee47a7af58f5cdf716d585ebf0769348a5a --- kernel/kcov.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/kernel/kcov.c b/kernel/kcov.c index 36ca640c4f8e7..88ffdddc99ba1 100644 --- a/kernel/kcov.c +++ b/kernel/kcov.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -152,6 +153,12 @@ static void kcov_remote_area_put(struct kcov_remote_area *area, INIT_LIST_HEAD(&area->list); area->size = size; list_add(&area->list, &kcov_remote_areas); + /* + * KMSAN doesn't instrument this file, so it may not know area->list + * is initialized. Unpoison it explicitly to avoid reports in + * kcov_remote_area_get(). 
+ */ + kmsan_unpoison_memory(&area->list, sizeof(struct list_head)); } static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) From patchwork Tue Mar 29 12:40:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794787 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D49DDC433FE for ; Tue, 29 Mar 2022 12:42:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6A2358D0006; Tue, 29 Mar 2022 08:42:10 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 606938D0005; Tue, 29 Mar 2022 08:42:10 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4F0C38D0006; Tue, 29 Mar 2022 08:42:10 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 4155F8D0005 for ; Tue, 29 Mar 2022 08:42:10 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 1866F23904 for ; Tue, 29 Mar 2022 12:42:10 +0000 (UTC) X-FDA: 79297386420.13.8C2459E Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf20.hostedemail.com (Postfix) with ESMTP id 974C11C0002 for ; Tue, 29 Mar 2022 12:42:09 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id jx2-20020a170907760200b006dfc374c502so8135218ejc.7 for ; Tue, 29 Mar 2022 05:42:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=hN0px5Y6w+gPiIIsc2Y11OjC7QzplCcP/UQmq/enNWc=; b=Hba3tqo2ZktNSh5VayjCUcXgsuLQihspG9CdbIkgwV0XnzhptRTheqtvOJvtrkRDxi Ss5riSjqs79q4sj6vEzrl9e5NAnMEYkMtOD7DR0kjiXToleB0xMmrngB/lOmVp53Pm6g sL4jzNQgPIYiWPmAUDH+RYqBmhUxqdDakm3uNuUeS06KDhE0eZO/1cXNArkf4IMFhORp upUi5nfSsYrTnu7bITVW1QpAkL5mzKSSvi2xge0Ig/ckS6wgZKchB3Otj/IUHLOME2c5 nfU0OUdtpKDV6uNoZH/AIPbh7jb1irxWmMv0KTxL8c9e0tXERBb3wU1Xna/UfTWWtWS9 xT2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=hN0px5Y6w+gPiIIsc2Y11OjC7QzplCcP/UQmq/enNWc=; b=m/Wc2mt3eunZghvgHigqF4GYzJg5tNXxp96Ur269q0ly2z5L3hEIff9v+Rs39zz0sQ OVFUNWSJ7PFGHrhRuTI/Yr402WPvGTi1bsKti4kfigPZfgsj/Jn8elGXpB8w1PA2UBp6 HCyTTuW0uKvINLY8vwsx0WhYnrqHf/UgmMfgLMnV5ZnqXhx07sB9km1SLwKXE464GFRd rwzbKyFOegIHWP1DGid/V/5lWlvfCtb3KnyWKBbWyTK5v5z2UdIZDSq4FW8xiV5PJ1+7 8hKzPYJbQegAm0TT+/aiecqRcnhr776QrrcOBT/Hgk46EXLgZyR2BQ0Qu89N2GxH3BQR wWiw== X-Gm-Message-State: AOAM530MyjfGRiwPSetuuWa6Jb+5oq6uWgLlLt6YYQNjK0su+POzWvzG 05qw835ndzAe1C+0ZzRH68+ngq7tsDw= X-Google-Smtp-Source: ABdhPJw+mVHIQcKBP/V+6lL9kzLF8SImygn1gNkK6rHWczIwHLyPFu9Mt2wT/36dHdaMhbuj9deBSL0XS6g= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:1e8b:b0:3da:58e6:9a09 with SMTP id f11-20020a0564021e8b00b003da58e69a09mr4276833edf.155.1648557728308; Tue, 29 Mar 2022 05:42:08 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:06 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-38-glider@google.com> Mime-Version: 1.0 
References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 37/48] security: kmsan: fix interoperability with auto-initialization From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: yc8cpks9qc1zsjjg6zhw679oukqn84yr Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Hba3tqo2; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf20.hostedemail.com: domain of 3oP5CYgYKCMEnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3oP5CYgYKCMEnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 974C11C0002 X-HE-Tag: 1648557729-642718 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Heap and stack initialization is great, but not when we are trying to catch uses of uninitialized memory. When the kernel is built with KMSAN, having kernel memory initialization enabled may introduce false negatives. We disable CONFIG_INIT_STACK_ALL_PATTERN and CONFIG_INIT_STACK_ALL_ZERO under CONFIG_KMSAN, making it impossible to auto-initialize stack variables in KMSAN builds. We also disable CONFIG_INIT_ON_ALLOC_DEFAULT_ON and CONFIG_INIT_ON_FREE_DEFAULT_ON to prevent accidental use of heap auto-initialization. However, we still let users enable heap auto-initialization at boot time (by setting init_on_alloc=1 or init_on_free=1), in which case a warning is printed. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I86608dd867018683a14ae1870f1928ad925f42e9 --- mm/page_alloc.c | 4 ++++ security/Kconfig.hardening | 4 ++++ 2 files changed, 8 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 4237b7290e619..ef0906296c57f 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -868,6 +868,10 @@ void init_mem_debugging_and_hardening(void) else static_branch_disable(&init_on_free); + if (IS_ENABLED(CONFIG_KMSAN) && + (_init_on_alloc_enabled_early || _init_on_free_enabled_early)) + pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n"); + #ifdef CONFIG_DEBUG_PAGEALLOC if (!debug_pagealloc_enabled()) return; diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening index d051f8ceefddd..bd13a46024457 100644 --- a/security/Kconfig.hardening +++ b/security/Kconfig.hardening @@ -106,6 +106,7 @@ choice config INIT_STACK_ALL_PATTERN bool "pattern-init everything (strongest)" depends on CC_HAS_AUTO_VAR_INIT_PATTERN + depends on !KMSAN help Initializes everything on the stack (including padding) with a specific debug value.
This is intended to eliminate @@ -124,6 +125,7 @@ choice config INIT_STACK_ALL_ZERO bool "zero-init everything (strongest and safest)" depends on CC_HAS_AUTO_VAR_INIT_ZERO + depends on !KMSAN help Initializes everything on the stack (including padding) with a zero value. This is intended to eliminate all @@ -208,6 +210,7 @@ config STACKLEAK_RUNTIME_DISABLE config INIT_ON_ALLOC_DEFAULT_ON bool "Enable heap memory zeroing on allocation by default" + depends on !KMSAN help This has the effect of setting "init_on_alloc=1" on the kernel command line. This can be disabled with "init_on_alloc=0". @@ -220,6 +223,7 @@ config INIT_ON_ALLOC_DEFAULT_ON config INIT_ON_FREE_DEFAULT_ON bool "Enable heap memory zeroing on free by default" + depends on !KMSAN help This has the effect of setting "init_on_free=1" on the kernel command line. This can be disabled with "init_on_free=0". From patchwork Tue Mar 29 12:40:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794788 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66777C433FE for ; Tue, 29 Mar 2022 12:42:13 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 012038D0007; Tue, 29 Mar 2022 08:42:13 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id F040C8D0005; Tue, 29 Mar 2022 08:42:12 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DA35E8D0007; Tue, 29 Mar 2022 08:42:12 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id CB3818D0005 for ; Tue, 29 Mar 2022 08:42:12 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 9ED9D23E63 for ; Tue, 29 Mar 2022 12:42:12 +0000 (UTC) X-FDA: 79297386504.04.8E9345F Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf25.hostedemail.com (Postfix) with ESMTP id 1E186A0003 for ; Tue, 29 Mar 2022 12:42:11 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id zd21-20020a17090698d500b006df778721f7so8058103ejb.3 for ; Tue, 29 Mar 2022 05:42:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=T39GYI/vBjMdfMtH+Dx38ArgDU/LjDSVRO95NSUKYys=; b=Ga2g0KSm7zN5HCLMp88eZDrOW0thBYZOgl2SJsxhqq1/9JhR9hzR13gng3zeCxso+w i/qtco4gzCEgnTlb2mhL9BTKSiQMQJdKNozm6n1+hL+sDqOwULwTLCdwyrZELS+NA7W/ c5qB98YvHHLJ3BMBq1pJMPQpE9RNLyE4eXN+o9MmFC6t7LQ+u14xMMOIVA+bA5naY+8s gRNrHJavp3e4jVaVQltJlMKiFbXubOTSrISRSTmFuJe4c/qmXIPakHSoPpaD9bKOA9pq oeNXL7OnbYXNrhPT7iXlkE3Z7JYvDzAHoBqV8prd4N81xr62OjBryDQm9qe/C7bS4tTD qDsQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=T39GYI/vBjMdfMtH+Dx38ArgDU/LjDSVRO95NSUKYys=; b=ZH1a6XwC4EvXV16uMquMXwQ6qMvzoyIJg6J+8nt0hcA85hFLYIIrsnnBJoLVO2ymc0 gwSJ+2H3GM60cg66s8HxWJvUOTgU+yYFJ6FgdGbNhPQ/AFb2MEa4A55weHR/K+EJt6Xb +60CzBveyP3WXtGTXh+WU3awKO1RIKhW4df0XO3yK7NxSCMu8zu/VR2/ho68Pe6V74Hp k5RBNTfSjgpfhYg14S5UwUkUTBT32dTlVzYO3/PVAJnnbkjJHQ7W602+kUQ8N2LCMJKo 
9RchdLgtzUREG4lh6BZPfbIXEH8OxA9xtGmBvHlNMBs3dBn3Vg8gtyW5S54Kp5ftiiXm O37A== X-Gm-Message-State: AOAM532DO0Ib1WM7e/GU8PGgarqC77K0zb17N20pfu++3MDDokAPpwsC YmZXPFKTCi5/yBZdBXaO6L0hGWeuDxA= X-Google-Smtp-Source: ABdhPJxnAD1WnRha3aT8bQpiv1ryLHN37wPgdzyd6u5jtCYTidxo5YWhjtFuZMhdjM/eFbjpolf6XzG13+Y= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:1ca4:b0:6da:86a4:1ec7 with SMTP id nb36-20020a1709071ca400b006da86a41ec7mr34900718ejc.556.1648557730788; Tue, 29 Mar 2022 05:42:10 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:07 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-39-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 38/48] objtool: kmsan: list KMSAN API functions as uaccess-safe From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Ga2g0KSm; spf=pass (imf25.hostedemail.com: domain of 3ov5CYgYKCMMpurmn0pxxpun.lxvurw36-vvt4jlt.x0p@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3ov5CYgYKCMMpurmn0pxxpun.lxvurw36-vvt4jlt.x0p@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: 1E186A0003 X-Stat-Signature: 1z37tozdxeoj7u8s9is99jcxu7qs8fps X-HE-Tag: 1648557731-658167 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN inserts API function calls in a lot of places (function entries and exits, local variables, memory accesses), so they may get called from the uaccess regions as well. 
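For readers unfamiliar with the objtool check in question, a rough, simplified sketch (not taken from this patch; sketch_put_user() is a made-up name) of a uaccess region: objtool tracks everything between user_access_begin() and user_access_end(), and with KMSAN the compiler may insert calls to the runtime helpers listed in the diff below anywhere in such a function, so those helpers must be on the uaccess-safe allowlist.

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Made-up example of a uaccess region. Under KMSAN, compiler-inserted hooks
 * such as __msan_warning() or the metadata lookups for the store may execute
 * between user_access_begin() and user_access_end(). */
static int sketch_put_user(u32 __user *uptr, u32 val)
{
	if (!user_access_begin(uptr, sizeof(*uptr)))
		return -EFAULT;
	unsafe_put_user(val, uptr, efault);
	user_access_end();
	return 0;

efault:
	user_access_end();
	return -EFAULT;
}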
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I242bc9816273fecad4ea3d977393784396bb3c35 --- tools/objtool/check.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 7c33ec67c4a95..8518eaf05bff0 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -943,6 +943,25 @@ static const char *uaccess_safe_builtin[] = { "__sanitizer_cov_trace_cmp4", "__sanitizer_cov_trace_cmp8", "__sanitizer_cov_trace_switch", + /* KMSAN */ + "kmsan_copy_to_user", + "kmsan_report", + "kmsan_unpoison_memory", + "__msan_chain_origin", + "__msan_get_context_state", + "__msan_instrument_asm_store", + "__msan_metadata_ptr_for_load_1", + "__msan_metadata_ptr_for_load_2", + "__msan_metadata_ptr_for_load_4", + "__msan_metadata_ptr_for_load_8", + "__msan_metadata_ptr_for_load_n", + "__msan_metadata_ptr_for_store_1", + "__msan_metadata_ptr_for_store_2", + "__msan_metadata_ptr_for_store_4", + "__msan_metadata_ptr_for_store_8", + "__msan_metadata_ptr_for_store_n", + "__msan_poison_alloca", + "__msan_warning", /* UBSAN */ "ubsan_type_mismatch_common", "__ubsan_handle_type_mismatch", From patchwork Tue Mar 29 12:40:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794789 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 503C8C433FE for ; Tue, 29 Mar 2022 12:42:16 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D7DF28D0019; Tue, 29 Mar 2022 08:42:15 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D06F28D0018; Tue, 29 Mar 2022 08:42:15 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BCE978D0019; Tue, 29 Mar 2022 08:42:15 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0150.hostedemail.com [216.40.44.150]) by kanga.kvack.org (Postfix) with ESMTP id AF5308D0018 for ; Tue, 29 Mar 2022 08:42:15 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 60DCA1828A809 for ; Tue, 29 Mar 2022 12:42:15 +0000 (UTC) X-FDA: 79297386630.20.B2A1FBE Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf26.hostedemail.com (Postfix) with ESMTP id E903B140014 for ; Tue, 29 Mar 2022 12:42:14 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id x2-20020a1709065ac200b006d9b316257fso8150475ejs.12 for ; Tue, 29 Mar 2022 05:42:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=vfNewWnsnLKjgrLxloInM3ksKktMuhZ88Cjjen8dSfw=; b=k2grIaWEX10+pjmxqhkcyCA78d3h23qdz2hQwriW/AtBkEbXWlTAWHSjR5jVjwbRd0 HKX1zG6uUcVNaozu8WxucSiCuQwHnnudt3nW2uxWtac7hF54EJ1kbSIlqPv4TwsjdCTr G92SqYktPjAwRps3tSeMbzJyGmoMawImGJXnVDm7RQbHHOMHmWqFKxMC2IeRQOg1Xhxq OJFN3e7mKc2NOaXlMGXsa9yHEVB8W3tfRWFhpSrRETeApxMNk01V3f1Rz7tcgby1LDeH XGjGGEIIihzXVuI4E4ZD/omsJ5xBfcLQ6Erf+tZ7ZaeE1xczews93WWLLOk7DrdHF6hz Ns5g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; 
bh=vfNewWnsnLKjgrLxloInM3ksKktMuhZ88Cjjen8dSfw=; b=jSrGVmhALXgOngWnqtLaE8IREpMq0UgdieVVb2aYxZQ1bnqEyd2qsDeiyXEhEblsb1 QneHXKExXnOACA09vbfFGDnpFBsEDXyXPP04v0J1BvCXA+aIkslUZU0suYHMZRlfJoM2 aE7ydXZDJH8a0OUoKWkhbCk6X4mpEjHGQTkSEqn9H2pkuzixfKgGQBImcbgH2uI4JAZS 7UiFY9IXabpcJYSkkUHCYv8K/DdfAiol0Vvl/mYaDesJa+MgdqaD3n8DPOYaVC3cqiUb /DkYhVyDJH99ALfITmgxm8XQsvsiuL9kZIAcXL8b+ERPYG2gZbrCvBh5sfDxCKgnq8Cr aGDw== X-Gm-Message-State: AOAM5319AB32poHMtpUi59jqFSeGG4EGzLp7Xg5VHoj2+w8aYTwBzdL7 EcnVrxuHzl+GpyUFemCGeQs/mypjKDc= X-Google-Smtp-Source: ABdhPJxYhi4C5lMRGBP4ot4j5u4X+r6OfqpTJ+iIL9kMkH3POTY1BlEuc6eV3fFfpCDp3jcSR0ZuHMnxazU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:2699:b0:6d0:9f3b:a6a7 with SMTP id t25-20020a170906269900b006d09f3ba6a7mr33007227ejc.397.1648557733360; Tue, 29 Mar 2022 05:42:13 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:08 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-40-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 39/48] x86: kmsan: make READ_ONCE_TASK_STACK() return initialized values From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=k2grIaWE; spf=pass (imf26.hostedemail.com: domain of 3pf5CYgYKCMYsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3pf5CYgYKCMYsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: 65xkbud7zefoczjttar6kcyqaytf5f1g X-Rspam-User: X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: E903B140014 X-HE-Tag: 1648557734-793591 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid false positives, assume that reading from the task stack always produces initialized values. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I9e2350bf3e88688dd83537e12a23456480141997 --- arch/x86/include/asm/unwind.h | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h index 2a1f8734416dc..51173b19ac4d5 100644 --- a/arch/x86/include/asm/unwind.h +++ b/arch/x86/include/asm/unwind.h @@ -129,18 +129,19 @@ unsigned long unwind_recover_ret_addr(struct unwind_state *state, } /* - * This disables KASAN checking when reading a value from another task's stack, - * since the other task could be running on another CPU and could have poisoned - * the stack in the meantime. 
+ * This disables KASAN/KMSAN checking when reading a value from another task's + * stack, since the other task could be running on another CPU and could have + * poisoned the stack in the meantime. Frame pointers are uninitialized by + * default, so for KMSAN we mark the return value initialized unconditionally. */ -#define READ_ONCE_TASK_STACK(task, x) \ -({ \ - unsigned long val; \ - if (task == current) \ - val = READ_ONCE(x); \ - else \ - val = READ_ONCE_NOCHECK(x); \ - val; \ +#define READ_ONCE_TASK_STACK(task, x) \ +({ \ + unsigned long val; \ + if (task == current && !IS_ENABLED(CONFIG_KMSAN)) \ + val = READ_ONCE(x); \ + else \ + val = READ_ONCE_NOCHECK(x); \ + val; \ }) static inline bool task_on_another_cpu(struct task_struct *task) From patchwork Tue Mar 29 12:40:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794790 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23F5EC433EF for ; Tue, 29 Mar 2022 12:42:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B0D928D001B; Tue, 29 Mar 2022 08:42:18 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id ABDA38D001A; Tue, 29 Mar 2022 08:42:18 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9857F8D001B; Tue, 29 Mar 2022 08:42:18 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0198.hostedemail.com [216.40.44.198]) by kanga.kvack.org (Postfix) with ESMTP id 899548D001A for ; Tue, 29 Mar 2022 08:42:18 -0400 (EDT) Received: from smtpin24.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 4B1299A812 for ; Tue, 29 Mar 2022 12:42:18 +0000 (UTC) X-FDA: 79297386756.24.BC72169 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf15.hostedemail.com (Postfix) with ESMTP id A56D9A000B for ; Tue, 29 Mar 2022 12:42:17 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id zd21-20020a17090698d500b006df778721f7so8058220ejb.3 for ; Tue, 29 Mar 2022 05:42:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=06dyFy4NZEX890kyFUV8Iv1YE1gnoJOMR7Gzlteh8jY=; b=Q2viYIJcVvUQijU4U9D1Dj2AFiuH23Eb1Cg6kjs6VZINfQ//kbrjiRJ1yRFJzKaL95 tQJlNRbpesfMyVc/wyOht/HynqAFff9Su6k44SSjUB8e2ASIhP69zbj05HIMXeK6L1OL ySYUfvObdacOc2wdAG0IyYrl2RHLnydnHHmfzRRrcKEWnpWTcd8jHV+TE7DNEa1UCekz 6KJVzVYtGJ3IVan1hIEN3NujxMnzK82nblY1rg0IK6jZi+5p1Rf4+CXyXuOLIkxIKFLO 1LM1HLYTxGIOL5VXdKr2d8uHtUFahOWpwBd/6N2sHJ5/7Wh6V+e3Ncbzfm+da1hhXAXU 5dTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=06dyFy4NZEX890kyFUV8Iv1YE1gnoJOMR7Gzlteh8jY=; b=bjynAJ4mHjoSFpRJpOgO2fkGB5lMwq3Et7k8tcsgPduQOYBq/wXWydG2ZBL0zw1s7q iEogs0WQQioqPWlE56VLeIV1ODTCTTkMmPCuFlm+YroFU9YfY6f6OIdvXR+OCfFuCRIv 5iyno3Il/fZCa+jGGNtuaoxWszL+a4m6P41RqCX1+SkeC1g0h29/sdP2UXoZ/dg4fuKX MKL/lEEG4CwNrzEmrbJbuhWOhlUHKvb+LrpKkl5EVHdgpJkD9uoPdVNvTOcDX7ZEQWGS Q/sa+eoQeFKpI89Cw3CjeYAwR0/jc0Ghy2mHdLttX75DaQgZ2q4eiZynknqI0Gze799V WjBA== X-Gm-Message-State: 
AOAM532/HiAtsPmgyKsBg5WX9uQ+daGFsQ1aBzpe1n5O1IcROxhzondD D3aKgxxUiimXPS7mFN8OwavMTyWp/AQ= X-Google-Smtp-Source: ABdhPJyK+xCDqG5EBgv8UrRhuZ4ui8rzuRpy41w829H2Gd00Q0wKOzSf8/FZHSjDv6y2E59F+2K0K4M0Fus= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:57c1:b0:6d6:da73:e9c0 with SMTP id u1-20020a17090657c100b006d6da73e9c0mr35219641ejr.45.1648557735848; Tue, 29 Mar 2022 05:42:15 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:09 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-41-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 40/48] x86: kmsan: disable instrumentation of unsupported code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: hnhqa5kpxaoo9mdrykx1ox6sb1p6dtzs Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Q2viYIJc; spf=pass (imf15.hostedemail.com: domain of 3p_5CYgYKCMguzwrs5u22uzs.q20zw18B-00y9oqy.25u@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3p_5CYgYKCMguzwrs5u22uzs.q20zw18B-00y9oqy.25u@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: A56D9A000B X-HE-Tag: 1648557737-352313 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Instrumenting some files with KMSAN will result in kernel being unable to link, boot or crashing at runtime for various reasons (e.g. infinite recursion caused by instrumentation hooks calling instrumented code again). Completely omit KMSAN instrumentation in the following places: - arch/x86/boot and arch/x86/realmode/rm, as KMSAN doesn't work for i386; - arch/x86/entry/vdso, which isn't linked with KMSAN runtime; - three files in arch/x86/kernel - boot problems; - arch/x86/mm/cpu_entry_area.c - recursion. Signed-off-by: Alexander Potapenko --- v2: - moved the patch earlier in the series so that KMSAN can compile - split off the non-x86 part into a separate patch Link: https://linux-review.googlesource.com/id/Id5e5c4a9f9d53c24a35ebb633b814c414628d81b --- arch/x86/boot/Makefile | 1 + arch/x86/boot/compressed/Makefile | 1 + arch/x86/entry/vdso/Makefile | 3 +++ arch/x86/kernel/Makefile | 2 ++ arch/x86/kernel/cpu/Makefile | 1 + arch/x86/mm/Makefile | 2 ++ arch/x86/realmode/rm/Makefile | 1 + 7 files changed, 11 insertions(+) diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile index b5aecb524a8aa..d5623232b763f 100644 --- a/arch/x86/boot/Makefile +++ b/arch/x86/boot/Makefile @@ -12,6 +12,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. 
KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Kernel does not boot with kcov instrumentation here. diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 6115274fe10fc..6e2e34d2655ce 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -20,6 +20,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile index 693f8b9031fb8..4f835eaa03ec1 100644 --- a/arch/x86/entry/vdso/Makefile +++ b/arch/x86/entry/vdso/Makefile @@ -11,6 +11,9 @@ include $(srctree)/lib/vdso/Makefile # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n +KMSAN_SANITIZE_vclock_gettime.o := n +KMSAN_SANITIZE_vgetcpu.o := n + UBSAN_SANITIZE := n KCSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 6aef9ee28a394..ad645fb8b02dd 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -35,6 +35,8 @@ KASAN_SANITIZE_cc_platform.o := n # With some compiler versions the generated code results in boot hangs, caused # by several compilation units. To be safe, disable all instrumentation. KCSAN_SANITIZE := n +KMSAN_SANITIZE_head$(BITS).o := n +KMSAN_SANITIZE_nmi.o := n OBJECT_FILES_NON_STANDARD_test_nx.o := y diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 9661e3e802be5..f10a921ee7565 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -12,6 +12,7 @@ endif # If these files are instrumented, boot hangs during the first second. KCOV_INSTRUMENT_common.o := n KCOV_INSTRUMENT_perf_event.o := n +KMSAN_SANITIZE_common.o := n # As above, instrumenting secondary CPU boot code causes boot hangs. KCSAN_SANITIZE_common.o := n diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index fe3d3061fc116..ada726784012f 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -12,6 +12,8 @@ KASAN_SANITIZE_mem_encrypt_identity.o := n # Disable KCSAN entirely, because otherwise we get warnings that some functions # reference __initdata sections. KCSAN_SANITIZE := n +# Avoid recursion by not calling KMSAN hooks for CEA code. +KMSAN_SANITIZE_cpu_entry_area.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_mem_encrypt.o = -pg diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile index 83f1b6a56449f..f614009d3e4e2 100644 --- a/arch/x86/realmode/rm/Makefile +++ b/arch/x86/realmode/rm/Makefile @@ -10,6 +10,7 @@ # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. 
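As a quick recap of the knob this patch uses (an illustrative snippet, not part of the diff; "foo.o" and the directory are hypothetical): KMSAN instrumentation can be disabled from a subsystem Makefile either for a whole directory or for individual objects, following the same convention as the KASAN_SANITIZE/KCSAN_SANITIZE switches above.

# Hypothetical subsystem Makefile fragment, for illustration only.
KMSAN_SANITIZE := n           # opt the whole directory out of KMSAN instrumentation
KMSAN_SANITIZE_foo.o := n     # or opt out a single object file, as done for nmi.o above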
From patchwork Tue Mar 29 12:40:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794791 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F14CFC433FE for ; Tue, 29 Mar 2022 12:42:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7FF708D0010; Tue, 29 Mar 2022 08:42:20 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7AF818D000F; Tue, 29 Mar 2022 08:42:20 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 604D68D001D; Tue, 29 Mar 2022 08:42:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0209.hostedemail.com [216.40.44.209]) by kanga.kvack.org (Postfix) with ESMTP id 520668D001C for ; Tue, 29 Mar 2022 08:42:20 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 1A2261828A809 for ; Tue, 29 Mar 2022 12:42:20 +0000 (UTC) X-FDA: 79297386840.20.701A73A Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf26.hostedemail.com (Postfix) with ESMTP id C39F2140013 for ; Tue, 29 Mar 2022 12:42:19 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id x5-20020a50ba85000000b00418e8ce90ffso10939929ede.14 for ; Tue, 29 Mar 2022 05:42:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=eekKcS683P9sWiTSuhgkvdPXp9EdJwN5ZjFsPJMzm7E=; b=nThiYlFFBsJrC6J4CsRmdsKff+pDAxmWAIIBI48Vo80VmKhgV7dastoHrc6SVW9X6d jx/eMpAi1+TKGKqks/kz1GydwiVFtB3s4/Iqslz0Sb0I1p4yER6QQnh5/UL9bNG2RLeo C5XtiIVh5zpbhTTlL5airTyDdq93mQxCRjBtLqk2mbuMHgpMJxTKy/8BxllO8FXvoXri ekBImg/blQGPTzvxUnzHyySXJs6MIGJ9C3qLF6RA9zgxjyAbLcaI48UrWAydgcCoo14J kueGk3ukqrtHmHCvYLUUERRZzvhgDhUfNxGvcRCiVUwTYx9WsgjVrySPvU/5ycGH8wZP hUMA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=eekKcS683P9sWiTSuhgkvdPXp9EdJwN5ZjFsPJMzm7E=; b=3rFI5NQrpGnrQcYdOSCsce9Iln3/S1BczXG/ZJuAR437dpFISm1J7C7m2JL5Pr5gH+ Hx3sAIpoWJQY/r9VFN9X6gQwHoFf6y1Plz0IWodgwrYGpfXYydmHzsysg0VXY9QNJRMo Gvn5HDREkhIBw9kgzZFXvaJZpvOs97o60QZOu5GUsPNPekLyptRp6P4j9B5QA/nMSHme HIpsWfHfUYF6ZW6jVsoEs9kjL6HEFPD1JzkaCL3L2Upleu0jQpqmxrEjfoWHYUPKxTMH inxduKBl3L+8/o/WrDXRO3JeaTEQNXxA6TtNQFZhCtUbrs1e0E6BmSuMhxAuMa/J2F5P ArRA== X-Gm-Message-State: AOAM5318zU41ZZVRc4G+MOmYNG0BzsnL7DKrTgOaNiSWTR6l34g/YYUB AybMfsiSt4JQGJoQ1xiuiwv6StPnFE0= X-Google-Smtp-Source: ABdhPJzQ+mzPatwMfV3rPhv+G2Q81dnqKFY/srjOhBDb0t1yefVaA5tyYkGIYP5vkw1Slryk14/AH+9Tlek= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a50:c010:0:b0:418:d53c:24ec with SMTP id r16-20020a50c010000000b00418d53c24ecmr4358428edb.17.1648557738276; Tue, 29 Mar 2022 05:42:18 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:10 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-42-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 41/48] x86: kmsan: 
skip shadow checks in __switch_to() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=nThiYlFF; spf=pass (imf26.hostedemail.com: domain of 3qv5CYgYKCMsx2zuv8x55x2v.t532z4BE-331Crt1.58x@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3qv5CYgYKCMsx2zuv8x55x2v.t532z4BE-331Crt1.58x@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: C39F2140013 X-Stat-Signature: kr6tfi9gh6bcywog9mdd5j1y3ka35ew6 X-HE-Tag: 1648557739-799786 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When instrumenting functions, KMSAN obtains the per-task state (mostly pointers to metadata for function arguments and return values) once per function at its beginning, using the `current` pointer. Every time the instrumented function calls another function, this state (`struct kmsan_context_state`) is updated with shadow/origin data of the passed and returned values. When `current` changes in the low-level arch code, instrumented code can not notice that, and will still refer to the old state, possibly corrupting it or using stale data. This may result in false positive reports. To deal with that, we need to apply __no_kmsan_checks to the functions performing context switching - this will result in skipping all KMSAN shadow checks and marking newly created values as initialized, preventing all false positive reports in those functions. False negatives are still possible, but we expect them to be rare and impersistent. Suggested-by: Marco Elver Signed-off-by: Alexander Potapenko --- v2: -- This patch was previously called "kmsan: skip shadow checks in files doing context switches". Per Mark Rutland's suggestion, we now only skip checks in low-level arch-specific code, as context switches in common code should be invisible to KMSAN. We also apply the checks to precisely the functions performing the context switch instead of the whole file. Link: https://linux-review.googlesource.com/id/I45e3ed9c5f66ee79b0409d1673d66ae419029bcb --- arch/x86/kernel/process_64.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index 3402edec236c4..838b1e9808d6f 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -553,6 +553,7 @@ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp, bool x32) * Kprobes not supported here. Set the probe on schedule instead. * Function graph tracer not supported too. 
*/ +__no_kmsan_checks __visible __notrace_funcgraph struct task_struct * __switch_to(struct task_struct *prev_p, struct task_struct *next_p) { From patchwork Tue Mar 29 12:40:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794792 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD268C433EF for ; Tue, 29 Mar 2022 12:42:23 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 569308D000F; Tue, 29 Mar 2022 08:42:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5179F8D0003; Tue, 29 Mar 2022 08:42:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3DF388D000F; Tue, 29 Mar 2022 08:42:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0072.hostedemail.com [216.40.44.72]) by kanga.kvack.org (Postfix) with ESMTP id 2FF758D0003 for ; Tue, 29 Mar 2022 08:42:23 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id E9C888248D52 for ; Tue, 29 Mar 2022 12:42:22 +0000 (UTC) X-FDA: 79297386924.30.2B611A1 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf01.hostedemail.com (Postfix) with ESMTP id 3A92440005 for ; Tue, 29 Mar 2022 12:42:22 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id i4-20020aa7c9c4000000b00419c542270dso5231201edt.8 for ; Tue, 29 Mar 2022 05:42:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=6EKMd0bjP1WkGhZZPkjCgU25E3DWfYJCasURW7mFnYs=; b=P1k71GssRqu7HsErV73nnblS85PhkUFheLyBFZgmPWLpl7OKiy1Bqi/JlOtZzfJVPy VAc7cvO8fwjHIxHwD4P9pVu2rdNH8e0p6iE439JugyKP3tEZpAogt6YoEY/b5rvhs6YN OlYMWP+8vwtXjv8eNiLfoejuKaPkdCuMEozoeNlW8PpIb+edtD9db/0+pmWKLdNhwURR 5E3Lth/p8S4d05TzSLze03fVsfkwrT++EyTKtUjOhLVyC4kbo3lKGGRltm0PtD2GB/IH Q3WE4SbVcH6AFhiknkVN5O3R1HMm1J+ZfOcalpdSexIx4dKDpVcT7q34bBD07dT9kvEi C7Fw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=6EKMd0bjP1WkGhZZPkjCgU25E3DWfYJCasURW7mFnYs=; b=UkFTKI0T5rpk11Fixb5pDlH+dkMirGvVddaDgnqJfwDF36VTqbvDZF6kBANTYN5cCo Wjl3+PeUHO0a8CiSEn5vUmVGmvLDx5BscdK5Q4ZGf44tcJnBF64tbOmC2tBXvxKBGQT5 yDPIyVIgKatsjngZPjRzhgGyk+V6bpnfZQ3LWg3NSPVAfE6Sq6RIxk/RB0LDxVOjfGe7 LNXC3WpUaTZNbW4QmvE+kgDCSpje/NV/DKzy/YRk4gNm5BOmqTpV5YShfQVPldXzTpme 9sve+3ZknTQm2qFfRhx53NeVIVyKsY+ranwsPeGCOPvaqmcL3jclntsp1xJidhhgu1Lb M64g== X-Gm-Message-State: AOAM531YQHU8qrQpR7UagLo6xxOkKmns4oRP1KPPMQYZRgK/thFmRcke dTMKOt+Il9q2zZTtIz89mupHD+W10xs= X-Google-Smtp-Source: ABdhPJyN/u61iYoRLkPCDJv7Xs3UegKZjTjkx8b5A87Fx1oaelxWk3uqKnX5mytmcMKvTp7b3Wewv/gEEvA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:3042:b0:6cd:20ed:7c5c with SMTP id d2-20020a170906304200b006cd20ed7c5cmr33818169ejd.241.1648557740796; Tue, 29 Mar 2022 05:42:20 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:11 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-43-glider@google.com> Mime-Version: 1.0 
References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 42/48] x86: kmsan: handle open-coded assembly in lib/iomem.c From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: 9uqdf9nmceumw9xsg8qbe99171c6he31 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=P1k71Gss; spf=pass (imf01.hostedemail.com: domain of 3rP5CYgYKCM0z41wxAz77z4x.v75416DG-553Etv3.7Az@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3rP5CYgYKCM0z41wxAz77z4x.v75416DG-553Etv3.7Az@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 3A92440005 X-HE-Tag: 1648557742-779611 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN cannot intercept memory accesses within asm() statements. That's why we add kmsan_unpoison_memory() and kmsan_check_memory() to hint it how to handle memory copied from/to I/O memory. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Icb16bf17269087e475debf07a7fe7d4bebc3df23 --- arch/x86/lib/iomem.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/x86/lib/iomem.c b/arch/x86/lib/iomem.c index df50451d94ef7..2307770f3f4c8 100644 --- a/arch/x86/lib/iomem.c +++ b/arch/x86/lib/iomem.c @@ -1,6 +1,7 @@ #include #include #include +#include #define movs(type,to,from) \ asm volatile("movs" type:"=&D" (to), "=&S" (from):"0" (to), "1" (from):"memory") @@ -37,6 +38,8 @@ void memcpy_fromio(void *to, const volatile void __iomem *from, size_t n) n-=2; } rep_movs(to, (const void *)from, n); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(to, n); } EXPORT_SYMBOL(memcpy_fromio); @@ -45,6 +48,8 @@ void memcpy_toio(volatile void __iomem *to, const void *from, size_t n) if (unlikely(!n)) return; + /* Make sure uninitialized memory isn't copied to devices. 
*/ + kmsan_check_memory(from, n); /* Align any unaligned destination IO */ if (unlikely(1 & (unsigned long)to)) { movs("b", to, from); From patchwork Tue Mar 29 12:40:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794793 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3BB88C433FE for ; Tue, 29 Mar 2022 12:42:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C45C78D001C; Tue, 29 Mar 2022 08:42:25 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BF57A8D001D; Tue, 29 Mar 2022 08:42:25 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AA3798D001C; Tue, 29 Mar 2022 08:42:25 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0254.hostedemail.com [216.40.44.254]) by kanga.kvack.org (Postfix) with ESMTP id 988458D001C for ; Tue, 29 Mar 2022 08:42:25 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 5E37CA4A6B for ; Tue, 29 Mar 2022 12:42:25 +0000 (UTC) X-FDA: 79297387050.26.97E8268 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf14.hostedemail.com (Postfix) with ESMTP id DCD0210001E for ; Tue, 29 Mar 2022 12:42:24 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id v15-20020a50f08f000000b0041902200ab4so10941176edl.22 for ; Tue, 29 Mar 2022 05:42:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=NOfDS0CsOY5h9mAj6JbXzNq92U67NVHjp1LhoXoozQE=; b=Nn0hcJcow4Zi7/qxTCnawsZd86i/kfajgqOZt7a19BzANMrQn3RZVXK/4TSbOgTtKy EZXwKZ2ETjPWwTQI84e+ZS4kCQZicjI4YNLp506aUC1vJIqm4sYJ9Jk00ZDxlbxl+PLB I5mp72VYTHiRMctKwlnD1s+EvmhkJ/kSzPgw4eOi0GnsAWgkzobtSIgiRYb2F6NVLdQr ygOxPLA0uG7pXUYXgxRRYvjcZdbBM+cj+52SkT7eFAPFsazO+XOHu6xCOBRJhGkNlNFY jvJktjAJ2UXDOtvgbcc5Nl2IbozKqFTgnIdEGvzEGMEsOCP/KyQRrmJM8Q+toW8lmnUu BZAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=NOfDS0CsOY5h9mAj6JbXzNq92U67NVHjp1LhoXoozQE=; b=vgkuuUInNGK4eWBDl161wqPTPnqB7htDSYag9zy2IVJ6Au34azDYz4whb7xai6gqNm ZfeOvfxeaGLbFKw4MyW6yu7I197qSUAk0Zm8Ud6cMfnLBW1AxNtFR+b+RoVykhso7sbq L0Ypcp4z9aYl9H/ZiHBhR6r5th1j/kBDVbWqM/9oRkmj47tcgD91u9YzLEIIelQDd2hJ UsEiHrz0rFI9WyVWAWTbh2IVY3Ghk5cq8qkc7acXG1mL07dNLsB/6qbZ4DrYQW4it3Le PGVHky7LllLO04Mioc0KPNnr10DuI6cAZprBWs6iItImh3NTQVjm/bmYOCiMuBsU5Gen jBig== X-Gm-Message-State: AOAM530lzpwoKTgIvKDkY58FvOlu/Uf3GZZu5qnIu1Dyze5qOODH1WOU cxodQNatAXcBOFu+x2VmOoHP2qx5c5g= X-Google-Smtp-Source: ABdhPJwjvewOCtdgKLCr3b6pMPkXnDTkN3wiCJIAUwFNMFNSlC6T2GR80JxGo48vEzlzJdJxvNcfeTGHeAU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a05:6402:d72:b0:419:938d:f4ce with SMTP id ec50-20020a0564020d7200b00419938df4cemr4402146edb.166.1648557743423; Tue, 29 Mar 2022 05:42:23 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:12 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-44-glider@google.com> Mime-Version: 1.0 
References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 43/48] x86: kmsan: use __msan_ string functions where possible. From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: DCD0210001E X-Stat-Signature: 8jobtyhwqn68e99c81erc9spnf817dsr Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Nn0hcJco; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf14.hostedemail.com: domain of 3r_5CYgYKCNA274z0D2AA270.yA8749GJ-886Hwy6.AD2@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3r_5CYgYKCNA274z0D2AA270.yA8749GJ-886Hwy6.AD2@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1648557744-601702 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Unless stated otherwise (by explicitly calling __memcpy(), __memset() or __memmove()) we want all string functions to call their __msan_ versions (e.g. __msan_memcpy() instead of memcpy()), so that shadow and origin values are updated accordingly. Bootloader must still use the default string functions to avoid crashes. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I7ca9bd6b4f5c9b9816404862ae87ca7984395f33 --- arch/x86/include/asm/string_64.h | 23 +++++++++++++++++++++-- include/linux/fortify-string.h | 2 ++ 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h index 6e450827f677a..3b87d889b6e16 100644 --- a/arch/x86/include/asm/string_64.h +++ b/arch/x86/include/asm/string_64.h @@ -11,11 +11,23 @@ function. 
*/ #define __HAVE_ARCH_MEMCPY 1 +#if defined(__SANITIZE_MEMORY__) +#undef memcpy +void *__msan_memcpy(void *dst, const void *src, size_t size); +#define memcpy __msan_memcpy +#else extern void *memcpy(void *to, const void *from, size_t len); +#endif extern void *__memcpy(void *to, const void *from, size_t len); #define __HAVE_ARCH_MEMSET +#if defined(__SANITIZE_MEMORY__) +extern void *__msan_memset(void *s, int c, size_t n); +#undef memset +#define memset __msan_memset +#else void *memset(void *s, int c, size_t n); +#endif void *__memset(void *s, int c, size_t n); #define __HAVE_ARCH_MEMSET16 @@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n) } #define __HAVE_ARCH_MEMMOVE +#if defined(__SANITIZE_MEMORY__) +#undef memmove +void *__msan_memmove(void *dest, const void *src, size_t len); +#define memmove __msan_memmove +#else void *memmove(void *dest, const void *src, size_t count); +#endif void *__memmove(void *dest, const void *src, size_t count); int memcmp(const void *cs, const void *ct, size_t count); @@ -64,8 +82,7 @@ char *strcpy(char *dest, const char *src); char *strcat(char *dest, const char *src); int strcmp(const char *cs, const char *ct); -#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) - +#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)) /* * For files that not instrumented (e.g. mm/slub.c) we * should use not instrumented version of mem* functions. @@ -73,7 +90,9 @@ int strcmp(const char *cs, const char *ct); #undef memcpy #define memcpy(dst, src, len) __memcpy(dst, src, len) +#undef memmove #define memmove(dst, src, len) __memmove(dst, src, len) +#undef memset #define memset(s, c, n) __memset(s, c, n) #ifndef __NO_FORTIFY diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h index a6cd6815f2490..b2c74cb85e20e 100644 --- a/include/linux/fortify-string.h +++ b/include/linux/fortify-string.h @@ -198,6 +198,7 @@ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count) return p; } +#ifndef CONFIG_KMSAN __FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size) { size_t p_size = __builtin_object_size(p, 0); @@ -240,6 +241,7 @@ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size) fortify_panic(__func__); return __underlying_memmove(p, q, size); } +#endif extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan); __FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size) From patchwork Tue Mar 29 12:40:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794794 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D92BEC433EF for ; Tue, 29 Mar 2022 12:42:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6EDC78D001E; Tue, 29 Mar 2022 08:42:28 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 69FA28D0006; Tue, 29 Mar 2022 08:42:28 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 53F4E8D001E; Tue, 29 Mar 2022 08:42:28 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 471528D0006 for ; Tue, 29 Mar 2022 08:42:28 -0400 (EDT) Received: from 
smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 24833601E6 for ; Tue, 29 Mar 2022 12:42:28 +0000 (UTC) X-FDA: 79297387176.11.6154163 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf31.hostedemail.com (Postfix) with ESMTP id AC7B120010 for ; Tue, 29 Mar 2022 12:42:27 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id hz15-20020a1709072cef00b006dfeceff2d1so8118076ejc.17 for ; Tue, 29 Mar 2022 05:42:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=GRTJFN5fI/eCQ6Ah2RQjb7/dqw7JAuAGzue9isPThnU=; b=V2A969r8azagnZbvbCjDak0q6MRkqtC8n5tFydFSoeWA+AZwbef6NPKlrrU+H+BRBM P6taNp8QY9FavRqmNYSwqrdYGQ97zsjJxhBcCNj+utxQuJ9nOPiopPtbBqzQ8gBr6Qzm 7SYENQDGsHWHESxdBNNfqefvYaEkKbqQ7Uw9rn3v77vq15hOdIJrJeEt188vunugqZcn Y6At8xC/ehmqb5IdQtbwYrfxqEDNNN28/KTOPS67Mp49F5QHDODlthVjEKpkMZlKaPm0 nF0jHasnFvBSUC3gLePFYURhT77L4PhtEKIV+vHO9KG0REyFSTNFBTme4w/CaQQAv4EV cm0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=GRTJFN5fI/eCQ6Ah2RQjb7/dqw7JAuAGzue9isPThnU=; b=vXr8d+oGlWFtAGwL1oRFKA2JyNavSTxd3VKiEDfzvGuw7zTDErVmOIFMkGse7WcXB2 twdmty+z926w9My+7Uis+82omPkgKpCP0zftkNJYepbSMTGUbrSYpGJI9gn99SsUlXMq BMGJxkoy9MoZKj9vsolpapcJ1Il6Ao9u5u0+wr15y5Nh4nJu7CweMtZ7WPj7VpXq0b/N xU0h0K/E5Jta7TmSznOr8tPq7kDuoLWA4dVeDwZBaBQy3+EC543efHL36t/5nvs6LIyQ 3iPDv31eZ71hZJ4ruOkWGalf9eqhij/NAvYUiwjZ4lTTY6L2hhh7Ha6R0r0OWyNNCHg2 7Ysg== X-Gm-Message-State: AOAM5304q/Ryp0dGvn+0bb+RL54wp5udEPie16060R7GD1SBDxyFFJ4y xYFkrrxM/3zXiJxySlw76Up6yLTa+vQ= X-Google-Smtp-Source: ABdhPJzeBfVX40XVB+0DxQzCOfCbyY7YIWfJ+7MS3btbGbfEX8I9gsHnpMWWbEmqsPh4mt6QWJwzC2vkQok= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:c02:b0:6df:fb64:2770 with SMTP id ga2-20020a1709070c0200b006dffb642770mr34762165ejc.221.1648557746364; Tue, 29 Mar 2022 05:42:26 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:13 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-45-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 44/48] x86: kmsan: sync metadata pages on page fault From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Stat-Signature: bubs8iakrq7c995uf71qat8oi5cyn9xk Authentication-Results: imf31.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=V2A969r8; spf=pass (imf31.hostedemail.com: domain of 3sv5CYgYKCNM5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3sv5CYgYKCNM5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: AC7B120010 X-HE-Tag: 1648557747-630959 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN assumes shadow and origin pages for every allocated page are accessible. For pages between [VMALLOC_START, VMALLOC_END] those metadata pages start at KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START, therefore we must sync a bigger memory region. Signed-off-by: Alexander Potapenko --- v2: -- addressed reports from kernel test robot Link: https://linux-review.googlesource.com/id/Ia5bd541e54f1ecc11b86666c3ec87c62ac0bdfb8 --- arch/x86/mm/fault.c | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index d0074c6ed31a3..f2250a32a10ca 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -260,7 +260,7 @@ static noinline int vmalloc_fault(unsigned long address) } NOKPROBE_SYMBOL(vmalloc_fault); -void arch_sync_kernel_mappings(unsigned long start, unsigned long end) +static void __arch_sync_kernel_mappings(unsigned long start, unsigned long end) { unsigned long addr; @@ -284,6 +284,27 @@ void arch_sync_kernel_mappings(unsigned long start, unsigned long end) } } +void arch_sync_kernel_mappings(unsigned long start, unsigned long end) +{ + __arch_sync_kernel_mappings(start, end); +#ifdef CONFIG_KMSAN + /* + * KMSAN maintains two additional metadata page mappings for the + * [VMALLOC_START, VMALLOC_END) range. These mappings start at + * KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START and + * have to be synced together with the vmalloc memory mapping. 
+ */ + if (start >= VMALLOC_START && end < VMALLOC_END) { + __arch_sync_kernel_mappings( + start - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START, + end - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START); + __arch_sync_kernel_mappings( + start - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START, + end - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START); + } +#endif +} + static bool low_pfn(unsigned long pfn) { return pfn < max_low_pfn; From patchwork Tue Mar 29 12:40:14 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F5E4C433EF for ; Tue, 29 Mar 2022 12:42:31 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 010568D000B; Tue, 29 Mar 2022 08:42:31 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EDD318D0006; Tue, 29 Mar 2022 08:42:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D7CAC8D000B; Tue, 29 Mar 2022 08:42:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0184.hostedemail.com [216.40.44.184]) by kanga.kvack.org (Postfix) with ESMTP id C89D78D0006 for ; Tue, 29 Mar 2022 08:42:30 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 8AB751828A44C for ; Tue, 29 Mar 2022 12:42:30 +0000 (UTC) X-FDA: 79297387260.26.79280FF Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf01.hostedemail.com (Postfix) with ESMTP id 133C34000C for ; Tue, 29 Mar 2022 12:42:29 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id hz15-20020a1709072cef00b006dfeceff2d1so8118167ejc.17 for ; Tue, 29 Mar 2022 05:42:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=RlYSbadXoEfpVHc0vV/XNEsRdhGi1prUW+giHTFauGk=; b=Dk6OfzDOhhWLYgl5XDrPV/wHbOcl0QIYza/6JtkZqHgLudavSZzmjMk+5f28h2orgU iHrCh6plBIwBN36pcmW+p7rGqqO0YMM7u/i5PR6jUmB+3M7Gvzrldrx36VWhdEly5wqL OCFVfyQ70Xqs++xgzaju3Y7LpDid5Y2SH/Q8CDmTucqxRIowd4s6C0QFAALHjEO+0nwB fuNr7hQPe/Epycuo17M1wj5VYq+p5MbKDJtrQQ5brwghYeQJKrjFvhP1FeN553lIB4Ty /plVPtHNwRzPgAYEUGGThxWuO/QztfBN6vGOWx0oeHAc0cwrK2ZfAWzl7GaTEqAJcx59 JBIg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=RlYSbadXoEfpVHc0vV/XNEsRdhGi1prUW+giHTFauGk=; b=dZSXniGl3VVV1mbM43mX6NLC2IVL1pfEwwwjYtE7tilFGahRxun1TCiBOhxaeW9p2o DWZloBW4UcICcrLm3Tqhgi3R3AlGu5ZKZiRJY1Eo7AbW9cekl2RziaWR+igzlT3KYm5Y GXrYbTfRQLVqioUdLh7P5ejymNtwJqh8kx4LXcqnMLP/YQQcrlsF3bLsLFAPrmiyK/fc QuqQqi9E6y2Pwe8yEkKQG1BSSYuMM0OgCl5tyPj6rQzZRHWMaU1XusGL765oIxZSUD/9 q/jVa/qBP9cVOrcD6RIpb6SNy9VL4iKMnW34mPH5QZ5ge7eBAXq12bLGpdsT39EX0HOC QTOg== X-Gm-Message-State: AOAM530F0DTe2eEl0HYTh8NHQ3L1O2+MN3jUatMZxA/bS5qaMZih2k3+ TCw0gai80R1wP9SJ0+AJBX3hw1EPm7M= X-Google-Smtp-Source: ABdhPJwAQ2KpkKZtt/s3LazFEoxsBevCE6SqXEdguMxH7F8FiFrFs/1K5CRo60urh/1AvLKrhdVF+1gfvd0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:907:2d20:b0:6df:8c3f:730a with SMTP id 
gs32-20020a1709072d2000b006df8c3f730amr34340038ejc.725.1648557748867; Tue, 29 Mar 2022 05:42:28 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:14 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-46-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 45/48] x86: kasan: kmsan: support CONFIG_GENERIC_CSUM on x86, enable it for KASAN/KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: fg5wmt7nk3jdzhf8euyouoor3t4iwt9i Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Dk6OfzDO; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf01.hostedemail.com: domain of 3tP5CYgYKCNU7C945I7FF7C5.3FDC9ELO-DDBM13B.FI7@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3tP5CYgYKCNU7C945I7FF7C5.3FDC9ELO-DDBM13B.FI7@flex--glider.bounces.google.com X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 133C34000C X-HE-Tag: 1648557749-891463 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is needed to allow memory tools like KASAN and KMSAN see the memory accesses from the checksum code. Without CONFIG_GENERIC_CSUM the tools can't see memory accesses originating from handwritten assembly code. For KASAN it's a question of detecting more bugs, for KMSAN using the C implementation also helps avoid false positives originating from seemingly uninitialized checksum values. 
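As an illustration of what "seeing" the accesses buys: a generic checksum is just a loop of ordinary C loads, so the tools can verify that every summed byte was initialized, whereas the handwritten asm versions perform the same loads behind the instrumentation's back. The sketch below is a simplified ones'-complement sum in the spirit of a generic csum_partial(); it is not the kernel's implementation.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified Internet-style ones'-complement sum.  Every load is plain C,
 * so KMSAN/KASAN instrumentation can check each byte that feeds the sum. */
static uint16_t csum16(const uint8_t *buf, size_t len)
{
        uint32_t sum = 0;

        while (len > 1) {
                sum += (uint32_t)buf[0] << 8 | buf[1];
                buf += 2;
                len -= 2;
        }
        if (len)                        /* trailing odd byte */
                sum += (uint32_t)buf[0] << 8;
        while (sum >> 16)               /* fold carries back in */
                sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
}

int main(void)
{
        const uint8_t pkt[] = { 0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46 };

        printf("csum = 0x%04x\n", csum16(pkt, sizeof(pkt)));
        return 0;
}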
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I3e95247be55b1112af59dbba07e8cbf34e50a581 --- arch/x86/Kconfig | 4 ++++ arch/x86/include/asm/checksum.h | 16 ++++++++++------ arch/x86/lib/Makefile | 2 ++ 3 files changed, 16 insertions(+), 6 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 9f5bd41bf660c..86df15017f79d 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -315,6 +315,10 @@ config GENERIC_ISA_DMA def_bool y depends on ISA_DMA_API +config GENERIC_CSUM + bool + default y if KMSAN || KASAN + config GENERIC_BUG def_bool y depends on BUG diff --git a/arch/x86/include/asm/checksum.h b/arch/x86/include/asm/checksum.h index bca625a60186c..6df6ece8a28ec 100644 --- a/arch/x86/include/asm/checksum.h +++ b/arch/x86/include/asm/checksum.h @@ -1,9 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0 */ -#define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1 -#define HAVE_CSUM_COPY_USER -#define _HAVE_ARCH_CSUM_AND_COPY -#ifdef CONFIG_X86_32 -# include +#ifdef CONFIG_GENERIC_CSUM +# include #else -# include +# define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1 +# define HAVE_CSUM_COPY_USER +# define _HAVE_ARCH_CSUM_AND_COPY +# ifdef CONFIG_X86_32 +# include +# else +# include +# endif #endif diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile index f76747862bd2e..7ba5f61d72735 100644 --- a/arch/x86/lib/Makefile +++ b/arch/x86/lib/Makefile @@ -65,7 +65,9 @@ ifneq ($(CONFIG_X86_CMPXCHG64),y) endif else obj-y += iomap_copy_64.o +ifneq ($(CONFIG_GENERIC_CSUM),y) lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o +endif lib-y += clear_page_64.o copy_page_64.o lib-y += memmove_64.o memset_64.o lib-y += copy_user_64.o From patchwork Tue Mar 29 12:40:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794796 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1D855C433EF for ; Tue, 29 Mar 2022 12:42:34 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A49E68D0006; Tue, 29 Mar 2022 08:42:33 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9D3708D0001; Tue, 29 Mar 2022 08:42:33 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 89A1B8D0006; Tue, 29 Mar 2022 08:42:33 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0105.hostedemail.com [216.40.44.105]) by kanga.kvack.org (Postfix) with ESMTP id 7ADF88D0001 for ; Tue, 29 Mar 2022 08:42:33 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 3F4ABA2B0D for ; Tue, 29 Mar 2022 12:42:33 +0000 (UTC) X-FDA: 79297387386.22.15D0638 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf18.hostedemail.com (Postfix) with ESMTP id C35E21C0004 for ; Tue, 29 Mar 2022 12:42:32 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id do20-20020a170906c11400b006e0de97a0e9so3085045ejc.19 for ; Tue, 29 Mar 2022 05:42:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=0QWUQ+RqE8jiHyj7/O+KujG6tRpsbOFA8yuMseJPgVs=; 
b=OfXq72ZoBUQGSWSvm4KVtJ07Y6J70d5V6dAjwsYPNQ8KvQs+UemAzoD9b+EXKX/OTw Q+izHPhM0oLIkOLIiuKDw0qaaM3HmxBX8q8lxNCUTVCL6uMVfLhs0yBqTwJyZYo3a2Dd PHMlh5zEbtKHkaJRb6sAKEdQbGhEcA7fqUGVsUSQYvLsCZPqYBZ6kCgNZI+IIO0DTiYw dKXtVOTB8fspK7u9aonJCNi788zBxPM+etWoLtYNpYHL5mhBEusrVsXgVIm9RsvEQvVL BVUcgD1GuztoqNHxl+h1HcQQt1n0AN97ckOSFFHrUUEbkVxi2NvTa3+e0HN0zz3+BFfO qT4Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=0QWUQ+RqE8jiHyj7/O+KujG6tRpsbOFA8yuMseJPgVs=; b=FCZC254BtaCw7PFDaGwz2HZpiEH+idXIIedAfEvQp4fPBNkp3b5tVrwWCXhaU7GQdL Xcs7qgr+Jt1dEwHQ3PqqZ9YfMgGSBmD41VKn5lroAdC8rbjN/1EhJ5M6dtGN8PrTUfbm f6vIhnEVlVKiM43yL5YMMQd84lrzS+DvRjTSmQ+L+QPaCfMk4BE0F7ilpTXd4aLKx1FO mQK9xC01++8WCYh+uBAWW+pEaAORiQcCtjb1FVhB3V3Mhw0Ja+Q58VgiBrZO0txH7KFZ Qg8LYMbS307HN8CPsYyXqzcfHUhLmKU8oIRjtlrlUeHQQUiFnqR2VsR7niaupZ1yXg23 dKCw== X-Gm-Message-State: AOAM533Vh5sUbukRnI/se4QfcjsvWjWb1+EG6wsSOCcTZuQwlo78MBEy 3Ax/xi7hY6wbzWMmRqkyC6hVEpjP4G4= X-Google-Smtp-Source: ABdhPJytokWuo6v19GSWg2KLHdtKqtVYC2QIF6Shk0LT4bep8x7ZGLJY2F5r9sIe9kWDKcupdjioDqnZsL8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:948:b0:6d6:e479:1fe4 with SMTP id j8-20020a170906094800b006d6e4791fe4mr33417649ejd.240.1648557751382; Tue, 29 Mar 2022 05:42:31 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:15 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-47-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 46/48] x86: fs: kmsan: disable CONFIG_DCACHE_WORD_ACCESS From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Andrey Konovalov X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: C35E21C0004 X-Stat-Signature: qgk8qoqjjskjip6t18bjqnnk33dwdr8k X-Rspam-User: Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=OfXq72Zo; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf18.hostedemail.com: domain of 3t_5CYgYKCNgAFC78LAIIAF8.6IGFCHOR-GGEP46E.ILA@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3t_5CYgYKCNgAFC78LAIIAF8.6IGFCHOR-GGEP46E.ILA@flex--glider.bounces.google.com X-HE-Tag: 1648557752-94888 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: dentry_string_cmp() calls read_word_at_a_time(), which might read uninitialized bytes to optimize string comparisons. Disabling CONFIG_DCACHE_WORD_ACCESS should prohibit this optimization, as well as (probably) similar ones. 
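For context, the pattern being disabled looks roughly like the userspace stand-in below (not the dcache code): the comparison loads full words, so the final load also covers bytes after the copied string that nobody ever wrote. The load stays inside the allocation and the extra bits are masked off, but how a given tool treats those uninitialized tail bytes depends on how precisely it models the masking; this is the kind of access the change avoids.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Word-at-a-time comparison in the spirit of dentry_string_cmp():
 * each iteration loads a full 8-byte word, even when fewer than
 * 8 bytes of payload remain. */
static int wordcmp(const char *a, const char *b, size_t len)
{
        while (len) {
                uint64_t wa, wb, mask;
                size_t n = len < 8 ? len : 8;

                memcpy(&wa, a, 8);      /* stand-in for read_word_at_a_time() */
                memcpy(&wb, b, 8);
                /* little-endian tail mask (x86-style): keep the first n bytes */
                mask = (n == 8) ? ~0ULL : (1ULL << (8 * n)) - 1;
                if ((wa ^ wb) & mask)
                        return 1;
                a += 8;
                b += 8;
                len -= n;
        }
        return 0;
}

int main(void)
{
        /* 16-byte allocations keep the 8-byte loads in bounds, but the
         * bytes past the copied string's terminator are never written. */
        char *a = malloc(16), *b = malloc(16);

        if (!a || !b)
                return 1;
        strcpy(a, "dentry");
        strcpy(b, "dentry");
        printf("%d\n", wordcmp(a, b, strlen(a)));
        free(a);
        free(b);
        return 0;
}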
Suggested-by: Andrey Konovalov Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I4c0073224ac2897cafb8c037362c49dda9cfa133 --- arch/x86/Kconfig | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 86df15017f79d..646a7849be4cf 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -126,7 +126,9 @@ config X86 select CLKEVT_I8253 select CLOCKSOURCE_VALIDATE_LAST_CYCLE select CLOCKSOURCE_WATCHDOG - select DCACHE_WORD_ACCESS + # Word-size accesses may read uninitialized data past the trailing \0 + # in strings and cause false KMSAN reports. + select DCACHE_WORD_ACCESS if !KMSAN select DYNAMIC_SIGFRAME select EDAC_ATOMIC_SCRUB select EDAC_SUPPORT From patchwork Tue Mar 29 12:40:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794797 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2127FC433F5 for ; Tue, 29 Mar 2022 12:42:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A98E68D0010; Tue, 29 Mar 2022 08:42:37 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A48108D0007; Tue, 29 Mar 2022 08:42:37 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 90FD58D0010; Tue, 29 Mar 2022 08:42:37 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 821CB8D0007 for ; Tue, 29 Mar 2022 08:42:37 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 33EAD22E84 for ; Tue, 29 Mar 2022 12:42:35 +0000 (UTC) X-FDA: 79297387512.03.B226DE5 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf14.hostedemail.com (Postfix) with ESMTP id 566FE100004 for ; Tue, 29 Mar 2022 12:42:35 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id f2-20020a50d542000000b00418ed3d95d8so10961923edj.11 for ; Tue, 29 Mar 2022 05:42:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=SXzoEK+WJk4pacaOPaL1kxqndb2ET/uQVttgxEkML+w=; b=EBgX4Z2SSMT4cCzIkiyvRhgKVrX4GCoNERzYeL3X7WwtwGGVnYOFgUHYNinxWjvjqh kPkONHRo3AKKxZf7R9b/cbfY+0RkKOgrN2sHOaZlXcyZw9J83ocbcq3eROphMcFU1OEF YKJAZFiUIL4gSt2ikNRjoKRlkYyZREXJJz7VZhoC8/sgm/33yNzmvyYsBx61nm/5RFi7 Uan2pjppFHg5+0ZzLB/M100362XN/KNKnBk+E1pryuHawtyJIDPxaGso4I1pLwK0vTOC Sf0uIg6BZNx/aUM8AwFe9jVH9IO3gu9QyjACglxQsPj/loJje/iBcBYgMBagUP0EX/7M jrPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=SXzoEK+WJk4pacaOPaL1kxqndb2ET/uQVttgxEkML+w=; b=2O35//W9FQ5IKU/Cbd8r5okkqi03NYvfMAv2voXig4r+y6Ea+wIunHgRXBEhiJ8P9K TspR55Gd5xPj+EMwpvhkdhS4uzCvkZ4/7BjhIZRHlr24OPgvv589pITUzthDkSSnw15O Lc+0Cs5ciOyne5CrNSZCu10q+3K4Cmw4vIS2clXA8vTW6Zo6iPA3mkVnZ+eIRateeRz2 sgJpNcmqbtE+2NlWvDWKQiNAcnBCYLOha3DZtvfT6vq24EWWJOb3ZvQoypM+BT3hZlKn /HD1BSuof9fud/EpiT+uWttp/M3ZpWdA5I4Lt47dUXBfUsPcNnO0QWtr8bN3KD/5WZOc 3Fpw== X-Gm-Message-State: 
AOAM531Xt9svFBTiBi/KtjHNi/lZImtY42w4IyL5FQVyx+eZ8WD1ftkj DJNQWIdzqpACZMoFdAgzA9GM27TCmx4= X-Google-Smtp-Source: ABdhPJy8fACfQ4jyGr9xF342/z75NoNcLTDY10G+LWoaiStO0hTLZbGlHwL73c3ZjRYtS31XbeKhwJgw3p0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:9acd:b0:6e0:b74d:d932 with SMTP id ah13-20020a1709069acd00b006e0b74dd932mr24156373ejc.695.1648557754011; Tue, 29 Mar 2022 05:42:34 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:16 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-48-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 47/48] x86: kmsan: handle register passing from uninstrumented code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 566FE100004 X-Stat-Signature: ukzmk1nwgaq9iq411dxg8k83hjjx6938 X-Rspam-User: Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=EBgX4Z2S; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf14.hostedemail.com: domain of 3uv5CYgYKCNsDIFABODLLDIB.9LJIFKRU-JJHS79H.LOD@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3uv5CYgYKCNsDIFABODLLDIB.9LJIFKRU-JJHS79H.LOD@flex--glider.bounces.google.com X-HE-Tag: 1648557755-546371 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Replace instrumentation_begin() with instrumentation_begin_with_regs() to let KMSAN handle the non-instrumented code and unpoison pt_regs passed from the instrumented part. This is done to reduce the number of false positive reports. Signed-off-by: Alexander Potapenko --- v2: -- this patch was previously called "x86: kmsan: handle register passing from uninstrumented code". Instead of adding KMSAN-specific code to every instrumentation_begin()/instrumentation_end() section, we changed instrumentation_begin() to instrumentation_begin_with_regs() where applicable. 
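The helper itself is introduced by the KMSAN core patches earlier in the series; conceptually it only needs to do two things, as in the hypothetical sketch below (the exact definition in the series may differ): behave like instrumentation_begin(), and mark the pt_regs handed over from noinstr code as initialized so that later reads of *regs do not look uninitialized to KMSAN.

/*
 * Hypothetical sketch only; the actual definition is added by the KMSAN
 * core patches in this series.  The registers were filled in by
 * uninstrumented entry code, so their KMSAN shadow is stale: mark them
 * initialized before instrumented C code starts reading them.
 */
#ifdef CONFIG_KMSAN
#define instrumentation_begin_with_regs(regs)                  \
        do {                                                    \
                instrumentation_begin();                        \
                kmsan_unpoison_memory(regs, sizeof(*regs));     \
        } while (0)
#else
#define instrumentation_begin_with_regs(regs)  instrumentation_begin()
#endif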
Link: https://linux-review.googlesource.com/id/I435ec076cd21752c2f877f5da81f5eced62a2ea4 --- arch/x86/entry/common.c | 3 ++- arch/x86/include/asm/idtentry.h | 10 +++++----- arch/x86/kernel/cpu/mce/core.c | 2 +- arch/x86/kernel/kvm.c | 2 +- arch/x86/kernel/nmi.c | 2 +- arch/x86/kernel/sev.c | 4 ++-- arch/x86/kernel/traps.c | 14 +++++++------- arch/x86/mm/fault.c | 2 +- 8 files changed, 20 insertions(+), 19 deletions(-) diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index 6c2826417b337..047d157987859 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -75,7 +76,7 @@ __visible noinstr void do_syscall_64(struct pt_regs *regs, int nr) add_random_kstack_offset(); nr = syscall_enter_from_user_mode(regs, nr); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); if (!do_syscall_x64(regs, nr) && !do_syscall_x32(regs, nr) && nr != -1) { /* Invalid system call, but still a system call. */ diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h index 1345088e99025..f24ff33fc3681 100644 --- a/arch/x86/include/asm/idtentry.h +++ b/arch/x86/include/asm/idtentry.h @@ -51,7 +51,7 @@ __visible noinstr void func(struct pt_regs *regs) \ { \ irqentry_state_t state = irqentry_enter(regs); \ \ - instrumentation_begin(); \ + instrumentation_begin_with_regs(regs); \ __##func (regs); \ instrumentation_end(); \ irqentry_exit(regs, state); \ @@ -98,7 +98,7 @@ __visible noinstr void func(struct pt_regs *regs, \ { \ irqentry_state_t state = irqentry_enter(regs); \ \ - instrumentation_begin(); \ + instrumentation_begin_with_regs(regs); \ __##func (regs, error_code); \ instrumentation_end(); \ irqentry_exit(regs, state); \ @@ -195,7 +195,7 @@ __visible noinstr void func(struct pt_regs *regs, \ irqentry_state_t state = irqentry_enter(regs); \ u32 vector = (u32)(u8)error_code; \ \ - instrumentation_begin(); \ + instrumentation_begin_with_regs(regs); \ kvm_set_cpu_l1tf_flush_l1d(); \ run_irq_on_irqstack_cond(__##func, regs, vector); \ instrumentation_end(); \ @@ -235,7 +235,7 @@ __visible noinstr void func(struct pt_regs *regs) \ { \ irqentry_state_t state = irqentry_enter(regs); \ \ - instrumentation_begin(); \ + instrumentation_begin_with_regs(regs); \ kvm_set_cpu_l1tf_flush_l1d(); \ run_sysvec_on_irqstack_cond(__##func, regs); \ instrumentation_end(); \ @@ -262,7 +262,7 @@ __visible noinstr void func(struct pt_regs *regs) \ { \ irqentry_state_t state = irqentry_enter(regs); \ \ - instrumentation_begin(); \ + instrumentation_begin_with_regs(regs); \ __irq_enter_raw(); \ kvm_set_cpu_l1tf_flush_l1d(); \ __##func (regs); \ diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c index 5818b837fd4d4..7b8c43d8727cc 100644 --- a/arch/x86/kernel/cpu/mce/core.c +++ b/arch/x86/kernel/cpu/mce/core.c @@ -1355,7 +1355,7 @@ static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callba /* Handle unconfigured int18 (should never happen) */ static noinstr void unexpected_machine_check(struct pt_regs *regs) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); pr_err("CPU#%d: Unexpected int18 (Machine Check)\n", smp_processor_id()); instrumentation_end(); diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index d77481ecb0d5f..eaed9b412908c 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -249,7 +249,7 @@ noinstr bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token) return false; state = 
irqentry_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); /* * If the host managed to inject an async #PF into an interrupt diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c index 4bce802d25fb1..3f987a5dc38c7 100644 --- a/arch/x86/kernel/nmi.c +++ b/arch/x86/kernel/nmi.c @@ -329,7 +329,7 @@ static noinstr void default_do_nmi(struct pt_regs *regs) __this_cpu_write(last_nmi_rip, regs->ip); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); handled = nmi_handle(NMI_LOCAL, regs); __this_cpu_add(nmi_stats.normal, handled); diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c index e6d316a01fdd4..9bfc29fc9c983 100644 --- a/arch/x86/kernel/sev.c +++ b/arch/x86/kernel/sev.c @@ -1330,7 +1330,7 @@ DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication) irq_state = irqentry_nmi_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); if (!vc_raw_handle_exception(regs, error_code)) { /* Show some debug info */ @@ -1362,7 +1362,7 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication) } irqentry_enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); if (!vc_raw_handle_exception(regs, error_code)) { /* diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index 8143693a7ea6e..f08741abc0e5b 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -229,7 +229,7 @@ static noinstr bool handle_bug(struct pt_regs *regs) /* * All lies, just get the WARN/BUG out. */ - instrumentation_begin(); + instrumentation_begin_with_regs(regs); /* * Since we're emulating a CALL with exceptions, restore the interrupt * state to what it was at the exception site. @@ -260,7 +260,7 @@ DEFINE_IDTENTRY_RAW(exc_invalid_op) return; state = irqentry_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); handle_invalid_op(regs); instrumentation_end(); irqentry_exit(regs, state); @@ -414,7 +414,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault) #endif irqentry_nmi_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV); tsk->thread.error_code = error_code; @@ -690,14 +690,14 @@ DEFINE_IDTENTRY_RAW(exc_int3) */ if (user_mode(regs)) { irqentry_enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); do_int3_user(regs); instrumentation_end(); irqentry_exit_to_user_mode(regs); } else { irqentry_state_t irq_state = irqentry_nmi_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); if (!do_int3(regs)) die("int3", regs, 0); instrumentation_end(); @@ -896,7 +896,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs, */ unsigned long dr7 = local_db_save(); irqentry_state_t irq_state = irqentry_nmi_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); /* * If something gets miswired and we end up here for a user mode @@ -975,7 +975,7 @@ static __always_inline void exc_debug_user(struct pt_regs *regs, */ irqentry_enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); /* * Start the virtual/ptrace DR6 value with just the DR_STEP mask diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index f2250a32a10ca..676e394f1af5b 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -1557,7 +1557,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault) */ state = irqentry_enter(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); 
handle_page_fault(regs, error_code, address); instrumentation_end(); From patchwork Tue Mar 29 12:40:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12794798 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id DEB11C433EF for ; Tue, 29 Mar 2022 12:42:39 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6C9E08D0008; Tue, 29 Mar 2022 08:42:39 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5B5878D0007; Tue, 29 Mar 2022 08:42:39 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 42DEE8D0008; Tue, 29 Mar 2022 08:42:39 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0097.hostedemail.com [216.40.44.97]) by kanga.kvack.org (Postfix) with ESMTP id 30DDE8D0007 for ; Tue, 29 Mar 2022 08:42:39 -0400 (EDT) Received: from smtpin18.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id DF45292E13 for ; Tue, 29 Mar 2022 12:42:38 +0000 (UTC) X-FDA: 79297387596.18.2960E63 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf19.hostedemail.com (Postfix) with ESMTP id 756B61A001A for ; Tue, 29 Mar 2022 12:42:38 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id er8-20020a170907738800b006e003254d86so8131188ejc.11 for ; Tue, 29 Mar 2022 05:42:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=BxrNv7I9TyEaPnGly7TdSY5yHzK/Q3Z4xNLrOUnkClQ=; b=F1YYot40OfwHFlJIBEjvvi7+vEf+sISqzpxyEMm9RXg7yBeYOovpz1ifEVjxkb+8Tp ug846iZxjLR7LfZGindUZ5W5ZYWx9SXUM/LIzPff3L9xOwWnmqR34pPLPPicL7EpVaWh BceAVU1QYjQ33h5UnMxF7NMbTCP79zz+hCjBe7DzaqQKiuOmpo0Q92dqE8k4H4xhYPJ5 8QBDPgbSjW2Dxw8rRJh4j124haAPk3WZ7kH3MZRK7aTRX+53Kfdw+whZ5Qh3OkEuEe4V /YzlCDf1dkIQcIX6qS1kNTRw9/AS3wlsNX6fJ044mUGy/rWkdQ4G5sXgY/2nnNeFCaLN ovQQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=BxrNv7I9TyEaPnGly7TdSY5yHzK/Q3Z4xNLrOUnkClQ=; b=TQL5RKiue1DlZEUvq6mQGUqOnZVswqZHGFHLyJeWw69A/iN7Gsom0W82COjQ80bt76 c3AWpYBinCB90lMY1QqKCCpZmZfMftNUWlgHCHaLY5WPgcJGILSmEPijgGLgDp/zl2tB zi2afgL3tLrXYeN4nr7DjKyEp1YlLe/3EyULK8lPdsPzU2BaBQz4Zqk2plJkQKJoIT5C Nut/bCnbnskwsRpWpK1Gcy0YyfcQQY8ImNxk1N0uWI01MwBUthqbcDX1oIs43upMTDIL Zdi5ig9RK+vQXBoqBD1zBhGmsxKzID5AWblNURluezFkOXka5cQz0UE70rLCtse6P/D8 sYrQ== X-Gm-Message-State: AOAM533ymdN7a6ynbHozoPSmLyUDRFizCPCvN+FQrVPAHLEAQOGkOR3P eyBMCDALjFrthV6+9B9fH5ZTz170vIc= X-Google-Smtp-Source: ABdhPJwCpxSBAahtijoWXTLXmqk0cIsPTNznjrswrMwUWM6rymRvh0IEM+UvEt2qc+MfcoyqAjf1DJp1rgo= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:36eb:759:798f:98c3]) (user=glider job=sendgmr) by 2002:a17:906:c1c6:b0:6d5:cc27:a66c with SMTP id bw6-20020a170906c1c600b006d5cc27a66cmr35151648ejb.650.1648557756848; Tue, 29 Mar 2022 05:42:36 -0700 (PDT) Date: Tue, 29 Mar 2022 14:40:17 +0200 In-Reply-To: <20220329124017.737571-1-glider@google.com> Message-Id: <20220329124017.737571-49-glider@google.com> Mime-Version: 1.0 References: <20220329124017.737571-1-glider@google.com> X-Mailer: 
git-send-email 2.35.1.1021.g381101b075-goog Subject: [PATCH v2 48/48] x86: kmsan: enable KMSAN builds for x86 From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: bi45nf3fdpwikuis4yff4gw9doszswmf Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=F1YYot40; spf=pass (imf19.hostedemail.com: domain of 3vP5CYgYKCN0FKHCDQFNNFKD.BNLKHMTW-LLJU9BJ.NQF@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3vP5CYgYKCN0FKHCDQFNNFKD.BNLKHMTW-LLJU9BJ.NQF@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 756B61A001A X-HE-Tag: 1648557758-769043 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make KMSAN usable by adding the necessary Kconfig bits. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I1d295ce8159ce15faa496d20089d953a919c125e --- arch/x86/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 646a7849be4cf..1c4601e198d5c 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -165,6 +165,7 @@ config X86 select HAVE_ARCH_KASAN if X86_64 select HAVE_ARCH_KASAN_VMALLOC if X86_64 select HAVE_ARCH_KFENCE + select HAVE_ARCH_KMSAN if X86_64 select HAVE_ARCH_KGDB select HAVE_ARCH_MMAP_RND_BITS if MMU select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
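Selecting HAVE_ARCH_KMSAN is what, together with the Kconfig added earlier in the series, allows CONFIG_KMSAN to be enabled on x86_64. Once enabled, KMSAN reports uses of uninitialized values along the lines of this illustrative, userspace-style sketch (not taken from any kernel report):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: the class of bug KMSAN exists to report.  The flag
 * word is never initialized, yet one of its bits ends up steering a
 * branch; a report would typically fire at that use. */
static bool feature_enabled(const int *flags)
{
        return *flags & 0x1;
}

int main(void)
{
        int flags;                      /* never written */

        if (feature_enabled(&flags))    /* decision based on uninitialized data */
                printf("feature on\n");
        else
                printf("feature off\n");
        return 0;
}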