From patchwork Mon Sep 5 12:24:09 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966032
Date: Mon, 5 Sep 2022 14:24:09 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-2-glider@google.com>
Subject: [PATCH v6 01/44] x86: add missing include to sparsemem.h
From: Alexander Potapenko
To: glider@google.com
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
    Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
    Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
    Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
    Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Mark Rutland,
    Matthew Wilcox, "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra,
    Petr Mladek, Steven Rostedt, Thomas Gleixner, Vasily Gorbik,
    Vegard Nossum, Vlastimil Babka, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

From: Dmitry Vyukov

Including sparsemem.h from other files (e.g. transitively via
asm/pgtable_64_types.h) results in compilation errors due to unknown types:

  sparsemem.h:34:32: error: unknown type name 'phys_addr_t'
  extern int phys_to_target_node(phys_addr_t start);
                                 ^
  sparsemem.h:36:39: error: unknown type name 'u64'
  extern int memory_add_physaddr_to_nid(u64 start);
                                        ^

Fix these errors by including linux/types.h from sparsemem.h.
This is required for the upcoming KMSAN patches.
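The underlying rule here is that a header should include whatever its own
declarations need. A self-contained miniature of the fixed header, built only
from the declarations quoted in the error messages above (the file name is
hypothetical, for illustration only), looks like this:

  /* mini_sparsemem.h - illustration only, not part of the patch */
  #ifndef MINI_SPARSEMEM_H
  #define MINI_SPARSEMEM_H

  #include <linux/types.h>	/* the fix: defines phys_addr_t and u64 */

  extern int phys_to_target_node(phys_addr_t start);
  extern int memory_add_physaddr_to_nid(u64 start);

  #endif /* MINI_SPARSEMEM_H */

With the include in place, any file can pull the header in (directly or
transitively) without having to include linux/types.h first.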
Signed-off-by: Dmitry Vyukov
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ifae221ce85d870d8f8d17173bd44d5cf9be2950f
---
 arch/x86/include/asm/sparsemem.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 6a9ccc1b2be5d..64df897c0ee30 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_SPARSEMEM_H
 #define _ASM_X86_SPARSEMEM_H

+#include <linux/types.h>
+
 #ifdef CONFIG_SPARSEMEM
 /*
  * generic non-linear memory support:

From patchwork Mon Sep 5 12:24:10 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966033
Date: Mon, 5 Sep 2022 14:24:10 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-3-glider@google.com>
Subject: [PATCH v6 02/44] stackdepot: reserve 5 extra bits in depot_stack_handle_t
From: Alexander Potapenko
To: glider@google.com

Some users (currently only KMSAN) may want to use spare bits in
depot_stack_handle_t.
Let them do so by adding @extra_bits to __stack_depot_save() to store
arbitrary flags, and providing stack_depot_get_extra_bits() to retrieve those
flags.

Also adapt KASAN to the new prototype by passing extra_bits=0, as KASAN does
not intend to store additional information in the stack handle.

Signed-off-by: Alexander Potapenko
Reviewed-by: Marco Elver

---
v4:
 -- per Marco Elver's request, fold "kasan: common: adapt to the new prototype
    of __stack_depot_save()" into this patch to prevent bisection breakages.

Link: https://linux-review.googlesource.com/id/I0587f6c777667864768daf07821d594bce6d8ff9
---
 include/linux/stackdepot.h |  8 ++++++++
 lib/stackdepot.c           | 29 ++++++++++++++++++++++++-----
 mm/kasan/common.c          |  2 +-
 3 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index bc2797955de90..9ca7798d7a318 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -14,9 +14,15 @@
 #include

 typedef u32 depot_stack_handle_t;
+/*
+ * Number of bits in the handle that stack depot doesn't use. Users may store
+ * information in them.
+ */
+#define STACK_DEPOT_EXTRA_BITS 5

 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t gfp_flags, bool can_alloc);

 /*
@@ -59,6 +65,8 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);

+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
+
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 			int spaces);

diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index e73fda23388d8..79e894cf84064 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -43,7 +43,8 @@
 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
 					STACK_ALLOC_ALIGN)
 #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
-		STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS)
+		STACK_ALLOC_NULL_PROTECTION_BITS - \
+		STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
 #define STACK_ALLOC_SLABS_CAP 8192
 #define STACK_ALLOC_MAX_SLABS \
 	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
@@ -56,6 +57,7 @@ union handle_parts {
 		u32 slabindex : STACK_ALLOC_INDEX_BITS;
 		u32 offset : STACK_ALLOC_OFFSET_BITS;
 		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
+		u32 extra : STACK_DEPOT_EXTRA_BITS;
 	};
 };

@@ -77,6 +79,14 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);

+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
+{
+	union handle_parts parts = { .handle = handle };
+
+	return parts.extra;
+}
+EXPORT_SYMBOL(stack_depot_get_extra_bits);
+
 static bool init_stack_slab(void **prealloc)
 {
 	if (!*prealloc)
@@ -140,6 +150,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->handle.slabindex = depot_index;
 	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
 	stack->handle.valid = 1;
+	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
 	depot_offset += required_size;

@@ -382,6 +393,7 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 *
 * @entries:		Pointer to storage array
 * @nr_entries:		Size of the storage array
+ * @extra_bits:		Flags to store in unused bits of depot_stack_handle_t
 * @alloc_flags:	Allocation gfp flags
 * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
 *
@@ -393,6 +405,10 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 * If the stack trace in @entries is from an interrupt, only the portion up to
 * interrupt entry is saved.
 *
+ * Additional opaque flags can be passed in @extra_bits, stored in the unused
+ * bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
+ * without calling stack_depot_fetch().
+ *
 * Context: Any context, but setting @can_alloc to %false is required if
 *	    alloc_pages() cannot be used from the current context. Currently
 *	    this is the case from contexts where neither %GFP_ATOMIC nor
@@ -402,10 +418,11 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t alloc_flags, bool can_alloc)
 {
 	struct stack_record *found = NULL, **bucket;
-	depot_stack_handle_t retval = 0;
+	union handle_parts retval = { .handle = 0 };
 	struct page *page = NULL;
 	void *prealloc = NULL;
 	unsigned long flags;
@@ -489,9 +506,11 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
 	}
 	if (found)
-		retval = found->handle.handle;
+		retval.handle = found->handle.handle;
 fast_exit:
-	return retval;
+	retval.extra = extra_bits;
+
+	return retval.handle;
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);

@@ -511,6 +530,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
 				      gfp_t alloc_flags)
 {
-	return __stack_depot_save(entries, nr_entries, alloc_flags, true);
+	return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 69f583855c8be..94caa2d46a327 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -36,7 +36,7 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
 	unsigned int nr_entries;

 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
-	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
+	return __stack_depot_save(entries, nr_entries, 0, flags, can_alloc);
 }

 void kasan_set_track(struct kasan_track *track, gfp_t flags)
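For illustration only (not part of the patch), a caller of the extended API
could stash a flag in the spare handle bits and read it back later without
fetching the whole stack trace; the flag name below is made up:

  #include <linux/stackdepot.h>
  #include <linux/stacktrace.h>

  /* Hypothetical caller-defined flag kept in the 5 spare handle bits. */
  #define MY_ORIGIN_FLAG 0x1

  static depot_stack_handle_t save_current_stack(gfp_t flags)
  {
  	unsigned long entries[16];
  	unsigned int nr_entries;

  	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
  	/* Store MY_ORIGIN_FLAG in the unused bits of the returned handle. */
  	return __stack_depot_save(entries, nr_entries, MY_ORIGIN_FLAG,
  				  flags, true);
  }

  static bool handle_has_origin_flag(depot_stack_handle_t handle)
  {
  	/* No stack_depot_fetch() needed to look at the extra bits. */
  	return stack_depot_get_extra_bits(handle) & MY_ORIGIN_FLAG;
  }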
From patchwork Mon Sep 5 12:24:11 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966034
Date: Mon, 5 Sep 2022 14:24:11 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-4-glider@google.com>
Subject: [PATCH v6 03/44] instrumented.h: allow instrumenting both sides of copy_from_user()
From: Alexander Potapenko
To: glider@google.com
Introduce instrument_copy_from_user_before() and
instrument_copy_from_user_after() hooks to be invoked before and after the
call to copy_from_user().

KASAN and KCSAN will only use instrument_copy_from_user_before(), but for
KMSAN we'll need to insert code after copy_from_user().
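The intended call pattern, sketched here on a made-up copy_from_user()-style
helper (the real call sites are converted in the diff below), is to bracket
the raw copy with the two hooks and hand the "bytes not copied" result to the
_after hook:

  static unsigned long my_copy_from_user(void *to, const void __user *from,
  				       unsigned long n)
  {
  	unsigned long res;

  	/* Pre-copy check: @to is about to be written. */
  	instrument_copy_from_user_before(to, from, n);
  	res = raw_copy_from_user(to, from, n);
  	/*
  	 * Post-copy hook: @res is the number of bytes NOT copied, so a tool
  	 * can tell which part of @to was actually written.
  	 */
  	instrument_copy_from_user_after(to, from, n, res);
  	return res;
  }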
Signed-off-by: Alexander Potapenko
Reviewed-by: Marco Elver

---
v4:
 -- fix _copy_from_user_key() in arch/s390/lib/uaccess.c
    (Reported-by: kernel test robot)

Link: https://linux-review.googlesource.com/id/I855034578f0b0f126734cbd734fb4ae1d3a6af99
---
 arch/s390/lib/uaccess.c      |  3 ++-
 include/linux/instrumented.h | 21 +++++++++++++++++++--
 include/linux/uaccess.h      | 19 ++++++++++++++-----
 lib/iov_iter.c               |  9 ++++++---
 lib/usercopy.c               |  3 ++-
 5 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index d7b3b193d1088..58033dfcb6d45 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -81,8 +81,9 @@ unsigned long _copy_from_user_key(void *to, const void __user *from,

 	might_fault();
 	if (!should_fail_usercopy()) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user_key(to, from, n, key);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 42faebbaa202a..ee8f7d17d34f5 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -120,7 +120,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 }

 /**
- * instrument_copy_from_user - instrument writes of copy_from_user
+ * instrument_copy_from_user_before - add instrumentation before copy_from_user
 *
 * Instrument writes to kernel memory, that are due to copy_from_user (and
 * variants). The instrumentation should be inserted before the accesses.
@@ -130,10 +130,27 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 * @n number of bytes to copy
 */
 static __always_inline void
-instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+instrument_copy_from_user_before(const void *to, const void __user *from, unsigned long n)
 {
 	kasan_check_write(to, n);
 	kcsan_check_write(to, n);
 }

+/**
+ * instrument_copy_from_user_after - add instrumentation after copy_from_user
+ *
+ * Instrument writes to kernel memory, that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted after the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ * @left number of bytes not copied (as returned by copy_from_user)
+ */
+static __always_inline void
+instrument_copy_from_user_after(const void *to, const void __user *from,
+				unsigned long n, unsigned long left)
+{
+}
+
 #endif /* _LINUX_INSTRUMENTED_H */
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 47e5d374c7ebe..afb18f198843b 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -58,20 +58,28 @@ static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
-	instrument_copy_from_user(to, from, n);
+	unsigned long res;
+
+	instrument_copy_from_user_before(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }

 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
+	instrument_copy_from_user_before(to, from, n);
 	if (should_fail_usercopy())
 		return n;
-	instrument_copy_from_user(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }

 /**
@@ -115,8 +123,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 4b7fce72e3e52..c3ca28ca68a65 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -174,13 +174,16 @@ static int copyout(void __user *to, const void *from, size_t n)

 static int copyin(void *to, const void __user *from, size_t n)
 {
+	size_t res = n;
+
 	if (should_fail_usercopy())
 		return n;
 	if (access_ok(from, n)) {
-		instrument_copy_from_user(to, from, n);
-		n = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
+		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
-	return n;
+	return res;
 }

 static inline struct pipe_buffer *pipe_buf(const struct pipe_inode_info *pipe,
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516e..1505a52f23a01 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,8 +12,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
From patchwork Mon Sep 5 12:24:12 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966035
Date: Mon, 5 Sep 2022 14:24:12 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-5-glider@google.com>
Subject: [PATCH v6 04/44] x86: asm: instrument usercopy in get_user() and put_user()
From: Alexander Potapenko
To: glider@google.com
Use hooks from instrumented.h to notify bug detection tools about usercopy
events in variations of get_user() and put_user().
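Since get_user()/put_user() are implemented as macros around inline assembly,
the hooks have to sit inside those macros. Conceptually (a function-style
sketch with hypothetical arch helpers, for illustration only) the placement
is:

  static int sketch_get_user(int *kernel_dst, const int __user *uptr)
  {
  	int val = 0;
  	int ret;

  	ret = arch_read_from_user(uptr, &val);	/* hypothetical helper */
  	instrument_get_user(val);		/* val now holds user data */
  	*kernel_dst = val;
  	return ret;
  }

  static int sketch_put_user(int val, int __user *uptr)
  {
  	int ret;

  	ret = arch_write_to_user(uptr, val);	/* hypothetical helper */
  	instrument_put_user(val, uptr, sizeof(val));
  	return ret;
  }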
Signed-off-by: Alexander Potapenko

---
v5:
 -- handle put_user(), make sure to not evaluate pointer/value twice
v6:
 -- add missing empty definitions of instrument_get_user() and
    instrument_put_user()

Link: https://linux-review.googlesource.com/id/Ia9f12bfe5832623250e20f1859fdf5cc485a2fce
---
 arch/x86/include/asm/uaccess.h | 22 +++++++++++++++-------
 include/linux/instrumented.h   | 28 ++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 913e593a3b45f..c1b8982899eca 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -5,6 +5,7 @@
 * User space memory access functions
 */
 #include
+#include <linux/instrumented.h>
 #include
 #include
 #include
@@ -103,6 +104,7 @@ extern int __get_user_bad(void);
		     : "=a" (__ret_gu), "=r" (__val_gu),		\
		       ASM_CALL_CONSTRAINT				\
		     : "0" (ptr), "i" (sizeof(*(ptr))));		\
+	instrument_get_user(__val_gu);					\
	(x) = (__force __typeof__(*(ptr))) __val_gu;			\
	__builtin_expect(__ret_gu, 0);					\
 })

@@ -192,9 +194,11 @@ extern void __put_user_nocheck_8(void);
	int __ret_pu;							\
	void __user *__ptr_pu;						\
	register __typeof__(*(ptr)) __val_pu asm("%"_ASM_AX);		\
-	__chk_user_ptr(ptr);						\
-	__ptr_pu = (ptr);						\
-	__val_pu = (x);							\
+	__typeof__(*(ptr)) __x = (x); /* eval x once */			\
+	__typeof__(ptr) __ptr = (ptr); /* eval ptr once */		\
+	__chk_user_ptr(__ptr);						\
+	__ptr_pu = __ptr;						\
+	__val_pu = __x;							\
	asm volatile("call __" #fn "_%P[size]"				\
		     : "=c" (__ret_pu),					\
		       ASM_CALL_CONSTRAINT				\
@@ -202,6 +206,7 @@ extern void __put_user_nocheck_8(void);
		       "r" (__val_pu),					\
		       [size] "i" (sizeof(*(ptr)))			\
		     :"ebx");						\
+	instrument_put_user(__x, __ptr, sizeof(*(ptr)));		\
	__builtin_expect(__ret_pu, 0);					\
 })

@@ -248,23 +253,25 @@ extern void __put_user_nocheck_8(void);

 #define __put_user_size(x, ptr, size, label)				\
 do {									\
+	__typeof__(*(ptr)) __x = (x); /* eval x once */			\
	__chk_user_ptr(ptr);						\
	switch (size) {							\
	case 1:								\
-		__put_user_goto(x, ptr, "b", "iq", label);		\
+		__put_user_goto(__x, ptr, "b", "iq", label);		\
		break;							\
	case 2:								\
-		__put_user_goto(x, ptr, "w", "ir", label);		\
+		__put_user_goto(__x, ptr, "w", "ir", label);		\
		break;							\
	case 4:								\
-		__put_user_goto(x, ptr, "l", "ir", label);		\
+		__put_user_goto(__x, ptr, "l", "ir", label);		\
		break;							\
	case 8:								\
-		__put_user_goto_u64(x, ptr, label);			\
+		__put_user_goto_u64(__x, ptr, label);			\
		break;							\
	default:							\
		__put_user_bad();					\
	}								\
+	instrument_put_user(__x, ptr, size);				\
 } while (0)

 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
@@ -305,6 +312,7 @@ do {									\
	default:							\
		(x) = __get_user_bad();					\
	}								\
+	instrument_get_user(x);						\
 } while (0)

 #define __get_user_asm(x, addr, itype, ltype, label)			\
diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index ee8f7d17d34f5..9f1dba8f717b0 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -153,4 +153,32 @@ instrument_copy_from_user_after(const void *to, const void __user *from,
 {
 }

+/**
+ * instrument_get_user() - add instrumentation to get_user()-like macros
+ *
+ * get_user() and friends are fragile, so it may depend on the implementation
+ * whether the instrumentation happens before or after the data is copied from
+ * the userspace.
+ *
+ * @to destination variable, may not be address-taken
+ */
+#define instrument_get_user(to)			\
+({						\
+})
+
+/**
+ * instrument_put_user() - add instrumentation to put_user()-like macros
+ *
+ * put_user() and friends are fragile, so it may depend on the implementation
+ * whether the instrumentation happens before or after the data is copied from
+ * the userspace.
+ *
+ * @from source address
+ * @ptr userspace pointer to copy to
+ * @size number of bytes to copy
+ */
+#define instrument_put_user(from, ptr, size)	\
+({						\
+})
+
 #endif /* _LINUX_INSTRUMENTED_H */
From patchwork Mon Sep 5 12:24:13 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966036
Date: Mon, 5 Sep 2022 14:24:13 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-6-glider@google.com>
Subject: [PATCH v6 05/44] asm-generic: instrument usercopy in cacheflush.h
From: Alexander Potapenko
To: glider@google.com

Notify memory tools about usercopy events in copy_to_user_page() and
copy_from_user_page().
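A function-style sketch of the copy_from_user_page() change (illustration
only), which also shows why the last argument of the _after hook is 0 here:

  static void sketch_copy_from_user_page(void *dst, const void *src, int len)
  {
  	instrument_copy_from_user_before(dst, (void __user *)src, len);
  	memcpy(dst, src, len);
  	/* memcpy() copies everything, so 0 bytes were left uncopied. */
  	instrument_copy_from_user_after(dst, (void __user *)src, len, 0);
  }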
Signed-off-by: Alexander Potapenko
Reviewed-by: Marco Elver

---
v5:
 -- cast user pointers to `void __user *`

Link: https://linux-review.googlesource.com/id/Ic1ee8da1886325f46ad67f52176f48c2c836c48f
---
 include/asm-generic/cacheflush.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 4f07afacbc239..f46258d1a080f 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_GENERIC_CACHEFLUSH_H
 #define _ASM_GENERIC_CACHEFLUSH_H

+#include <linux/instrumented.h>
+
 struct mm_struct;
 struct vm_area_struct;
 struct page;
@@ -105,14 +107,22 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)

 #ifndef copy_to_user_page
 #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
	do {							\
+		instrument_copy_to_user((void __user *)dst, src, len);	\
		memcpy(dst, src, len);				\
		flush_icache_user_page(vma, page, vaddr, len);	\
	} while (0)
 #endif
+
 #ifndef copy_from_user_page
-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len)		   \
+	do {								   \
+		instrument_copy_from_user_before(dst, (void __user *)src, \
+						 len);			   \
+		memcpy(dst, src, len);					   \
+		instrument_copy_from_user_after(dst, (void __user *)src, len, \
+						0);			   \
+	} while (0)
 #endif

 #endif /* _ASM_GENERIC_CACHEFLUSH_H */
From patchwork Mon Sep 5 12:24:14 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966037
Date: Mon, 5 Sep 2022 14:24:14 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-7-glider@google.com>
Subject: [PATCH v6 06/44] kmsan: add ReST documentation
From: Alexander Potapenko
To: glider@google.com
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=KN8ODtsg; spf=pass (imf16.hostedemail.com: domain of 3qeoVYwYKCPQcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3qeoVYwYKCPQcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380714; a=rsa-sha256; cv=none; b=yqa5bAgh34N3l1x2VkT4m3/7Q7NQRpbLtryk9wQUL7TpCWO2qdm8CIh4RWI3gVlCgFZ7rh tR772Sp1wA/JTIf5o31XqhZ8D2Dnxqo10/FUwQOBmmc54sskEbNCrVejlNc5NAmWAB7m5X ECEsQLVw7zaCCyiEP+7ulUt5Uw9RM8g= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380714; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=vQDAA7RaO97NVNbgK2nE+Ddwgte1b1V7llUj3RudfK0=; b=ijXUjyz/O708rb7GGxI28zWBpiiRWYN2ip+dwsS4sG8/Afrk+EjFlwj7tPQIdpVrzBxcS5 iNr+TBW9nVQ6DWE/TM9UfbO0elmFGbftHkMmEqKy2I2z6cqzg3+y0uHSSZMDW5dWv1NUwQ 3/bG5t0YvRKcSuN+Z1W5e7C3WJKQizM= Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=KN8ODtsg; spf=pass (imf16.hostedemail.com: domain of 3qeoVYwYKCPQcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3qeoVYwYKCPQcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam06 X-Stat-Signature: zk6m9brxfigmb3yuxazix133d3xaik6k X-Rspam-User: X-Rspamd-Queue-Id: A6A33180073 X-HE-Tag: 1662380714-587085 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add Documentation/dev-tools/kmsan.rst and reference it in the dev-tools index. Signed-off-by: Alexander Potapenko --- v2: -- added a note that KMSAN is not intended for production use v5: -- mention CONFIG_KMSAN_CHECK_PARAM_RETVAL, drop mentions of cpu_entry_area -- add SPDX license -- address Marco Elver's comments: reorganize doc structure, fix minor nits Link: https://linux-review.googlesource.com/id/I751586f79418b95550a83c6035c650b5b01567cc --- Documentation/dev-tools/index.rst | 1 + Documentation/dev-tools/kmsan.rst | 427 ++++++++++++++++++++++++++++++ 2 files changed, 428 insertions(+) create mode 100644 Documentation/dev-tools/kmsan.rst diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst index 4621eac290f46..6b0663075dc04 100644 --- a/Documentation/dev-tools/index.rst +++ b/Documentation/dev-tools/index.rst @@ -24,6 +24,7 @@ Documentation/dev-tools/testing-overview.rst kcov gcov kasan + kmsan ubsan kmemleak kcsan diff --git a/Documentation/dev-tools/kmsan.rst b/Documentation/dev-tools/kmsan.rst new file mode 100644 index 0000000000000..2a53a801198cb --- /dev/null +++ b/Documentation/dev-tools/kmsan.rst @@ -0,0 +1,427 @@ +.. SPDX-License-Identifier: GPL-2.0 +.. Copyright (C) 2022, Google LLC. 
+ +=================================== +The Kernel Memory Sanitizer (KMSAN) +=================================== + +KMSAN is a dynamic error detector aimed at finding uses of uninitialized +values. It is based on compiler instrumentation, and is quite similar to the +userspace `MemorySanitizer tool`_. + +An important note is that KMSAN is not intended for production use, because it +drastically increases kernel memory footprint and slows the whole system down. + +Usage +===== + +Building the kernel +------------------- + +In order to build a kernel with KMSAN you will need a fresh Clang (14.0.6+). +Please refer to `LLVM documentation`_ for the instructions on how to build Clang. + +Now configure and build the kernel with CONFIG_KMSAN enabled. + +Example report +-------------- + +Here is an example of a KMSAN report:: + + ===================================================== + BUG: KMSAN: uninit-value in test_uninit_kmsan_check_memory+0x1be/0x380 [kmsan_test] + test_uninit_kmsan_check_memory+0x1be/0x380 mm/kmsan/kmsan_test.c:273 + kunit_run_case_internal lib/kunit/test.c:333 + kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374 + kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28 + kthread+0x721/0x850 kernel/kthread.c:327 + ret_from_fork+0x1f/0x30 ??:? + + Uninit was stored to memory at: + do_uninit_local_array+0xfa/0x110 mm/kmsan/kmsan_test.c:260 + test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271 + kunit_run_case_internal lib/kunit/test.c:333 + kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374 + kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28 + kthread+0x721/0x850 kernel/kthread.c:327 + ret_from_fork+0x1f/0x30 ??:? + + Local variable uninit created at: + do_uninit_local_array+0x4a/0x110 mm/kmsan/kmsan_test.c:256 + test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271 + + Bytes 4-7 of 8 are uninitialized + Memory access of size 8 starts at ffff888083fe3da0 + + CPU: 0 PID: 6731 Comm: kunit_try_catch Tainted: G B E 5.16.0-rc3+ #104 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 + ===================================================== + +The report says that the local variable ``uninit`` was created uninitialized in +``do_uninit_local_array()``. The third stack trace corresponds to the place +where this variable was created. + +The first stack trace shows where the uninit value was used (in +``test_uninit_kmsan_check_memory()``). The tool shows the bytes which were left +uninitialized in the local variable, as well as the stack where the value was +copied to another memory location before use. + +A use of uninitialized value ``v`` is reported by KMSAN in the following cases: + - in a condition, e.g. ``if (v) { ... }``; + - in an indexing or pointer dereferencing, e.g. ``array[v]`` or ``*v``; + - when it is copied to userspace or hardware, e.g. ``copy_to_user(..., &v, ...)``; + - when it is passed as an argument to a function, and + ``CONFIG_KMSAN_CHECK_PARAM_RETVAL`` is enabled (see below). + +The mentioned cases (apart from copying data to userspace or hardware, which is +a security issue) are considered undefined behavior from the C11 Standard point +of view. + +Disabling the instrumentation +----------------------------- + +A function can be marked with ``__no_kmsan_checks``. Doing so makes KMSAN +ignore uninitialized values in that function and mark its output as initialized. +As a result, the user will not get KMSAN reports related to that function. 
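For illustration only (the helper and its buffer are hypothetical, not part of the kernel), such an annotation could look like::

    /* KMSAN cannot know that the device has already filled @buf, so suppress
     * checks in this helper and treat its return value as initialized. */
    static u32 __no_kmsan_checks read_dma_status(const u32 *buf)
    {
            return buf[0];
    }

Note that this only silences reports originating from the annotated function; the rest of the kernel is still checked.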
+ +Another function attribute supported by KMSAN is ``__no_sanitize_memory``. +Applying this attribute to a function will result in KMSAN not instrumenting +it, which can be helpful if we do not want the compiler to interfere with some +low-level code (e.g. that marked with ``noinstr`` which implicitly adds +``__no_sanitize_memory``). + +This however comes at a cost: stack allocations from such functions will have +incorrect shadow/origin values, likely leading to false positives. Functions +called from non-instrumented code may also receive incorrect metadata for their +parameters. + +As a rule of thumb, avoid using ``__no_sanitize_memory`` explicitly. + +It is also possible to disable KMSAN for a single file (e.g. main.o):: + + KMSAN_SANITIZE_main.o := n + +or for the whole directory:: + + KMSAN_SANITIZE := n + +in the Makefile. Think of this as applying ``__no_sanitize_memory`` to every +function in the file or directory. Most users won't need KMSAN_SANITIZE, unless +their code gets broken by KMSAN (e.g. runs at early boot time). + +Support +======= + +In order for KMSAN to work the kernel must be built with Clang, which so far is +the only compiler that has KMSAN support. The kernel instrumentation pass is +based on the userspace `MemorySanitizer tool`_. + +The runtime library only supports x86_64 at the moment. + +How KMSAN works +=============== + +KMSAN shadow memory +------------------- + +KMSAN associates a metadata byte (also called shadow byte) with every byte of +kernel memory. A bit in the shadow byte is set iff the corresponding bit of the +kernel memory byte is uninitialized. Marking the memory uninitialized (i.e. +setting its shadow bytes to ``0xff``) is called poisoning, marking it +initialized (setting the shadow bytes to ``0x00``) is called unpoisoning. + +When a new variable is allocated on the stack, it is poisoned by default by +instrumentation code inserted by the compiler (unless it is a stack variable +that is immediately initialized). Any new heap allocation done without +``__GFP_ZERO`` is also poisoned. + +Compiler instrumentation also tracks the shadow values as they are used along +the code. When needed, instrumentation code invokes the runtime library in +``mm/kmsan/`` to persist shadow values. + +The shadow value of a basic or compound type is an array of bytes of the same +length. When a constant value is written into memory, that memory is unpoisoned. +When a value is read from memory, its shadow memory is also obtained and +propagated into all the operations which use that value. For every instruction +that takes one or more values the compiler generates code that calculates the +shadow of the result depending on those values and their shadows. + +Example:: + + int a = 0xff; // i.e. 0x000000ff + int b; + int c = a | b; + +In this case the shadow of ``a`` is ``0``, shadow of ``b`` is ``0xffffffff``, +shadow of ``c`` is ``0xffffff00``. This means that the upper three bytes of +``c`` are uninitialized, while the lower byte is initialized. + +Origin tracking +--------------- + +Every four bytes of kernel memory also have a so-called origin mapped to them. +This origin describes the point in program execution at which the uninitialized +value was created. Every origin is associated with either the full allocation +stack (for heap-allocated memory), or the function containing the uninitialized +variable (for locals). 
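As a simplified sketch (a conceptual picture only; KMSAN does not define such a struct, but the origin really is a 4-byte ``depot_stack_handle_t`` from the stack depot, as seen in ``kmsan_context_state`` below), the metadata kept for four bytes of kernel memory can be pictured as::

    /* Conceptual view of the metadata for one 4-byte chunk of memory. */
    struct kmsan_metadata_for_4_bytes {
            u8 shadow[4];                /* 0xff: the byte is uninitialized   */
            depot_stack_handle_t origin; /* 4-byte ID of the creation stack   */
    };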
+ +When an uninitialized variable is allocated on the stack or heap, a new origin value is created, and that variable's origin is filled with that value. When a value is read from memory, its origin is also read and kept together with the shadow. For every instruction that takes one or more values, the origin of the result is one of the origins corresponding to any of the uninitialized inputs. If a poisoned value is written into memory, its origin is written to the corresponding storage as well. + +Example 1:: + + int a = 42; + int b; + int c = a + b; + +In this case the origin of ``b`` is generated upon function entry, and is stored to the origin of ``c`` right before the addition result is written into memory. + +Several variables may share the same origin address, if they are stored in the same four-byte chunk. In this case every write to either variable updates the origin for all of them. We have to sacrifice precision in this case, because storing origins for individual bits (and even bytes) would be too costly. + +Example 2:: + + int combine(short a, short b) { + union ret_t { + int i; + short s[2]; + } ret; + ret.s[0] = a; + ret.s[1] = b; + return ret.i; + } + +If ``a`` is initialized and ``b`` is not, the shadow of the result would be ``0xffff0000``, and the origin of the result would be the origin of ``b``. ``ret.s[0]`` would have the same origin, but it will never be used, because that variable is initialized. + +If both function arguments are uninitialized, only the origin of the second argument is preserved. + +Origin chaining +~~~~~~~~~~~~~~~ + +To ease debugging, KMSAN creates a new origin for every store of an uninitialized value to memory. The new origin references both its creation stack and the previous origin the value had. This may cause increased memory consumption, so we limit the length of origin chains in the runtime. + +Clang instrumentation API +------------------------- + +The Clang instrumentation pass inserts calls to functions defined in ``mm/kmsan/instrumentation.c`` into the kernel code. + +Shadow manipulation +~~~~~~~~~~~~~~~~~~~ + +For every memory access the compiler emits a call to a function that returns a pair of pointers to the shadow and origin addresses of the given memory:: + + typedef struct { + void *shadow, *origin; + } shadow_origin_ptr_t; + + shadow_origin_ptr_t __msan_metadata_ptr_for_load_{1,2,4,8}(void *addr) + shadow_origin_ptr_t __msan_metadata_ptr_for_store_{1,2,4,8}(void *addr) + shadow_origin_ptr_t __msan_metadata_ptr_for_load_n(void *addr, uintptr_t size) + shadow_origin_ptr_t __msan_metadata_ptr_for_store_n(void *addr, uintptr_t size) + +The function name depends on the memory access size. + +The compiler makes sure that for every loaded value its shadow and origin values are read from memory. When a value is stored to memory, its shadow and origin are also stored using the metadata pointers.
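As an illustration (a rough C-level sketch of what the instrumentation conceptually does for an 8-byte store ``*p = v;``, not the actual code emitted by the LLVM pass; ``shadow_of_v`` and ``origin_of_v`` are made-up names for the compiler-tracked metadata of ``v``)::

    shadow_origin_ptr_t meta = __msan_metadata_ptr_for_store_8(p);

    *(u64 *)meta.shadow = shadow_of_v;         /* propagate the shadow of v      */
    if (shadow_of_v)                           /* v is at least partly poisoned? */
            *(u32 *)meta.origin = origin_of_v; /* remember where it came from    */
    *p = v;                                    /* the original store             */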
+ +Handling locals +~~~~~~~~~~~~~~~ + +A special function is used to create a new origin value for a local variable and set the origin of that variable to that value:: + + void __msan_poison_alloca(void *addr, uintptr_t size, char *descr) + +Access to per-task data +~~~~~~~~~~~~~~~~~~~~~~~ + +At the beginning of every instrumented function KMSAN inserts a call to ``__msan_get_context_state()``:: + + kmsan_context_state *__msan_get_context_state(void) + +``kmsan_context_state`` is declared in ``include/linux/kmsan.h``:: + + struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[KMSAN_RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + char param_origin_tls[KMSAN_PARAM_SIZE]; + depot_stack_handle_t retval_origin_tls; + }; + +This structure is used by KMSAN to pass parameter shadows and origins between instrumented functions (unless the parameters are checked immediately by ``CONFIG_KMSAN_CHECK_PARAM_RETVAL``). + +Passing uninitialized values to functions +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Clang's MemorySanitizer instrumentation has an option, ``-fsanitize-memory-param-retval``, which makes the compiler check function parameters passed by value, as well as function return values. + +The option is controlled by ``CONFIG_KMSAN_CHECK_PARAM_RETVAL``, which is enabled by default to let KMSAN report uninitialized values earlier. Please refer to the `LKML discussion`_ for more details. + +Because of the way the checks are implemented in LLVM (they are only applied to parameters marked as ``noundef``), not all parameters are guaranteed to be checked, so we cannot give up the metadata storage in ``kmsan_context_state``. + +String functions +~~~~~~~~~~~~~~~~ + +The compiler replaces calls to ``memcpy()``/``memmove()``/``memset()`` with the following functions. These functions are also called when data structures are initialized or copied, making sure shadow and origin values are copied alongside the data:: + + void *__msan_memcpy(void *dst, void *src, uintptr_t n) + void *__msan_memmove(void *dst, void *src, uintptr_t n) + void *__msan_memset(void *dst, int c, uintptr_t n) + +Error reporting +~~~~~~~~~~~~~~~ + +For each use of a value the compiler emits a shadow check that calls ``__msan_warning()`` in case that value is poisoned:: + + void __msan_warning(u32 origin) + +``__msan_warning()`` causes the KMSAN runtime to print an error report. + +Inline assembly instrumentation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +KMSAN instruments every inline assembly output with a call to:: + + void __msan_instrument_asm_store(void *addr, uintptr_t size) + +which unpoisons the memory region. + +This approach may mask certain errors, but it also helps to avoid a lot of false positives in bitwise operations, atomics etc. + +Sometimes the pointers passed into inline assembly do not point to valid memory. In such cases they are ignored at runtime. + + +Runtime library +--------------- + +The code is located in ``mm/kmsan/``. + +Per-task KMSAN state +~~~~~~~~~~~~~~~~~~~~ + +Every ``task_struct`` has an associated KMSAN task state that holds the KMSAN context (see above) and a per-task flag disallowing KMSAN reports:: + + struct kmsan_ctx { + ... + bool allow_reporting; + struct kmsan_context_state cstate; + ... + } + + struct task_struct { + ... + struct kmsan_ctx kmsan; + ...
+ } + +KMSAN contexts +~~~~~~~~~~~~~~ + +When running in a kernel task context, KMSAN uses ``current->kmsan.cstate`` to hold the metadata for function parameters and return values. + +But when the kernel runs in interrupt, softirq or NMI context, where ``current`` is unavailable, KMSAN switches to a per-CPU interrupt state:: + + DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +Metadata allocation +~~~~~~~~~~~~~~~~~~~ + +The metadata for kernel memory is stored in several different places, depending on the type of memory. + +1. Each ``struct page`` instance contains two pointers to its shadow and origin pages:: + + struct page { + ... + struct page *shadow, *origin; + ... + }; + +At boot time, the kernel allocates shadow and origin pages for every available kernel page. This is done quite late, when the kernel address space is already fragmented, so normal data pages may arbitrarily interleave with the metadata pages. + +This means that in general for two contiguous memory pages their shadow/origin pages may not be contiguous. Consequently, if a memory access crosses the boundary of a memory block, accesses to shadow/origin memory may potentially corrupt other pages or read incorrect values from them. + +In practice, contiguous memory pages returned by the same ``alloc_pages()`` call will have contiguous metadata, whereas if these pages belong to two different allocations their metadata pages can be fragmented. + +For the kernel data (``.data``, ``.bss`` etc.) and percpu memory regions there are also no guarantees on metadata contiguity. + +When ``__msan_metadata_ptr_for_XXX_YYY()`` hits the boundary between two pages with non-contiguous metadata, it returns pointers to fake shadow/origin regions:: + + char dummy_load_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE))); + char dummy_store_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE))); + +``dummy_load_page`` is zero-initialized, so reads from it always yield zeroes. All stores to ``dummy_store_page`` are ignored. + +2. For vmalloc memory and modules, there is a direct mapping between the memory range, its shadow and origin. KMSAN reduces the vmalloc area by 3/4, making only the first quarter available to ``vmalloc()``. The second quarter of the vmalloc area contains shadow memory for the first quarter, the third one holds the origins. A small part of the fourth quarter contains shadow and origins for the kernel modules. Please refer to ``arch/x86/include/asm/pgtable_64_types.h`` for more details. + +When an array of pages is mapped into a contiguous virtual memory space, their shadow and origin pages are similarly mapped into contiguous regions. + +References +========== + +E. Stepanov, K. Serebryany. `MemorySanitizer: fast detector of uninitialized memory use in C++ `_. In Proceedings of CGO 2015. + +.. _MemorySanitizer tool: https://clang.llvm.org/docs/MemorySanitizer.html +.. _LLVM documentation: https://llvm.org/docs/GettingStarted.html +..
_LKML discussion: https://lore.kernel.org/all/20220614144853.3693273-1-glider@google.com/ From patchwork Mon Sep 5 12:24:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966038 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61AA5ECAAD3 for ; Mon, 5 Sep 2022 12:25:18 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id F08FC8D006E; Mon, 5 Sep 2022 08:25:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id EBA3E8D0050; Mon, 5 Sep 2022 08:25:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D32238D006E; Mon, 5 Sep 2022 08:25:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id C43468D0050 for ; Mon, 5 Sep 2022 08:25:17 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id A6A4BA0E50 for ; Mon, 5 Sep 2022 12:25:17 +0000 (UTC) X-FDA: 79877951874.20.49BDAFF Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf17.hostedemail.com (Postfix) with ESMTP id 63DD540063 for ; Mon, 5 Sep 2022 12:25:17 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id h6-20020aa7de06000000b004483647900fso5813497edv.21 for ; Mon, 05 Sep 2022 05:25:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=FNkIx/jm8VmVwIUxP0UOjGDN2D7zAcH5S0k2sC/Umx8=; b=K+RIqV9beAonadyBIcL4IRNACaa724k+xc3GpC5ckKIhvZhUePRt9dtYRV1GAqf/Nc w2OjXApEkBuoGciM5C1kXqOkygXqELx0VLQNDDQ7Y86XC8kp3T5UHq7BcN7wIRBCJB32 mKNsyfPyIbvSfGRQd3nqAnx7kF44IWoIRFG2z8XDKqWvSkAVXhJpOFauF5xN8YviYhkY dtO7kDeBKVZdkJ1+Xm6NUcm75BpdRrxwOPyZvT1n/YybRfHPXpH18/sQuGgOErQKLFpY NwUXF9fbAKFFz0nEk9l78fr7ESIV+xkTNrrjSt3K+78HEj2WBk/8BqSpJCdgVmUY8xc9 NeWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=FNkIx/jm8VmVwIUxP0UOjGDN2D7zAcH5S0k2sC/Umx8=; b=3KdEYASYjBdiAcsncQc8zHRfcENUr2WL4Pg2iXVX4nHeeWkmdAJ8fRh6dpNozvscHP TvnxpLc/oXu+7AIR5ZCxKdSMKOdSF3BYhSvy7qf281tqtu0aRs6rYS54cn+LWasV1zSZ 6+vfT667yymepqQDf7SaqA/uF6BixqHHAF7o3kaEOD/TBpkDwR16nl9slhBAlsw5Npp6 hCJs++CnFRn5b/WGDPphMwWD1limrRycXsWq1trad9B3bggiwM/4q7CujUkxNRrJMuIn RvP1dHLNeyp9u/nleZEhGN4LudLNhCoK9gvZaejn/SYHQdK4FHLwl+lGKyuoy8Hq5BDc 5vgA== X-Gm-Message-State: ACgBeo0y7h9yk+vkN8vlIYbtkQS6nG8H62jTqTZpk05OaHyn0DVDkpP5 QCLiAex+hB9JPujOEkTefBgA4XFQcsI= X-Google-Smtp-Source: AA6agR6Va6YkEfg2fJPVKbVDbNHIRkXaHH1xnx1OVuNQv5M738qP0V60GI0bHMSw+4ilK/I2mz3aFKgZg4c= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:2bf9:b0:73d:dd00:9ce8 with SMTP id gv57-20020a1709072bf900b0073ddd009ce8mr33496068ejc.151.1662380716059; Mon, 05 Sep 2022 05:25:16 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:15 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: 
git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-8-glider@google.com> Subject: [PATCH v6 07/44] kmsan: introduce __no_sanitize_memory and __no_kmsan_checks From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=K+RIqV9b; spf=pass (imf17.hostedemail.com: domain of 3rOoVYwYKCPcfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3rOoVYwYKCPcfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380717; a=rsa-sha256; cv=none; b=1SRCBhwkeib+X+K0GGL1lQORDpcoKOVVlq9zN04dotsIN7KWy+YN2fS0UIjVbEIhaUTfHl r70trtWFM7oBpi2GQNa9TYOwZ529Q/9tK3AlQvGJ1oMfY/SxeINOTfn0UhiN32B6vwn8KJ 7dpZ8c5a495aZiAZCZ9Afee7pFYq6Oo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380717; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=FNkIx/jm8VmVwIUxP0UOjGDN2D7zAcH5S0k2sC/Umx8=; b=MZTNjPbrbKw97dp1AA4s5REYy8nljQbvFMSNobzA6rARp6Wzor5Tek/JTkTtnulIRtjtFd CMxL6kPyQGlvuKyRIlwa5DUSQ+ZL5WKIGHayl4glAsYNVpg5NZu7QI7gd/zrLj2s6nfTpE WVif1J9jhxlZ7xNXvYkRg5T5KpoX35A= X-Rspamd-Server: rspam02 X-Rspam-User: Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=K+RIqV9b; spf=pass (imf17.hostedemail.com: domain of 3rOoVYwYKCPcfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3rOoVYwYKCPcfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: mu3jezjow6td4ij671d57kgmq7iffcx9 X-Rspamd-Queue-Id: 63DD540063 X-HE-Tag: 1662380717-838039 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: __no_sanitize_memory is a function attribute that instructs KMSAN to skip a function during instrumentation. This is needed to e.g. implement the noinstr functions. __no_kmsan_checks is a function attribute that makes KMSAN ignore the uninitialized values coming from the function's inputs, and initialize the function's outputs. Functions marked with this attribute can't be inlined into functions not marked with it, and vice versa. This behavior is overridden by __always_inline. __SANITIZE_MEMORY__ is a macro that's defined iff the file is instrumented with KMSAN. This is not the same as CONFIG_KMSAN, which is defined for every file. 
Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver --- Link: https://linux-review.googlesource.com/id/I004ff0360c918d3cd8b18767ddd1381c6d3281be --- include/linux/compiler-clang.h | 23 +++++++++++++++++++++++ include/linux/compiler-gcc.h | 6 ++++++ 2 files changed, 29 insertions(+) diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index c84fec767445d..4fa0cc4cbd2c8 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -51,6 +51,29 @@ #define __no_sanitize_undefined #endif +#if __has_feature(memory_sanitizer) +#define __SANITIZE_MEMORY__ +/* + * Unlike other sanitizers, KMSAN still inserts code into functions marked with + * no_sanitize("kernel-memory"). Using disable_sanitizer_instrumentation + * provides the behavior consistent with other __no_sanitize_ attributes, + * guaranteeing that __no_sanitize_memory functions remain uninstrumented. + */ +#define __no_sanitize_memory __disable_sanitizer_instrumentation + +/* + * The __no_kmsan_checks attribute ensures that a function does not produce + * false positive reports by: + * - initializing all local variables and memory stores in this function; + * - skipping all shadow checks; + * - passing initialized arguments to this function's callees. + */ +#define __no_kmsan_checks __attribute__((no_sanitize("kernel-memory"))) +#else +#define __no_sanitize_memory +#define __no_kmsan_checks +#endif + /* * Support for __has_feature(coverage_sanitizer) was added in Clang 13 together * with no_sanitize("coverage"). Prior versions of Clang support coverage diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index 9b157b71036f1..f55a37efdb974 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -114,6 +114,12 @@ #define __SANITIZE_ADDRESS__ #endif +/* + * GCC does not support KMSAN. + */ +#define __no_sanitize_memory +#define __no_kmsan_checks + /* * Turn individual warnings and errors on and off locally, depending * on version. 
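For reference, a minimal usage sketch of the attributes and macro introduced above (the functions here are invented purely for illustration):

    /* Instrumented, but KMSAN suppresses reports in this function and treats
     * its outputs as initialized. */
    static u32 __no_kmsan_checks mix_bits(u32 a, u32 b)
    {
            return a ^ (b << 1);
    }

    /* Not instrumented at all; this is what noinstr code relies on. */
    static void __no_sanitize_memory quiet_helper(void)
    {
    }

    #ifdef __SANITIZE_MEMORY__
    /* Compiled only when this translation unit is instrumented by KMSAN,
     * unlike CONFIG_KMSAN, which is set for the whole kernel build. */
    #endif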
From patchwork Mon Sep 5 12:24:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966039 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 588A7ECAAD3 for ; Mon, 5 Sep 2022 12:25:21 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D82E68D006F; Mon, 5 Sep 2022 08:25:20 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D31D28D0050; Mon, 5 Sep 2022 08:25:20 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BFB7C8D006F; Mon, 5 Sep 2022 08:25:20 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id B01468D0050 for ; Mon, 5 Sep 2022 08:25:20 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 8EB8D1A0D96 for ; Mon, 5 Sep 2022 12:25:20 +0000 (UTC) X-FDA: 79877952000.04.CB2A3EC Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf26.hostedemail.com (Postfix) with ESMTP id 357CE14008D for ; Mon, 5 Sep 2022 12:25:20 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id z20-20020a05640235d400b0043e1e74a495so5773697edc.11 for ; Mon, 05 Sep 2022 05:25:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=eIK0S734s+WuMuzHGxUrnoQabUzsOPxTYvMm/ke78tI=; b=SLwXQbzPHQT2lx3CmDALMMsEHTfh2C5fRJjzD2NlQVdV1yZDtABiy9mOeHVVuGOF9M m+WfedhfS8OqwiHcSKlOk6gYgctS/yniai+qVNtBv5itHrKEx2UUKP7JuUqvWGoiUZ4p 7UKv1IjobR+tSAj44q8qK4JrQfYF0y6m7kSyZsm4AYQa+8vlkP8GyIWNlUOWl7r0oTxr pBGpVAhHvoR7nzE9+LR6V3BmAQ4A38y6A4NEtxM3wQYP7pY788MI3NEB1eRmB+H0zfeL mXGxy3ik2fCQk/7rDv6yt5ahyqOIzF0oPNuxvRUmCGjRKzXXFL9mefRo74jNp3st4QcE +LBw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=eIK0S734s+WuMuzHGxUrnoQabUzsOPxTYvMm/ke78tI=; b=zoxellpX2M9NMY1CgBe3aIhPSFhWfICVvbjmdUx6pORC5u1LqADsX/aObzxeE4vpxP 0kp3J18+vDvOksaBk+tBYe6l3VGhtRrMBrxEiu5Mtul0NZpXXc4kEJWYRbH9lomV9GoI D0fjkGYH2YZmSXfpZ9m19vir07Bx8RMon5rt9PxRHB5Yqnx7E63IMknZH0Gpva1BCYLP WLsAe5dhzyGPFcuBDd6kHkb6xW3EPXVmFK0qDoRUJKTv9b963kP23JqeCRQTHzp2NhzL /18DuaVOpHQURXV+eX5zdD8pNzcLGMCEEjsVqnWtYLOneOBfu6GXkzISo0kEWs8HVNNC Rm1w== X-Gm-Message-State: ACgBeo28rnzR2SrGokdLxdhssQbRqVKPB/uUBGiKKsd2C/NH9L6KVqPw fL3FEkvlGRgfKWIgIEdSGTW7qhOBgD0= X-Google-Smtp-Source: AA6agR7bXvSnVNQUJmYARonFyFhbVF2eDdfDU/CHIrufPEs62rsTK65KVDoWR72fO1/TGY+y+ld4j+iRz5E= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:2937:b0:44e:b578:6fdd with SMTP id ee55-20020a056402293700b0044eb5786fddmr722840edb.159.1662380719065; Mon, 05 Sep 2022 05:25:19 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:16 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-9-glider@google.com> 
Subject: [PATCH v6 08/44] kmsan: mark noinstr as __no_sanitize_memory From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=SLwXQbzP; spf=pass (imf26.hostedemail.com: domain of 3r-oVYwYKCPoinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3r-oVYwYKCPoinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380720; a=rsa-sha256; cv=none; b=cbZX8Ta2R8CsfPAYbfnNq1oyZHR4R/FnLA6VET9Mc3iDhM51rvHvmL2LGP6Td3Zf4b2Koa QmiVe6a087BuxdskuqKdtMnWncCxq5Kz+t07l7bnCHu+dUR9mXrsWkVk+qrDiQdmsuRxtM HAV6oFGVDY8SqPUysnaP3Qa4xHMILzk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380720; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=eIK0S734s+WuMuzHGxUrnoQabUzsOPxTYvMm/ke78tI=; b=JFVzlhfeXAEWk7AsaVVX9lrO4+5sZt208XvG9rfcYdynvBDgp16cyXIneYmAr2yHAnmrZG rswWxpjHUu4drTtSSJK//rF7g9EFXVaquVkSfsl+xVw1U4bkS1XeDaOjLsu1aRzyFASDCV 8JCOOYQ+HA2R4WRlGmAhtD18QunRCQ0= X-Rspam-User: Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=SLwXQbzP; spf=pass (imf26.hostedemail.com: domain of 3r-oVYwYKCPoinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3r-oVYwYKCPoinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 357CE14008D X-Stat-Signature: 1gsekipkozzxz6tzupsi9pk8d8bqzrr8 X-HE-Tag: 1662380720-253334 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: noinstr functions should never be instrumented, so make KMSAN skip them by applying the __no_sanitize_memory attribute. 
Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver --- v2: -- moved this patch earlier in the series per Mark Rutland's request Link: https://linux-review.googlesource.com/id/I3c9abe860b97b49bc0c8026918b17a50448dec0d --- include/linux/compiler_types.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index 4f2a819fd60a3..015207a6e2bf5 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -229,7 +229,8 @@ struct ftrace_likely_data { /* Section for code which can't be instrumented at all */ #define noinstr \ noinline notrace __attribute((__section__(".noinstr.text"))) \ - __no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage + __no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage \ + __no_sanitize_memory #endif /* __KERNEL__ */ From patchwork Mon Sep 5 12:24:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966040 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A11CAC54EE9 for ; Mon, 5 Sep 2022 12:25:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 36FFE8D0070; Mon, 5 Sep 2022 08:25:24 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 31F9E8D0050; Mon, 5 Sep 2022 08:25:24 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1C01D8D0070; Mon, 5 Sep 2022 08:25:24 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 0B25D8D0050 for ; Mon, 5 Sep 2022 08:25:24 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id D65921C6BE5 for ; Mon, 5 Sep 2022 12:25:23 +0000 (UTC) X-FDA: 79877952126.10.79950CF Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) by imf30.hostedemail.com (Postfix) with ESMTP id 92CD280084 for ; Mon, 5 Sep 2022 12:25:23 +0000 (UTC) Received: by mail-wm1-f74.google.com with SMTP id ay21-20020a05600c1e1500b003a6271a9718so5338866wmb.0 for ; Mon, 05 Sep 2022 05:25:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=PTsTfo/gJDV7iQzWmhln0w3yo3kBxDNSc9M3IZOTBoE=; b=d8oJ2o++JW9bXOqfouyBbl7WkyEV3VHUYF8d1z29wCxZ/PqYypXaFeFJ33XAooBYV2 6QnDZVdh2UZrOxf24TEY+bGaDzLoSOuz+Mt4ecgWbx2NculB+imyaiX0qfEphxqAWKD1 MHVx9LHlo5v7i4izd070T9QRcm38bjy4cTn++9ZxoWkJAy94o2rmHDnSmxn1cm1XXL0g GWA9ByBagZwIeOuTs+8qckPWsft9wpwkXGKmsbytmxjY70ngsRdz0sLz9vhk+xnrQMLC 4mcD/+64uPOb4iGYTejlWfkFKopWWvVqG7mTec9Li6rv7o2hJYEbpzlaF9yp4URO3JXm fw8A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=PTsTfo/gJDV7iQzWmhln0w3yo3kBxDNSc9M3IZOTBoE=; b=UgpOT3GYWUrdBu3t30/cpLkekCVSG5wcCiAUOQmd4tDaGpAfpy1gq3ULmvKzY8d26d //hslJBV9tDBD7BrCRtx7wKPNyd15FPvOFMWxpE80qTZPnxRPIUTln79CbIiCJisHXVI CoKZ04W/V6cPM2cT+CY8or0SJBEMZT1uVnXaBXV3MZAHKuwlZRLzAxBQUHTn9eDO2CeF 
lZsd1SDojKZqNtNUIEhVc9ii58q2outZe5jINMP1MtmPCRYBlENn5z3osORirw73jqIa 30oNNlLgA7DVmaFH43su98sQjxOjCMBqKrGGjOr2C6G/ykWcdi2fj+PQ1AApXeQ9Mu9m JC/Q== X-Gm-Message-State: ACgBeo3wmPYDT0ziZohYS5oK5agUMWkgAGQyxo0erFvzuy4kOWiFRLKa 0zMg2JFSVw8e/hBqIE/nEiOq+vzsnPE= X-Google-Smtp-Source: AA6agR4AUwxgeiq6J6CG95MolO6yz3zXmWIf61Vrp856Wu6spDXPfrQJz5uys7cjFGIfk6d+Qot0Uv3+LWE= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:adf:e74d:0:b0:226:d514:8c29 with SMTP id c13-20020adfe74d000000b00226d5148c29mr22189326wrn.664.1662380722163; Mon, 05 Sep 2022 05:25:22 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:17 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-10-glider@google.com> Subject: [PATCH v6 09/44] x86: kmsan: pgtable: reduce vmalloc space From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=d8oJ2o++; spf=pass (imf30.hostedemail.com: domain of 3suoVYwYKCP0lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.128.74 as permitted sender) smtp.mailfrom=3suoVYwYKCP0lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380723; a=rsa-sha256; cv=none; b=JRVUQfL1nTp0hzmlOekCTMtRcsPzcdHV7s676Q55AbPyj1tBoDCb4Pj0YK72drVFnrszkA DmQtx8wqcMvHeeMW3ykWCLRvpAuHMaqiIgHoFdZ0bQQzvpBbETS4HW806hE18P2RIOD9Cg EIehrH4hUpzx8r9wfaVwn4CqLHVylmc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380723; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=PTsTfo/gJDV7iQzWmhln0w3yo3kBxDNSc9M3IZOTBoE=; b=JinxDlQHiD51qnRUh+A2I21Hsd1YnYfz74AyyBgANhjWjVrCgk1QtN1gysQaQO9g1emYdt lg6cFTbe+TzirjQeVSF7XzMjT2MsGqHfvnQp2szmMr8rTAUkmvcbx5ByZ05pzYpH6awjvl qLEwOIQvMr6OCRmdzlP10/mSxNkaJB4= X-Rspamd-Server: rspam02 X-Rspam-User: Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=d8oJ2o++; spf=pass (imf30.hostedemail.com: domain of 3suoVYwYKCP0lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.128.74 as permitted sender) smtp.mailfrom=3suoVYwYKCP0lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: 9ghhwq58a47i5o7rysejbg83crnnu3mi X-Rspamd-Queue-Id: 92CD280084 X-HE-Tag: 1662380723-78675 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, 
version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is going to use 3/4 of existing vmalloc space to hold the metadata, therefore we lower VMALLOC_END to make sure vmalloc() doesn't allocate past the first 1/4. Signed-off-by: Alexander Potapenko --- v2: -- added x86: to the title v5: -- add comment for VMEMORY_END Link: https://linux-review.googlesource.com/id/I9d8b7f0a88a639f1263bc693cbd5c136626f7efd --- arch/x86/include/asm/pgtable_64_types.h | 47 ++++++++++++++++++++++++- arch/x86/mm/init_64.c | 2 +- 2 files changed, 47 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 70e360a2e5fb7..04f36063ad546 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -139,7 +139,52 @@ extern unsigned int ptrs_per_p4d; # define VMEMMAP_START __VMEMMAP_BASE_L4 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */ -#define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) +/* + * End of the region for which vmalloc page tables are pre-allocated. + * For non-KMSAN builds, this is the same as VMALLOC_END. + * For KMSAN builds, VMALLOC_START..VMEMORY_END is 4 times bigger than + * VMALLOC_START..VMALLOC_END (see below). + */ +#define VMEMORY_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) + +#ifndef CONFIG_KMSAN +#define VMALLOC_END VMEMORY_END +#else +/* + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4 + * are used to keep the metadata for virtual pages. The memory formerly + * belonging to vmalloc area is now laid out as follows: + * + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to + * VMALLOC_END+KMSAN_VMALLOC_SHADOW_OFFSET - vmalloc area shadow + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to + * VMALLOC_END+KMSAN_VMALLOC_ORIGIN_OFFSET - vmalloc area origins + * 4th quarter: KMSAN_MODULES_SHADOW_START to KMSAN_MODULES_ORIGIN_START + * - shadow for modules, + * KMSAN_MODULES_ORIGIN_START to + * KMSAN_MODULES_ORIGIN_START + MODULES_LEN - origins for modules. + */ +#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2) +#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1) + +/* + * vmalloc metadata addresses are calculated by adding shadow/origin offsets + * to vmalloc address. + */ +#define KMSAN_VMALLOC_SHADOW_OFFSET VMALLOC_QUARTER_SIZE +#define KMSAN_VMALLOC_ORIGIN_OFFSET (VMALLOC_QUARTER_SIZE << 1) + +#define KMSAN_VMALLOC_SHADOW_START (VMALLOC_START + KMSAN_VMALLOC_SHADOW_OFFSET) +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_START + KMSAN_VMALLOC_ORIGIN_OFFSET) + +/* + * The shadow/origin for modules are placed one by one in the last 1/4 of + * vmalloc space. 
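+ * For example, with the default 4-level paging layout (VMALLOC_SIZE_TB == 32)
+ * each quarter spans 8 TB, so the new VMALLOC_END is VMALLOC_START + 8 TB - 1
+ * and the shadow/origin offsets are 8 TB and 16 TB respectively (illustrative
+ * numbers only; with 5-level paging the vmalloc area is much larger).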
+ */ +#define KMSAN_MODULES_SHADOW_START (VMALLOC_END + KMSAN_VMALLOC_ORIGIN_OFFSET + 1) +#define KMSAN_MODULES_ORIGIN_START (KMSAN_MODULES_SHADOW_START + MODULES_LEN) +#endif /* CONFIG_KMSAN */ #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) /* The module sections ends with the start of the fixmap */ diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 0fe690ebc269b..39b6bfcaa0ed4 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1287,7 +1287,7 @@ static void __init preallocate_vmalloc_pages(void) unsigned long addr; const char *lvl; - for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { + for (addr = VMALLOC_START; addr <= VMEMORY_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { pgd_t *pgd = pgd_offset_k(addr); p4d_t *p4d; pud_t *pud; From patchwork Mon Sep 5 12:24:18 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966041 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8F14ECAAD3 for ; Mon, 5 Sep 2022 12:25:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6A49F8D0071; Mon, 5 Sep 2022 08:25:26 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6546F8D0050; Mon, 5 Sep 2022 08:25:26 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4CF0A8D0071; Mon, 5 Sep 2022 08:25:26 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 3CAAF8D0050 for ; Mon, 5 Sep 2022 08:25:26 -0400 (EDT) Received: from smtpin21.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 16D7C81427 for ; Mon, 5 Sep 2022 12:25:26 +0000 (UTC) X-FDA: 79877952252.21.85B5A31 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf17.hostedemail.com (Postfix) with ESMTP id B232540069 for ; Mon, 5 Sep 2022 12:25:25 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id h6-20020aa7de06000000b004483647900fso5813668edv.21 for ; Mon, 05 Sep 2022 05:25:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=vmKBMF2McNFVJqtG6TmpI3MAJxV0RMGeKyj0ZsE0Fyg=; b=O96bjuI/kcNtVyq3944Z7ueCIaa9yuEsYJRsRqgSlHKcEXxmakhKDnaYm8bTJp/2Ie Y8UpnSRc1QETrs4Y8CaN2D+hdalbChXjb/iF/mgeJ2k74SyqLXaXBPgwOkNVKFXuAPnY ObphBlLVgrXRSkLKMQJGEtkwN+0KhcNeyBefvXOuILKIWJt/BEXyYI12bQA0Tlj44WO5 jxvyzRsqVQanRBcQxBYdjMDiO9F+YgLL1c8yza+3/ejeeXcU2k1NELaQsYeqOb0rYH3V N0CpyqXMVX3sGSh37mJH/qGJ8lKdXml50RvZ8I9e4hWM3iSgRSretp8awq1gK/fuIu3m Y1Qg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=vmKBMF2McNFVJqtG6TmpI3MAJxV0RMGeKyj0ZsE0Fyg=; b=2LONBWIf5xOyQvMcg4frQljU41d8YfVryzGU2w/4wMnglXJzaFy8KpGshrNTuNfsEf C2vniLLm4H96JbwrzTy8Y+67Ywg3Mo4SdFcR8DKoRsdrU79VjqLAwdQvm/94NGAPvHcH vsmg/1512t0r7xH5RwLA4llvqEjTfMAoYvpd//4fCGSgA9iUOiYTilJpzw6RTPh2JHpW pw2+6CBBJE0gjpvGBnIWKzz7a/ujfliJPMFKGzxABuP1R9POL5bpEo60y9UHS6ouNoO+ 
QAJNzM2K5PjcVYPoIvpynP+9Mr+7mJ12QXPAqlY81MFlMAE8eIvKZ2M++EnzPf15U2GM eDKA== X-Gm-Message-State: ACgBeo27Po6my+UHErD4ePswFZtwkXDsXbsxuAVqTTsTSspslpxhYvxs P+b9zkTZyTFIje8MyAmU9CuA1QrXzvw= X-Google-Smtp-Source: AA6agR66RFT7ULInIZdaLxzEB+W3njrb9ZsrwOKlCCrjmsBFRCEDn3xMmeOMxizX3qE5C/AOVlfkrTghefs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:b13:b0:73f:d86a:6e3c with SMTP id h19-20020a1709070b1300b0073fd86a6e3cmr31127618ejl.132.1662380725053; Mon, 05 Sep 2022 05:25:25 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:18 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-11-glider@google.com> Subject: [PATCH v6 10/44] libnvdimm/pfn_dev: increase MAX_STRUCT_PAGE_SIZE From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="O96bjuI/"; spf=pass (imf17.hostedemail.com: domain of 3teoVYwYKCAIinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3teoVYwYKCAIinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380725; a=rsa-sha256; cv=none; b=IIavuyhLdp39kAkqpzTtOG8pw7D1Kq/8pVrj+ApirCWQSELm/N7ap/MY2KW8DbNcW8tEpQ ATQLG+F9iI6F5Vd1wk5idaLToSHnlsnVRlKMIXLwMFtp9utHVerAdTypHiN7C6mGj2Mmbf 0o53qa5O9kdxaR0ATstF4G/KQHYNsTw= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380725; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=vmKBMF2McNFVJqtG6TmpI3MAJxV0RMGeKyj0ZsE0Fyg=; b=pNPOemWJRAYKA4q1W2Spp04WRkAkbB3Q/qrbLrnR6rG0cuqPbpHeOlyHtwYbFWfdgSNBo8 dT0NAV/g8VAPmoAQxmhyZy78CsycMCLzRZ6Qt2ospNkskg/nFrWhoLLoXu3Wl3W+2IrGmv G2Tlz+HGyh1Nm6oi9Q/c+9R0elg/zf0= X-Rspamd-Server: rspam02 X-Rspam-User: Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="O96bjuI/"; spf=pass (imf17.hostedemail.com: domain of 3teoVYwYKCAIinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3teoVYwYKCAIinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: bhp53scrsaqbe9izampdhqq3kud1ge4n X-Rspamd-Queue-Id: B232540069 X-HE-Tag: 1662380725-542170 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk 
X-Loop: owner-majordomo@kvack.org List-ID: KMSAN adds extra metadata fields to struct page, so it does not fit into 64 bytes anymore. This change leads to increased memory consumption of the nvdimm driver, regardless of whether the kernel is built with KMSAN or not. Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver --- Link: https://linux-review.googlesource.com/id/I353796acc6a850bfd7bb342aa1b63e616fc614f1 --- drivers/nvdimm/nd.h | 2 +- drivers/nvdimm/pfn_devs.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h index ec5219680092d..85ca5b4da3cf3 100644 --- a/drivers/nvdimm/nd.h +++ b/drivers/nvdimm/nd.h @@ -652,7 +652,7 @@ void devm_namespace_disable(struct device *dev, struct nd_namespace_common *ndns); #if IS_ENABLED(CONFIG_ND_CLAIM) /* max struct page size independent of kernel config */ -#define MAX_STRUCT_PAGE_SIZE 64 +#define MAX_STRUCT_PAGE_SIZE 128 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap); #else static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c index 0e92ab4b32833..61af072ac98f9 100644 --- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -787,7 +787,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) * when populating the vmemmap. This *should* be equal to * PMD_SIZE for most architectures. * - * Also make sure size of struct page is less than 64. We + * Also make sure size of struct page is less than 128. We * want to make sure we use large enough size here so that * we don't have a dynamic reserve space depending on * struct page size. But we also want to make sure we notice From patchwork Mon Sep 5 12:24:19 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966042 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55E12ECAAD5 for ; Mon, 5 Sep 2022 12:25:30 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E6C708D0072; Mon, 5 Sep 2022 08:25:29 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E1C248D0050; Mon, 5 Sep 2022 08:25:29 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C20938D0072; Mon, 5 Sep 2022 08:25:29 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id ACD0F8D0050 for ; Mon, 5 Sep 2022 08:25:29 -0400 (EDT) Received: from smtpin07.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 72816AB4B4 for ; Mon, 5 Sep 2022 12:25:29 +0000 (UTC) X-FDA: 79877952378.07.F4BF805 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf18.hostedemail.com (Postfix) with ESMTP id 20E961C0062 for ; Mon, 5 Sep 2022 12:25:28 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id w17-20020a056402269100b0043da2189b71so5640325edd.6 for ; Mon, 05 Sep 2022 05:25:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:from:to:cc:subject:date; bh=c6MzE/ujbsY192HHriNMQv7C6LIHzG9IiIO34BiK6aQ=; 
b=Dw5qvjweIe8ToYJnLHB4k8BD8q6RNcY+8VXk4lGRifgwqoOcA2VSe2Q3h/lAWdP/kY UqAlbZAmxJaEoLgKn2vu0XazZUYqB0MDhOl84ihTse7Jv8Wv4Ud/fcBw7Fhrt3R+aKCe fK/X932tfpu0/1BGD74KwbILSVpG5pSmLOD96RTQcGdcl9yA0fXGsmO3jgnU0hRvypgD Bga51GBT18q2jRx943vz+yKm63PrW/AyShajAA11cviyL5kpnZ3Ddmj48BXtoNmb2560 tPzdp5LZj6zcKHrl1gLN3/a8ejpwcvS+H/7tTcsoNCyxOBE04GTbinmtXTijwFn9OxYO 00DA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject :date; bh=c6MzE/ujbsY192HHriNMQv7C6LIHzG9IiIO34BiK6aQ=; b=Kzdig0J63sPpxTBJeDBDC0AlbTecE/30AjU74HxHv+PqEZ0b7cHaLidiiU3XaIOvuf fcpUChHYWuLazXcrNXuWQVP38yWy0fBjKqko5xaFGbsowY5KLk+6klb1OpOccNBaqeNo lGWWgTncnw8ufMIYNmuzKpAabmjgi52z7/auoE/issdK0tvtZzWqQwFUOe52IRU/D8lW zic7dn0UjB9I1UtyNkBBQ4e8IKnN9Oz6G7ufUU9jm9TrAIckcfnrLsJK8Ay/p5yP/NGk lp5gmsx0CHnEHW1aGfgEMMB8MhWV/763TWSDWIUT04B6Xpu4TEiJ4gZwG2OoVAK039Q7 C2+w== X-Gm-Message-State: ACgBeo3qOBEdr2edUCpt/gx0ck2kAZo5w98k4BzTNAAnf6jTQRAV+PmJ kvWpICywLuHWNI+pLwtuIHmsK56Xmx4= X-Google-Smtp-Source: AA6agR7lL1kreNEu8pq7TYg3ZGYNHiV3svqpHpWXNkGrWgiobR8netLEzjeeXnPUb2OEJoMb+sF6xWhNtlI= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:28cd:b0:448:3856:41a3 with SMTP id ef13-20020a05640228cd00b00448385641a3mr33677526edb.6.1662380727754; Mon, 05 Sep 2022 05:25:27 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:19 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-12-glider@google.com> Subject: [PATCH v6 11/44] kmsan: add KMSAN runtime core From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Dw5qvjwe; spf=pass (imf18.hostedemail.com: domain of 3t-oVYwYKCAQkpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3t-oVYwYKCAQkpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380729; a=rsa-sha256; cv=none; b=ibyGe73inChLNy8TTl/UJLbdAxVLQL87RTrvyuYn97AnaevqHdU6Ub/WABMr0RUdXvd1pd /+1yKUY5MnEHBoVbujMBIAt9ouy/Fknv/TcsiMBtiKQkyuMEu/jjagujnNEraLzOjkLkAN j+iEUbgDRfLchQ2pJENb4yWMjJTn/SE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380729; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=c6MzE/ujbsY192HHriNMQv7C6LIHzG9IiIO34BiK6aQ=; b=LsV+gePrhHKpI2BIMUJ6+9jIj21PWbwETeIgBSH9fpOqLPV1TLtlp51U3f2wo+OarVpa// Tu+SudJlr2HPKqwsjq0aIFVpkbdYaSOZsBJmmBLM/IryybkaFM22fLNV2/dGusfDX8n082 lwjwD7BKa3fWcU+o1ZHA2XQ5h0sOXE4= Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Dw5qvjwe; spf=pass (imf18.hostedemail.com: domain of 3t-oVYwYKCAQkpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3t-oVYwYKCAQkpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam06 X-Stat-Signature: 9j9sg9sm7ats4r3r657i75ma38t3rhqb X-Rspam-User: X-Rspamd-Queue-Id: 20E961C0062 X-HE-Tag: 1662380728-265902 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: For each memory location KernelMemorySanitizer maintains two types of metadata: 1. The so-called shadow of that location - а byte:byte mapping describing whether or not individual bits of memory are initialized (shadow is 0) or not (shadow is 1). 2. The origins of that location - а 4-byte:4-byte mapping containing 4-byte IDs of the stack traces where uninitialized values were created. Each struct page now contains pointers to two struct pages holding KMSAN metadata (shadow and origins) for the original struct page. Utility routines in mm/kmsan/core.c and mm/kmsan/shadow.c handle the metadata creation, addressing, copying and checking. mm/kmsan/report.c performs error reporting in the cases an uninitialized value is used in a way that leads to undefined behavior. KMSAN compiler instrumentation is responsible for tracking the metadata along with the kernel memory. mm/kmsan/instrumentation.c provides the implementation for instrumentation hooks that are called from files compiled with -fsanitize=kernel-memory. To aid parameter passing (also done at instrumentation level), each task_struct now contains a struct kmsan_task_state used to track the metadata of function parameters and return values for that task. 
Finally, this patch provides CONFIG_KMSAN that enables KMSAN, and declares CFLAGS_KMSAN, which are applied to files compiled with KMSAN. The KMSAN_SANITIZE:=n Makefile directive can be used to completely disable KMSAN instrumentation for certain files. Similarly, KMSAN_ENABLE_CHECKS:=n disables KMSAN checks and makes newly created stack memory initialized. Users can also use functions from include/linux/kmsan-checks.h to mark certain memory regions as uninitialized or initialized (this is called "poisoning" and "unpoisoning") or check that a particular region is initialized. Signed-off-by: Alexander Potapenko Acked-by: Marco Elver --- v2: -- as requested by Greg K-H, moved hooks for different subsystems to respective patches, rewrote the patch description; -- addressed comments by Dmitry Vyukov; -- added a note about KMSAN being not intended for production use. -- fix case of unaligned dst in kmsan_internal_memmove_metadata() v3: -- print build IDs in reports where applicable -- drop redundant filter_irq_stacks(), unpoison the local passed to __stack_depot_save() -- remove a stray BUG() v4: (mostly fixes suggested by Marco Elver) -- add missing SPDX headers -- move CC_IS_CLANG && CLANG_VERSION under HAVE_KMSAN_COMPILER -- replace occurrences of |var| with @var -- reflow KMSAN_WARN_ON(), fix comments -- remove x86-specific code from shadow.c to improve portability -- convert kmsan_report_lock to raw spinlock -- add enter_runtime/exit_runtime around kmsan_internal_memmove_metadata() -- remove unnecessary include from kmsan.h (reported by ) -- introduce CONFIG_KMSAN_CHECK_PARAM_RETVAL (on by default), which maps to -fsanitize-memory-param-retval and makes KMSAN eagerly check values passed as function parameters and returned from functions. v5: -- do not return dummy shadow from within runtime -- preserve shadow when calling memcpy()/memmove()/memset() -- reword some code comments -- reapply clang-format, switch to modern style for-loops -- move kmsan_internal_is_vmalloc_addr() and kmsan_internal_is_module_addr() to the header -- refactor lib/Kconfig.kmsan as suggested by Marco Elver -- remove forward declaration of `struct page` from this patch v6: -- move definitions of `struct kmsan_context_state` and `struct kmsan_ctx` to to avoid circular header dependencies that manifested in -mm tree. 
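For reference, the kmsan-checks.h annotations described above would typically be used along these lines (a hedged sketch: the foo_* driver, its command structure and the "device fills the buffer behind the compiler's back" scenario are hypothetical; only kmsan_unpoison_memory() and kmsan_check_memory() come from this patch):

#include <linux/kmsan-checks.h>
#include <linux/types.h>

struct foo_cmd {
	u32 opcode;
	u32 len;
	u8 payload[64];
};

/*
 * The device wrote @len bytes into @buf via DMA; the compiler
 * instrumentation cannot observe that write, so tell KMSAN the
 * data is now initialized.
 */
static void foo_receive(void *buf, size_t len)
{
	kmsan_unpoison_memory(buf, len);
}

/*
 * Before handing a command to the device, report (with origins) any
 * byte of it that is still uninitialized.
 */
static void foo_send(const struct foo_cmd *cmd)
{
	kmsan_check_memory(cmd, sizeof(*cmd));
}

Files that must not be instrumented at all can instead set KMSAN_SANITIZE := n (or KMSAN_ENABLE_CHECKS := n to keep instrumentation but disable the checks) in their Makefile, as mm/kmsan/Makefile itself does for the runtime below.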
Link: https://linux-review.googlesource.com/id/I9b71bfe3425466c97159f9de0062e5e8e4fec866 --- Makefile | 1 + include/linux/kmsan-checks.h | 64 +++++ include/linux/kmsan_types.h | 35 +++ include/linux/mm_types.h | 12 + include/linux/sched.h | 5 + lib/Kconfig.debug | 1 + lib/Kconfig.kmsan | 50 ++++ mm/Makefile | 1 + mm/kmsan/Makefile | 23 ++ mm/kmsan/core.c | 448 +++++++++++++++++++++++++++++++++++ mm/kmsan/hooks.c | 66 ++++++ mm/kmsan/instrumentation.c | 307 ++++++++++++++++++++++++ mm/kmsan/kmsan.h | 203 ++++++++++++++++ mm/kmsan/report.c | 211 +++++++++++++++++ mm/kmsan/shadow.c | 147 ++++++++++++ scripts/Makefile.kmsan | 8 + scripts/Makefile.lib | 9 + 17 files changed, 1591 insertions(+) create mode 100644 include/linux/kmsan-checks.h create mode 100644 include/linux/kmsan_types.h create mode 100644 lib/Kconfig.kmsan create mode 100644 mm/kmsan/Makefile create mode 100644 mm/kmsan/core.c create mode 100644 mm/kmsan/hooks.c create mode 100644 mm/kmsan/instrumentation.c create mode 100644 mm/kmsan/kmsan.h create mode 100644 mm/kmsan/report.c create mode 100644 mm/kmsan/shadow.c create mode 100644 scripts/Makefile.kmsan diff --git a/Makefile b/Makefile index a4f71076cacb8..14b4284a998a7 100644 --- a/Makefile +++ b/Makefile @@ -1015,6 +1015,7 @@ include-y := scripts/Makefile.extrawarn include-$(CONFIG_DEBUG_INFO) += scripts/Makefile.debug include-$(CONFIG_KASAN) += scripts/Makefile.kasan include-$(CONFIG_KCSAN) += scripts/Makefile.kcsan +include-$(CONFIG_KMSAN) += scripts/Makefile.kmsan include-$(CONFIG_UBSAN) += scripts/Makefile.ubsan include-$(CONFIG_KCOV) += scripts/Makefile.kcov include-$(CONFIG_RANDSTRUCT) += scripts/Makefile.randstruct diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h new file mode 100644 index 0000000000000..a6522a0c28df9 --- /dev/null +++ b/include/linux/kmsan-checks.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN checks to be used for one-off annotations in subsystems. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef _LINUX_KMSAN_CHECKS_H +#define _LINUX_KMSAN_CHECKS_H + +#include + +#ifdef CONFIG_KMSAN + +/** + * kmsan_poison_memory() - Mark the memory range as uninitialized. + * @address: address to start with. + * @size: size of buffer to poison. + * @flags: GFP flags for allocations done by this function. + * + * Until other data is written to this range, KMSAN will treat it as + * uninitialized. Error reports for this memory will reference the call site of + * kmsan_poison_memory() as origin. + */ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags); + +/** + * kmsan_unpoison_memory() - Mark the memory range as initialized. + * @address: address to start with. + * @size: size of buffer to unpoison. + * + * Until other data is written to this range, KMSAN will treat it as + * initialized. + */ +void kmsan_unpoison_memory(const void *address, size_t size); + +/** + * kmsan_check_memory() - Check the memory range for being initialized. + * @address: address to start with. + * @size: size of buffer to check. + * + * If any piece of the given range is marked as uninitialized, KMSAN will report + * an error. 
+ */ +void kmsan_check_memory(const void *address, size_t size); + +#else + +static inline void kmsan_poison_memory(const void *address, size_t size, + gfp_t flags) +{ +} +static inline void kmsan_unpoison_memory(const void *address, size_t size) +{ +} +static inline void kmsan_check_memory(const void *address, size_t size) +{ +} + +#endif + +#endif /* _LINUX_KMSAN_CHECKS_H */ diff --git a/include/linux/kmsan_types.h b/include/linux/kmsan_types.h new file mode 100644 index 0000000000000..8bfa6c98176d4 --- /dev/null +++ b/include/linux/kmsan_types.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * A minimal header declaring types added by KMSAN to existing kernel structs. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ +#ifndef _LINUX_KMSAN_TYPES_H +#define _LINUX_KMSAN_TYPES_H + +/* These constants are defined in the MSan LLVM instrumentation pass. */ +#define KMSAN_RETVAL_SIZE 800 +#define KMSAN_PARAM_SIZE 800 + +struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[KMSAN_RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + char param_origin_tls[KMSAN_PARAM_SIZE]; + u32 retval_origin_tls; +}; + +#undef KMSAN_PARAM_SIZE +#undef KMSAN_RETVAL_SIZE + +struct kmsan_ctx { + struct kmsan_context_state cstate; + int kmsan_in_runtime; + bool allow_reporting; +}; + +#endif /* _LINUX_KMSAN_TYPES_H */ diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index cf97f3884fda2..8be4f34cb8caa 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -223,6 +223,18 @@ struct page { not kmapped, ie. highmem) */ #endif /* WANT_PAGE_VIRTUAL */ +#ifdef CONFIG_KMSAN + /* + * KMSAN metadata for this page: + * - shadow page: every bit indicates whether the corresponding + * bit of the original page is initialized (0) or not (1); + * - origin page: every 4 bytes contain an id of the stack trace + * where the uninitialized value was created. + */ + struct page *kmsan_shadow; + struct page *kmsan_origin; +#endif + #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS int _last_cpupid; #endif diff --git a/include/linux/sched.h b/include/linux/sched.h index e7b2f8a5c711c..b6de1045f044e 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -1355,6 +1356,10 @@ struct task_struct { #endif #endif +#ifdef CONFIG_KMSAN + struct kmsan_ctx kmsan_ctx; +#endif + #if IS_ENABLED(CONFIG_KUNIT) struct kunit *kunit_test; #endif diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index bcbe60d6c80c1..ff098746c16be 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -971,6 +971,7 @@ config DEBUG_STACKOVERFLOW source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" +source "lib/Kconfig.kmsan" endmenu # "Memory Debugging" diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan new file mode 100644 index 0000000000000..5b19dbd34d76e --- /dev/null +++ b/lib/Kconfig.kmsan @@ -0,0 +1,50 @@ +# SPDX-License-Identifier: GPL-2.0-only +config HAVE_ARCH_KMSAN + bool + +config HAVE_KMSAN_COMPILER + # Clang versions <14.0.0 also support -fsanitize=kernel-memory, but not + # all the features necessary to build the kernel with KMSAN. 
+ depends on CC_IS_CLANG && CLANG_VERSION >= 140000 + def_bool $(cc-option,-fsanitize=kernel-memory -mllvm -msan-disable-checks=1) + +config KMSAN + bool "KMSAN: detector of uninitialized values use" + depends on HAVE_ARCH_KMSAN && HAVE_KMSAN_COMPILER + depends on SLUB && DEBUG_KERNEL && !KASAN && !KCSAN + select STACKDEPOT + select STACKDEPOT_ALWAYS_INIT + help + KernelMemorySanitizer (KMSAN) is a dynamic detector of uses of + uninitialized values in the kernel. It is based on compiler + instrumentation provided by Clang and thus requires Clang to build. + + An important note is that KMSAN is not intended for production use, + because it drastically increases kernel memory footprint and slows + the whole system down. + + See for more details. + +if KMSAN + +config HAVE_KMSAN_PARAM_RETVAL + # -fsanitize-memory-param-retval is supported only by Clang >= 14. + depends on HAVE_KMSAN_COMPILER + def_bool $(cc-option,-fsanitize=kernel-memory -fsanitize-memory-param-retval) + +config KMSAN_CHECK_PARAM_RETVAL + bool "Check for uninitialized values passed to and returned from functions" + default y + depends on HAVE_KMSAN_PARAM_RETVAL + help + If the compiler supports -fsanitize-memory-param-retval, KMSAN will + eagerly check every function parameter passed by value and every + function return value. + + Disabling KMSAN_CHECK_PARAM_RETVAL will result in tracking shadow for + function parameters and return values across function borders. This + is a more relaxed mode, but it generates more instrumentation code and + may potentially report errors in corner cases when non-instrumented + functions call instrumented ones. + +endif diff --git a/mm/Makefile b/mm/Makefile index 9a564f8364035..cce88e5b6d76f 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -89,6 +89,7 @@ obj-$(CONFIG_SLAB) += slab.o obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_KFENCE) += kfence/ +obj-$(CONFIG_KMSAN) += kmsan/ obj-$(CONFIG_FAILSLAB) += failslab.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile new file mode 100644 index 0000000000000..550ad8625e4f9 --- /dev/null +++ b/mm/kmsan/Makefile @@ -0,0 +1,23 @@ +# SPDX-License-Identifier: GPL-2.0 +# +# Makefile for KernelMemorySanitizer (KMSAN). +# +# +obj-y := core.o instrumentation.o hooks.o report.o shadow.o + +KMSAN_SANITIZE := n +KCOV_INSTRUMENT := n +UBSAN_SANITIZE := n + +# Disable instrumentation of KMSAN runtime with other tools. +CC_FLAGS_KMSAN_RUNTIME := -fno-stack-protector +CC_FLAGS_KMSAN_RUNTIME += $(call cc-option,-fno-conserve-stack) +CC_FLAGS_KMSAN_RUNTIME += -DDISABLE_BRANCH_PROFILING + +CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) + +CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c new file mode 100644 index 0000000000000..009ac577bf3fc --- /dev/null +++ b/mm/kmsan/core.c @@ -0,0 +1,448 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN runtime library. 
+ * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +/* + * Avoid creating too long origin chains, these are unlikely to participate in + * real reports. + */ +#define MAX_CHAIN_DEPTH 7 +#define NUM_SKIPPED_TO_WARN 10000 + +bool kmsan_enabled __read_mostly; + +/* + * Per-CPU KMSAN context to be used in interrupts, where current->kmsan is + * unavaliable. + */ +DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags) +{ + u32 extra_bits = + kmsan_extra_bits(/*depth*/ 0, poison_flags & KMSAN_POISON_FREE); + bool checked = poison_flags & KMSAN_POISON_CHECK; + depot_stack_handle_t handle; + + handle = kmsan_save_stack_with_flags(flags, extra_bits); + kmsan_internal_set_shadow_origin(address, size, -1, handle, checked); +} + +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked) +{ + kmsan_internal_set_shadow_origin(address, size, 0, 0, checked); +} + +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra) +{ + unsigned long entries[KMSAN_STACK_DEPTH]; + unsigned int nr_entries; + + nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0); + + /* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */ + flags &= ~__GFP_DIRECT_RECLAIM; + + return __stack_depot_save(entries, nr_entries, extra, flags, true); +} + +/* Copy the metadata following the memmove() behavior. */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n) +{ + depot_stack_handle_t old_origin = 0, new_origin = 0; + int src_slots, dst_slots, i, iter, step, skip_bits; + depot_stack_handle_t *origin_src, *origin_dst; + void *shadow_src, *shadow_dst; + u32 *align_shadow_src, shadow; + bool backwards; + + shadow_dst = kmsan_get_metadata(dst, KMSAN_META_SHADOW); + if (!shadow_dst) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(dst, n)); + + shadow_src = kmsan_get_metadata(src, KMSAN_META_SHADOW); + if (!shadow_src) { + /* + * @src is untracked: zero out destination shadow, ignore the + * origins, we're done. + */ + __memset(shadow_dst, 0, n); + return; + } + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(src, n)); + + __memmove(shadow_dst, shadow_src, n); + + origin_dst = kmsan_get_metadata(dst, KMSAN_META_ORIGIN); + origin_src = kmsan_get_metadata(src, KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin_dst || !origin_src); + src_slots = (ALIGN((u64)src + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)src, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + dst_slots = (ALIGN((u64)dst + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)dst, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + KMSAN_WARN_ON((src_slots < 1) || (dst_slots < 1)); + KMSAN_WARN_ON((src_slots - dst_slots > 1) || + (dst_slots - src_slots < -1)); + + backwards = dst > src; + i = backwards ? min(src_slots, dst_slots) - 1 : 0; + iter = backwards ? -1 : 1; + + align_shadow_src = + (u32 *)ALIGN_DOWN((u64)shadow_src, KMSAN_ORIGIN_SIZE); + for (step = 0; step < min(src_slots, dst_slots); step++, i += iter) { + KMSAN_WARN_ON(i < 0); + shadow = align_shadow_src[i]; + if (i == 0) { + /* + * If @src isn't aligned on KMSAN_ORIGIN_SIZE, don't + * look at the first @src % KMSAN_ORIGIN_SIZE bytes + * of the first shadow slot. 
+ */ + skip_bits = ((u64)src % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + } + if (i == src_slots - 1) { + /* + * If @src + n isn't aligned on + * KMSAN_ORIGIN_SIZE, don't look at the last + * (@src + n) % KMSAN_ORIGIN_SIZE bytes of the + * last shadow slot. + */ + skip_bits = (((u64)src + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + } + /* + * Overwrite the origin only if the corresponding + * shadow is nonempty. + */ + if (origin_src[i] && (origin_src[i] != old_origin) && shadow) { + old_origin = origin_src[i]; + new_origin = kmsan_internal_chain_origin(old_origin); + /* + * kmsan_internal_chain_origin() may return + * NULL, but we don't want to lose the previous + * origin value. + */ + if (!new_origin) + new_origin = old_origin; + } + if (shadow) + origin_dst[i] = new_origin; + else + origin_dst[i] = 0; + } + /* + * If dst_slots is greater than src_slots (i.e. + * dst_slots == src_slots + 1), there is an extra origin slot at the + * beginning or end of the destination buffer, for which we take the + * origin from the previous slot. + * This is only done if the part of the source shadow corresponding to + * slot is non-zero. + * + * E.g. if we copy 8 aligned bytes that are marked as uninitialized + * and have origins o111 and o222, to an unaligned buffer with offset 1, + * these two origins are copied to three origin slots, so one of then + * needs to be duplicated, depending on the copy direction (@backwards) + * + * src shadow: |uuuu|uuuu|....| + * src origin: |o111|o222|....| + * + * backwards = 0: + * dst shadow: |.uuu|uuuu|u...| + * dst origin: |....|o111|o222| - fill the empty slot with o111 + * backwards = 1: + * dst shadow: |.uuu|uuuu|u...| + * dst origin: |o111|o222|....| - fill the empty slot with o222 + */ + if (src_slots < dst_slots) { + if (backwards) { + shadow = align_shadow_src[src_slots - 1]; + skip_bits = (((u64)dst + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + if (shadow) + /* src_slots > 0, therefore dst_slots is at least 2 */ + origin_dst[dst_slots - 1] = + origin_dst[dst_slots - 2]; + } else { + shadow = align_shadow_src[0]; + skip_bits = ((u64)dst % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + if (shadow) + origin_dst[0] = origin_dst[1]; + } + } +} + +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id) +{ + unsigned long entries[3]; + u32 extra_bits; + int depth; + bool uaf; + + if (!id) + return id; + /* + * Make sure we have enough spare bits in @id to hold the UAF bit and + * the chain depth. + */ + BUILD_BUG_ON((1 << STACK_DEPOT_EXTRA_BITS) <= (MAX_CHAIN_DEPTH << 1)); + + extra_bits = stack_depot_get_extra_bits(id); + depth = kmsan_depth_from_eb(extra_bits); + uaf = kmsan_uaf_from_eb(extra_bits); + + if (depth >= MAX_CHAIN_DEPTH) { + static atomic_long_t kmsan_skipped_origins; + long skipped = atomic_long_inc_return(&kmsan_skipped_origins); + + if (skipped % NUM_SKIPPED_TO_WARN == 0) { + pr_warn("not chained %ld origins\n", skipped); + dump_stack(); + kmsan_print_origin(id); + } + return id; + } + depth++; + extra_bits = kmsan_extra_bits(depth, uaf); + + entries[0] = KMSAN_CHAIN_MAGIC_ORIGIN; + entries[1] = kmsan_save_stack_with_flags(GFP_ATOMIC, 0); + entries[2] = id; + /* + * @entries is a local var in non-instrumented code, so KMSAN does not + * know it is initialized. Explicitly unpoison it to avoid false + * positives when __stack_depot_save() passes it to instrumented code. 
+ */ + kmsan_internal_unpoison_memory(entries, sizeof(entries), false); + return __stack_depot_save(entries, ARRAY_SIZE(entries), extra_bits, + GFP_ATOMIC, true); +} + +void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b, + u32 origin, bool checked) +{ + u64 address = (u64)addr; + void *shadow_start; + u32 *origin_start; + size_t pad = 0; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + shadow_start = kmsan_get_metadata(addr, KMSAN_META_SHADOW); + if (!shadow_start) { + /* + * kmsan_metadata_is_contiguous() is true, so either all shadow + * and origin pages are NULL, or all are non-NULL. + */ + if (checked) { + pr_err("%s: not memsetting %ld bytes starting at %px, because the shadow is NULL\n", + __func__, size, addr); + KMSAN_WARN_ON(true); + } + return; + } + __memset(shadow_start, b, size); + + if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) { + pad = address % KMSAN_ORIGIN_SIZE; + address -= pad; + size += pad; + } + size = ALIGN(size, KMSAN_ORIGIN_SIZE); + origin_start = + (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN); + + for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) + origin_start[i] = origin; +} + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr) +{ + struct page *page; + + if (!kmsan_internal_is_vmalloc_addr(vaddr) && + !kmsan_internal_is_module_addr(vaddr)) + return NULL; + page = vmalloc_to_page(vaddr); + if (pfn_valid(page_to_pfn(page))) + return page; + else + return NULL; +} + +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason) +{ + depot_stack_handle_t cur_origin = 0, new_origin = 0; + unsigned long addr64 = (unsigned long)addr; + depot_stack_handle_t *origin = NULL; + unsigned char *shadow = NULL; + int cur_off_start = -1; + int chunk_size; + size_t pos = 0; + + if (!size) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + while (pos < size) { + chunk_size = min(size - pos, + PAGE_SIZE - ((addr64 + pos) % PAGE_SIZE)); + shadow = kmsan_get_metadata((void *)(addr64 + pos), + KMSAN_META_SHADOW); + if (!shadow) { + /* + * This page is untracked. If there were uninitialized + * bytes before, report them. + */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos - 1, user_addr, + reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + pos += chunk_size; + continue; + } + for (int i = 0; i < chunk_size; i++) { + if (!shadow[i]) { + /* + * This byte is unpoisoned. If there were + * poisoned bytes before, report them. + */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + continue; + } + origin = kmsan_get_metadata((void *)(addr64 + pos + i), + KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin); + new_origin = *origin; + /* + * Encountered new origin - report the previous + * uninitialized range. 
+ */ + if (cur_origin != new_origin) { + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = new_origin; + cur_off_start = pos + i; + } + } + pos += chunk_size; + } + KMSAN_WARN_ON(pos != size); + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, cur_off_start, pos - 1, + user_addr, reason); + kmsan_leave_runtime(); + } +} + +bool kmsan_metadata_is_contiguous(void *addr, size_t size) +{ + char *cur_shadow = NULL, *next_shadow = NULL, *cur_origin = NULL, + *next_origin = NULL; + u64 cur_addr = (u64)addr, next_addr = cur_addr + PAGE_SIZE; + depot_stack_handle_t *origin_p; + bool all_untracked = false; + + if (!size) + return true; + + /* The whole range belongs to the same page. */ + if (ALIGN_DOWN(cur_addr + size - 1, PAGE_SIZE) == + ALIGN_DOWN(cur_addr, PAGE_SIZE)) + return true; + + cur_shadow = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ false); + if (!cur_shadow) + all_untracked = true; + cur_origin = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ true); + if (all_untracked && cur_origin) + goto report; + + for (; next_addr < (u64)addr + size; + cur_addr = next_addr, cur_shadow = next_shadow, + cur_origin = next_origin, next_addr += PAGE_SIZE) { + next_shadow = kmsan_get_metadata((void *)next_addr, false); + next_origin = kmsan_get_metadata((void *)next_addr, true); + if (all_untracked) { + if (next_shadow || next_origin) + goto report; + if (!next_shadow && !next_origin) + continue; + } + if (((u64)cur_shadow == ((u64)next_shadow - PAGE_SIZE)) && + ((u64)cur_origin == ((u64)next_origin - PAGE_SIZE))) + continue; + goto report; + } + return true; + +report: + pr_err("%s: attempting to access two shadow page ranges.\n", __func__); + pr_err("Access of size %ld at %px.\n", size, addr); + pr_err("Addresses belonging to different ranges: %px and %px\n", + (void *)cur_addr, (void *)next_addr); + pr_err("page[0].shadow: %px, page[1].shadow: %px\n", cur_shadow, + next_shadow); + pr_err("page[0].origin: %px, page[1].origin: %px\n", cur_origin, + next_origin); + origin_p = kmsan_get_metadata(addr, KMSAN_META_ORIGIN); + if (origin_p) { + pr_err("Origin: %08x\n", *origin_p); + kmsan_print_origin(*origin_p); + } else { + pr_err("Origin: unavailable\n"); + } + return false; +} diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c new file mode 100644 index 0000000000000..4ac62fa67a02a --- /dev/null +++ b/mm/kmsan/hooks.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for kernel subsystems. + * + * These functions handle creation of KMSAN metadata for memory allocations. + * + * Copyright (C) 2018-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include + +#include "../internal.h" +#include "../slab.h" +#include "kmsan.h" + +/* + * Instrumented functions shouldn't be called under + * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to + * skipping effects of functions like memset() inside instrumented code. + */ + +/* Functions from kmsan-checks.h follow. */ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. 
*/ + kmsan_internal_poison_memory((void *)address, size, flags, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_poison_memory); + +void kmsan_unpoison_memory(const void *address, size_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_unpoison_memory((void *)address, size, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_unpoison_memory); + +void kmsan_check_memory(const void *addr, size_t size) +{ + if (!kmsan_enabled) + return; + return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); +} +EXPORT_SYMBOL(kmsan_check_memory); diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c new file mode 100644 index 0000000000000..280d154132684 --- /dev/null +++ b/mm/kmsan/instrumentation.c @@ -0,0 +1,307 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN compiler API. + * + * This file implements __msan_XXX hooks that Clang inserts into the code + * compiled with -fsanitize=kernel-memory. + * See Documentation/dev-tools/kmsan.rst for more information on how KMSAN + * instrumentation works. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" +#include +#include +#include + +static inline bool is_bad_asm_addr(void *addr, uintptr_t size, bool is_store) +{ + if ((u64)addr < TASK_SIZE) + return true; + if (!kmsan_get_metadata(addr, KMSAN_META_SHADOW)) + return true; + return false; +} + +static inline struct shadow_origin_ptr +get_shadow_origin_ptr(void *addr, u64 size, bool store) +{ + unsigned long ua_flags = user_access_save(); + struct shadow_origin_ptr ret; + + ret = kmsan_get_shadow_origin_ptr(addr, size, store); + user_access_restore(ua_flags); + return ret; +} + +/* Get shadow and origin pointers for a memory load with non-standard size. */ +struct shadow_origin_ptr __msan_metadata_ptr_for_load_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ false); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_load_n); + +/* Get shadow and origin pointers for a memory store with non-standard size. */ +struct shadow_origin_ptr __msan_metadata_ptr_for_store_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ true); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_store_n); + +/* + * Declare functions that obtain shadow/origin pointers for loads and stores + * with fixed size. + */ +#define DECLARE_METADATA_PTR_GETTER(size) \ + struct shadow_origin_ptr __msan_metadata_ptr_for_load_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ false); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_load_##size); \ + struct shadow_origin_ptr __msan_metadata_ptr_for_store_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ true); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_store_##size) + +DECLARE_METADATA_PTR_GETTER(1); +DECLARE_METADATA_PTR_GETTER(2); +DECLARE_METADATA_PTR_GETTER(4); +DECLARE_METADATA_PTR_GETTER(8); + +/* + * Handle a memory store performed by inline assembly. KMSAN conservatively + * attempts to unpoison the outputs of asm() directives to prevent false + * positives caused by missed stores. 
+ */ +void __msan_instrument_asm_store(void *addr, uintptr_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + /* + * Most of the accesses are below 32 bytes. The two exceptions so far + * are clwb() (64 bytes) and FPU state (512 bytes). + * It's unlikely that the assembly will touch more than 512 bytes. + */ + if (size > 512) { + WARN_ONCE(1, "assembly store size too big: %ld\n", size); + size = 8; + } + if (is_bad_asm_addr(addr, size, /*is_store*/ true)) { + user_access_restore(ua_flags); + return; + } + kmsan_enter_runtime(); + /* Unpoisoning the memory on best effort. */ + kmsan_internal_unpoison_memory(addr, size, /*checked*/ false); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_instrument_asm_store); + +/* + * KMSAN instrumentation pass replaces LLVM memcpy, memmove and memset + * intrinsics with calls to respective __msan_ functions. We use + * get_param0_metadata() and set_retval_metadata() to store the shadow/origin + * values for the destination argument of these functions and use them for the + * functions' return values. + */ +static inline void get_param0_metadata(u64 *shadow, + depot_stack_handle_t *origin) +{ + struct kmsan_ctx *ctx = kmsan_get_context(); + + *shadow = *(u64 *)(ctx->cstate.param_tls); + *origin = ctx->cstate.param_origin_tls[0]; +} + +static inline void set_retval_metadata(u64 shadow, depot_stack_handle_t origin) +{ + struct kmsan_ctx *ctx = kmsan_get_context(); + + *(u64 *)(ctx->cstate.retval_tls) = shadow; + ctx->cstate.retval_origin_tls = origin; +} + +/* Handle llvm.memmove intrinsic. */ +void *__msan_memmove(void *dst, const void *src, uintptr_t n) +{ + depot_stack_handle_t origin; + void *result; + u64 shadow; + + get_param0_metadata(&shadow, &origin); + result = __memmove(dst, src, n); + if (!n) + /* Some people call memmove() with zero length. */ + return result; + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_enter_runtime(); + kmsan_internal_memmove_metadata(dst, (void *)src, n); + kmsan_leave_runtime(); + + set_retval_metadata(shadow, origin); + return result; +} +EXPORT_SYMBOL(__msan_memmove); + +/* Handle llvm.memcpy intrinsic. */ +void *__msan_memcpy(void *dst, const void *src, uintptr_t n) +{ + depot_stack_handle_t origin; + void *result; + u64 shadow; + + get_param0_metadata(&shadow, &origin); + result = __memcpy(dst, src, n); + if (!n) + /* Some people call memcpy() with zero length. */ + return result; + + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_enter_runtime(); + /* Using memmove instead of memcpy doesn't affect correctness. */ + kmsan_internal_memmove_metadata(dst, (void *)src, n); + kmsan_leave_runtime(); + + set_retval_metadata(shadow, origin); + return result; +} +EXPORT_SYMBOL(__msan_memcpy); + +/* Handle llvm.memset intrinsic. */ +void *__msan_memset(void *dst, int c, uintptr_t n) +{ + depot_stack_handle_t origin; + void *result; + u64 shadow; + + get_param0_metadata(&shadow, &origin); + result = __memset(dst, c, n); + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_enter_runtime(); + /* + * Clang doesn't pass parameter metadata here, so it is impossible to + * use shadow of @c to set up the shadow for @dst. + */ + kmsan_internal_unpoison_memory(dst, n, /*checked*/ false); + kmsan_leave_runtime(); + + set_retval_metadata(shadow, origin); + return result; +} +EXPORT_SYMBOL(__msan_memset); + +/* + * Create a new origin from an old one. 
This is done when storing an + * uninitialized value to memory. When reporting an error, KMSAN unrolls and + * prints the whole chain of stores that preceded the use of this value. + */ +depot_stack_handle_t __msan_chain_origin(depot_stack_handle_t origin) +{ + depot_stack_handle_t ret = 0; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return ret; + + ua_flags = user_access_save(); + + /* Creating new origins may allocate memory. */ + kmsan_enter_runtime(); + ret = kmsan_internal_chain_origin(origin); + kmsan_leave_runtime(); + user_access_restore(ua_flags); + return ret; +} +EXPORT_SYMBOL(__msan_chain_origin); + +/* Poison a local variable when entering a function. */ +void __msan_poison_alloca(void *address, uintptr_t size, char *descr) +{ + depot_stack_handle_t handle; + unsigned long entries[4]; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + entries[0] = KMSAN_ALLOCA_MAGIC_ORIGIN; + entries[1] = (u64)descr; + entries[2] = (u64)__builtin_return_address(0); + /* + * With frame pointers enabled, it is possible to quickly fetch the + * second frame of the caller stack without calling the unwinder. + * Without them, simply do not bother. + */ + if (IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER)) + entries[3] = (u64)__builtin_return_address(1); + else + entries[3] = 0; + + /* stack_depot_save() may allocate memory. */ + kmsan_enter_runtime(); + handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + kmsan_leave_runtime(); + + kmsan_internal_set_shadow_origin(address, size, -1, handle, + /*checked*/ true); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_poison_alloca); + +/* Unpoison a local variable. */ +void __msan_unpoison_alloca(void *address, uintptr_t size) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + kmsan_enter_runtime(); + kmsan_internal_unpoison_memory(address, size, /*checked*/ true); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_unpoison_alloca); + +/* + * Report that an uninitialized value with the given origin was used in a way + * that constituted undefined behavior. + */ +void __msan_warning(u32 origin) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_report(origin, /*address*/ 0, /*size*/ 0, + /*off_first*/ 0, /*off_last*/ 0, /*user_addr*/ 0, + REASON_ANY); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_warning); + +/* + * At the beginning of an instrumented function, obtain the pointer to + * `struct kmsan_context_state` holding the metadata for function parameters. + */ +struct kmsan_context_state *__msan_get_context_state(void) +{ + return &kmsan_get_context()->cstate; +} +EXPORT_SYMBOL(__msan_get_context_state); diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h new file mode 100644 index 0000000000000..6b9deee3b7f32 --- /dev/null +++ b/mm/kmsan/kmsan.h @@ -0,0 +1,203 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Functions used by the KMSAN runtime. 
+ * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef __MM_KMSAN_KMSAN_H +#define __MM_KMSAN_KMSAN_H + +#include +#include +#include +#include +#include +#include +#include +#include + +#define KMSAN_ALLOCA_MAGIC_ORIGIN 0xabcd0100 +#define KMSAN_CHAIN_MAGIC_ORIGIN 0xabcd0200 + +#define KMSAN_POISON_NOCHECK 0x0 +#define KMSAN_POISON_CHECK 0x1 +#define KMSAN_POISON_FREE 0x2 + +#define KMSAN_ORIGIN_SIZE 4 + +#define KMSAN_STACK_DEPTH 64 + +#define KMSAN_META_SHADOW (false) +#define KMSAN_META_ORIGIN (true) + +extern bool kmsan_enabled; +extern int panic_on_kmsan; + +/* + * KMSAN performs a lot of consistency checks that are currently enabled by + * default. BUG_ON is normally discouraged in the kernel, unless used for + * debugging, but KMSAN itself is a debugging tool, so it makes little sense to + * recover if something goes wrong. + */ +#define KMSAN_WARN_ON(cond) \ + ({ \ + const bool __cond = WARN_ON(cond); \ + if (unlikely(__cond)) { \ + WRITE_ONCE(kmsan_enabled, false); \ + if (panic_on_kmsan) { \ + /* Can't call panic() here because */ \ + /* of uaccess checks. */ \ + BUG(); \ + } \ + } \ + __cond; \ + }) + +/* + * A pair of metadata pointers to be returned by the instrumentation functions. + */ +struct shadow_origin_ptr { + void *shadow, *origin; +}; + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, + bool store); +void *kmsan_get_metadata(void *addr, bool is_origin); + +enum kmsan_bug_reason { + REASON_ANY, + REASON_COPY_TO_USER, + REASON_SUBMIT_URB, +}; + +void kmsan_print_origin(depot_stack_handle_t origin); + +/** + * kmsan_report() - Report a use of uninitialized value. + * @origin: Stack ID of the uninitialized value. + * @address: Address at which the memory access happens. + * @size: Memory access size. + * @off_first: Offset (from @address) of the first byte to be reported. + * @off_last: Offset (from @address) of the last byte to be reported. + * @user_addr: When non-NULL, denotes the userspace address to which the kernel + * is leaking data. + * @reason: Error type from enum kmsan_bug_reason. + * + * kmsan_report() prints an error message for a consequent group of bytes + * sharing the same origin. If an uninitialized value is used in a comparison, + * this function is called once without specifying the addresses. When checking + * a memory range, KMSAN may call kmsan_report() multiple times with the same + * @address, @size, @user_addr and @reason, but different @off_first and + * @off_last corresponding to different @origin values. + */ +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason); + +DECLARE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +static __always_inline struct kmsan_ctx *kmsan_get_context(void) +{ + return in_task() ? ¤t->kmsan_ctx : raw_cpu_ptr(&kmsan_percpu_ctx); +} + +/* + * When a compiler hook or KMSAN runtime function is invoked, it may make a + * call to instrumented code and eventually call itself recursively. To avoid + * that, we guard the runtime entry regions with + * kmsan_enter_runtime()/kmsan_leave_runtime() and exit the hook if + * kmsan_in_runtime() is true. + * + * Non-runtime code may occasionally get executed in nested IRQs from the + * runtime code (e.g. when called via smp_call_function_single()). Because some + * KMSAN routines may take locks (e.g. for memory allocation), we conservatively + * bail out instead of calling them. 
To minimize the effect of this (potentially + * missing initialization events) kmsan_in_runtime() is not checked in + * non-blocking runtime functions. + */ +static __always_inline bool kmsan_in_runtime(void) +{ + if ((hardirq_count() >> HARDIRQ_SHIFT) > 1) + return true; + return kmsan_get_context()->kmsan_in_runtime; +} + +static __always_inline void kmsan_enter_runtime(void) +{ + struct kmsan_ctx *ctx; + + ctx = kmsan_get_context(); + KMSAN_WARN_ON(ctx->kmsan_in_runtime++); +} + +static __always_inline void kmsan_leave_runtime(void) +{ + struct kmsan_ctx *ctx = kmsan_get_context(); + + KMSAN_WARN_ON(--ctx->kmsan_in_runtime); +} + +depot_stack_handle_t kmsan_save_stack(void); +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra_bits); + +/* + * Pack and unpack the origin chain depth and UAF flag to/from the extra bits + * provided by the stack depot. + * The UAF flag is stored in the lowest bit, followed by the depth in the upper + * bits. + * set_dsh_extra_bits() is responsible for clamping the value. + */ +static __always_inline unsigned int kmsan_extra_bits(unsigned int depth, + bool uaf) +{ + return (depth << 1) | uaf; +} + +static __always_inline bool kmsan_uaf_from_eb(unsigned int extra_bits) +{ + return extra_bits & 1; +} + +static __always_inline unsigned int kmsan_depth_from_eb(unsigned int extra_bits) +{ + return extra_bits >> 1; +} + +/* + * kmsan_internal_ functions are supposed to be very simple and not require the + * kmsan_in_runtime() checks. + */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n); +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags); +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked); +void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, + u32 origin, bool checked); +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); + +bool kmsan_metadata_is_contiguous(void *addr, size_t size); +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason); + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); + +/* + * kmsan_internal_is_module_addr() and kmsan_internal_is_vmalloc_addr() are + * non-instrumented versions of is_module_address() and is_vmalloc_addr() that + * are safe to call from KMSAN runtime without recursion. + */ +static inline bool kmsan_internal_is_module_addr(void *vaddr) +{ + return ((u64)vaddr >= MODULES_VADDR) && ((u64)vaddr < MODULES_END); +} + +static inline bool kmsan_internal_is_vmalloc_addr(void *addr) +{ + return ((u64)addr >= VMALLOC_START) && ((u64)addr < VMALLOC_END); +} + +#endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c new file mode 100644 index 0000000000000..64e061f7da496 --- /dev/null +++ b/mm/kmsan/report.c @@ -0,0 +1,211 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN error reporting routines. + * + * Copyright (C) 2019-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include + +#include "kmsan.h" + +static DEFINE_RAW_SPINLOCK(kmsan_report_lock); +#define DESCR_SIZE 128 +/* Protected by kmsan_report_lock */ +static char report_local_descr[DESCR_SIZE]; +int panic_on_kmsan __read_mostly; + +#ifdef MODULE_PARAM_PREFIX +#undef MODULE_PARAM_PREFIX +#endif +#define MODULE_PARAM_PREFIX "kmsan." +module_param_named(panic, panic_on_kmsan, int, 0); + +/* + * Skip internal KMSAN frames. 
+ */ +static int get_stack_skipnr(const unsigned long stack_entries[], + int num_entries) +{ + int len, skip; + char buf[64]; + + for (skip = 0; skip < num_entries; ++skip) { + len = scnprintf(buf, sizeof(buf), "%ps", + (void *)stack_entries[skip]); + + /* Never show __msan_* or kmsan_* functions. */ + if ((strnstr(buf, "__msan_", len) == buf) || + (strnstr(buf, "kmsan_", len) == buf)) + continue; + + /* + * No match for runtime functions -- @skip entries to skip to + * get to first frame of interest. + */ + break; + } + + return skip; +} + +/* + * Currently the descriptions of locals generated by Clang look as follows: + * ----local_name@function_name + * We want to print only the name of the local, as other information in that + * description can be confusing. + * The meaningful part of the description is copied to a global buffer to avoid + * allocating memory. + */ +static char *pretty_descr(char *descr) +{ + int pos = 0, len = strlen(descr); + + for (int i = 0; i < len; i++) { + if (descr[i] == '@') + break; + if (descr[i] == '-') + continue; + report_local_descr[pos] = descr[i]; + if (pos + 1 == DESCR_SIZE) + break; + pos++; + } + report_local_descr[pos] = 0; + return report_local_descr; +} + +void kmsan_print_origin(depot_stack_handle_t origin) +{ + unsigned long *entries = NULL, *chained_entries = NULL; + unsigned int nr_entries, chained_nr_entries, skipnr; + void *pc1 = NULL, *pc2 = NULL; + depot_stack_handle_t head; + unsigned long magic; + char *descr = NULL; + + if (!origin) + return; + + while (true) { + nr_entries = stack_depot_fetch(origin, &entries); + magic = nr_entries ? entries[0] : 0; + if ((nr_entries == 4) && (magic == KMSAN_ALLOCA_MAGIC_ORIGIN)) { + descr = (char *)entries[1]; + pc1 = (void *)entries[2]; + pc2 = (void *)entries[3]; + pr_err("Local variable %s created at:\n", + pretty_descr(descr)); + if (pc1) + pr_err(" %pSb\n", pc1); + if (pc2) + pr_err(" %pSb\n", pc2); + break; + } + if ((nr_entries == 3) && (magic == KMSAN_CHAIN_MAGIC_ORIGIN)) { + head = entries[1]; + origin = entries[2]; + pr_err("Uninit was stored to memory at:\n"); + chained_nr_entries = + stack_depot_fetch(head, &chained_entries); + kmsan_internal_unpoison_memory( + chained_entries, + chained_nr_entries * sizeof(*chained_entries), + /*checked*/ false); + skipnr = get_stack_skipnr(chained_entries, + chained_nr_entries); + stack_trace_print(chained_entries + skipnr, + chained_nr_entries - skipnr, 0); + pr_err("\n"); + continue; + } + pr_err("Uninit was created at:\n"); + if (nr_entries) { + skipnr = get_stack_skipnr(entries, nr_entries); + stack_trace_print(entries + skipnr, nr_entries - skipnr, + 0); + } else { + pr_err("(stack is not available)\n"); + } + break; + } +} + +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason) +{ + unsigned long stack_entries[KMSAN_STACK_DEPTH]; + int num_stack_entries, skipnr; + char *bug_type = NULL; + unsigned long ua_flags; + bool is_uaf; + + if (!kmsan_enabled) + return; + if (!current->kmsan_ctx.allow_reporting) + return; + if (!origin) + return; + + current->kmsan_ctx.allow_reporting = false; + ua_flags = user_access_save(); + raw_spin_lock(&kmsan_report_lock); + pr_err("=====================================================\n"); + is_uaf = kmsan_uaf_from_eb(stack_depot_get_extra_bits(origin)); + switch (reason) { + case REASON_ANY: + bug_type = is_uaf ? "use-after-free" : "uninit-value"; + break; + case REASON_COPY_TO_USER: + bug_type = is_uaf ? 
"kernel-infoleak-after-free" : + "kernel-infoleak"; + break; + case REASON_SUBMIT_URB: + bug_type = is_uaf ? "kernel-usb-infoleak-after-free" : + "kernel-usb-infoleak"; + break; + } + + num_stack_entries = + stack_trace_save(stack_entries, KMSAN_STACK_DEPTH, 1); + skipnr = get_stack_skipnr(stack_entries, num_stack_entries); + + pr_err("BUG: KMSAN: %s in %pSb\n", bug_type, + (void *)stack_entries[skipnr]); + stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, + 0); + pr_err("\n"); + + kmsan_print_origin(origin); + + if (size) { + pr_err("\n"); + if (off_first == off_last) + pr_err("Byte %d of %d is uninitialized\n", off_first, + size); + else + pr_err("Bytes %d-%d of %d are uninitialized\n", + off_first, off_last, size); + } + if (address) + pr_err("Memory access of size %d starts at %px\n", size, + address); + if (user_addr && reason == REASON_COPY_TO_USER) + pr_err("Data copied to user address %px\n", user_addr); + pr_err("\n"); + dump_stack_print_info(KERN_ERR); + pr_err("=====================================================\n"); + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + raw_spin_unlock(&kmsan_report_lock); + if (panic_on_kmsan) + panic("kmsan.panic set ...\n"); + user_access_restore(ua_flags); + current->kmsan_ctx.allow_reporting = true; +} diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c new file mode 100644 index 0000000000000..acc5279acc3be --- /dev/null +++ b/mm/kmsan/shadow.c @@ -0,0 +1,147 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN shadow implementation. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../internal.h" +#include "kmsan.h" + +#define shadow_page_for(page) ((page)->kmsan_shadow) + +#define origin_page_for(page) ((page)->kmsan_origin) + +static void *shadow_ptr_for(struct page *page) +{ + return page_address(shadow_page_for(page)); +} + +static void *origin_ptr_for(struct page *page) +{ + return page_address(origin_page_for(page)); +} + +static bool page_has_metadata(struct page *page) +{ + return shadow_page_for(page) && origin_page_for(page); +} + +static void set_no_shadow_origin_page(struct page *page) +{ + shadow_page_for(page) = NULL; + origin_page_for(page) = NULL; +} + +/* + * Dummy load and store pages to be used when the real metadata is unavailable. + * There are separate pages for loads and stores, so that every load returns a + * zero, and every store doesn't affect other loads. + */ +static char dummy_load_page[PAGE_SIZE] __aligned(PAGE_SIZE); +static char dummy_store_page[PAGE_SIZE] __aligned(PAGE_SIZE); + +static unsigned long vmalloc_meta(void *addr, bool is_origin) +{ + unsigned long addr64 = (unsigned long)addr, off; + + KMSAN_WARN_ON(is_origin && !IS_ALIGNED(addr64, KMSAN_ORIGIN_SIZE)); + if (kmsan_internal_is_vmalloc_addr(addr)) { + off = addr64 - VMALLOC_START; + return off + (is_origin ? KMSAN_VMALLOC_ORIGIN_START : + KMSAN_VMALLOC_SHADOW_START); + } + if (kmsan_internal_is_module_addr(addr)) { + off = addr64 - MODULES_VADDR; + return off + (is_origin ? 
KMSAN_MODULES_ORIGIN_START : + KMSAN_MODULES_SHADOW_START); + } + return 0; +} + +static struct page *virt_to_page_or_null(void *vaddr) +{ + if (kmsan_virt_addr_valid(vaddr)) + return virt_to_page(vaddr); + else + return NULL; +} + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *address, u64 size, + bool store) +{ + struct shadow_origin_ptr ret; + void *shadow; + + /* + * Even if we redirect this memory access to the dummy page, it will + * go out of bounds. + */ + KMSAN_WARN_ON(size > PAGE_SIZE); + + if (!kmsan_enabled) + goto return_dummy; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(address, size)); + shadow = kmsan_get_metadata(address, KMSAN_META_SHADOW); + if (!shadow) + goto return_dummy; + + ret.shadow = shadow; + ret.origin = kmsan_get_metadata(address, KMSAN_META_ORIGIN); + return ret; + +return_dummy: + if (store) { + /* Ignore this store. */ + ret.shadow = dummy_store_page; + ret.origin = dummy_store_page; + } else { + /* This load will return zero. */ + ret.shadow = dummy_load_page; + ret.origin = dummy_load_page; + } + return ret; +} + +/* + * Obtain the shadow or origin pointer for the given address, or NULL if there's + * none. The caller must check the return value for being non-NULL if needed. + * The return value of this function should not depend on whether we're in the + * runtime or not. + */ +void *kmsan_get_metadata(void *address, bool is_origin) +{ + u64 addr = (u64)address, pad, off; + struct page *page; + + if (is_origin && !IS_ALIGNED(addr, KMSAN_ORIGIN_SIZE)) { + pad = addr % KMSAN_ORIGIN_SIZE; + addr -= pad; + } + address = (void *)addr; + if (kmsan_internal_is_vmalloc_addr(address) || + kmsan_internal_is_module_addr(address)) + return (void *)vmalloc_meta(address, is_origin); + + page = virt_to_page_or_null(address); + if (!page) + return NULL; + if (!page_has_metadata(page)) + return NULL; + off = addr % PAGE_SIZE; + + return (is_origin ? 
origin_ptr_for(page) : shadow_ptr_for(page)) + off; +} diff --git a/scripts/Makefile.kmsan b/scripts/Makefile.kmsan new file mode 100644 index 0000000000000..b5b0aa61322ec --- /dev/null +++ b/scripts/Makefile.kmsan @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0 +kmsan-cflags := -fsanitize=kernel-memory + +ifdef CONFIG_KMSAN_CHECK_PARAM_RETVAL +kmsan-cflags += -fsanitize-memory-param-retval +endif + +export CFLAGS_KMSAN := $(kmsan-cflags) diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index 3fb6a99e78c47..ac32429e93b73 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -157,6 +157,15 @@ _c_flags += $(if $(patsubst n%,, \ endif endif +ifeq ($(CONFIG_KMSAN),y) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_SANITIZE_$(basetarget).o)$(KMSAN_SANITIZE)y), \ + $(CFLAGS_KMSAN)) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_ENABLE_CHECKS_$(basetarget).o)$(KMSAN_ENABLE_CHECKS)y), \ + , -mllvm -msan-disable-checks=1) +endif + ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ $(UBSAN_SANITIZE_$(basetarget).o)$(UBSAN_SANITIZE)$(CONFIG_UBSAN_SANITIZE_ALL)), \ From patchwork Mon Sep 5 12:24:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966043 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C849DC54EE9 for ; Mon, 5 Sep 2022 12:25:32 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5E1F18D0073; Mon, 5 Sep 2022 08:25:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 590748D0050; Mon, 5 Sep 2022 08:25:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 45A008D0073; Mon, 5 Sep 2022 08:25:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 3340A8D0050 for ; Mon, 5 Sep 2022 08:25:32 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id 05F971A0D1A for ; Mon, 5 Sep 2022 12:25:32 +0000 (UTC) X-FDA: 79877952504.20.A03A8C2 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf08.hostedemail.com (Postfix) with ESMTP id 9F82F160072 for ; Mon, 5 Sep 2022 12:25:31 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id k17-20020adfb351000000b00228853e5d71so545953wrd.17 for ; Mon, 05 Sep 2022 05:25:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=XCZJZVAa3O+3/P+XN9/lo0y8HYORhozGNxqdXHsrUsA=; b=bQ+/47cKrqBhbRe4ywkTJqHqYWgP77hUTtmp1dcohb/ioQoSgzm//3xKCe3uoAMMIy MIMVg2G/g8T78zcnHSkaI5X3P+BrN+bNPCX3ZLhni4takFJJ6fWScY0RGeODmt0niiqp tYnyXm7BiAmG0XaHSSRaGB9+V8Exq4Q0sJr5bGZioEQR0ThC50Q+1do4JCnRBMYWmRAt dGpCQy8T+4cRttEORqBm7pd7KzCiFBnpmjsLacLdH//1jnZX0x/HgtbzNK8FLIheW8qW FWlPXyugoRaG8F7ixiCZy+ohdSGvFlnQl/gYpweG6P7kfVbuYXlJ7XmgadF2PQnFmSDg A2tw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=XCZJZVAa3O+3/P+XN9/lo0y8HYORhozGNxqdXHsrUsA=; 
b=sGCUJwKbGbpqpNpEZ1nP88WilPg1Q/VwtMnbik7MuP/gLEGomb5neX91VTjmQUxKpt VtlDST72jfxSGS8YJEpDjXWO//YWne/Rdf2gpBTzfQINTGicPF41yUAxH3LIvD/lex2F TlU29yur3RpfE4Lp2Tm+i8BZUwPVbvRPfuYnfRNK2fl+CBVA0cZ0+Bpodnq+Q+50K1ay BZ69ONDvAKKbrVfECN8W25zcDCLb8RFXlb5Wagul1Tkjq4nCNI4Inqpj4KhIGZB2a7rV lm622SZlVmKUqucESDfzykh2AeJXup4V9PJG0PiIKUuWOXMbvCI/iuOW54CfHS3VvbZl p73A== X-Gm-Message-State: ACgBeo2csNBE11iXRhQXjKdnSYYKNai0kDi5uupXSzMunZrRF3OQw4AO zi+UVS6n7c3BfZZvv6inNFgd4j7cRfU= X-Google-Smtp-Source: AA6agR5ePi+xwp2kWxLXVgdRtOkhFxB2Ee5h9v0DMSNgo8fLmjBOh8J9ueb16LrVhja/ymCHyUA/Jo9hmTc= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a5d:598f:0:b0:220:8005:7def with SMTP id n15-20020a5d598f000000b0022080057defmr25144707wri.435.1662380730400; Mon, 05 Sep 2022 05:25:30 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:20 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-13-glider@google.com> Subject: [PATCH v6 12/44] kmsan: disable instrumentation of unsupported common kernel code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380731; a=rsa-sha256; cv=none; b=kZd0jF7L3uuy9Mgp1VlgDlS3XjaI18d4yfwwwr03FEGd5i1Z27SyB78TRMQTBCdqnnBzUt aFopnkT2bUcYfl7njnxvOv/XjZMfpv1XU2DmPn0oKqbl/aPRDwxTTTuGXFHuVNDiqmm5jg CiCL9zGYjKtMOoyFo6Ql35UAhIAr8nI= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="bQ+/47cK"; spf=pass (imf08.hostedemail.com: domain of 3uuoVYwYKCAcnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com designates 209.85.221.74 as permitted sender) smtp.mailfrom=3uuoVYwYKCAcnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380731; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=XCZJZVAa3O+3/P+XN9/lo0y8HYORhozGNxqdXHsrUsA=; b=QoxTijJYtYP5nEry5g+vWz6zjPNwWT/f7wvagk/ihwPNqLzjHUjdrjWXvgJ6rz/gAoxFf9 98k5Wapob1/+f5g02MafHB0wSlpsulUn2wXGz8xv3LEdruxJ/L5bI3FLJaYr1Dko8B0Raw ntl+IQSsKLsZxwOVaYI3KjVlaBY5p8I= X-Rspam-User: X-Stat-Signature: bohan64i9kmttr3e6zja7qdds8okerzw X-Rspamd-Queue-Id: 9F82F160072 X-Rspamd-Server: rspam10 Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="bQ+/47cK"; spf=pass (imf08.hostedemail.com: domain of 3uuoVYwYKCAcnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com designates 209.85.221.74 as permitted sender) 
smtp.mailfrom=3uuoVYwYKCAcnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1662380731-192953 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: EFI stub cannot be linked with KMSAN runtime, so we disable instrumentation for it. Instrumenting kcov, stackdepot or lockdep leads to infinite recursion caused by instrumentation hooks calling instrumented code again. Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver --- v4: -- This patch was previously part of "kmsan: disable KMSAN instrumentation for certain kernel parts", but was split away per Mark Rutland's request. v5: -- remove unnecessary comment belonging to another patch Link: https://linux-review.googlesource.com/id/I41ae706bd3474f074f6a870bfc3f0f90e9c720f7 --- drivers/firmware/efi/libstub/Makefile | 1 + kernel/Makefile | 1 + kernel/locking/Makefile | 3 ++- lib/Makefile | 3 +++ 4 files changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index d0537573501e9..81432d0c904b1 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -46,6 +46,7 @@ GCOV_PROFILE := n # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n UBSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/kernel/Makefile b/kernel/Makefile index 318789c728d32..d754e0be1176d 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -38,6 +38,7 @@ KCOV_INSTRUMENT_kcov.o := n KASAN_SANITIZE_kcov.o := n KCSAN_SANITIZE_kcov.o := n UBSAN_SANITIZE_kcov.o := n +KMSAN_SANITIZE_kcov.o := n CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector # Don't instrument error handlers diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile index d51cabf28f382..ea925731fa40f 100644 --- a/kernel/locking/Makefile +++ b/kernel/locking/Makefile @@ -5,8 +5,9 @@ KCOV_INSTRUMENT := n obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o -# Avoid recursion lockdep -> KCSAN -> ... -> lockdep. +# Avoid recursion lockdep -> sanitizer -> ... -> lockdep. KCSAN_SANITIZE_lockdep.o := n +KMSAN_SANITIZE_lockdep.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE) diff --git a/lib/Makefile b/lib/Makefile index ffabc30a27d4e..fcebece0f5b6f 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -275,6 +275,9 @@ obj-$(CONFIG_POLYNOMIAL) += polynomial.o CFLAGS_stackdepot.o += -fno-builtin obj-$(CONFIG_STACKDEPOT) += stackdepot.o KASAN_SANITIZE_stackdepot.o := n +# In particular, instrumenting stackdepot.c with KMSAN will result in infinite +# recursion. 
+KMSAN_SANITIZE_stackdepot.o := n KCOV_INSTRUMENT_stackdepot.o := n obj-$(CONFIG_REF_TRACKER) += ref_tracker.o From patchwork Mon Sep 5 12:24:21 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966044 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6769EECAAD5 for ; Mon, 5 Sep 2022 12:25:35 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 03D198D0074; Mon, 5 Sep 2022 08:25:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 012F38D0050; Mon, 5 Sep 2022 08:25:34 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DCE6E8D0074; Mon, 5 Sep 2022 08:25:34 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id CDF178D0050 for ; Mon, 5 Sep 2022 08:25:34 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id AA1A3A0EAD for ; Mon, 5 Sep 2022 12:25:34 +0000 (UTC) X-FDA: 79877952588.30.EEFC210 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf26.hostedemail.com (Postfix) with ESMTP id 65D54140087 for ; Mon, 5 Sep 2022 12:25:34 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id v1-20020a056402348100b00448acc79177so5806596edc.23 for ; Mon, 05 Sep 2022 05:25:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=TshE4y0aie2V6V2+uLUqxx2CJ+sNpnEg6whn5GrSFP8=; b=KmmCnyB09bJUZHQjwO3kUqnRIzGi9Y6VrKDAE64MFeu4Hil7cpdZ1q8lGli5tvlBil Cf5P4IYHvBxUmSdTUn2Gs0hAszt3eHWuDQRa1N/xxYxOEV1PWqlg6iRMSOV+PpAy2o+2 1lURM4Q8xxd5V55Pvmx9SoQvzLhysQwJIvhnUwhfV8lbB1JRyFpg50YsIYISy2l27np8 0AeS93YWxdbLmnWpRfAjWXYqBq/UDffiGW4SJHduO9ItYKsRWLYVVh9lruPFOY9Obx9o q9QkK6qz/sMihjClMDGyBIEwP/q80o0fvL3OXcdVsGXHZs3uyfuziK6LVf7bejqyI7za kefA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=TshE4y0aie2V6V2+uLUqxx2CJ+sNpnEg6whn5GrSFP8=; b=M9KEGhbOf0IB6iaCuideCwjW/ZyQBJhAFO+Br3ZGDmIOQYUYzGABNDDjaTiUbVVaPL 8VkLGYZykOV6QAGJAA6x4qHVYiioIdSze7+4+5ZOihIrcyAlfuTEp2KMtgoj/V5MpUYC NFSlZaHnXKKNB1yJ+dXIpQDjZMiWdkTzFnyCMOcfNlEzVni1wiSHNfqvfHUeQHqXDBJl IDpoA+XWrlaQEFCpbbTrt8/YzN3wpHfZQf5WYuq56MInKf2qIYupTE4nS3G1VSnudy2f HBwdFEJD14ArZ8/emK9QA1IscFWzRqMwdDdcrXTEdqIL3cAvMT1tdPrbnPWaoQVYXksC sSxQ== X-Gm-Message-State: ACgBeo2S42Y6XoTeDYG6RoDzXLhj22Qhs7XrJUrp/frtPhdDHxcbzo9l 1gfWh5Wv2weAuDQx47nrMYAthomwbl0= X-Google-Smtp-Source: AA6agR4Xve7I8LLlB44NFT0vCYfBwPcSCu1DsccfnrCnOEmeaOisZ/pABs8s7TnNbobeUn78bMXZn8M1y9E= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:aa7:d0c7:0:b0:44d:f0ed:75b8 with SMTP id u7-20020aa7d0c7000000b0044df0ed75b8mr6971238edo.50.1662380733016; Mon, 05 Sep 2022 05:25:33 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:21 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> 
X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-14-glider@google.com> Subject: [PATCH v6 13/44] MAINTAINERS: add entry for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380734; a=rsa-sha256; cv=none; b=TEnOwW4r1BdUlSzlcNLfAwtMjdBItgVcqJ6dCKv9QqNDWC75nhSwOjyfTlnBYsQVtDU4fG EXR7B9tdZxqpSgEwm8CY94LXP3e/jg2YdjG13gABOCF+kUrYHJzGOopofkJbNWOhAsDBVT BLwlHh+VDDAqFD+VZcpzD9zugnoswuc= ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=KmmCnyB0; spf=pass (imf26.hostedemail.com: domain of 3veoVYwYKCAoqvsno1qyyqvo.mywvsx47-wwu5kmu.y1q@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3veoVYwYKCAoqvsno1qyyqvo.mywvsx47-wwu5kmu.y1q@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380734; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=TshE4y0aie2V6V2+uLUqxx2CJ+sNpnEg6whn5GrSFP8=; b=O/cEBINIlNvp7iDSUnIoJcZUWGSqAKYk8V32fHdEpQBPz+pmdDXFK54umcyReJerXqhEwY psaH35V/+b3Txg7ONusIGS0lfTSr6yD6s6DLfsPifxu4xbCNaqTbtHAFdak2bBgKrE9RGn JLEK5Zkpb0VZfta/Z89F3/jnccY7198= X-Rspam-User: X-Stat-Signature: nemmgfr5icq7pebg7qrb411ujp9a9oaz X-Rspamd-Queue-Id: 65D54140087 X-Rspamd-Server: rspam10 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=KmmCnyB0; spf=pass (imf26.hostedemail.com: domain of 3veoVYwYKCAoqvsno1qyyqvo.mywvsx47-wwu5kmu.y1q@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3veoVYwYKCAoqvsno1qyyqvo.mywvsx47-wwu5kmu.y1q@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1662380734-331304 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add entry for KMSAN maintainers/reviewers. 
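For illustration only (this sketch is not part of the patch or of the series): the entry added here covers the KMSAN tool, whose typical report is triggered by code like the following, where a heap object allocated without __GFP_ZERO is read before being written. The struct and function names are invented for the example; kmalloc(), kfree(), GFP_KERNEL and pr_info() are the only real kernel APIs used, and KMSAN would flag the branch on ->flag as an "uninit-value" use once the hooks added elsewhere in the series are in place.

#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/slab.h>

/* Hypothetical demo structure; any heap object allocated without __GFP_ZERO would do. */
struct kmsan_demo {
	int flag;
	int value;
};

static int kmsan_demo_uninit_read(void)
{
	/* kmalloc() without __GFP_ZERO leaves both fields uninitialized. */
	struct kmsan_demo *d = kmalloc(sizeof(*d), GFP_KERNEL);

	if (!d)
		return -ENOMEM;
	d->value = 42;		/* ->flag is never written */
	if (d->flag)		/* KMSAN: branch on an uninitialized value */
		pr_info("demo value: %d\n", d->value);
	kfree(d);
	return 0;
}
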
Signed-off-by: Alexander Potapenko --- v5: -- add arch/*/include/asm/kmsan.h Link: https://linux-review.googlesource.com/id/Ic5836c2bceb6b63f71a60d3327d18af3aa3dab77 --- MAINTAINERS | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index d30f26e07cd39..9332b99371c5b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11373,6 +11373,19 @@ F: kernel/kmod.c F: lib/test_kmod.c F: tools/testing/selftests/kmod/ +KMSAN +M: Alexander Potapenko +R: Marco Elver +R: Dmitry Vyukov +L: kasan-dev@googlegroups.com +S: Maintained +F: Documentation/dev-tools/kmsan.rst +F: arch/*/include/asm/kmsan.h +F: include/linux/kmsan*.h +F: lib/Kconfig.kmsan +F: mm/kmsan/ +F: scripts/Makefile.kmsan + KPROBES M: Naveen N. Rao M: Anil S Keshavamurthy From patchwork Mon Sep 5 12:24:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966045 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F389ECAAD3 for ; Mon, 5 Sep 2022 12:25:39 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id A04448D0075; Mon, 5 Sep 2022 08:25:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9AFDE8D0050; Mon, 5 Sep 2022 08:25:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8296C8D0075; Mon, 5 Sep 2022 08:25:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 71B9A8D0050 for ; Mon, 5 Sep 2022 08:25:38 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 3BE0B120306 for ; Mon, 5 Sep 2022 12:25:38 +0000 (UTC) X-FDA: 79877952756.13.40F3451 Received: from mail-lj1-f202.google.com (mail-lj1-f202.google.com [209.85.208.202]) by imf11.hostedemail.com (Postfix) with ESMTP id C98474005F for ; Mon, 5 Sep 2022 12:25:37 +0000 (UTC) Received: by mail-lj1-f202.google.com with SMTP id v4-20020a2ea444000000b00261e0d5bc25so2834230ljn.19 for ; Mon, 05 Sep 2022 05:25:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=8SyfjxYr3+020xbP74HATSJ8O3XHxQ0BD6WEKGl7y2Q=; b=hWGiiCdklw+IuDXHphLCLPnBiTT9M9rVDaNdZR0f59bQP7sQo0x5DRxcV5QqIX9kJM IASbNL1I+df0AFi8hNUJ5DxhkqTU1GanJ7jn5A1u2lvXItLbdv1E5G4hKFViW3Dt8ZEm 4BFe7zibLFFJmHiypKCU9IPq6CsOswiOPMU2Cuf9mMg/nh+H/YrL5BCQWyWW5E6Iv8M4 oL1pJwAUfwHeaRpVwpvxEab8Ls/A5MApHlqoCu6GLKDmph79yVDDH3TLLjHZIJJeun81 cD5SSbxzkDuMKdfW0PwaffV5+41opSpaMJCtjvrSQRjCpahYyVYqBJhdscxU+hZkHFnQ NBgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=8SyfjxYr3+020xbP74HATSJ8O3XHxQ0BD6WEKGl7y2Q=; b=zWVhmSHp6ZFYk9jec4Wf1WxYl93+Cl1Frew5A7WBVT2gUPonfGF4YhTuksWsndEwvp Etmcay5AtCwsqjXrbowaC/dbyVoQmN0oMgLuUDwiF71z+BgGMLlAipBi307snghrEe4g Z0hNZ2L8V6+Vr9HfuB9Xeg+zsORINpGpVhO1YzHUFeHBHGtroyzI7Txhe/wkl46XOyOo mMbmh+QoQqgbpqtcUL4l/EQP9aMYobNuYS5GObPEFuioa6m0B7rG6953zDnbxfhJJx2M +OXtOIBkYcvpHB1rGGfifOVBi1kVNH5jura9tkdUgF3I68sCfsHKxcKzBCkjnRVmAbJ3 
CfWg== X-Gm-Message-State: ACgBeo1CliTp7Fgqz6F5jN8x7aHdab0Vqco6yMfeCifVTW5LrBFmf85Y q0CWS07U8UgCE9iPN9ZzNpy+EeSxQS0= X-Google-Smtp-Source: AA6agR5Kcnj+SIX6ctzmSRBTBvZbZascpBlF4dWzB4BXejcKmw6tfrOGJiZzP599GTfBnfG0zYgSIQBzbrM= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6512:3b87:b0:494:9a8d:f74f with SMTP id g7-20020a0565123b8700b004949a8df74fmr6745773lfv.8.1662380735669; Mon, 05 Sep 2022 05:25:35 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:22 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-15-glider@google.com> Subject: [PATCH v6 14/44] mm: kmsan: maintain KMSAN metadata for page operations From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf11.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=hWGiiCdk; spf=pass (imf11.hostedemail.com: domain of 3v-oVYwYKCAwsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com designates 209.85.208.202 as permitted sender) smtp.mailfrom=3v-oVYwYKCAwsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380737; a=rsa-sha256; cv=none; b=8nclEfjwP7YrsjTlLPLAS0er0SRPHuKDJe2rroHCe0IXTzeTDcoi+JsiSn2JFscRGVCkBu qjEHKBzW7ZxtiFh2/rCAOf0FVpG8gXgQ2X0qgLnCLOfc88HKIJCjQOwHzPNjAcMN0Gd4W1 v9JmN9Y7nHWrlVRglqW9ZGsk5CeIVio= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380737; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=8SyfjxYr3+020xbP74HATSJ8O3XHxQ0BD6WEKGl7y2Q=; b=55TlFbXiIvduK6960tuukC0CdP6tpBXV0m4ZC5Ho7Ukky61UuAQvGrtNPyz7UGFL3JYRAf LhKd3tTwIhl1tDIUwkG7d5TcfVWAudh9Oyxn9Tc18uLqMiqScid5IxXl6ZEBpvCNMN6C9U 9xHDot+KjU4cRikWm19aHN/63azikCw= X-Stat-Signature: kqgdnuqx6heax9g89cfxbd1w4gsn5wgp X-Rspamd-Queue-Id: C98474005F X-Rspam-User: Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=hWGiiCdk; spf=pass (imf11.hostedemail.com: domain of 3v-oVYwYKCAwsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com designates 209.85.208.202 as permitted sender) smtp.mailfrom=3v-oVYwYKCAwsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam07 X-HE-Tag: 1662380737-836786 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Insert KMSAN hooks that 
make the necessary bookkeeping changes: - poison page shadow and origins in alloc_pages()/free_page(); - clear page shadow and origins in clear_page(), copy_user_highpage(); - copy page metadata in copy_highpage(), wp_page_copy(); - handle vmap()/vunmap()/iounmap(); Signed-off-by: Alexander Potapenko --- v2: -- move page metadata hooks implementation here -- remove call to kmsan_memblock_free_pages() v3: -- use PAGE_SHIFT in kmsan_ioremap_page_range() v4: -- change sizeof(type) to sizeof(*ptr) -- replace occurrences of |var| with @var -- swap mm: and kmsan: in the subject -- drop __no_sanitize_memory from clear_page() v5: -- do not export KMSAN hooks that are not called from modules -- use modern style for-loops -- simplify clear_page() instrumentation as suggested by Marco Elver -- move forward declaration of `struct page` in kmsan.h to this patch v6: -- doesn't exist prior to this patch Link: https://linux-review.googlesource.com/id/I6d4f53a0e7eab46fa29f0348f3095d9f2e326850 --- arch/x86/include/asm/page_64.h | 7 ++ arch/x86/mm/ioremap.c | 3 + include/linux/highmem.h | 3 + include/linux/kmsan.h | 145 +++++++++++++++++++++++++++++++++ mm/internal.h | 6 ++ mm/kmsan/hooks.c | 86 +++++++++++++++++++ mm/kmsan/shadow.c | 113 +++++++++++++++++++++++++ mm/memory.c | 2 + mm/page_alloc.c | 11 +++ mm/vmalloc.c | 20 ++++- 10 files changed, 394 insertions(+), 2 deletions(-) create mode 100644 include/linux/kmsan.h diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index baa70451b8df5..198e03e59ca19 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -8,6 +8,8 @@ #include #include +#include + /* duplicated to the one in bootmem.h */ extern unsigned long max_pfn; extern unsigned long phys_base; @@ -47,6 +49,11 @@ void clear_page_erms(void *page); static inline void clear_page(void *page) { + /* + * Clean up KMSAN metadata for the page being cleared. The assembly call + * below clobbers @page, so we perform unpoisoning before it. 
+ */ + kmsan_unpoison_memory(page, PAGE_SIZE); alternative_call_2(clear_page_orig, clear_page_rep, X86_FEATURE_REP_GOOD, clear_page_erms, X86_FEATURE_ERMS, diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 1ad0228f8ceb9..78c5bc654cff5 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -479,6 +480,8 @@ void iounmap(volatile void __iomem *addr) return; } + kmsan_iounmap_page_range((unsigned long)addr, + (unsigned long)addr + get_vm_area_size(p)); memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p)); /* Finally remove it */ diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 25679035ca283..e9912da5441b4 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -311,6 +312,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from, vfrom = kmap_local_page(from); vto = kmap_local_page(to); copy_user_page(vto, vfrom, vaddr, to); + kmsan_unpoison_memory(page_address(to), PAGE_SIZE); kunmap_local(vto); kunmap_local(vfrom); } @@ -326,6 +328,7 @@ static inline void copy_highpage(struct page *to, struct page *from) vfrom = kmap_local_page(from); vto = kmap_local_page(to); copy_page(vto, vfrom); + kmsan_copy_page_meta(to, from); kunmap_local(vto); kunmap_local(vfrom); } diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h new file mode 100644 index 0000000000000..b36bf3db835ee --- /dev/null +++ b/include/linux/kmsan.h @@ -0,0 +1,145 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN API for subsystems. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ +#ifndef _LINUX_KMSAN_H +#define _LINUX_KMSAN_H + +#include +#include +#include + +struct page; + +#ifdef CONFIG_KMSAN + +/** + * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. + * @page: struct page pointer returned by alloc_pages(). + * @order: order of allocated struct page. + * @flags: GFP flags used by alloc_pages() + * + * KMSAN marks 1<<@order pages starting at @page as uninitialized, unless + * @flags contain __GFP_ZERO. + */ +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags); + +/** + * kmsan_free_page() - Notify KMSAN about a free_pages() call. + * @page: struct page pointer passed to free_pages(). + * @order: order of deallocated struct page. + * + * KMSAN marks freed memory as uninitialized. + */ +void kmsan_free_page(struct page *page, unsigned int order); + +/** + * kmsan_copy_page_meta() - Copy KMSAN metadata between two pages. + * @dst: destination page. + * @src: source page. + * + * KMSAN copies the contents of metadata pages for @src into the metadata pages + * for @dst. If @dst has no associated metadata pages, nothing happens. + * If @src has no associated metadata pages, @dst metadata pages are unpoisoned. + */ +void kmsan_copy_page_meta(struct page *dst, struct page *src); + +/** + * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. + * @start: start of vmapped range. + * @end: end of vmapped range. + * @prot: page protection flags used for vmap. + * @pages: array of pages. + * @page_shift: page_shift passed to vmap_range_noflush(). + * + * KMSAN maps shadow and origin pages of @pages into contiguous ranges in + * vmalloc metadata address range. 
+ */ +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + +/** + * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap. + * @start: start of vunmapped range. + * @end: end of vunmapped range. + * + * KMSAN unmaps the contiguous metadata ranges created by + * kmsan_map_kernel_range_noflush(). + */ +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end); + +/** + * kmsan_ioremap_page_range() - Notify KMSAN about a ioremap_page_range() call. + * @addr: range start. + * @end: range end. + * @phys_addr: physical range start. + * @prot: page protection flags used for ioremap_page_range(). + * @page_shift: page_shift argument passed to vmap_range_noflush(). + * + * KMSAN creates new metadata pages for the physical pages mapped into the + * virtual memory. + */ +void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift); + +/** + * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. + * @start: range start. + * @end: range end. + * + * KMSAN unmaps the metadata pages for the given range and, unlike for + * vunmap_page_range(), also deallocates them. + */ +void kmsan_iounmap_page_range(unsigned long start, unsigned long end); + +#else + +static inline int kmsan_alloc_page(struct page *page, unsigned int order, + gfp_t flags) +{ + return 0; +} + +static inline void kmsan_free_page(struct page *page, unsigned int order) +{ +} + +static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ +} + +static inline void kmsan_vmap_pages_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages, + unsigned int page_shift) +{ +} + +static inline void kmsan_vunmap_range_noflush(unsigned long start, + unsigned long end) +{ +} + +static inline void kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, + pgprot_t prot, + unsigned int page_shift) +{ +} + +static inline void kmsan_iounmap_page_range(unsigned long start, + unsigned long end) +{ +} + +#endif + +#endif /* _LINUX_KMSAN_H */ diff --git a/mm/internal.h b/mm/internal.h index 785409805ed79..fd7247a2367ed 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -847,8 +847,14 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end, } #endif +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + void vunmap_range_noflush(unsigned long start, unsigned long end); +void __vunmap_range_noflush(unsigned long start, unsigned long end); + int numa_migrate_prep(struct page *page, struct vm_area_struct *vma, unsigned long addr, int page_nid, int *flags); diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 4ac62fa67a02a..040111bb9f6a3 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -26,6 +27,91 @@ * skipping effects of functions like memset() inside instrumented code. 
*/ +static unsigned long vmalloc_shadow(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_SHADOW); +} + +static unsigned long vmalloc_origin(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_ORIGIN); +} + +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end) +{ + __vunmap_range_noflush(vmalloc_shadow(start), vmalloc_shadow(end)); + __vunmap_range_noflush(vmalloc_origin(start), vmalloc_origin(end)); + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); +} + +/* + * This function creates new shadow/origin pages for the physical pages mapped + * into the virtual memory. If those physical pages already had shadow/origin, + * those are ignored. + */ +void kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift) +{ + gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO; + struct page *shadow, *origin; + unsigned long off = 0; + int nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + for (int i = 0; i < nr; i++, off += PAGE_SIZE) { + shadow = alloc_pages(gfp_mask, 1); + origin = alloc_pages(gfp_mask, 1); + __vmap_pages_range_noflush( + vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow, + PAGE_SHIFT); + __vmap_pages_range_noflush( + vmalloc_origin(start + off), + vmalloc_origin(start + off + PAGE_SIZE), prot, &origin, + PAGE_SHIFT); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} + +void kmsan_iounmap_page_range(unsigned long start, unsigned long end) +{ + unsigned long v_shadow, v_origin; + struct page *shadow, *origin; + int nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + v_shadow = (unsigned long)vmalloc_shadow(start); + v_origin = (unsigned long)vmalloc_origin(start); + for (int i = 0; i < nr; + i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) { + shadow = kmsan_vmalloc_to_page_or_null((void *)v_shadow); + origin = kmsan_vmalloc_to_page_or_null((void *)v_origin); + __vunmap_range_noflush(v_shadow, vmalloc_shadow(end)); + __vunmap_range_noflush(v_origin, vmalloc_origin(end)); + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index acc5279acc3be..8c81a059beea6 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -145,3 +145,116 @@ void *kmsan_get_metadata(void *address, bool is_origin) return (is_origin ? 
origin_ptr_for(page) : shadow_ptr_for(page)) + off; } + +void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + if (!dst || !page_has_metadata(dst)) + return; + if (!src || !page_has_metadata(src)) { + kmsan_internal_unpoison_memory(page_address(dst), PAGE_SIZE, + /*checked*/ false); + return; + } + + kmsan_enter_runtime(); + __memcpy(shadow_ptr_for(dst), shadow_ptr_for(src), PAGE_SIZE); + __memcpy(origin_ptr_for(dst), origin_ptr_for(src), PAGE_SIZE); + kmsan_leave_runtime(); +} + +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) +{ + bool initialized = (flags & __GFP_ZERO) || !kmsan_enabled; + struct page *shadow, *origin; + depot_stack_handle_t handle; + int pages = 1 << order; + + if (!page) + return; + + shadow = shadow_page_for(page); + origin = origin_page_for(page); + + if (initialized) { + __memset(page_address(shadow), 0, PAGE_SIZE * pages); + __memset(page_address(origin), 0, PAGE_SIZE * pages); + return; + } + + /* Zero pages allocated by the runtime should also be initialized. */ + if (kmsan_in_runtime()) + return; + + __memset(page_address(shadow), -1, PAGE_SIZE * pages); + kmsan_enter_runtime(); + handle = kmsan_save_stack_with_flags(flags, /*extra_bits*/ 0); + kmsan_leave_runtime(); + /* + * Addresses are page-aligned, pages are contiguous, so it's ok + * to just fill the origin pages with @handle. + */ + for (int i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++) + ((depot_stack_handle_t *)page_address(origin))[i] = handle; +} + +void kmsan_free_page(struct page *page, unsigned int order) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(page_address(page), + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} + +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift) +{ + unsigned long shadow_start, origin_start, shadow_end, origin_end; + struct page **s_pages, **o_pages; + int nr, mapped; + + if (!kmsan_enabled) + return; + + shadow_start = vmalloc_meta((void *)start, KMSAN_META_SHADOW); + shadow_end = vmalloc_meta((void *)end, KMSAN_META_SHADOW); + if (!shadow_start) + return; + + nr = (end - start) / PAGE_SIZE; + s_pages = kcalloc(nr, sizeof(*s_pages), GFP_KERNEL); + o_pages = kcalloc(nr, sizeof(*o_pages), GFP_KERNEL); + if (!s_pages || !o_pages) + goto ret; + for (int i = 0; i < nr; i++) { + s_pages[i] = shadow_page_for(pages[i]); + o_pages[i] = origin_page_for(pages[i]); + } + prot = __pgprot(pgprot_val(prot) | _PAGE_NX); + prot = PAGE_KERNEL; + + origin_start = vmalloc_meta((void *)start, KMSAN_META_ORIGIN); + origin_end = vmalloc_meta((void *)end, KMSAN_META_ORIGIN); + kmsan_enter_runtime(); + mapped = __vmap_pages_range_noflush(shadow_start, shadow_end, prot, + s_pages, page_shift); + KMSAN_WARN_ON(mapped); + mapped = __vmap_pages_range_noflush(origin_start, origin_end, prot, + o_pages, page_shift); + KMSAN_WARN_ON(mapped); + kmsan_leave_runtime(); + flush_tlb_kernel_range(shadow_start, shadow_end); + flush_tlb_kernel_range(origin_start, origin_end); + flush_cache_vmap(shadow_start, shadow_end); + flush_cache_vmap(origin_start, origin_end); + +ret: + kfree(s_pages); + kfree(o_pages); +} diff --git a/mm/memory.c b/mm/memory.c index 4ba73f5aa8bb7..6cc35d2cae8fd 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -52,6 +52,7 @@ #include #include #include +#include 
#include #include #include @@ -3128,6 +3129,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) delayacct_wpcopy_end(); return 0; } + kmsan_copy_page_meta(new_page, old_page); } if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL)) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e5486d47406e8..d488dab76a6e8 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -1398,6 +1399,7 @@ static __always_inline bool free_pages_prepare(struct page *page, VM_BUG_ON_PAGE(PageTail(page), page); trace_mm_page_free(page, order); + kmsan_free_page(page, order); if (unlikely(PageHWPoison(page)) && !order) { /* @@ -3817,6 +3819,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone, /* * Allocate a page from the given zone. Use pcplists for order-0 allocations. */ + +/* + * Do not instrument rmqueue() with KMSAN. This function may call + * __msan_poison_alloca() through a call to set_pfnblock_flags_mask(). + * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it + * may call rmqueue() again, which will result in a deadlock. + */ +__no_sanitize_memory static inline struct page *rmqueue(struct zone *preferred_zone, struct zone *zone, unsigned int order, @@ -5535,6 +5545,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, } trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype); + kmsan_alloc_page(page, order, alloc_gfp); return page; } diff --git a/mm/vmalloc.c b/mm/vmalloc.c index dd6cdb2011953..68b656e0125c9 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -320,6 +320,9 @@ int ioremap_page_range(unsigned long addr, unsigned long end, err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot), ioremap_max_page_shift); flush_cache_vmap(addr, end); + if (!err) + kmsan_ioremap_page_range(addr, end, phys_addr, prot, + ioremap_max_page_shift); return err; } @@ -416,7 +419,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, * * This is an internal function only. Do not use outside mm/. */ -void vunmap_range_noflush(unsigned long start, unsigned long end) +void __vunmap_range_noflush(unsigned long start, unsigned long end) { unsigned long next; pgd_t *pgd; @@ -438,6 +441,12 @@ void vunmap_range_noflush(unsigned long start, unsigned long end) arch_sync_kernel_mappings(start, end); } +void vunmap_range_noflush(unsigned long start, unsigned long end) +{ + kmsan_vunmap_range_noflush(start, end); + __vunmap_range_noflush(start, end); +} + /** * vunmap_range - unmap kernel virtual addresses * @addr: start of the VM area to unmap @@ -575,7 +584,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end, * * This is an internal function only. Do not use outside mm/. 
*/ -int vmap_pages_range_noflush(unsigned long addr, unsigned long end, +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, pgprot_t prot, struct page **pages, unsigned int page_shift) { unsigned int i, nr = (end - addr) >> PAGE_SHIFT; @@ -601,6 +610,13 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end, return 0; } +int vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift); + return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift); +} + /** * vmap_pages_range - map pages to a kernel virtual address * @addr: start of the VM area to map From patchwork Mon Sep 5 12:24:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966046 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E923DC6FA8C for ; Mon, 5 Sep 2022 12:25:40 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 82B038D0050; Mon, 5 Sep 2022 08:25:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7DC548D0076; Mon, 5 Sep 2022 08:25:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 65E5B8D0050; Mon, 5 Sep 2022 08:25:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 558E98D0050 for ; Mon, 5 Sep 2022 08:25:40 -0400 (EDT) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 35530A06F5 for ; Mon, 5 Sep 2022 12:25:40 +0000 (UTC) X-FDA: 79877952840.27.3D8EDE5 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf06.hostedemail.com (Postfix) with ESMTP id EA06518005B for ; Mon, 5 Sep 2022 12:25:39 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id b13-20020a056402350d00b0043dfc84c533so5670934edd.5 for ; Mon, 05 Sep 2022 05:25:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=Ovg29tOGV0MBaJSgEZI8XqfnbTwhQ8WU6IE7DIFXMYM=; b=eCahA6YlWLTGTTi1hCmUOqkoxHsfRWbwrk0khGEmEwE26GludLvpbkp+iNxED27up+ QeoMCrkZxrN14gO34aHeRK2qqitcfAIctM6kl9eLc9WwGK06RWETZUv60vbfv+iSZG3E DFWFLeSBqNnNChbHXHtgrkYP2H5OdFH632JiYUZ1QUZX5C6JlcKbkts6+kHDZaEROFwH x+OorqG1kZSqsSdN1WiXZ2L7CUPb13+wg6h8z8YE4H7g7OH1vea0FWKlqN4Mmgwznvou NOfI+cIToxZacDt0S41MmHfjdmJSSCOp4vW3YAqX9P02uLfG4/gvX6e4nmD/mlgB97UF 8+/A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=Ovg29tOGV0MBaJSgEZI8XqfnbTwhQ8WU6IE7DIFXMYM=; b=Jc+KGkOMkyjoi1KTsUvJUFQGYxVjjghVTDEwDqG9PMmCdJjMr7ks0f3tqBkGelCf2P teU4hDgppZwX5c1qKst8CO09japuMB6VFH679vCiniogCdwW+8ixTJnZ4s66jfYc8ARt MAqDlsTJScFAkkyDLkMLae/msgB/WhdRq90xPMKsUi9XuaVTf/3QzZHDtwPRcucyZXXT aPdq3GYBYfHCtEKO9p51adXbV4k468T9f528D8jx5ZSlHy+HFh54JJj6cKSCEceQsJDm nPeAhTrNBM8SEEZk9oAZgGYn+9dwF0ZQQItIvovZziAnW49j5NiQG7WooLQCZMyWws6i WSAQ== X-Gm-Message-State: 
ACgBeo1IS8sYOZ+O4l3JVt0UHaQVnh0rWDAeoeeX1JNyZ0NB1NBN2rqy 4rKjGwZ+YtQpxrWICu+p346WpCEXnO0= X-Google-Smtp-Source: AA6agR6Z/3bvzyUl9jV0ldouDXOG58iDLniTuGjiPLY9l/OIDayJnLlasjvB+pKoDXMbo05eXSrEfkpQ63E= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:c002:b0:73d:d96c:c632 with SMTP id ss2-20020a170907c00200b0073dd96cc632mr33992978ejc.543.1662380738625; Mon, 05 Sep 2022 05:25:38 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:23 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-16-glider@google.com> Subject: [PATCH v6 15/44] mm: kmsan: call KMSAN hooks from SLUB code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf06.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=eCahA6Yl; spf=pass (imf06.hostedemail.com: domain of 3wuoVYwYKCA8v0xst6v33v0t.r310x29C-11zAprz.36v@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3wuoVYwYKCA8v0xst6v33v0t.r310x29C-11zAprz.36v@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380739; a=rsa-sha256; cv=none; b=wp0dBRzVf0rkP8KOTtJU1HZJS+2jnd70ZAFcQBBQcg8FCP2FsTXHosrAWV/A3oQMv6FZsN r99kZfrLOMCJD1rM1CZdca0dMKwK5vM82HoVGeJiJe/h2wtJeRo0gAwW4blcn+xo4We2+1 KwERGE2PtociJF44zmy3PRwIm0QhqNY= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380739; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Ovg29tOGV0MBaJSgEZI8XqfnbTwhQ8WU6IE7DIFXMYM=; b=UajfMr1bSSBNCQtJfhR+LECrnonyVMBtcNKe9Z0zCfb6pagzLmxVXdmryX6W/Amlos2YEj xARa+kHeM/E3799PMHu7y4lWrr4qAbpPazSsZOX7om5DfLBGh96XgcYxP6hLM3x5n4UXpd EEnnwcmugmR8HYz6g4PPfq5ru+WfSNg= X-Stat-Signature: z39ox3eua3my9wio5zso1okof8g88phx X-Rspamd-Queue-Id: EA06518005B X-Rspam-User: Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=eCahA6Yl; spf=pass (imf06.hostedemail.com: domain of 3wuoVYwYKCA8v0xst6v33v0t.r310x29C-11zAprz.36v@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3wuoVYwYKCA8v0xst6v33v0t.r310x29C-11zAprz.36v@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam07 X-HE-Tag: 1662380739-577146 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In order to report uninitialized memory coming from heap 
allocations KMSAN has to poison them unless they're created with __GFP_ZERO. It's handy that we need KMSAN hooks in the places where init_on_alloc/init_on_free initialization is performed. In addition, we apply __no_kmsan_checks to get_freepointer_safe() to suppress reports when accessing freelist pointers that reside in freed objects. Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver --- v2: -- move the implementation of SLUB hooks here v4: -- change sizeof(type) to sizeof(*ptr) -- swap mm: and kmsan: in the subject -- get rid of kmsan_init(), replace it with __no_kmsan_checks v5: -- do not export KMSAN hooks that are not called from modules -- drop an unnecessary whitespace change Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e --- include/linux/kmsan.h | 57 ++++++++++++++++++++++++++++++++ mm/kmsan/hooks.c | 76 +++++++++++++++++++++++++++++++++++++++++++ mm/slab.h | 1 + mm/slub.c | 17 ++++++++++ 4 files changed, 151 insertions(+) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index b36bf3db835ee..5c4e0079054e6 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -14,6 +14,7 @@ #include struct page; +struct kmem_cache; #ifdef CONFIG_KMSAN @@ -48,6 +49,44 @@ void kmsan_free_page(struct page *page, unsigned int order); */ void kmsan_copy_page_meta(struct page *dst, struct page *src); +/** + * kmsan_slab_alloc() - Notify KMSAN about a slab allocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * @flags: GFP flags passed to the allocator. + * + * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the + * newly created object, marking it as initialized or uninitialized. + */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); + +/** + * kmsan_slab_free() - Notify KMSAN about a slab deallocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * + * KMSAN marks the freed object as uninitialized. + */ +void kmsan_slab_free(struct kmem_cache *s, void *object); + +/** + * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation. + * @ptr: object pointer. + * @size: object size. + * @flags: GFP flags passed to the allocator. + * + * Similar to kmsan_slab_alloc(), but for large allocations. + */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); + +/** + * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation. + * @ptr: object pointer. + * + * Similar to kmsan_slab_free(), but for large allocations. + */ +void kmsan_kfree_large(const void *ptr); + /** * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. * @start: start of vmapped range. @@ -114,6 +153,24 @@ static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) { } +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) +{ +} + +static inline void kmsan_slab_free(struct kmem_cache *s, void *object) +{ +} + +static inline void kmsan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags) +{ +} + +static inline void kmsan_kfree_large(const void *ptr) +{ +} + static inline void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, pgprot_t prot, diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 040111bb9f6a3..000703c563a4d 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -27,6 +27,82 @@ * skipping effects of functions like memset() inside instrumented code. 
*/ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) +{ + if (unlikely(object == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * There's a ctor or this is an RCU cache - do nothing. The memory + * status hasn't changed since last use. + */ + if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU)) + return; + + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory(object, s->object_size, + KMSAN_POISON_CHECK); + else + kmsan_internal_poison_memory(object, s->object_size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} + +void kmsan_slab_free(struct kmem_cache *s, void *object) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + /* RCU slabs could be legally used after free within the RCU period */ + if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))) + return; + /* + * If there's a constructor, freed memory must remain in the same state + * until the next allocation. We cannot save its state to detect + * use-after-free bugs, instead we just keep it unpoisoned. + */ + if (s->ctor) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} + +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + if (unlikely(ptr == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory((void *)ptr, size, + /*checked*/ true); + else + kmsan_internal_poison_memory((void *)ptr, size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} + +void kmsan_kfree_large(const void *ptr) +{ + struct page *page; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + page = virt_to_head_page((void *)ptr); + KMSAN_WARN_ON(ptr != page_address(page)); + kmsan_internal_poison_memory((void *)ptr, + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} + static unsigned long vmalloc_shadow(unsigned long addr) { return (unsigned long)kmsan_get_metadata((void *)addr, diff --git a/mm/slab.h b/mm/slab.h index 4ec82bec15ecd..9d0afd2985df7 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -729,6 +729,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, memset(p[i], 0, s->object_size); kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, flags); + kmsan_slab_alloc(s, p[i], flags); } memcg_slab_post_alloc_hook(s, objcg, flags, size, p); diff --git a/mm/slub.c b/mm/slub.c index 862dbd9af4f52..2c323d83d0526 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -359,6 +360,17 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object) prefetchw(object + s->offset); } +/* + * When running under KMSAN, get_freepointer_safe() may return an uninitialized + * pointer value in the case the current thread loses the race for the next + * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in + * slab_alloc_node() will fail, so the uninitialized value won't be used, but + * KMSAN will still check all arguments of cmpxchg because of imperfect + * handling of inline assembly. + * To work around this problem, we apply __no_kmsan_checks to ensure that + * get_freepointer_safe() returns initialized memory. 
+ */ +__no_kmsan_checks static inline void *get_freepointer_safe(struct kmem_cache *s, void *object) { unsigned long freepointer_addr; @@ -1709,6 +1721,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) ptr = kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. */ kmemleak_alloc(ptr, size, 1, flags); + kmsan_kmalloc_large(ptr, size, flags); return ptr; } @@ -1716,12 +1729,14 @@ static __always_inline void kfree_hook(void *x) { kmemleak_free(x); kasan_kfree_large(x); + kmsan_kfree_large(x); } static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x, bool init) { kmemleak_free_recursive(x, s->flags); + kmsan_slab_free(s, x); debug_check_no_locks_freed(x, s->object_size); @@ -5915,6 +5930,7 @@ static char *create_unique_id(struct kmem_cache *s) p += sprintf(p, "%07u", s->size); BUG_ON(p > name + ID_STR_LENGTH - 1); + kmsan_unpoison_memory(name, p - name); return name; } @@ -6016,6 +6032,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name) al->name = name; al->next = alias_list; alias_list = al; + kmsan_unpoison_memory(al, sizeof(*al)); return 0; } From patchwork Mon Sep 5 12:24:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966047 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AC79FC6FA83 for ; Mon, 5 Sep 2022 12:25:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 4CFD38D0077; Mon, 5 Sep 2022 08:25:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 4A63D8D0076; Mon, 5 Sep 2022 08:25:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 36E7C8D0077; Mon, 5 Sep 2022 08:25:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 282318D0076 for ; Mon, 5 Sep 2022 08:25:43 -0400 (EDT) Received: from smtpin21.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id EAACB14035D for ; Mon, 5 Sep 2022 12:25:42 +0000 (UTC) X-FDA: 79877952924.21.B2F2C2F Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf23.hostedemail.com (Postfix) with ESMTP id A4A89140079 for ; Mon, 5 Sep 2022 12:25:42 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id y12-20020a056402358c00b00448898f1c33so5691096edc.7 for ; Mon, 05 Sep 2022 05:25:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=hpHoy1zlF7EBf883UoQvMWO5+jjHkQm5Cfi8gPN98eQ=; b=JUFMkLirTS3MRYzbf3DA0Qf78yUbrU3Tmust9whDJBKCcRLOxp9b9XdCvqSBcwkn2Q Por8NNQUTGVnkzerAS/nRWsHGKaKfV5WwSeftTIENCRFWy0wXLiw7lbi5KzgiDciZ8CH MUI7Zz5353o/Qx6J49KbnsWeS7D4wcBdmA0hOCxj7NxRBdtsNEM0aCQEqVvA48EBKZG7 Mz78sHKvnGlp/aG0Wose98tGNuVwnZr5QL7pgaKlmDXbc6WQMzVcmsqCrTN0nDflh5AU 0NIChTRqEPLeAxwwaYnpQl24suF9Rw3FxGPFyZy5KfVYIr9phKVmPe+CcsSON0iQEjrS Bjeg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
:date:x-gm-message-state:from:to:cc:subject:date; bh=hpHoy1zlF7EBf883UoQvMWO5+jjHkQm5Cfi8gPN98eQ=; b=kKw0R1uCL/1dRF+D2pA9u9X96hRJfbK5du1m5lIdhHLxKfw0C6ERwtV/JdvfkhKjom nCZnZ8ynQ2b9ZhCG7jW9XvBDb03gNeQoXTlg5uV9ZnP9wzG7CWEb35EKa/ylfw2tLgMX OGlgaa7+MklfHHYPWtrEhkbv1ejgBhm8rqZYNe5ExenF3Okp05h2tSq5Cx/OTpnSbQ4w u2M6qzT3+ewOFFnsdk7uFq2lHsaB9CfwZF1+EIGButMfazEa50lo/zFh3Z7YTE5d9kuG OVd168R4KP5JJECsCGohqDpckGyrWT205eo4spXTBJkoiQBNDXAlpj9QnuiGkjCNpe7U jOOQ== X-Gm-Message-State: ACgBeo1Uk4SDa7mr/eAqHYQeXkneSZRW4hA4VvO8t1O02ue/3W3DglWO YXSBF+OPm6Eqx9dut4ewLvnwRkjMCYs= X-Google-Smtp-Source: AA6agR6tYFa7unOFIILxsZbuti9cA6v/ZqdVDdtiCSluOhA60ILv/E0Dc7MvwB1p0Qd0D4a/LzRHRwo57S8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:aa7:cb13:0:b0:448:3759:8c57 with SMTP id s19-20020aa7cb13000000b0044837598c57mr33923922edt.8.1662380741433; Mon, 05 Sep 2022 05:25:41 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:24 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-17-glider@google.com> Subject: [PATCH v6 16/44] kmsan: handle task creation and exiting From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380742; a=rsa-sha256; cv=none; b=ev03wiqMPh92ju9Y72q0WSXQURTPBNuOkWY/VsUp6JsmqFnaJLg9SV62dR/lUwiUOfNbNn ppXn2BLmyCvTiR/oym2BqJM5yiL++Rfsvk5xGuQjA4rAGsG1pAI9Z9dIHNkwtkaxYt+Fml pJQkd2+tOEpVFce+kVzh3d7BqW9NtDE= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=JUFMkLir; spf=pass (imf23.hostedemail.com: domain of 3xeoVYwYKCBIy30vw9y66y3w.u64305CF-442Dsu2.69y@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3xeoVYwYKCBIy30vw9y66y3w.u64305CF-442Dsu2.69y@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380742; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=hpHoy1zlF7EBf883UoQvMWO5+jjHkQm5Cfi8gPN98eQ=; b=JKjIbIs6Op9lMTOpeHD3kswBA34456xA0bgtS4W0Kz5I3qy6F0SWKmVctD7HhCDIP1DzV8 HmVHWz8zXS0tZ8F4OVdOniOJKsjOMYocgxPfBg6PQvJABiEDTjH48VkaPVKo/nszCfpeEH BEhvLsCBeAGV2kv1uKMyfRM7CsHzP2w= Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=JUFMkLir; spf=pass (imf23.hostedemail.com: domain of 3xeoVYwYKCBIy30vw9y66y3w.u64305CF-442Dsu2.69y@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) 
smtp.mailfrom=3xeoVYwYKCBIy30vw9y66y3w.u64305CF-442Dsu2.69y@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspam-User: X-Stat-Signature: j56nxofxjh65fygx4iq11aztqnmfcwdn X-Rspamd-Queue-Id: A4A89140079 X-HE-Tag: 1662380742-781623 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Tell KMSAN that a new task is created, so the tool creates a backing metadata structure for that task. Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_task_create() and kmsan_task_exit() here v4: -- change sizeof(type) to sizeof(*ptr) v5: -- do not export KMSAN hooks that are not called from modules -- minor comment fix Link: https://linux-review.googlesource.com/id/I0f41c3a1c7d66f7e14aabcfdfc7c69addb945805 --- include/linux/kmsan.h | 21 +++++++++++++++++++++ kernel/exit.c | 2 ++ kernel/fork.c | 2 ++ mm/kmsan/core.c | 10 ++++++++++ mm/kmsan/hooks.c | 17 +++++++++++++++++ mm/kmsan/kmsan.h | 2 ++ 6 files changed, 54 insertions(+) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 5c4e0079054e6..354aee6f7b1a2 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -15,9 +15,22 @@ struct page; struct kmem_cache; +struct task_struct; #ifdef CONFIG_KMSAN +/** + * kmsan_task_create() - Initialize KMSAN state for the task. + * @task: task to initialize. + */ +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish. + */ +void kmsan_task_exit(struct task_struct *task); + /** * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. * @page: struct page pointer returned by alloc_pages(). 
@@ -139,6 +152,14 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); #else +static inline void kmsan_task_create(struct task_struct *task) +{ +} + +static inline void kmsan_task_exit(struct task_struct *task) +{ +} + static inline int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) { diff --git a/kernel/exit.c b/kernel/exit.c index 84021b24f79e3..f5d620c315662 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -60,6 +60,7 @@ #include #include #include +#include #include #include #include @@ -741,6 +742,7 @@ void __noreturn do_exit(long code) WARN_ON(tsk->plug); kcov_task_exit(tsk); + kmsan_task_exit(tsk); coredump_task_exit(tsk); ptrace_event(PTRACE_EVENT_EXIT, code); diff --git a/kernel/fork.c b/kernel/fork.c index 90c85b17bf698..7cf3eea01ceef 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include #include @@ -1026,6 +1027,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) tsk->worker_private = NULL; kcov_task_init(tsk); + kmsan_task_create(tsk); kmap_local_fork(tsk); #ifdef CONFIG_FAULT_INJECTION diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c index 009ac577bf3fc..fd007d53e9f53 100644 --- a/mm/kmsan/core.c +++ b/mm/kmsan/core.c @@ -44,6 +44,16 @@ bool kmsan_enabled __read_mostly; */ DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); +void kmsan_internal_task_create(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + struct thread_info *info = current_thread_info(); + + __memset(ctx, 0, sizeof(*ctx)); + ctx->allow_reporting = true; + kmsan_internal_unpoison_memory(info, sizeof(*info), false); +} + void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, unsigned int poison_flags) { diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 000703c563a4d..6f3e64b0b61f8 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -27,6 +27,23 @@ * skipping effects of functions like memset() inside instrumented code. 
*/ +void kmsan_task_create(struct task_struct *task) +{ + kmsan_enter_runtime(); + kmsan_internal_task_create(task); + kmsan_leave_runtime(); +} + +void kmsan_task_exit(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ctx->allow_reporting = false; +} + void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) { if (unlikely(object == NULL)) diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h index 6b9deee3b7f32..04954b83c5d65 100644 --- a/mm/kmsan/kmsan.h +++ b/mm/kmsan/kmsan.h @@ -179,6 +179,8 @@ void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, u32 origin, bool checked); depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); +void kmsan_internal_task_create(struct task_struct *task); + bool kmsan_metadata_is_contiguous(void *addr, size_t size); void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, int reason); From patchwork Mon Sep 5 12:24:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966048 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C555ECAAD5 for ; Mon, 5 Sep 2022 12:25:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1004C8D0078; Mon, 5 Sep 2022 08:25:46 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0AF9C8D0076; Mon, 5 Sep 2022 08:25:46 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E91C68D0078; Mon, 5 Sep 2022 08:25:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id D82318D0076 for ; Mon, 5 Sep 2022 08:25:45 -0400 (EDT) Received: from smtpin25.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id AF3DFA0E50 for ; Mon, 5 Sep 2022 12:25:45 +0000 (UTC) X-FDA: 79877953050.25.34B0570 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf24.hostedemail.com (Postfix) with ESMTP id 5C73C18008F for ; Mon, 5 Sep 2022 12:25:45 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id i6-20020a05640242c600b00447c00a776aso5851337edc.20 for ; Mon, 05 Sep 2022 05:25:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=BF2kccsg/NJzP2Xwpe+d061LyCU8U/vhohlAjevIJbs=; b=anjjT0lgkAdallhHnxUuOIBk34pzOUdqNgOiXwUp3//dK4xWKXJx44yzNd1ZELHP27 y57zsmF3Pa9c15OZf8UzGCf+gAJIBe/guoiM7/Vlynbf7tcDS4vCHfKFfpEPSY0ghFKV LtX/4JmBhUgHf67VAH3uaf4jMO6Y8ZjUTFasvwxXiYckPkj5r1iDfpSwvEdu+hjTFcOJ oOw5c9/XcjgI5Is6TysNH8EuMKrh+pIuKPKG2mTcAoLH7+xbN/PCXGl8Nimr75O6nnGZ +vF6tz8DXDS+GWmsMxcBYOjieR2KfZu6QLh1JBmZYYgFD3ZFPaxGnbDh0jTEHjyhXe/t cQpw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=BF2kccsg/NJzP2Xwpe+d061LyCU8U/vhohlAjevIJbs=; b=6YwFuOH4aJYp/GZkYcwwDIbkwtlJ86n2F8i9/3+yOmgmMOtC8TsMZw4v++b752dncV 
pThAz9Gqb1yz98WeABukoqZSGOIJZSu4cIsH3lvNIsN8ikjppkv6ot7orz3RVKp06VwV BIuWNZ8S4GZyEMFEf5yXVuSJlh9lSS8IPM7DKmVx13B1lUVY7hYTr5O2QXK953JIEBoW P7DaqwMO+u3mKZOKTH5l4Z9pJQJTG52kUKjircjrrpk6ofD59o/oMvXZXHC2x9bpvedf tnIDtpOt++zIpHce9eHMlOnos+TfQu08ketIbFbQYsUTLWu83SSdI5RmFfLm8Piw+GMi bnGw== X-Gm-Message-State: ACgBeo13177oQ09YvJZuc09LoMnG0qqevMUCHPeruUlM3V3Cx9wTVrds 6mKulm+Mmo/MfSjS0KM5Q5rHw1J2o6c= X-Google-Smtp-Source: AA6agR7Fa53c+oCEmJAAYrhwKe6kuanrNEqmsi7Ip5qZx99RS2lmTwU5tTdl4K1d4SX5GZRGQJj5dSdagDw= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:2701:b0:741:51eb:2338 with SMTP id w1-20020a170907270100b0074151eb2338mr28969744ejk.501.1662380744169; Mon, 05 Sep 2022 05:25:44 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:25 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-18-glider@google.com> Subject: [PATCH v6 17/44] init: kmsan: call KMSAN initialization routines From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=anjjT0lg; spf=pass (imf24.hostedemail.com: domain of 3yOoVYwYKCBU163yzC19916z.x97638FI-775Gvx5.9C1@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3yOoVYwYKCBU163yzC19916z.x97638FI-775Gvx5.9C1@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380745; a=rsa-sha256; cv=none; b=FZE7vZZvWRA1eMKhNYnQdzMfJ7lxa1gp7JR515Ck/lPTGsqEb0XeACBfBf1OlGSVWXYY/r h/QJY9GU8p/3hqYtdZSDX0laFkaFwv1JwtosgToimV74pTD4zXvunz0/uoEnMLY+RCdwsP WZQzjGMZ6s+bdHNr7F4FxtKkDuTmbu8= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380745; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=BF2kccsg/NJzP2Xwpe+d061LyCU8U/vhohlAjevIJbs=; b=jdhDjgvM67a1XEDZxSkREDxpJapT3pD4f2BeGHrtakHtoXTUqlnyIagW16uYPdJeq87AZX Cw1fqxajObxwUcZn8oVvkv06iZI4UgZvtPG2IPHhKICINS8hC0DSxm74cIPwpf1Yi9uptq faP2UJb7IZsjjvY3E+c+aJE7+h/cP54= X-Stat-Signature: jg13rram1z4xj1r9fcz3mutqc8kcia3q X-Rspamd-Queue-Id: 5C73C18008F X-Rspam-User: Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=anjjT0lg; spf=pass (imf24.hostedemail.com: domain of 3yOoVYwYKCBU163yzC19916z.x97638FI-775Gvx5.9C1@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3yOoVYwYKCBU163yzC19916z.x97638FI-775Gvx5.9C1@flex--glider.bounces.google.com; dmarc=pass 
(policy=reject) header.from=google.com X-Rspamd-Server: rspam07 X-HE-Tag: 1662380745-465740 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmsan_init_shadow() scans the mappings created at boot time and creates metadata pages for those mappings. When the memblock allocator returns pages to pagealloc, we reserve 2/3 of those pages and use them as metadata for the remaining 1/3. Once KMSAN starts, every page allocated by pagealloc has its associated shadow and origin pages. kmsan_initialize() initializes the bookkeeping for init_task and enables KMSAN. Signed-off-by: Alexander Potapenko --- v2: -- move mm/kmsan/init.c and kmsan_memblock_free_pages() to this patch -- print a warning that KMSAN is a debugging tool (per Greg K-H's request) v4: -- change sizeof(type) to sizeof(*ptr) -- replace occurrences of |var| with @var -- swap init: and kmsan: in the subject v5: -- address Marco Elver's comments -- don't export initialization routines -- use modern style for-loops -- better name for struct page_pair -- delete duplicate function prototypes Link: https://linux-review.googlesource.com/id/I7bc53706141275914326df2345881ffe0cdd16bd --- include/linux/kmsan.h | 36 +++++++ init/main.c | 3 + mm/kmsan/Makefile | 3 +- mm/kmsan/init.c | 235 ++++++++++++++++++++++++++++++++++++++++++ mm/kmsan/kmsan.h | 3 + mm/kmsan/shadow.c | 34 ++++++ mm/page_alloc.c | 4 + 7 files changed, 317 insertions(+), 1 deletion(-) create mode 100644 mm/kmsan/init.c diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 354aee6f7b1a2..e00de976ee438 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -31,6 +31,28 @@ void kmsan_task_create(struct task_struct *task); */ void kmsan_task_exit(struct task_struct *task); +/** + * kmsan_init_shadow() - Initialize KMSAN shadow at boot time. + * + * Allocate and initialize KMSAN metadata for early allocations. + */ +void __init kmsan_init_shadow(void); + +/** + * kmsan_init_runtime() - Initialize KMSAN state and enable KMSAN. + */ +void __init kmsan_init_runtime(void); + +/** + * kmsan_memblock_free_pages() - handle freeing of memblock pages. + * @page: struct page to free. + * @order: order of @page. + * + * Freed pages are either returned to buddy allocator or held back to be used + * as metadata pages. + */ +bool __init kmsan_memblock_free_pages(struct page *page, unsigned int order); + /** * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. * @page: struct page pointer returned by alloc_pages(). @@ -152,6 +174,20 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); #else +static inline void kmsan_init_shadow(void) +{ +} + +static inline void kmsan_init_runtime(void) +{ +} + +static inline bool kmsan_memblock_free_pages(struct page *page, + unsigned int order) +{ + return true; +} + static inline void kmsan_task_create(struct task_struct *task) { } diff --git a/init/main.c b/init/main.c index 1fe7942f5d4a8..3afed7bf9f683 100644 --- a/init/main.c +++ b/init/main.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -836,6 +837,7 @@ static void __init mm_init(void) init_mem_debugging_and_hardening(); kfence_alloc_pool(); report_meminit(); + kmsan_init_shadow(); stack_depot_early_init(); mem_init(); mem_init_print_info(); @@ -853,6 +855,7 @@ static void __init mm_init(void) init_espfix_bsp(); /* Should be run after espfix64 is set up. 
*/ pti_init(); + kmsan_init_runtime(); } #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index 550ad8625e4f9..401acb1a491ce 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -3,7 +3,7 @@ # Makefile for KernelMemorySanitizer (KMSAN). # # -obj-y := core.o instrumentation.o hooks.o report.o shadow.o +obj-y := core.o instrumentation.o init.o hooks.o report.o shadow.o KMSAN_SANITIZE := n KCOV_INSTRUMENT := n @@ -18,6 +18,7 @@ CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c new file mode 100644 index 0000000000000..7fb794242fad0 --- /dev/null +++ b/mm/kmsan/init.c @@ -0,0 +1,235 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN initialization routines. + * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" + +#include +#include +#include + +#include "../internal.h" + +#define NUM_FUTURE_RANGES 128 +struct start_end_pair { + u64 start, end; +}; + +static struct start_end_pair start_end_pairs[NUM_FUTURE_RANGES] __initdata; +static int future_index __initdata; + +/* + * Record a range of memory for which the metadata pages will be created once + * the page allocator becomes available. + */ +static void __init kmsan_record_future_shadow_range(void *start, void *end) +{ + u64 nstart = (u64)start, nend = (u64)end, cstart, cend; + bool merged = false; + + KMSAN_WARN_ON(future_index == NUM_FUTURE_RANGES); + KMSAN_WARN_ON((nstart >= nend) || !nstart || !nend); + nstart = ALIGN_DOWN(nstart, PAGE_SIZE); + nend = ALIGN(nend, PAGE_SIZE); + + /* + * Scan the existing ranges to see if any of them overlaps with + * [start, end). In that case, merge the two ranges instead of + * creating a new one. + * The number of ranges is less than 20, so there is no need to organize + * them into a more intelligent data structure. + */ + for (int i = 0; i < future_index; i++) { + cstart = start_end_pairs[i].start; + cend = start_end_pairs[i].end; + if ((cstart < nstart && cend < nstart) || + (cstart > nend && cend > nend)) + /* ranges are disjoint - do not merge */ + continue; + start_end_pairs[i].start = min(nstart, cstart); + start_end_pairs[i].end = max(nend, cend); + merged = true; + break; + } + if (merged) + return; + start_end_pairs[future_index].start = nstart; + start_end_pairs[future_index].end = nend; + future_index++; +} + +/* + * Initialize the shadow for existing mappings during kernel initialization. + * These include kernel text/data sections, NODE_DATA and future ranges + * registered while creating other data (e.g. percpu). + * + * Allocations via memblock can be only done before slab is initialized. 
+ */ +void __init kmsan_init_shadow(void) +{ + const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE); + phys_addr_t p_start, p_end; + u64 loop; + int nid; + + for_each_reserved_mem_range(loop, &p_start, &p_end) + kmsan_record_future_shadow_range(phys_to_virt(p_start), + phys_to_virt(p_end)); + /* Allocate shadow for .data */ + kmsan_record_future_shadow_range(_sdata, _edata); + + for_each_online_node(nid) + kmsan_record_future_shadow_range( + NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size); + + for (int i = 0; i < future_index; i++) + kmsan_init_alloc_meta_for_range( + (void *)start_end_pairs[i].start, + (void *)start_end_pairs[i].end); +} + +struct metadata_page_pair { + struct page *shadow, *origin; +}; +static struct metadata_page_pair held_back[MAX_ORDER] __initdata; + +/* + * Eager metadata allocation. When the memblock allocator is freeing pages to + * pagealloc, we use 2/3 of them as metadata for the remaining 1/3. + * We store the pointers to the returned blocks of pages in held_back[] grouped + * by their order: when kmsan_memblock_free_pages() is called for the first + * time with a certain order, it is reserved as a shadow block, for the second + * time - as an origin block. On the third time the incoming block receives its + * shadow and origin ranges from the previously saved shadow and origin blocks, + * after which held_back[order] can be used again. + * + * At the very end there may be leftover blocks in held_back[]. They are + * collected later by kmsan_memblock_discard(). + */ +bool kmsan_memblock_free_pages(struct page *page, unsigned int order) +{ + struct page *shadow, *origin; + + if (!held_back[order].shadow) { + held_back[order].shadow = page; + return false; + } + if (!held_back[order].origin) { + held_back[order].origin = page; + return false; + } + shadow = held_back[order].shadow; + origin = held_back[order].origin; + kmsan_setup_meta(page, shadow, origin, order); + + held_back[order].shadow = NULL; + held_back[order].origin = NULL; + return true; +} + +#define MAX_BLOCKS 8 +struct smallstack { + struct page *items[MAX_BLOCKS]; + int index; + int order; +}; + +static struct smallstack collect = { + .index = 0, + .order = MAX_ORDER, +}; + +static void smallstack_push(struct smallstack *stack, struct page *pages) +{ + KMSAN_WARN_ON(stack->index == MAX_BLOCKS); + stack->items[stack->index] = pages; + stack->index++; +} +#undef MAX_BLOCKS + +static struct page *smallstack_pop(struct smallstack *stack) +{ + struct page *ret; + + KMSAN_WARN_ON(stack->index == 0); + stack->index--; + ret = stack->items[stack->index]; + stack->items[stack->index] = NULL; + return ret; +} + +static void do_collection(void) +{ + struct page *page, *shadow, *origin; + + while (collect.index >= 3) { + page = smallstack_pop(&collect); + shadow = smallstack_pop(&collect); + origin = smallstack_pop(&collect); + kmsan_setup_meta(page, shadow, origin, collect.order); + __free_pages_core(page, collect.order); + } +} + +static void collect_split(void) +{ + struct smallstack tmp = { + .order = collect.order - 1, + .index = 0, + }; + struct page *page; + + if (!collect.order) + return; + while (collect.index) { + page = smallstack_pop(&collect); + smallstack_push(&tmp, &page[0]); + smallstack_push(&tmp, &page[1 << tmp.order]); + } + __memcpy(&collect, &tmp, sizeof(tmp)); +} + +/* + * Memblock is about to go away. Split the page blocks left over in held_back[] + * and return 1/3 of that memory to the system. 
+ */ +static void kmsan_memblock_discard(void) +{ + /* + * For each order=N: + * - push held_back[N].shadow and .origin to @collect; + * - while there are >= 3 elements in @collect, do garbage collection: + * - pop 3 ranges from @collect; + * - use two of them as shadow and origin for the third one; + * - repeat; + * - split each remaining element from @collect into 2 ranges of + * order=N-1, + * - repeat. + */ + collect.order = MAX_ORDER - 1; + for (int i = MAX_ORDER - 1; i >= 0; i--) { + if (held_back[i].shadow) + smallstack_push(&collect, held_back[i].shadow); + if (held_back[i].origin) + smallstack_push(&collect, held_back[i].origin); + held_back[i].shadow = NULL; + held_back[i].origin = NULL; + do_collection(); + collect_split(); + } +} + +void __init kmsan_init_runtime(void) +{ + /* Assuming current is init_task */ + kmsan_internal_task_create(current); + kmsan_memblock_discard(); + pr_info("Starting KernelMemorySanitizer\n"); + pr_info("ATTENTION: KMSAN is a debugging tool! Do not use it on production machines!\n"); + kmsan_enabled = true; +} diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h index 04954b83c5d65..e064b4601af9d 100644 --- a/mm/kmsan/kmsan.h +++ b/mm/kmsan/kmsan.h @@ -66,6 +66,7 @@ struct shadow_origin_ptr { struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, bool store); void *kmsan_get_metadata(void *addr, bool is_origin); +void __init kmsan_init_alloc_meta_for_range(void *start, void *end); enum kmsan_bug_reason { REASON_ANY, @@ -186,6 +187,8 @@ void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, int reason); struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order); /* * kmsan_internal_is_module_addr() and kmsan_internal_is_vmalloc_addr() are diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index 8c81a059beea6..6e90a806a7045 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -258,3 +258,37 @@ void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, kfree(s_pages); kfree(o_pages); } + +/* Allocate metadata for pages allocated at boot time. 
*/ +void __init kmsan_init_alloc_meta_for_range(void *start, void *end) +{ + struct page *shadow_p, *origin_p; + void *shadow, *origin; + struct page *page; + u64 size; + + start = (void *)ALIGN_DOWN((u64)start, PAGE_SIZE); + size = ALIGN((u64)end - (u64)start, PAGE_SIZE); + shadow = memblock_alloc(size, PAGE_SIZE); + origin = memblock_alloc(size, PAGE_SIZE); + for (u64 addr = 0; addr < size; addr += PAGE_SIZE) { + page = virt_to_page_or_null((char *)start + addr); + shadow_p = virt_to_page_or_null((char *)shadow + addr); + set_no_shadow_origin_page(shadow_p); + shadow_page_for(page) = shadow_p; + origin_p = virt_to_page_or_null((char *)origin + addr); + set_no_shadow_origin_page(origin_p); + origin_page_for(page) = origin_p; + } +} + +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order) +{ + for (int i = 0; i < (1 << order); i++) { + set_no_shadow_origin_page(&shadow[i]); + set_no_shadow_origin_page(&origin[i]); + shadow_page_for(&page[i]) = &shadow[i]; + origin_page_for(&page[i]) = &origin[i]; + } +} diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d488dab76a6e8..b28093e3bb42a 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1806,6 +1806,10 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn, { if (early_page_uninitialised(pfn)) return; + if (!kmsan_memblock_free_pages(page, order)) { + /* KMSAN will take care of these pages. */ + return; + } __free_pages_core(page, order); } From patchwork Mon Sep 5 12:24:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966049 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 368F6C6FA83 for ; Mon, 5 Sep 2022 12:25:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C4A578D0079; Mon, 5 Sep 2022 08:25:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BF9328D0076; Mon, 5 Sep 2022 08:25:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A99F38D0079; Mon, 5 Sep 2022 08:25:48 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 9B5E08D0076 for ; Mon, 5 Sep 2022 08:25:48 -0400 (EDT) Received: from smtpin23.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 7829CC0D2A for ; Mon, 5 Sep 2022 12:25:48 +0000 (UTC) X-FDA: 79877953176.23.159C1F1 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf19.hostedemail.com (Postfix) with ESMTP id 20BE31A0055 for ; Mon, 5 Sep 2022 12:25:47 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id qf22-20020a1709077f1600b00741638c5f3cso2290886ejc.23 for ; Mon, 05 Sep 2022 05:25:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=78o9fcmZe1rakLyEcoaxPY9Q3UwZCgWFE2+savdbFCc=; b=AsRZmg4FcaRnkB2xKxGrm6iL2iDGcXmhS0tj+V1yFF2aDqOfpFXiHIRCT6zqVsW0aD N5h5Pb+5EyE1FTqgt8cLBFPdqtvKIDtXWqRbvYO/ATFWFWd2HAeTUbRSL9GKhNVVG3UR +Cz38urRZj9ZruT9hAA3LKKRoer80w2xrOai37ChA1Od6371h/dioMQgPIKGmaf+yNq2 
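The 2/3 vs. 1/3 split implemented by kmsan_memblock_free_pages() above boils down to a three-call cycle per page order: the first two blocks of a given order are held back as future shadow and origin, and only every third block actually reaches the page allocator, carrying the two held blocks as its metadata. The stand-alone model below (plain C; all names are invented for illustration, and none of this is code from the series) mimics that cycle under those assumptions:

#include <stdbool.h>
#include <stdio.h>

#define MODEL_MAX_ORDER 11

struct model_block { int id; };

static struct model_block *held_shadow[MODEL_MAX_ORDER];
static struct model_block *held_origin[MODEL_MAX_ORDER];

/* Returns true if @b reaches the (modeled) page allocator. */
static bool model_memblock_free_pages(struct model_block *b, int order)
{
	if (!held_shadow[order]) {
		held_shadow[order] = b;	/* first block of this order: future shadow */
		return false;
	}
	if (!held_origin[order]) {
		held_origin[order] = b;	/* second block: future origin */
		return false;
	}
	/* Third block: give it the two held blocks as metadata and free it. */
	printf("freeing block %d (shadow %d, origin %d)\n",
	       b->id, held_shadow[order]->id, held_origin[order]->id);
	held_shadow[order] = NULL;
	held_origin[order] = NULL;
	return true;
}

int main(void)
{
	struct model_block blocks[6] = { { 0 }, { 1 }, { 2 }, { 3 }, { 4 }, { 5 } };

	for (int i = 0; i < 6; i++)
		model_memblock_free_pages(&blocks[i], 0);
	return 0;
}

Out of every six same-order blocks freed by memblock, only two are reported as freed here, which matches the "1/3 reaches pagealloc" figure in the description above; the leftovers (held_back[] in the real code) are what kmsan_memblock_discard() later splits and returns to the system.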
ppD8ItTWVHSMEd1mMuPT2G7k7EIkowWAHt4nuZyfb6OsxakVTglc/JRDQZk9/I+LgGq5 2tgG5D4XNS/pj0QT5fzRb+TJYhHEDdcxlyJc0krsiHxzHfX0aka2JbETQ+vXADss1MTT Brbw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=78o9fcmZe1rakLyEcoaxPY9Q3UwZCgWFE2+savdbFCc=; b=EWrURVsf1JzA6JR5V9uMVTtvY5J3EGsMXpLDe/iVWdmGfHpC6CxKY5DHp0ISgQ44n7 RcZcvsQnIET1JxuyDXU689+RM6Rhhx8MEerBWkap/H1u8OQfMXC2XYwOlDk/+herk0eH /R61f4EElQkG7KPyGJJNbuRaE/NZ90PdQX/KJjBxTRkwOQvtkmJxoqD9m1kp1oBC4XfE X0e5jwlugIle8+cfRH4mnd06XEaC9Aspm6VkbUd4OeipuG3cdp3KmpGHiD/LojbMr8d2 /RWHe5wfanvFfaRXV77JsLM6RbPoPO4QQ7LOo7FXZSBMcs1LvoblnCkGfQj6arOlyrY1 7B3Q== X-Gm-Message-State: ACgBeo1TL8q8TLPyXZQaeXnpLyPFhe0l+4BGYUWCx/734VjpEBB9Ji2G yOkGiTFWwsJ39d7UKqm9MYZTAV3z/Bg= X-Google-Smtp-Source: AA6agR6O+LsZ4k4byGii1HsmWG4VXEYNBtCLaymsFKv9VXP42ACoLM9ggDG8gs81G7Gmde8sHdwK3hc4QxA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:906:9bd9:b0:73d:da74:120c with SMTP id de25-20020a1709069bd900b0073dda74120cmr32752711ejc.412.1662380746896; Mon, 05 Sep 2022 05:25:46 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:26 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-19-glider@google.com> Subject: [PATCH v6 18/44] instrumented.h: add KMSAN support From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380748; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=78o9fcmZe1rakLyEcoaxPY9Q3UwZCgWFE2+savdbFCc=; b=E6Mui1axoSriPN/mD7ejfYiwaq4HoNkMupU6E63mfAxRsT+BJKxY/hkQ3GR1+5EpftwvrP YBV5EeNYfMq67FvIc2nqHVb9lYT0+XkHuUSGoeubYB3BdbKTW43KWEpxMnNdTgTDXw79LY dzLZhDt+MVy3szn29KeX2t5XRIYKEu8= ARC-Authentication-Results: i=1; imf19.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=AsRZmg4F; spf=pass (imf19.hostedemail.com: domain of 3yuoVYwYKCBc38501E3BB381.zB985AHK-997Ixz7.BE3@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3yuoVYwYKCBc38501E3BB381.zB985AHK-997Ixz7.BE3@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380748; a=rsa-sha256; cv=none; b=C2BmCsMFgWJFREAq7HJ+6MDoV6iFhSodUqxfoE/i1NWfKZ23KZ+PaKGG08BaijIbid9Y9/ 8Kw2iVH6WsD2n7MyUSL9xquYRiitK9JadLuZl42HPDJyehDVRBPgCGoSEN9e8rBI57Wclw ZSDUYTBFDGNnt2x5vYbu0w6eXHGCxGA= X-Rspam-User: X-Stat-Signature: 6h4fj5mpbeyqtgziw4gjpj5uyjhz6w9z X-Rspamd-Queue-Id: 20BE31A0055 Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=AsRZmg4F; spf=pass (imf19.hostedemail.com: domain of 3yuoVYwYKCBc38501E3BB381.zB985AHK-997Ixz7.BE3@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3yuoVYwYKCBc38501E3BB381.zB985AHK-997Ixz7.BE3@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam04 X-HE-Tag: 1662380747-606307 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid false positives, KMSAN needs to unpoison the data copied from the userspace. To detect infoleaks - check the memory buffer passed to copy_to_user(). Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver --- v2: -- move implementation of kmsan_copy_to_user() here v5: -- simplify kmsan_copy_to_user() -- provide instrument_get_user() and instrument_put_user() v6: -- rebase after changing "x86: asm: instrument usercopy in get_user() and put_user()" Link: https://linux-review.googlesource.com/id/I43e93b9c02709e6be8d222342f1b044ac8bdbaaf --- include/linux/instrumented.h | 18 ++++++++++++----- include/linux/kmsan-checks.h | 19 ++++++++++++++++++ mm/kmsan/hooks.c | 38 ++++++++++++++++++++++++++++++++++++ 3 files changed, 70 insertions(+), 5 deletions(-) diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h index 9f1dba8f717b0..501fa84867494 100644 --- a/include/linux/instrumented.h +++ b/include/linux/instrumented.h @@ -2,7 +2,7 @@ /* * This header provides generic wrappers for memory access instrumentation that - * the compiler cannot emit for: KASAN, KCSAN. + * the compiler cannot emit for: KASAN, KCSAN, KMSAN. 
*/ #ifndef _LINUX_INSTRUMENTED_H #define _LINUX_INSTRUMENTED_H @@ -10,6 +10,7 @@ #include #include #include +#include #include /** @@ -117,6 +118,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n) { kasan_check_read(from, n); kcsan_check_read(from, n); + kmsan_copy_to_user(to, from, n, 0); } /** @@ -151,6 +153,7 @@ static __always_inline void instrument_copy_from_user_after(const void *to, const void __user *from, unsigned long n, unsigned long left) { + kmsan_unpoison_memory(to, n - left); } /** @@ -162,10 +165,14 @@ instrument_copy_from_user_after(const void *to, const void __user *from, * * @to destination variable, may not be address-taken */ -#define instrument_get_user(to) \ -({ \ +#define instrument_get_user(to) \ +({ \ + u64 __tmp = (u64)(to); \ + kmsan_unpoison_memory(&__tmp, sizeof(__tmp)); \ + to = __tmp; \ }) + /** * instrument_put_user() - add instrumentation to put_user()-like macros * @@ -177,8 +184,9 @@ instrument_copy_from_user_after(const void *to, const void __user *from, * @ptr userspace pointer to copy to * @size number of bytes to copy */ -#define instrument_put_user(from, ptr, size) \ -({ \ +#define instrument_put_user(from, ptr, size) \ +({ \ + kmsan_copy_to_user(ptr, &from, sizeof(from), 0); \ }) #endif /* _LINUX_INSTRUMENTED_H */ diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h index a6522a0c28df9..c4cae333deec5 100644 --- a/include/linux/kmsan-checks.h +++ b/include/linux/kmsan-checks.h @@ -46,6 +46,21 @@ void kmsan_unpoison_memory(const void *address, size_t size); */ void kmsan_check_memory(const void *address, size_t size); +/** + * kmsan_copy_to_user() - Notify KMSAN about a data transfer to userspace. + * @to: destination address in the userspace. + * @from: source address in the kernel. + * @to_copy: number of bytes to copy. + * @left: number of bytes not copied. + * + * If this is a real userspace data transfer, KMSAN checks the bytes that were + * actually copied to ensure there was no information leak. If @to belongs to + * the kernel space (which is possible for compat syscalls), KMSAN just copies + * the metadata. + */ +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, + size_t left); + #else static inline void kmsan_poison_memory(const void *address, size_t size, @@ -58,6 +73,10 @@ static inline void kmsan_unpoison_memory(const void *address, size_t size) static inline void kmsan_check_memory(const void *address, size_t size) { } +static inline void kmsan_copy_to_user(void __user *to, const void *from, + size_t to_copy, size_t left) +{ +} #endif diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 6f3e64b0b61f8..5c0eb25d984d7 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -205,6 +205,44 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end) kmsan_leave_runtime(); } +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, + size_t left) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * At this point we've copied the memory already. It's hard to check it + * before copying, as the size of actually copied buffer is unknown. + */ + + /* copy_to_user() may copy zero bytes. No need to check. */ + if (!to_copy) + return; + /* Or maybe copy_to_user() failed to copy anything. */ + if (to_copy <= left) + return; + + ua_flags = user_access_save(); + if ((u64)to < TASK_SIZE) { + /* This is a user memory access, check it. 
*/ + kmsan_internal_check_memory((void *)from, to_copy - left, to, + REASON_COPY_TO_USER); + } else { + /* Otherwise this is a kernel memory access. This happens when a + * compat syscall passes an argument allocated on the kernel + * stack to a real syscall. + * Don't check anything, just copy the shadow of the copied + * bytes. + */ + kmsan_internal_memmove_metadata((void *)to, (void *)from, + to_copy - left); + } + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_copy_to_user); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { From patchwork Mon Sep 5 12:24:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966050 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 336F5ECAAD5 for ; Mon, 5 Sep 2022 12:25:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id BFBF28D007A; Mon, 5 Sep 2022 08:25:51 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BAD4B8D0076; Mon, 5 Sep 2022 08:25:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A4D1A8D007A; Mon, 5 Sep 2022 08:25:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 95A1E8D0076 for ; Mon, 5 Sep 2022 08:25:51 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 72AC11C6996 for ; Mon, 5 Sep 2022 12:25:51 +0000 (UTC) X-FDA: 79877953302.12.545DABF Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf08.hostedemail.com (Postfix) with ESMTP id 1A197160076 for ; Mon, 5 Sep 2022 12:25:50 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id jg40-20020a170907972800b0074148cdc7baso2257468ejc.12 for ; Mon, 05 Sep 2022 05:25:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=JDbXYbJWwPmgAZI0cqV5N9fdafuEA5LdQ6bM1NbJ2qU=; b=FI/NN8FVkqveVSoB6HG4Dkazuf6XYCk/EwCAfidqLj4dNPa0BRhM4P5S4TqXDGIZaI qrEKGE+r+7Jt5yWEty94dld142fuFuiBwTfOiFHQUY5nxSHJ+D8ob7Cf47AnhjKOGoR5 Bklr3XpZrOoyXRce7z6Tw6kGBroRdXW/XXVfP6gsABeJuQtlQH3SmKvkiqAGb4lTB6cD 7eYPHMVcX5DUnPnmRo4rO7LVehmhKNFjhaZP9RF8f0ksJZ1mSX0p7FROg8hQn24pzOBV K93l7+1ykcepxF58K6O1XQpgozdC1Jvz1hxtlrAemelvnVlo8kdLN50VY5GotqXyWFJ4 nDnA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=JDbXYbJWwPmgAZI0cqV5N9fdafuEA5LdQ6bM1NbJ2qU=; b=EnPWCT/qWk/FqvBKM55DxeXIu6pAXLAfb9Audjbb3/TaPCoUSDlvGZXOiyOsCtzkYO aGE5U3GasHoyjkAlPbPFYXFYRrGrOt19MW+kj1MnKXLZ4noWcdjrDu2c7MTdqd18jka6 7mKgFzMm07jmSoYc4RVDQrf5izC1+gqqvZp0M0oxdSeQfvhxuBNHedapRnAu7g286D5M /I4E6CHM+JrhZIDFu3uELiEx1gakHBPC7kBOTPoJSRa2zouJcA5JDh9XAQvBNEKSUYVg qry9g9DlHf9JOKFlL1P4ptLHPT+ejfxJxDw52aC1WYBM8h0u7GjX3JVKQvevYdL9LSpg kYNQ== X-Gm-Message-State: ACgBeo3EJ+LiZGHHgazq7tW829PIWhoeYc6nQB61HwB1mEXwnOW/mM45 wKalWa8/O+1/BaUXmYXZ6Dy6qbquieY= X-Google-Smtp-Source: 
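In the kmsan_copy_to_user() hunk above, @left follows the same convention as the return value of copy_to_user(): the number of bytes that were not transferred, so only the to_copy - left bytes that actually crossed the boundary matter. A minimal caller-side sketch of how the check is reached, assuming an invented example_cfg structure and helper name (this is not code from the series):

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/uaccess.h>

struct example_cfg {
	u32 version;
	u32 flags;
};

static int example_read_cfg(void __user *ubuf, const struct example_cfg *cfg)
{
	unsigned long left;

	/* copy_to_user() returns the number of bytes NOT copied. */
	left = copy_to_user(ubuf, cfg, sizeof(*cfg));
	/*
	 * The usercopy path runs instrument_copy_to_user(), i.e.
	 * kmsan_copy_to_user(ubuf, cfg, sizeof(*cfg), 0); if the bytes of
	 * *cfg handed to userspace are uninitialized, KMSAN reports a
	 * kernel-to-userspace infoleak at this call site.
	 */
	return left ? -EFAULT : 0;
}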
AA6agR6EtI+wppo4BmYhDO6R8PujqfTGTJqoZccyFkAtKf4bzdO9HJQ/6nSdJYuTNsjvrOtwDoN6L9AHrrA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:7242:b0:741:7cd6:57d5 with SMTP id ds2-20020a170907724200b007417cd657d5mr25769275ejc.419.1662380749795; Mon, 05 Sep 2022 05:25:49 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:27 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-20-glider@google.com> Subject: [PATCH v6 19/44] kmsan: unpoison @tlb in arch_tlb_gather_mmu() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="FI/NN8FV"; spf=pass (imf08.hostedemail.com: domain of 3zeoVYwYKCBo6B834H6EE6B4.2ECB8DKN-CCAL02A.EH6@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3zeoVYwYKCBo6B834H6EE6B4.2ECB8DKN-CCAL02A.EH6@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380751; a=rsa-sha256; cv=none; b=mDAGvOQXFYKK1Lng5gslH1ni615rYxkcfs1AZ/y73rQAU4V86+EG/TV8EIDVErEsWlqofw c0usw+2O3NDModgse+1ayRJ8QOp8jin2DF2MZf/BQrACf4X8O3aHwOWHx8DK0s6c8QLI2F yE7oG5Eq6HR2tiO9jSAzr8CVjOlkqck= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380751; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=JDbXYbJWwPmgAZI0cqV5N9fdafuEA5LdQ6bM1NbJ2qU=; b=G5qNKwzAmGmFpnWkrKL80XRaOUwCq9oEZqavkDSVdeoYd+TH6/iCvZGuV7bSSXZOZfdxVn hXKUsirmR3Dutf464PCtIPSqiBp9uGao/7CFNSkHXEy5JYLjT9aDKh6HBtU5Cc1VJle3oz y6H5R0Ke3i9C2TN61M3bb7wAvkh7tTA= X-Rspamd-Server: rspam02 X-Rspam-User: Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="FI/NN8FV"; spf=pass (imf08.hostedemail.com: domain of 3zeoVYwYKCBo6B834H6EE6B4.2ECB8DKN-CCAL02A.EH6@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3zeoVYwYKCBo6B834H6EE6B4.2ECB8DKN-CCAL02A.EH6@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Stat-Signature: qiehk8ny41s49shajzfedh3stfuehqus X-Rspamd-Queue-Id: 1A197160076 X-HE-Tag: 1662380750-689089 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is an optimization to reduce stackdepot pressure. struct mmu_gather contains 7 1-bit fields packed into a 32-bit unsigned int value. 
The remaining 25 bits remain uninitialized and are never used, but KMSAN updates the origin for them in zap_pXX_range() in mm/memory.c, thus creating very long origin chains. This is technically correct, but consumes too much memory. Unpoisoning the whole structure will prevent creating such chains. Signed-off-by: Alexander Potapenko Acked-by: Marco Elver --- v5: -- updated description as suggested by Marco Elver Link: https://linux-review.googlesource.com/id/I76abee411b8323acfdbc29bc3a60dca8cff2de77 --- mm/mmu_gather.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index a71924bd38c0d..add4244e5790d 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -1,6 +1,7 @@ #include #include #include +#include #include #include #include @@ -265,6 +266,15 @@ void tlb_flush_mmu(struct mmu_gather *tlb) static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) { + /* + * struct mmu_gather contains 7 1-bit fields packed into a 32-bit + * unsigned int value. The remaining 25 bits remain uninitialized + * and are never used, but KMSAN updates the origin for them in + * zap_pXX_range() in mm/memory.c, thus creating very long origin + * chains. This is technically correct, but consumes too much memory. + * Unpoisoning the whole structure will prevent creating such chains. + */ + kmsan_unpoison_memory(tlb, sizeof(*tlb)); tlb->mm = mm; tlb->fullmm = fullmm; From patchwork Mon Sep 5 12:24:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966051 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C26E4ECAAD5 for ; Mon, 5 Sep 2022 12:25:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 62FE38D007B; Mon, 5 Sep 2022 08:25:54 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5DFAF8D0076; Mon, 5 Sep 2022 08:25:54 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 458E28D007B; Mon, 5 Sep 2022 08:25:54 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 3573A8D0076 for ; Mon, 5 Sep 2022 08:25:54 -0400 (EDT) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 1944B1C6996 for ; Mon, 5 Sep 2022 12:25:54 +0000 (UTC) X-FDA: 79877953428.29.0A13FE6 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf08.hostedemail.com (Postfix) with ESMTP id C26E3160077 for ; Mon, 5 Sep 2022 12:25:53 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id f14-20020a0564021e8e00b00448da245f25so5686433edf.18 for ; Mon, 05 Sep 2022 05:25:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=eGWHmCncvYGtki7dpdJUZ+ZMr1uc51g9yFEsa5ToQ1k=; b=Wp8EteLuPeDwNYvlW3YQUA/1PnC9FXkAltlzdPYIYxfirlk5aYGMnIBi+AD8+KCaMp Du+PHFpOh72HJVqQNFt10aYA+nrS/1qtnQhEz2iYAhaIsyZqTVzn6auLSX4y3F3T8WGQ wo6A9iOs2qyrCz/UZxr/10/nFx9CekSqnzxze5uH2stnp/W0ubtlfhNnCiALkfvq0Qo1 1p0pfPISgYMWzzRhl2BgDkipynYJCvXfm3BK2waZiK9PVDMCD6TkNIHYstXcFs+ZNEH8 
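To see where the 25 uninitialized bits in the mmu_gather patch above come from without digging into mm/ internals, consider the invented structure below: seven one-bit fields share a single 32-bit allocation unit, and the bits that no code ever writes keep their uninitialized shadow, so every copy of that word would otherwise grow the origin chain. Unpoisoning the whole object once, as __tlb_gather_mmu() now does, settles those bits up front. This is only an illustrative sketch, not struct mmu_gather itself:

#include <linux/kmsan-checks.h>

/* Invented example structure, not struct mmu_gather. */
struct packed_flags {
	unsigned int a : 1;
	unsigned int b : 1;
	unsigned int c : 1;
	unsigned int d : 1;
	unsigned int e : 1;
	unsigned int f : 1;
	unsigned int g : 1;
	/*
	 * The seven bit-fields above occupy one 32-bit unit; the remaining
	 * 25 bits are storage that the code below never writes.
	 */
};

static void packed_flags_init(struct packed_flags *p)
{
	/* Covers the 25 unused bits as well as the named fields. */
	kmsan_unpoison_memory(p, sizeof(*p));
	p->a = p->b = p->c = p->d = p->e = p->f = p->g = 0;
}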
wUHGWOsdJpaxRs/RWpDIfJK0SedwNcC/QoMqimEHUYGkvHqnc3NMF2jA1cj1jko35e3H Vmbw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=eGWHmCncvYGtki7dpdJUZ+ZMr1uc51g9yFEsa5ToQ1k=; b=GKnppIGSTa8dQrT6rwTVfoQgh+lw9AosGfl6uC1y8eoRJ2/5oV7EPP6t3exchNHs/+ XGAUy4GKP6prKEwHH7dxJsS6xp5wS/CP0cYobncVxqryWf/vmpeBkUbQle9xb+o2C3Ga jUWe2y5ehlFCpooXMUHQ01XcRzoN6gAwdbllbWk0Us9WgSZkRzcFqtwfUFyg2qAQTocl ee69PpxYZ7Bj2WitegjFj6tCd8tMrAcDKMJa+lR3CUGW+T8q77j0Mym9ezk7YSLHTQkK bAjfZF2KMkYPKZptQ3pqtRFk8ocKL6KfSlZFGw3w55gYdgDMi/WETtMpB8IseuaX1MnD 85oA== X-Gm-Message-State: ACgBeo21DtvJIsaPbcePjGtK0jZpCXls3dM8Qfwnnjvq0X8ElsXGPnds mkZyb0nrZ5UxG9wt0HSgdz577YTe5s4= X-Google-Smtp-Source: AA6agR5r3+ZGhwnIHp0DlIsh8NgF1hg9AH+pInOgGPuCgr+LeGibT7oCrJ1b9agMxP829uvu6kRY+FW1xtI= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:2549:b0:448:6db8:9d83 with SMTP id l9-20020a056402254900b004486db89d83mr30507509edb.194.1662380752652; Mon, 05 Sep 2022 05:25:52 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:28 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-21-glider@google.com> Subject: [PATCH v6 20/44] kmsan: add iomap support From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380753; a=rsa-sha256; cv=none; b=DvvZkdCqEfIpBRR3imyUYlKGxO+ergJF0JsrdkYygATHAwySUWiGyObNFL3xfKUQpYK3Gg ka3+bIMriI/EjnHnPj5TRyR8e8X9OkGMUJlpfaAgFs3VL1gIcCK9PIaNdRpL02st2Fg2K/ ArMBBlBulv3foMaUdbgsctaYMFlFL0M= ARC-Authentication-Results: i=1; imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Wp8EteLu; spf=pass (imf08.hostedemail.com: domain of 30OoVYwYKCB09EB67K9HH9E7.5HFEBGNQ-FFDO35D.HK9@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=30OoVYwYKCB09EB67K9HH9E7.5HFEBGNQ-FFDO35D.HK9@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380753; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=eGWHmCncvYGtki7dpdJUZ+ZMr1uc51g9yFEsa5ToQ1k=; b=NEWcPMDY1xvtV4lmzft6uUJHm7Tz2r/2yXtXMUCusb7EieJR+s6zct7F2ZDF0ES4rK+N7o +fjN8+HvZmayt+0pHPMcLVD/QX5eXCB/8qefJhNzHNFCnhsIgzFDgTP/FVdex9cQnsIO04 q4Jnic93WPnvtZD2LyAN+uSmm6AW9Lk= X-Rspam-User: X-Stat-Signature: osydpknnrw8xzk5k3h9f4rbimysri4fa X-Rspamd-Queue-Id: C26E3160077 X-Rspamd-Server: rspam10 Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Wp8EteLu; spf=pass (imf08.hostedemail.com: domain of 30OoVYwYKCB09EB67K9HH9E7.5HFEBGNQ-FFDO35D.HK9@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=30OoVYwYKCB09EB67K9HH9E7.5HFEBGNQ-FFDO35D.HK9@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1662380753-591072 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Functions from lib/iomap.c interact with hardware, so KMSAN must ensure that: - every read function returns an initialized value - every write function checks values before sending them to hardware. Signed-off-by: Alexander Potapenko --- v4: -- switch from __no_sanitize_memory (which now means "no KMSAN instrumentation") to __no_kmsan_checks (i.e. "unpoison everything") Link: https://linux-review.googlesource.com/id/I45527599f09090aca046dfe1a26df453adab100d --- lib/iomap.c | 44 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 44 insertions(+) diff --git a/lib/iomap.c b/lib/iomap.c index fbaa3e8f19d6c..4f8b31baa5752 100644 --- a/lib/iomap.c +++ b/lib/iomap.c @@ -6,6 +6,7 @@ */ #include #include +#include #include @@ -70,26 +71,35 @@ static void bad_io_access(unsigned long port, const char *access) #define mmio_read64be(addr) swab64(readq(addr)) #endif +/* + * Here and below, we apply __no_kmsan_checks to functions reading data from + * hardware, to ensure that KMSAN marks their return values as initialized. 
+ */ +__no_kmsan_checks unsigned int ioread8(const void __iomem *addr) { IO_COND(addr, return inb(port), return readb(addr)); return 0xff; } +__no_kmsan_checks unsigned int ioread16(const void __iomem *addr) { IO_COND(addr, return inw(port), return readw(addr)); return 0xffff; } +__no_kmsan_checks unsigned int ioread16be(const void __iomem *addr) { IO_COND(addr, return pio_read16be(port), return mmio_read16be(addr)); return 0xffff; } +__no_kmsan_checks unsigned int ioread32(const void __iomem *addr) { IO_COND(addr, return inl(port), return readl(addr)); return 0xffffffff; } +__no_kmsan_checks unsigned int ioread32be(const void __iomem *addr) { IO_COND(addr, return pio_read32be(port), return mmio_read32be(addr)); @@ -142,18 +152,21 @@ static u64 pio_read64be_hi_lo(unsigned long port) return lo | (hi << 32); } +__no_kmsan_checks u64 ioread64_lo_hi(const void __iomem *addr) { IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_kmsan_checks u64 ioread64_hi_lo(const void __iomem *addr) { IO_COND(addr, return pio_read64_hi_lo(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_kmsan_checks u64 ioread64be_lo_hi(const void __iomem *addr) { IO_COND(addr, return pio_read64be_lo_hi(port), @@ -161,6 +174,7 @@ u64 ioread64be_lo_hi(const void __iomem *addr) return 0xffffffffffffffffULL; } +__no_kmsan_checks u64 ioread64be_hi_lo(const void __iomem *addr) { IO_COND(addr, return pio_read64be_hi_lo(port), @@ -188,22 +202,32 @@ EXPORT_SYMBOL(ioread64be_hi_lo); void iowrite8(u8 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outb(val,port), writeb(val, addr)); } void iowrite16(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outw(val,port), writew(val, addr)); } void iowrite16be(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write16be(val,port), mmio_write16be(val, addr)); } void iowrite32(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outl(val,port), writel(val, addr)); } void iowrite32be(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write32be(val,port), mmio_write32be(val, addr)); } EXPORT_SYMBOL(iowrite8); @@ -239,24 +263,32 @@ static void pio_write64be_hi_lo(u64 val, unsigned long port) void iowrite64_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_lo_hi(val, port), writeq(val, addr)); } void iowrite64_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_hi_lo(val, port), writeq(val, addr)); } void iowrite64be_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_lo_hi(val, port), mmio_write64be(val, addr)); } void iowrite64be_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. 
*/ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_hi_lo(val, port), mmio_write64be(val, addr)); } @@ -328,14 +360,20 @@ static inline void mmio_outsl(void __iomem *addr, const u32 *src, int count) void ioread8_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insb(port,dst,count), mmio_insb(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count); } void ioread16_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insw(port,dst,count), mmio_insw(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count * 2); } void ioread32_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insl(port,dst,count), mmio_insl(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count * 4); } EXPORT_SYMBOL(ioread8_rep); EXPORT_SYMBOL(ioread16_rep); @@ -343,14 +381,20 @@ EXPORT_SYMBOL(ioread32_rep); void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count); IO_COND(addr, outsb(port, src, count), mmio_outsb(addr, src, count)); } void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 2); IO_COND(addr, outsw(port, src, count), mmio_outsw(addr, src, count)); } void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 4); IO_COND(addr, outsl(port, src,count), mmio_outsl(addr, src, count)); } EXPORT_SYMBOL(iowrite8_rep); From patchwork Mon Sep 5 12:24:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966052 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84DB2C6FA83 for ; Mon, 5 Sep 2022 12:25:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1E8FC8D007C; Mon, 5 Sep 2022 08:25:57 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 19A788D0076; Mon, 5 Sep 2022 08:25:57 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 03A3E8D007C; Mon, 5 Sep 2022 08:25:56 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id E7DED8D0076 for ; Mon, 5 Sep 2022 08:25:56 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id C1CDDAAB61 for ; Mon, 5 Sep 2022 12:25:56 +0000 (UTC) X-FDA: 79877953512.01.63878D2 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf27.hostedemail.com (Postfix) with ESMTP id 7353D400B0 for ; Mon, 5 Sep 2022 12:25:56 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id w17-20020a056402269100b0043da2189b71so5640826edd.6 for ; Mon, 05 Sep 2022 05:25:56 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
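From a driver's perspective, the lib/iomap.c annotations above mean that values obtained through the ioread*() helpers are treated as initialized, while values passed to the iowrite*() helpers are checked first. A short usage sketch with invented register names and offsets (not taken from any real driver):

#include <linux/types.h>
#include <linux/io.h>

#define EXAMPLE_STATUS	0x00	/* invented offsets */
#define EXAMPLE_CMD	0x04
#define EXAMPLE_READY	0x1u

static void example_kick(void __iomem *regs)
{
	u32 status, cmd;

	status = ioread32(regs + EXAMPLE_STATUS);
	/* @status is considered initialized even though it came from hardware. */
	if (!(status & EXAMPLE_READY))
		return;

	cmd = 1;	/* leaving @cmd unset would now produce a KMSAN report */
	iowrite32(cmd, regs + EXAMPLE_CMD);
	/* iowrite32() checks its value argument with kmsan_check_memory() before the write. */
}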
From patchwork Mon Sep 5 12:24:29 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966052
Date: Mon, 5 Sep 2022 14:24:29 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-22-glider@google.com>
Subject: [PATCH v6 21/44] Input: libps2: mark data received in __ps2_command() as initialized
From: Alexander Potapenko
To: glider@google.com
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

KMSAN does not know that the device initializes certain bytes in
ps2dev->cmdbuf. Call kmsan_unpoison_memory() to explicitly mark them as
initialized.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I2d26f6baa45271d37320d3f4a528c39cb7e545f0
---
 drivers/input/serio/libps2.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/input/serio/libps2.c b/drivers/input/serio/libps2.c
index 250e213cc80c6..3e19344eda93c 100644
--- a/drivers/input/serio/libps2.c
+++ b/drivers/input/serio/libps2.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -294,9 +295,11 @@ int __ps2_command(struct ps2dev *ps2dev, u8 *param, unsigned int command)
 	serio_pause_rx(ps2dev->serio);
-	if (param)
+	if (param) {
 		for (i = 0; i < receive; i++)
 			param[i] = ps2dev->cmdbuf[(receive - 1) - i];
+		kmsan_unpoison_memory(param, receive);
+	}
 	if (ps2dev->cmdcnt &&
 	    (command != PS2_CMD_RESET_BAT || ps2dev->cmdcnt != 1)) {

From patchwork Mon Sep 5 12:24:30 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966053
Date: Mon, 5 Sep 2022 14:24:30 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-23-glider@google.com>
Subject: [PATCH v6 22/44] dma: kmsan: unpoison DMA mappings
From: Alexander Potapenko
To: glider@google.com
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Sender: owner-linux-mm@kvack.org
Precedence: bulk
X-Loop:
owner-majordomo@kvack.org List-ID: KMSAN doesn't know about DMA memory writes performed by devices. We unpoison such memory when it's mapped to avoid false positive reports. Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_handle_dma() and kmsan_handle_dma_sg() here v4: -- swap dma: and kmsan: int the subject v5: -- do not export KMSAN hooks that are not called from modules v6: -- add a missing #include Link: https://linux-review.googlesource.com/id/Ia162dc4c5a92e74d4686c1be32a4dfeffc5c32cd --- include/linux/kmsan.h | 41 ++++++++++++++++++++++++++++++ kernel/dma/mapping.c | 10 +++++--- mm/kmsan/hooks.c | 59 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 107 insertions(+), 3 deletions(-) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index e00de976ee438..dac296da45c55 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -9,6 +9,7 @@ #ifndef _LINUX_KMSAN_H #define _LINUX_KMSAN_H +#include #include #include #include @@ -16,6 +17,7 @@ struct page; struct kmem_cache; struct task_struct; +struct scatterlist; #ifdef CONFIG_KMSAN @@ -172,6 +174,35 @@ void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, */ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); +/** + * kmsan_handle_dma() - Handle a DMA data transfer. + * @page: first page of the buffer. + * @offset: offset of the buffer within the first page. + * @size: buffer size. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffer, if it is copied to device; + * * initializes the buffer, if it is copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir); + +/** + * kmsan_handle_dma_sg() - Handle a DMA transfer using scatterlist. + * @sg: scatterlist holding DMA buffers. + * @nents: number of scatterlist entries. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffers in the scatterlist, if they are copied to device; + * * initializes the buffers, if they are copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. 
+ */ +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir); + #else static inline void kmsan_init_shadow(void) @@ -254,6 +285,16 @@ static inline void kmsan_iounmap_page_range(unsigned long start, { } +static inline void kmsan_handle_dma(struct page *page, size_t offset, + size_t size, enum dma_data_direction dir) +{ +} + +static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 49cbf3e33de71..a8400aa9bcd4e 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -156,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); else addr = ops->map_page(dev, page, offset, size, dir, attrs); + kmsan_handle_dma(page, offset, size, dir); debug_dma_map_page(dev, page, offset, size, dir, addr, attrs); return addr; @@ -194,11 +196,13 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, else ents = ops->map_sg(dev, sg, nents, dir, attrs); - if (ents > 0) + if (ents > 0) { + kmsan_handle_dma_sg(sg, nents, dir); debug_dma_map_sg(dev, sg, nents, ents, dir, attrs); - else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && - ents != -EIO && ents != -EREMOTEIO)) + } else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && + ents != -EIO && ents != -EREMOTEIO)) { return -EIO; + } return ents; } diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 5c0eb25d984d7..563c09443a37a 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -10,10 +10,12 @@ */ #include +#include #include #include #include #include +#include #include #include @@ -243,6 +245,63 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, } EXPORT_SYMBOL(kmsan_copy_to_user); +static void kmsan_handle_dma_page(const void *addr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_BIDIRECTIONAL: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_TO_DEVICE: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + break; + case DMA_FROM_DEVICE: + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_NONE: + break; + } +} + +/* Helper function to handle DMA data transfers. */ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir) +{ + u64 page_offset, to_go, addr; + + if (PageHighMem(page)) + return; + addr = (u64)page_address(page) + offset; + /* + * The kernel may occasionally give us adjacent DMA pages not belonging + * to the same allocation. Process them separately to avoid triggering + * internal KMSAN checks. + */ + while (size > 0) { + page_offset = addr % PAGE_SIZE; + to_go = min(PAGE_SIZE - page_offset, (u64)size); + kmsan_handle_dma_page((void *)addr, to_go, dir); + addr += to_go; + size -= to_go; + } +} + +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ + struct scatterlist *item; + int i; + + for_each_sg(sg, item, nents, i) + kmsan_handle_dma(sg_page(item), item->offset, item->length, + dir); +} + /* Functions from kmsan-checks.h follow. 
 */
 void kmsan_poison_memory(const void *address, size_t size, gfp_t flags)
 {

From patchwork Mon Sep 5 12:24:31 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966054
Date: Mon, 5 Sep 2022 14:24:31 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-24-glider@google.com>
Subject: [PATCH v6 23/44] virtio: kmsan: check/unpoison scatterlist in vring_map_one_sg()
From: Alexander Potapenko
To: glider@google.com
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

If vring doesn't use the DMA API, KMSAN is unable to tell whether the
memory is initialized by hardware. Explicitly call kmsan_handle_dma()
from vring_map_one_sg() in this case to prevent false positives.

Signed-off-by: Alexander Potapenko
Acked-by: Michael S. Tsirkin
---
v4: -- swap virtio: and kmsan: in the subject
v6: -- use instead of
Link: https://linux-review.googlesource.com/id/I211533ecb86a66624e151551f83ddd749536b3af
---
 drivers/virtio/virtio_ring.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 4620e9d79dde8..8974c34b40fda 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -352,8 +353,15 @@ static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
 				   struct scatterlist *sg,
 				   enum dma_data_direction direction)
 {
-	if (!vq->use_dma_api)
+	if (!vq->use_dma_api) {
+		/*
+		 * If DMA is not used, KMSAN doesn't know that the scatterlist
+		 * is initialized by the hardware. Explicitly check/unpoison it
+		 * depending on the direction.
+		 */
+		kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction);
 		return (dma_addr_t)sg_phys(sg);
+	}
 
 	/*
 	 * We can't use dma_map_sg, because we don't use scatterlists in
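For built-in code that hands memory to a device without going through dma_map_page()/dma_map_sg(), the same hook can be invoked directly, mirroring what vring_map_one_sg() does above (a sketch, not part of the series; share_buffer_with_device() and the surrounding driver are hypothetical, while kmsan_handle_dma(), virt_to_page() and offset_in_page() are existing kernel interfaces):

#include <linux/kmsan.h>
#include <linux/dma-direction.h>
#include <linux/mm.h>
#include <linux/types.h>

/* Hypothetical built-in driver path that bypasses the DMA API; buf is
 * assumed to be a lowmem (kmalloc-style) buffer. */
static void share_buffer_with_device(void *buf, size_t len,
				     enum dma_data_direction dir)
{
	/* Checks buf for DMA_TO_DEVICE, unpoisons it for DMA_FROM_DEVICE,
	 * and does both for DMA_BIDIRECTIONAL, like the DMA mapping hook. */
	kmsan_handle_dma(virt_to_page(buf), offset_in_page(buf), len, dir);

	/* ... pass the physical address of buf to the hardware ... */
}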
From patchwork Mon Sep 5 12:24:32 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966055
Date: Mon, 5 Sep 2022 14:24:32 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-25-glider@google.com>
Subject: [PATCH v6 24/44] kmsan: handle memory sent to/from USB
From: Alexander Potapenko
To: glider@google.com
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Depending on the value of is_out, kmsan_handle_urb() either marks the
data copied to the kernel from a USB device as initialized, or checks
the data sent to the device for being initialized.

Signed-off-by: Alexander Potapenko
---
v2: -- move kmsan_handle_urb() implementation to this patch
v5: -- do not export KMSAN hooks that are not called from modules
v6: -- use instead of
Link: https://linux-review.googlesource.com/id/Ifa67fb72015d4de14c30e971556f99fc8b2ee506
---
 drivers/usb/core/urb.c | 2 ++
 include/linux/kmsan.h | 15 +++++++++++++++
 mm/kmsan/hooks.c | 16 ++++++++++++++++
 3 files changed, 33 insertions(+)

diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
index 33d62d7e3929f..9f3c54032556e 100644
--- a/drivers/usb/core/urb.c
+++ b/drivers/usb/core/urb.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -426,6 +427,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
 				URB_SETUP_MAP_SINGLE | URB_SETUP_MAP_LOCAL |
 				URB_DMA_SG_COMBINED);
 	urb->transfer_flags |= (is_out ? URB_DIR_OUT : URB_DIR_IN);
+	kmsan_handle_urb(urb, is_out);
 
 	if (xfertype != USB_ENDPOINT_XFER_CONTROL &&
 	    dev->state < USB_STATE_CONFIGURED)

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index dac296da45c55..c473e0e21683c 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -18,6 +18,7 @@ struct page;
 struct kmem_cache;
 struct task_struct;
 struct scatterlist;
+struct urb;
 
 #ifdef CONFIG_KMSAN
 
@@ -203,6 +204,16 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
 void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
 			 enum dma_data_direction dir);
 
+/**
+ * kmsan_handle_urb() - Handle a USB data transfer.
+ * @urb: struct urb pointer.
+ * @is_out: data transfer direction (true means output to hardware).
+ *
+ * If @is_out is true, KMSAN checks the transfer buffer of @urb. Otherwise,
+ * KMSAN initializes the transfer buffer.
+ */
+void kmsan_handle_urb(const struct urb *urb, bool is_out);
+
 #else
 
 static inline void kmsan_init_shadow(void)
@@ -295,6 +306,10 @@ static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
 {
 }
 
+static inline void kmsan_handle_urb(const struct urb *urb, bool is_out)
+{
+}
+
 #endif
 
 #endif /* _LINUX_KMSAN_H */

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 563c09443a37a..79d7e73e2cfd8 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include "../internal.h"
 #include "../slab.h"
 
@@ -245,6 +246,21 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
 }
 EXPORT_SYMBOL(kmsan_copy_to_user);
 
+/* Helper function to check an URB. */
+void kmsan_handle_urb(const struct urb *urb, bool is_out)
+{
+	if (!urb)
+		return;
+	if (is_out)
+		kmsan_internal_check_memory(urb->transfer_buffer,
+					    urb->transfer_buffer_length,
+					    /*user_addr*/ 0, REASON_SUBMIT_URB);
+	else
+		kmsan_internal_unpoison_memory(urb->transfer_buffer,
+					       urb->transfer_buffer_length,
+					       /*checked*/ false);
+}
+
 static void kmsan_handle_dma_page(const void *addr, size_t size,
 				  enum dma_data_direction dir)
 {
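Expressed with the public helpers from kmsan-checks.h, the policy that kmsan_handle_urb() implements boils down to the following (an illustrative sketch only; handle_transfer_buffer() is a made-up name, and the hook itself uses the internal check/unpoison primitives shown in the hunk above):

#include <linux/kmsan-checks.h>
#include <linux/types.h>

/* Sketch of the URB policy: outgoing data must be fully initialized,
 * incoming data is treated as initialized by the device. */
static void handle_transfer_buffer(void *buf, size_t len, bool is_out)
{
	if (is_out)
		kmsan_check_memory(buf, len);		/* report uninit bytes */
	else
		kmsan_unpoison_memory(buf, len);	/* mark as initialized */
}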
From patchwork Mon Sep 5 12:24:33 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966056
Date: Mon, 5 Sep 2022 14:24:33 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-26-glider@google.com>
Subject: [PATCH v6 25/44] kmsan: add tests for KMSAN
From: Alexander Potapenko
To: glider@google.com
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

The testing module triggers KMSAN warnings in different cases and checks
that the errors are properly reported, using console probes to capture
the tool's output.
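As a rough sketch of how the suite below could be extended (a hypothetical test, not part of the patch: it reuses the EXPECTATION_UNINIT_VALUE_FN() and report_matches() helpers defined further down and would also need a KUNIT_CASE() entry in kmsan_test_cases):

/* Hypothetical extra test: passing an uninitialized kmalloc() buffer to
 * kmsan_check_memory() should yield an "uninit-value" report attributed
 * to this function. */
static void test_uninit_kmalloc_check(struct kunit *test)
{
	EXPECTATION_UNINIT_VALUE_FN(expect, "test_uninit_kmalloc_check");
	char *buf;

	kunit_info(test, "kmsan_check_memory() on uninit kmalloc (UMR report)\n");
	buf = kmalloc(16, GFP_KERNEL);
	kmsan_check_memory(buf, 16);
	kfree(buf);
	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
}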
Signed-off-by: Alexander Potapenko --- v2: -- add memcpy tests v4: -- change sizeof(type) to sizeof(*ptr) -- add test expectations for CONFIG_KMSAN_CHECK_PARAM_RETVAL v5: -- reapply clang-format -- use modern style for-loops -- address Marco Elver's comments Link: https://linux-review.googlesource.com/id/I49c3f59014cc37fd13541c80beb0b75a75244650 --- lib/Kconfig.kmsan | 12 + mm/kmsan/Makefile | 4 + mm/kmsan/kmsan_test.c | 552 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 568 insertions(+) create mode 100644 mm/kmsan/kmsan_test.c diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan index 5b19dbd34d76e..b2489dd6503fa 100644 --- a/lib/Kconfig.kmsan +++ b/lib/Kconfig.kmsan @@ -47,4 +47,16 @@ config KMSAN_CHECK_PARAM_RETVAL may potentially report errors in corner cases when non-instrumented functions call instrumented ones. +config KMSAN_KUNIT_TEST + tristate "KMSAN integration test suite" if !KUNIT_ALL_TESTS + default KUNIT_ALL_TESTS + depends on TRACEPOINTS && KUNIT + help + Test suite for KMSAN, testing various error detection scenarios, + and checking that reports are correctly output to console. + + Say Y here if you want the test to be built into the kernel and run + during boot; say M if you want the test to build as a module; say N + if you are unsure. + endif diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index 401acb1a491ce..98eab2856626f 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -22,3 +22,7 @@ CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) + +obj-$(CONFIG_KMSAN_KUNIT_TEST) += kmsan_test.o +KMSAN_SANITIZE_kmsan_test.o := y +CFLAGS_kmsan_test.o += $(call cc-disable-warning, uninitialized) diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c new file mode 100644 index 0000000000000..b68f4334cf184 --- /dev/null +++ b/mm/kmsan/kmsan_test.c @@ -0,0 +1,552 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test cases for KMSAN. + * For each test case checks the presence (or absence) of generated reports. + * Relies on 'console' tracepoint to capture reports as they appear in the + * kernel log. + * + * Copyright (C) 2021-2022, Google LLC. + * Author: Alexander Potapenko + * + */ + +#include +#include "kmsan.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static DEFINE_PER_CPU(int, per_cpu_var); + +/* Report as observed from console. */ +static struct { + spinlock_t lock; + bool available; + bool ignore; /* Stop console output collection. */ + char header[256]; +} observed = { + .lock = __SPIN_LOCK_UNLOCKED(observed.lock), +}; + +/* Probe for console output: obtains observed lines of interest. */ +static void probe_console(void *ignore, const char *buf, size_t len) +{ + unsigned long flags; + + if (observed.ignore) + return; + spin_lock_irqsave(&observed.lock, flags); + + if (strnstr(buf, "BUG: KMSAN: ", len)) { + /* + * KMSAN report and related to the test. + * + * The provided @buf is not NUL-terminated; copy no more than + * @len bytes and let strscpy() add the missing NUL-terminator. + */ + strscpy(observed.header, buf, + min(len + 1, sizeof(observed.header))); + WRITE_ONCE(observed.available, true); + observed.ignore = true; + } + spin_unlock_irqrestore(&observed.lock, flags); +} + +/* Check if a report related to the test exists. */ +static bool report_available(void) +{ + return READ_ONCE(observed.available); +} + +/* Information we expect in a report. 
*/ +struct expect_report { + const char *error_type; /* Error type. */ + /* + * Kernel symbol from the error header, or NULL if no report is + * expected. + */ + const char *symbol; +}; + +/* Check observed report matches information in @r. */ +static bool report_matches(const struct expect_report *r) +{ + typeof(observed.header) expected_header; + unsigned long flags; + bool ret = false; + const char *end; + char *cur; + + /* Doubled-checked locking. */ + if (!report_available() || !r->symbol) + return (!report_available() && !r->symbol); + + /* Generate expected report contents. */ + + /* Title */ + cur = expected_header; + end = &expected_header[sizeof(expected_header) - 1]; + + cur += scnprintf(cur, end - cur, "BUG: KMSAN: %s", r->error_type); + + scnprintf(cur, end - cur, " in %s", r->symbol); + /* The exact offset won't match, remove it; also strip module name. */ + cur = strchr(expected_header, '+'); + if (cur) + *cur = '\0'; + + spin_lock_irqsave(&observed.lock, flags); + if (!report_available()) + goto out; /* A new report is being captured. */ + + /* Finally match expected output to what we actually observed. */ + ret = strstr(observed.header, expected_header); +out: + spin_unlock_irqrestore(&observed.lock, flags); + + return ret; +} + +/* ===== Test cases ===== */ + +/* Prevent replacing branch with select in LLVM. */ +static noinline void check_true(char *arg) +{ + pr_info("%s is true\n", arg); +} + +static noinline void check_false(char *arg) +{ + pr_info("%s is false\n", arg); +} + +#define USE(x) \ + do { \ + if (x) \ + check_true(#x); \ + else \ + check_false(#x); \ + } while (0) + +#define EXPECTATION_ETYPE_FN(e, reason, fn) \ + struct expect_report e = { \ + .error_type = reason, \ + .symbol = fn, \ + } + +#define EXPECTATION_NO_REPORT(e) EXPECTATION_ETYPE_FN(e, NULL, NULL) +#define EXPECTATION_UNINIT_VALUE_FN(e, fn) \ + EXPECTATION_ETYPE_FN(e, "uninit-value", fn) +#define EXPECTATION_UNINIT_VALUE(e) EXPECTATION_UNINIT_VALUE_FN(e, __func__) +#define EXPECTATION_USE_AFTER_FREE(e) \ + EXPECTATION_ETYPE_FN(e, "use-after-free", __func__) + +/* Test case: ensure that kmalloc() returns uninitialized memory. */ +static void test_uninit_kmalloc(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + int *ptr; + + kunit_info(test, "uninitialized kmalloc test (UMR report)\n"); + ptr = kmalloc(sizeof(*ptr), GFP_KERNEL); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that kmalloc'ed memory becomes initialized after memset(). + */ +static void test_init_kmalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int *ptr; + + kunit_info(test, "initialized kmalloc test (no reports)\n"); + ptr = kmalloc(sizeof(*ptr), GFP_KERNEL); + memset(ptr, 0, sizeof(*ptr)); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that kzalloc() returns initialized memory. */ +static void test_init_kzalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int *ptr; + + kunit_info(test, "initialized kzalloc test (no reports)\n"); + ptr = kzalloc(sizeof(*ptr), GFP_KERNEL); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that local variables are uninitialized by default. 
*/ +static void test_uninit_stack_var(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile int cond; + + kunit_info(test, "uninitialized stack variable (UMR report)\n"); + USE(cond); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that local variables with initializers are initialized. */ +static void test_init_stack_var(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + volatile int cond = 1; + + kunit_info(test, "initialized stack variable (no reports)\n"); + USE(cond); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static noinline void two_param_fn_2(int arg1, int arg2) +{ + USE(arg1); + USE(arg2); +} + +static noinline void one_param_fn(int arg) +{ + two_param_fn_2(arg, arg); + USE(arg); +} + +static noinline void two_param_fn(int arg1, int arg2) +{ + int init = 0; + + one_param_fn(init); + USE(arg1); + USE(arg2); +} + +static void test_params(struct kunit *test) +{ +#ifdef CONFIG_KMSAN_CHECK_PARAM_RETVAL + /* + * With eager param/retval checking enabled, KMSAN will report an error + * before the call to two_param_fn(). + */ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_params"); +#else + EXPECTATION_UNINIT_VALUE_FN(expect, "two_param_fn"); +#endif + volatile int uninit, init = 1; + + kunit_info(test, + "uninit passed through a function parameter (UMR report)\n"); + two_param_fn(uninit, init); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static int signed_sum3(int a, int b, int c) +{ + return a + b + c; +} + +/* + * Test case: ensure that uninitialized values are tracked through function + * arguments. + */ +static void test_uninit_multiple_params(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile char b = 3, c; + volatile int a; + + kunit_info(test, "uninitialized local passed to fn (UMR report)\n"); + USE(signed_sum3(a, b, c)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Helper function to make an array uninitialized. */ +static noinline void do_uninit_local_array(char *array, int start, int stop) +{ + volatile char uninit; + + for (int i = start; i < stop; i++) + array[i] = uninit; +} + +/* + * Test case: ensure kmsan_check_memory() reports an error when checking + * uninitialized memory. + */ +static void test_uninit_kmsan_check_memory(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_uninit_kmsan_check_memory"); + volatile char local_array[8]; + + kunit_info( + test, + "kmsan_check_memory() called on uninit local (UMR report)\n"); + do_uninit_local_array((char *)local_array, 5, 7); + + kmsan_check_memory((char *)local_array, 8); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: check that a virtual memory range created with vmap() from + * initialized pages is still considered as initialized. 
+ */ +static void test_init_kmsan_vmap_vunmap(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + const int npages = 2; + struct page **pages; + void *vbuf; + + kunit_info(test, "pages initialized via vmap (no reports)\n"); + + pages = kmalloc_array(npages, sizeof(*pages), GFP_KERNEL); + for (int i = 0; i < npages; i++) + pages[i] = alloc_page(GFP_KERNEL); + vbuf = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + memset(vbuf, 0xfe, npages * PAGE_SIZE); + for (int i = 0; i < npages; i++) + kmsan_check_memory(page_address(pages[i]), PAGE_SIZE); + + if (vbuf) + vunmap(vbuf); + for (int i = 0; i < npages; i++) { + if (pages[i]) + __free_page(pages[i]); + } + kfree(pages); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memset() can initialize a buffer allocated via + * vmalloc(). + */ +static void test_init_vmalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int npages = 8; + char *buf; + + kunit_info(test, "vmalloc buffer can be initialized (no reports)\n"); + buf = vmalloc(PAGE_SIZE * npages); + buf[0] = 1; + memset(buf, 0xfe, PAGE_SIZE * npages); + USE(buf[0]); + for (int i = 0; i < npages; i++) + kmsan_check_memory(&buf[PAGE_SIZE * i], PAGE_SIZE); + vfree(buf); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that use-after-free reporting works. */ +static void test_uaf(struct kunit *test) +{ + EXPECTATION_USE_AFTER_FREE(expect); + volatile int value; + volatile int *var; + + kunit_info(test, "use-after-free in kmalloc-ed buffer (UMR report)\n"); + var = kmalloc(80, GFP_KERNEL); + var[3] = 0xfeedface; + kfree((int *)var); + /* Copy the invalid value before checking it. */ + value = var[3]; + USE(value); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that uninitialized values are propagated through per-CPU + * memory. + */ +static void test_percpu_propagate(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile int uninit, check; + + kunit_info(test, + "uninit local stored to per_cpu memory (UMR report)\n"); + + this_cpu_write(per_cpu_var, uninit); + check = this_cpu_read(per_cpu_var); + USE(check); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that passing uninitialized values to printk() leads to an + * error report. + */ +static void test_printk(struct kunit *test) +{ +#ifdef CONFIG_KMSAN_CHECK_PARAM_RETVAL + /* + * With eager param/retval checking enabled, KMSAN will report an error + * before the call to pr_info(). + */ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_printk"); +#else + EXPECTATION_UNINIT_VALUE_FN(expect, "number"); +#endif + volatile int uninit; + + kunit_info(test, "uninit local passed to pr_info() (UMR report)\n"); + pr_info("%px contains %d\n", &uninit, uninit); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and `dst`. 
+ */ +static void test_memcpy_aligned_to_aligned(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_aligned"); + volatile int uninit_src; + volatile int dst = 0; + + kunit_info( + test, + "memcpy()ing aligned uninit src to aligned dst (UMR report)\n"); + memcpy((void *)&dst, (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)&dst, sizeof(dst)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and unaligned `dst`. + * + * Copying aligned 4-byte value to an unaligned one leads to touching two + * aligned 4-byte values. This test case checks that KMSAN correctly reports an + * error on the first of the two values. + */ +static void test_memcpy_aligned_to_unaligned(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_unaligned"); + volatile int uninit_src; + volatile char dst[8] = { 0 }; + + kunit_info( + test, + "memcpy()ing aligned uninit src to unaligned dst (UMR report)\n"); + memcpy((void *)&dst[1], (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)dst, 4); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and unaligned `dst`. + * + * Copying aligned 4-byte value to an unaligned one leads to touching two + * aligned 4-byte values. This test case checks that KMSAN correctly reports an + * error on the second of the two values. + */ +static void test_memcpy_aligned_to_unaligned2(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, + "test_memcpy_aligned_to_unaligned2"); + volatile int uninit_src; + volatile char dst[8] = { 0 }; + + kunit_info( + test, + "memcpy()ing aligned uninit src to unaligned dst - part 2 (UMR report)\n"); + memcpy((void *)&dst[1], (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)&dst[4], sizeof(uninit_src)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static struct kunit_case kmsan_test_cases[] = { + KUNIT_CASE(test_uninit_kmalloc), + KUNIT_CASE(test_init_kmalloc), + KUNIT_CASE(test_init_kzalloc), + KUNIT_CASE(test_uninit_stack_var), + KUNIT_CASE(test_init_stack_var), + KUNIT_CASE(test_params), + KUNIT_CASE(test_uninit_multiple_params), + KUNIT_CASE(test_uninit_kmsan_check_memory), + KUNIT_CASE(test_init_kmsan_vmap_vunmap), + KUNIT_CASE(test_init_vmalloc), + KUNIT_CASE(test_uaf), + KUNIT_CASE(test_percpu_propagate), + KUNIT_CASE(test_printk), + KUNIT_CASE(test_memcpy_aligned_to_aligned), + KUNIT_CASE(test_memcpy_aligned_to_unaligned), + KUNIT_CASE(test_memcpy_aligned_to_unaligned2), + {}, +}; + +/* ===== End test cases ===== */ + +static int test_init(struct kunit *test) +{ + unsigned long flags; + + spin_lock_irqsave(&observed.lock, flags); + observed.header[0] = '\0'; + observed.ignore = false; + observed.available = false; + spin_unlock_irqrestore(&observed.lock, flags); + + return 0; +} + +static void test_exit(struct kunit *test) +{ +} + +static void register_tracepoints(struct tracepoint *tp, void *ignore) +{ + check_trace_callback_type_console(probe_console); + if (!strcmp(tp->name, "console")) + WARN_ON(tracepoint_probe_register(tp, probe_console, NULL)); +} + +static void unregister_tracepoints(struct tracepoint *tp, void *ignore) +{ + if (!strcmp(tp->name, "console")) + tracepoint_probe_unregister(tp, probe_console, NULL); +} + +static int kmsan_suite_init(struct kunit_suite *suite) +{ + /* 
+	 * Because we want to be able to build the test as a module, we need to
+	 * iterate through all known tracepoints, since the static registration
+	 * won't work here.
+	 */
+	for_each_kernel_tracepoint(register_tracepoints, NULL);
+	return 0;
+}
+
+static void kmsan_suite_exit(struct kunit_suite *suite)
+{
+	for_each_kernel_tracepoint(unregister_tracepoints, NULL);
+	tracepoint_synchronize_unregister();
+}
+
+static struct kunit_suite kmsan_test_suite = {
+	.name = "kmsan",
+	.test_cases = kmsan_test_cases,
+	.init = test_init,
+	.exit = test_exit,
+	.suite_init = kmsan_suite_init,
+	.suite_exit = kmsan_suite_exit,
+};
+kunit_test_suites(&kmsan_test_suite);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alexander Potapenko ");

From patchwork Mon Sep 5 12:24:34 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12966057
ACgBeo1CRWSGKFijREVimTImVmaJjXFzEMmXZhqBq33MP8sVpToaTkLh VzuRtwBZuIKHdK/acylnFvwtfMsr8yc= X-Google-Smtp-Source: AA6agR6H7t7tR3hDwMYsSl5f8ymL7keGAQBGQoCYb8T9aH1rg12oi88mnhysrWMbCn39pKzombeP5qdOg3Y= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:4515:b0:443:7833:3d7b with SMTP id ez21-20020a056402451500b0044378333d7bmr18008265edb.151.1662380769211; Mon, 05 Sep 2022 05:26:09 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:34 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-27-glider@google.com> Subject: [PATCH v6 26/44] kmsan: disable strscpy() optimization under KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380770; a=rsa-sha256; cv=none; b=bjQuBkdspkRtgE37zb19OkWpWexQiBvmE66rIwmTkQ61V+oZ6vpldSNAb1SGBnb0fGdJAt L6o0KuAqh0cTmmwIy5s1nyRXjTl0Y7qZtR1u4x8GQHs463D2V9iqri+QxyaKaUpLRuXrBq JidoDtyXVsFTjzZ9lRIEGFKM88KI4Es= ARC-Authentication-Results: i=1; imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Rw0JSo2m; spf=pass (imf25.hostedemail.com: domain of 34eoVYwYKCC4QVSNObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=34eoVYwYKCC4QVSNObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380770; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=PRETZsLeKtd8er/vGZIvwJ67UbRQnpYPS7VnNd6iitc=; b=p8lL4SR1wHyIz9qGIfJWQ+UV2ETN+IZpsM5MQBNTf3C4C6CTIJAEnPbxHv+B1wnA3dEHz8 j4nhBGCVoMRHaX7HtMAe8u6FbVp4rIsyS+faDVpKged3Zmgdn+NUWfF0rhL5GCut24nzrB YP72LlEOh4W50DnwnSRnZT8vtxnnSuI= Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Rw0JSo2m; spf=pass (imf25.hostedemail.com: domain of 34eoVYwYKCC4QVSNObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=34eoVYwYKCC4QVSNObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam12 X-Stat-Signature: u9a6qt6h7qnidg9msbow486sxqn9dr4z X-Rspamd-Queue-Id: 7E3AEA0079 X-HE-Tag: 1662380770-940045 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Disable the efficient 8-byte reading under KMSAN to 
avoid false positives. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iffd8336965e88fce915db2e6a9d6524422975f69 --- lib/string.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/lib/string.c b/lib/string.c index 6f334420f6871..3371d26a0e390 100644 --- a/lib/string.c +++ b/lib/string.c @@ -197,6 +197,14 @@ ssize_t strscpy(char *dest, const char *src, size_t count) max = 0; #endif + /* + * read_word_at_a_time() below may read uninitialized bytes after the + * trailing zero and use them in comparisons. Disable this optimization + * under KMSAN to prevent false positive reports. + */ + if (IS_ENABLED(CONFIG_KMSAN)) + max = 0; + while (max >= sizeof(unsigned long)) { unsigned long c, data; From patchwork Mon Sep 5 12:24:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966058 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64B2AECAAD5 for ; Mon, 5 Sep 2022 12:26:14 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0657C8D0082; Mon, 5 Sep 2022 08:26:14 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0128B8D0076; Mon, 5 Sep 2022 08:26:13 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DA7FB8D0082; Mon, 5 Sep 2022 08:26:13 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id C975E8D0076 for ; Mon, 5 Sep 2022 08:26:13 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id A8F371A0B9B for ; Mon, 5 Sep 2022 12:26:13 +0000 (UTC) X-FDA: 79877954226.05.6FB1C20 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by imf29.hostedemail.com (Postfix) with ESMTP id 599EC1200A4 for ; Mon, 5 Sep 2022 12:26:13 +0000 (UTC) Received: by mail-wr1-f73.google.com with SMTP id m2-20020adfc582000000b0021e28acded7so1247885wrg.13 for ; Mon, 05 Sep 2022 05:26:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=YUiFcVJy4Yf/nhkRGOhECdnAl5j2ey18UxYwbxsEwuI=; b=EuJefT6pFI1SfJ1MHzNVEtL/D1nKROem7gBcZZNWm+tmUjIlm6XlXML4r/Ulb6F8Oi 7JZcAeqwIFv11ItGjGtVY+2vAoI145sAHWAJ2eCCgc9gmOJQ+rJxoOxi4IxQuKOqo9Xs jwJqNPHrB7IP6p+X7A89Sq60etOBYFLCpzyG1ZEogPtR1XI1hN447p2zQBgEEyXxDXjB 1ueq55cwIKpWhpvImTkpLZb0xUfSkeDqxz8hzbFPpqjbXx8+aAig55o5wiquAxAR3RQy vct2tlYLX55zgMwv7nfI7KoeAFF3ZoNEKvex9CtBI8yLfc403GWeouQugJijjiHsxxof X/Gg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=YUiFcVJy4Yf/nhkRGOhECdnAl5j2ey18UxYwbxsEwuI=; b=7jG3ITpjcoJVJAAKyhx0g2tu36FpHe5bNIAtKqLeoAs6dnIEnlTLXljdBV94fBVjGw geFnoXzZZaT4yXEVvd0N+D7tgiBQPZ5tFfFhlaqi+F6U3LpCIhEdT+mvChaj8PKyWaWW JECJdg8Z1vmG31NWI36ElJY5Hb+aXCAgpqBHHLyQXxu/69jU2JNF4uCkXZ6MQxibm+7E Q0IPIiPAe77zUQpcLdY+bzex0/4at8NkHb3CmxXMab2zpcc1a+CJsfwkpdPVF6syqJND X35owtsHkuxELeg355RPvfIRngS2Y6V38UjRZumPJu2lY3vDDKEk1yusAV7yUcq7pZfK 62ig== X-Gm-Message-State: 
ACgBeo2V6dN0+ldgaCqrV6jRbgLWFcvSpz1NN9VrfguwWkIEL/EcThYH KThTfFuZCnfFahct0MaEDSFzGQAFIcs= X-Google-Smtp-Source: AA6agR5XcljDqREXz5yOy07HcvY+X3l1ismgSiUmdI4a6SdYPcZywBoA3OLSshLNFBaU4Mi1hXiZhhBtJnA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6000:713:b0:226:ea6c:2d7d with SMTP id bs19-20020a056000071300b00226ea6c2d7dmr14917835wrb.293.1662380772234; Mon, 05 Sep 2022 05:26:12 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:35 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-28-glider@google.com> Subject: [PATCH v6 27/44] crypto: kmsan: disable accelerated configs under KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf29.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=EuJefT6p; spf=pass (imf29.hostedemail.com: domain of 35OoVYwYKCDETYVQReTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--glider.bounces.google.com designates 209.85.221.73 as permitted sender) smtp.mailfrom=35OoVYwYKCDETYVQReTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380773; a=rsa-sha256; cv=none; b=YdFhvsotgMyEAkAFfKM6RB0i0VsaOnAlP4nkx71RUYSRCcwjVDXIwc7MwOduPbT64X7ocH 4aekvrE6ugtXMrW5P2mkSkzPgWKyQ7LvLbFCpQ33pHOpOa+QsOSALr8otvn8fYOxMkj/Lj zUybYPn4FpXVXRr2Sr/Lp5iL4kLHc4k= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380773; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=YUiFcVJy4Yf/nhkRGOhECdnAl5j2ey18UxYwbxsEwuI=; b=M3rVW0h1cgGAbDeQDqtpP+Fbsfd57oC/CtGSgf8gyzaCk7BvLlprjO4wGXNhNnRB+DebzV znnAyCS13fL1MztoTdrGFfVvPpsrnacr8uUpuVE9ZYulWHMExdVs3NSTCd6twrwAzo8bxd LcNKvmca4C/7KPgsa8NZzWjKcXXnTSo= Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=EuJefT6p; spf=pass (imf29.hostedemail.com: domain of 35OoVYwYKCDETYVQReTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--glider.bounces.google.com designates 209.85.221.73 as permitted sender) smtp.mailfrom=35OoVYwYKCDETYVQReTbbTYR.PbZYVahk-ZZXiNPX.beT@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam06 X-Stat-Signature: pdao1rm3gru1z9h8uqceyptqz7h8racn X-Rspam-User: X-Rspamd-Queue-Id: 599EC1200A4 X-HE-Tag: 1662380773-57969 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is unable to understand when initialized 
values come from assembly. Disable accelerated configs in KMSAN builds to prevent false positive reports. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Idb2334bf3a1b68b31b399709baefaa763038cc50 --- crypto/Kconfig | 30 ++++++++++++++++++++++++++++++ drivers/net/Kconfig | 1 + 2 files changed, 31 insertions(+) diff --git a/crypto/Kconfig b/crypto/Kconfig index bb427a835e44a..182fb817ebb52 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -319,6 +319,7 @@ config CRYPTO_CURVE25519 config CRYPTO_CURVE25519_X86 tristate "x86_64 accelerated Curve25519 scalar multiplication library" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_CURVE25519_GENERIC select CRYPTO_ARCH_HAVE_LIB_CURVE25519 @@ -367,11 +368,13 @@ config CRYPTO_AEGIS128 config CRYPTO_AEGIS128_SIMD bool "Support SIMD acceleration for AEGIS-128" depends on CRYPTO_AEGIS128 && ((ARM || ARM64) && KERNEL_MODE_NEON) + depends on !KMSAN # avoid false positives from assembly default y config CRYPTO_AEGIS128_AESNI_SSE2 tristate "AEGIS-128 AEAD algorithm (x86_64 AESNI+SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_SIMD help @@ -517,6 +520,7 @@ config CRYPTO_NHPOLY1305 config CRYPTO_NHPOLY1305_SSE2 tristate "NHPoly1305 hash function (x86_64 SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help SSE2 optimized implementation of the hash function used by the @@ -525,6 +529,7 @@ config CRYPTO_NHPOLY1305_SSE2 config CRYPTO_NHPOLY1305_AVX2 tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help AVX2 optimized implementation of the hash function used by the @@ -649,6 +654,7 @@ config CRYPTO_CRC32C config CRYPTO_CRC32C_INTEL tristate "CRC32c INTEL hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help In Intel processor with SSE4.2 supported, the processor will @@ -689,6 +695,7 @@ config CRYPTO_CRC32 config CRYPTO_CRC32_PCLMUL tristate "CRC32 PCLMULQDQ hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH select CRC32 help @@ -748,6 +755,7 @@ config CRYPTO_BLAKE2B config CRYPTO_BLAKE2S_X86 bool "BLAKE2s digest algorithm (x86 accelerated version)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_BLAKE2S_GENERIC select CRYPTO_ARCH_HAVE_LIB_BLAKE2S @@ -762,6 +770,7 @@ config CRYPTO_CRCT10DIF config CRYPTO_CRCT10DIF_PCLMUL tristate "CRCT10DIF PCLMULQDQ hardware acceleration" depends on X86 && 64BIT && CRC_T10DIF + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help For x86_64 processors with SSE4.2 and PCLMULQDQ supported, @@ -831,6 +840,7 @@ config CRYPTO_POLY1305 config CRYPTO_POLY1305_X86_64 tristate "Poly1305 authenticator algorithm (x86_64/SSE2/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_POLY1305_GENERIC select CRYPTO_ARCH_HAVE_LIB_POLY1305 help @@ -920,6 +930,7 @@ config CRYPTO_SHA1 config CRYPTO_SHA1_SSSE3 tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA1 select CRYPTO_HASH help @@ -931,6 +942,7 @@ config 
CRYPTO_SHA1_SSSE3 config CRYPTO_SHA256_SSSE3 tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA256 select CRYPTO_HASH help @@ -943,6 +955,7 @@ config CRYPTO_SHA256_SSSE3 config CRYPTO_SHA512_SSSE3 tristate "SHA512 digest algorithm (SSSE3/AVX/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA512 select CRYPTO_HASH help @@ -1168,6 +1181,7 @@ config CRYPTO_WP512 config CRYPTO_GHASH_CLMUL_NI_INTEL tristate "GHASH hash function (CLMUL-NI accelerated)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CRYPTD help This is the x86_64 CLMUL-NI accelerated implementation of @@ -1228,6 +1242,7 @@ config CRYPTO_AES_TI config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_LIB_AES select CRYPTO_ALGAPI @@ -1369,6 +1384,7 @@ config CRYPTO_BLOWFISH_COMMON config CRYPTO_BLOWFISH_X86_64 tristate "Blowfish cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_BLOWFISH_COMMON imply CRYPTO_CTR @@ -1399,6 +1415,7 @@ config CRYPTO_CAMELLIA config CRYPTO_CAMELLIA_X86_64 tristate "Camellia cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER imply CRYPTO_CTR help @@ -1415,6 +1432,7 @@ config CRYPTO_CAMELLIA_X86_64 config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAMELLIA_X86_64 select CRYPTO_SIMD @@ -1433,6 +1451,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CAMELLIA_AESNI_AVX_X86_64 help Camellia cipher algorithm module (x86_64/AES-NI/AVX2). 
@@ -1478,6 +1497,7 @@ config CRYPTO_CAST5 config CRYPTO_CAST5_AVX_X86_64 tristate "CAST5 (CAST-128) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST5 select CRYPTO_CAST_COMMON @@ -1501,6 +1521,7 @@ config CRYPTO_CAST6 config CRYPTO_CAST6_AVX_X86_64 tristate "CAST6 (CAST-256) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST6 select CRYPTO_CAST_COMMON @@ -1534,6 +1555,7 @@ config CRYPTO_DES_SPARC64 config CRYPTO_DES3_EDE_X86_64 tristate "Triple DES EDE cipher algorithm (x86-64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_DES imply CRYPTO_CTR @@ -1604,6 +1626,7 @@ config CRYPTO_CHACHA20 config CRYPTO_CHACHA20_X86_64 tristate "ChaCha stream cipher algorithms (x86_64/SSSE3/AVX2/AVX-512VL)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_CHACHA_GENERIC select CRYPTO_ARCH_HAVE_LIB_CHACHA @@ -1674,6 +1697,7 @@ config CRYPTO_SERPENT config CRYPTO_SERPENT_SSE2_X86_64 tristate "Serpent cipher algorithm (x86_64/SSE2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1693,6 +1717,7 @@ config CRYPTO_SERPENT_SSE2_X86_64 config CRYPTO_SERPENT_SSE2_586 tristate "Serpent cipher algorithm (i586/SSE2)" depends on X86 && !64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1712,6 +1737,7 @@ config CRYPTO_SERPENT_SSE2_586 config CRYPTO_SERPENT_AVX_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1732,6 +1758,7 @@ config CRYPTO_SERPENT_AVX_X86_64 config CRYPTO_SERPENT_AVX2_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SERPENT_AVX_X86_64 help Serpent cipher algorithm, by Anderson, Biham & Knudsen. 
@@ -1876,6 +1903,7 @@ config CRYPTO_TWOFISH_586 config CRYPTO_TWOFISH_X86_64 tristate "Twofish cipher algorithm (x86_64)" depends on (X86 || UML_X86) && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_ALGAPI select CRYPTO_TWOFISH_COMMON imply CRYPTO_CTR @@ -1893,6 +1921,7 @@ config CRYPTO_TWOFISH_X86_64 config CRYPTO_TWOFISH_X86_64_3WAY tristate "Twofish cipher algorithm (x86_64, 3-way parallel)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_TWOFISH_COMMON select CRYPTO_TWOFISH_X86_64 @@ -1913,6 +1942,7 @@ config CRYPTO_TWOFISH_X86_64_3WAY config CRYPTO_TWOFISH_AVX_X86_64 tristate "Twofish cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SIMD select CRYPTO_TWOFISH_COMMON diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index 94c889802566a..2aaf02bfe6f7e 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -76,6 +76,7 @@ config WIREGUARD tristate "WireGuard secure network tunnel" depends on NET && INET depends on IPV6 || !IPV6 + depends on !KMSAN # KMSAN doesn't support the crypto configs below select NET_UDP_TUNNEL select DST_CACHE select CRYPTO From patchwork Mon Sep 5 12:24:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966059 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A39B9ECAAD5 for ; Mon, 5 Sep 2022 12:26:17 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3BF618D0083; Mon, 5 Sep 2022 08:26:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 36EE08D0076; Mon, 5 Sep 2022 08:26:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 236618D0083; Mon, 5 Sep 2022 08:26:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 137C48D0076 for ; Mon, 5 Sep 2022 08:26:17 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id EC039140668 for ; Mon, 5 Sep 2022 12:26:16 +0000 (UTC) X-FDA: 79877954352.22.BFDC3E9 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf25.hostedemail.com (Postfix) with ESMTP id 40AA9A0080 for ; Mon, 5 Sep 2022 12:26:16 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id c14-20020a05640227ce00b0043e5df12e2cso5752402ede.15 for ; Mon, 05 Sep 2022 05:26:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=T4yulc1U4RDI/lAlxqlpzG4AIUYO9fQrzg+B84ydySw=; b=gykJQm16KBq2G1POc9yawl2YF5uKFWxrq5CWIWwQMtdL8/Y4B0bJfVbxJTszw18YVx 4E7qTI7rzio5nRVJ1FODUAtp/gLZ4JUBLm7SxRqEGBezrmCDsaRdu10ahEbwyMKOHAnS mL4d3XS2YrrbBWzFaHkAc86d82LPVwuZr9FNhGX9lMdqGF26zvlVbpRXEh8qRgLj77Y8 76yFFAvEbnv46jD2CZU9sNo1Vk3UkvA6lYT1fvAVGXzAfOyNN03DhtVyXZh9WTPSiISL CK6qhGXl1pCPNGjaBDhupNrDjetS7nAg6yMvS2fe8SlPKpaw/+edKhLnuNlgtl8KA9cs 5eVg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=T4yulc1U4RDI/lAlxqlpzG4AIUYO9fQrzg+B84ydySw=; b=PxII6lJqWoB+ZQ3CzkJXdRV1Qc+ZUnxcRPuwzgzlJQlCTHdBWhjD3223K70NsscwWf VsJ/BuSXpbrUAPvJds1hdAhZBXHZdHd9ilf5orhsxPIWsX9dKO6zPlTWEO1oOSuUN6si LaYiX333eGcnPZrytpgbrxkilsv3WCapEt8Xz/hObPoxfz1LSERkPK6ojLj0uxJV7sTK gFI1dHkQNv4HV/FcEJkL4fIu4hkxt0MRBeQl0+tYN6Oa6AV/iC3y5IyOqJZlkGchO/TZ nXKDVLTRcU6UUW2E853DIE1GyE4C0dX4Zeo/z6XYX2xkf6uMm3Yv4fffUHrrtNJzCmH4 8ffQ== X-Gm-Message-State: ACgBeo0Ar86QJOZGBY6c4Qqt14KnNJng7bo5/mRHVzyN9LBCi4ldfX57 9u76JeLx8gKXvVlJKWsLwMC4E6iCsDc= X-Google-Smtp-Source: AA6agR7Zjm0a6dDy47TgZFwFrLnSH1RBWiJEP1JF2+WzkJzs2a7pahrySQrFD9Id5uBQt5d7er7XAKepdQM= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:906:cc13:b0:73d:d22d:63cd with SMTP id ml19-20020a170906cc1300b0073dd22d63cdmr36111625ejb.741.1662380774896; Mon, 05 Sep 2022 05:26:14 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:36 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-29-glider@google.com> Subject: [PATCH v6 28/44] kmsan: disable physical page merging in biovec From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380776; a=rsa-sha256; cv=none; b=MuYu79c5/L24kcxBCZcmy9intTh13JuiUDEMAyU7XjZzmZRMCQwIZM3vJJWmVHvTtMHorG ydHpXOlt3Sy6Qw/7GUCesNViu+svjN/H5BPcpKT9hs5KrRO2yn76J+MAd0HoGpQeY6tNd8 31NGoYTS3yUjHhcJLDF/99RtI0kMEEc= ARC-Authentication-Results: i=1; imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=gykJQm16; spf=pass (imf25.hostedemail.com: domain of 35uoVYwYKCDMVaXSTgVddVaT.RdbaXcjm-bbZkPRZ.dgV@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=35uoVYwYKCDMVaXSTgVddVaT.RdbaXcjm-bbZkPRZ.dgV@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380776; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=T4yulc1U4RDI/lAlxqlpzG4AIUYO9fQrzg+B84ydySw=; b=YZEQ8o3SAFk+hRGZmYyjCTmxtTVadzebMaJrEy5hr7Xe5EBhHlFHc82X6/rORZzQhI+qS9 VmnX3IxRB27oBic1OtHuRDD9fzDleZqwRZCdNyBXU5xslGPwS89QZKeDDJijMIhmZclmX/ YQUAbAA05X9H7xG50SoIxciRfdoOIZY= Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=gykJQm16; spf=pass (imf25.hostedemail.com: domain of 35uoVYwYKCDMVaXSTgVddVaT.RdbaXcjm-bbZkPRZ.dgV@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=35uoVYwYKCDMVaXSTgVddVaT.RdbaXcjm-bbZkPRZ.dgV@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam12 X-Stat-Signature: i1khscyegq36zsxosbbrfj3ptu8zgt7a X-Rspamd-Queue-Id: 40AA9A0080 X-HE-Tag: 1662380776-23203 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN metadata for adjacent physical pages may not be adjacent, therefore accessing such pages together may lead to metadata corruption. We disable merging pages in biovec to prevent such corruptions. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iece16041be5ee47904fbc98121b105e5be5fea5c --- block/blk.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/block/blk.h b/block/blk.h index d7142c4d2fefb..af02b93c1dba5 100644 --- a/block/blk.h +++ b/block/blk.h @@ -88,6 +88,13 @@ static inline bool biovec_phys_mergeable(struct request_queue *q, phys_addr_t addr1 = page_to_phys(vec1->bv_page) + vec1->bv_offset; phys_addr_t addr2 = page_to_phys(vec2->bv_page) + vec2->bv_offset; + /* + * Merging adjacent physical pages may not work correctly under KMSAN + * if their metadata pages aren't adjacent. Just disable merging. 
+ */ + if (IS_ENABLED(CONFIG_KMSAN)) + return false; + if (addr1 + vec1->bv_len != addr2) return false; if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page)) From patchwork Mon Sep 5 12:24:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966060 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id CECC0ECAAD5 for ; Mon, 5 Sep 2022 12:26:19 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6A5948D0084; Mon, 5 Sep 2022 08:26:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 654108D0076; Mon, 5 Sep 2022 08:26:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4F6358D0084; Mon, 5 Sep 2022 08:26:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 3CBC08D0076 for ; Mon, 5 Sep 2022 08:26:19 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 0CC1C120CA5 for ; Mon, 5 Sep 2022 12:26:19 +0000 (UTC) X-FDA: 79877954478.04.5BFE6F5 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf10.hostedemail.com (Postfix) with ESMTP id BC550C0060 for ; Mon, 5 Sep 2022 12:26:18 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id w17-20020a056402269100b0043da2189b71so5641276edd.6 for ; Mon, 05 Sep 2022 05:26:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=r3HZXk/BEn65kcaybENH+gdA6j04yycM3f96/pfDo3Y=; b=ASzeAsbMOdZf4WYpGmHP1pAOfqzeXvi1/mYmjoy1+gLpo2mFVGXt7YkxHdn2cGG7Th LhsyoqxduLzAWg9guBf7MeUGs5Sl556966lTsEax9VeA+GsBoCisGlbSkVEruCeqXdIL pmn8EWbkEJjYIFVr9fHMcvJQS7hgukxapq/1AYQ4UGaMetmJBsMlft+caiJ7o6VTTsRO /gBZA1HxzLhDIpKDjWeAuT4C4Rnv8Ae2OB/4lJ2Ca03N2vHUV4V1Gj7baRHl/poEsqyJ jflQ8NkbH5DBqPL8/Jio1X1Vv9nc7m22bqOJom4ePsHMKcBTNXLmYOpZp6cjj1DVwhN9 4O5g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=r3HZXk/BEn65kcaybENH+gdA6j04yycM3f96/pfDo3Y=; b=luWHiphu5xgKA5p+9v1CC0e/AaqhVyrabK/HGvSgOyJAJWgtADFt5HZcDSRBjGCyIe 2aQLeT/ntuu13NFDir2bX9XImcQzIT3Er9XpoR7F/oXF/n6Agkblsz+G4UoreOeWRNKX JvGFvrSESoMF9Nl4sNJMvB+qe+gUSVeLwE2JEAvA1HQCKXAoXeXACc7gtElI5vcsjPd6 JnMEA3h7UB6fubdvNi+ENLtHQnSZbS0QrIQDNed6qNUFYNh3Cp/GStOXF3P3ljikEcaF Hnlltp8hHlKrvayO+I07eRz63k2hQ3/ztp2pfFvnAVvZu/LWbj9rpcnR4BvwFieaIuBl nseQ== X-Gm-Message-State: ACgBeo3/xYOVbuFLdToibCE60rohXGYWqiEGiJNOwtSfg1HlIN/tP3Wv Lu0RYF8B6ctDB9uLBOTtiPeHZDuL3+o= X-Google-Smtp-Source: AA6agR4ywNo4RURTtgkJzNxgjjBzX0rpx4pVEpcnNVzzj6eodf3zbpdzsA8Tkeoavta9Gzz8OJ5GJVPldJ8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:348f:b0:448:6005:68af with SMTP id v15-20020a056402348f00b00448600568afmr32104195edc.184.1662380777495; Mon, 05 Sep 2022 05:26:17 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:37 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> 
Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-30-glider@google.com> Subject: [PATCH v6 29/44] block: kmsan: skip bio block merging logic for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Biggers ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380778; a=rsa-sha256; cv=none; b=rc1iHnUqMmUx9zvDtcXL0A8wAaS6zJXG0LRhSo9bkTa9JZPIlm8l4MS0JwjZvOs2lrp1gs ZiXFuM0jYw3fJsExeYL6HxiN0eED1FCMMc9FyVlj+0VmM1xglwSOLXdmOzI489aW5n3++u p3CxD9FM8xjLlNmS9Ma7VIfdoYfI0rA= ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ASzeAsbM; spf=pass (imf10.hostedemail.com: domain of 36eoVYwYKCDYYdaVWjYggYdW.Ugedafmp-eecnSUc.gjY@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=36eoVYwYKCDYYdaVWjYggYdW.Ugedafmp-eecnSUc.gjY@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380778; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=r3HZXk/BEn65kcaybENH+gdA6j04yycM3f96/pfDo3Y=; b=dQPSnBH6K5oS2P1WrEfmehnePVmxPk8InFnzoDfzYJkNyTQNpDflF+j+wjOXiAMlMAxTZX G++qapWVBKanTEcrwgGCQXVc078avs2UEDsgOJtRekImwstqxHSQKQwFl+d5KpUq/DAA4/ fRn+9OAlCrLYZIGSZ0KtOxB1ihms/EE= Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ASzeAsbM; spf=pass (imf10.hostedemail.com: domain of 36eoVYwYKCDYYdaVWjYggYdW.Ugedafmp-eecnSUc.gjY@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=36eoVYwYKCDYYdaVWjYggYdW.Ugedafmp-eecnSUc.gjY@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam12 X-Stat-Signature: ke4muckw3e91xmzecurtrt3n1wtn84ze X-Rspamd-Queue-Id: BC550C0060 X-HE-Tag: 1662380778-562833 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't allow treating adjacent memory pages as such, if they were allocated by different alloc_pages() calls. The block layer however does so: adjacent pages end up being used together. To prevent this, make page_is_mergeable() return false under KMSAN. 
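To make the constraint above concrete, here is a minimal sketch (not part of this patch; the helper name pages_physically_adjacent() is made up for illustration, while alloc_pages(), page_to_phys() and PAGE_SIZE are the real kernel APIs already used in the biovec change earlier in this series): two pages returned by separate alloc_pages() calls may happen to be physically contiguous, yet KMSAN allocates their shadow/origin metadata pages independently, so that metadata is generally not contiguous.

#include <linux/mm.h>
#include <linux/io.h>	/* page_to_phys(), as in biovec_phys_mergeable() */

/* Hypothetical helper: physical adjacency of two data pages... */
static bool pages_physically_adjacent(struct page *p1, struct page *p2)
{
	/*
	 * ...says nothing about adjacency of their KMSAN metadata pages,
	 * so one access spanning the page boundary would update the wrong
	 * metadata for the second page. This is why page_is_mergeable()
	 * refuses to merge under CONFIG_KMSAN.
	 */
	return page_to_phys(p2) == page_to_phys(p1) + PAGE_SIZE;
}
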
Suggested-by: Eric Biggers Signed-off-by: Alexander Potapenko --- v4: -- swap block: and kmsan: in the subject v5: -- address Marco Elver's comments Link: https://linux-review.googlesource.com/id/Ie29cc2464c70032347c32ab2a22e1e7a0b37b905 --- block/bio.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/bio.c b/block/bio.c index 3d3a2678fea25..106ef14f28c2a 100644 --- a/block/bio.c +++ b/block/bio.c @@ -869,6 +869,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, *same_page = ((vec_end_addr & PAGE_MASK) == page_addr); if (*same_page) return true; + else if (IS_ENABLED(CONFIG_KMSAN)) + return false; return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE); } From patchwork Mon Sep 5 12:24:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966061 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79D79ECAAD3 for ; Mon, 5 Sep 2022 12:26:22 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 129408D0085; Mon, 5 Sep 2022 08:26:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 089C08D0076; Mon, 5 Sep 2022 08:26:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E6B1F8D0085; Mon, 5 Sep 2022 08:26:21 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id D5EF18D0076 for ; Mon, 5 Sep 2022 08:26:21 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id BE72D407AC for ; Mon, 5 Sep 2022 12:26:21 +0000 (UTC) X-FDA: 79877954562.08.979B048 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf16.hostedemail.com (Postfix) with ESMTP id 6AF3C180068 for ; Mon, 5 Sep 2022 12:26:21 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id e15-20020adf9bcf000000b002285faa9bd4so740156wrc.8 for ; Mon, 05 Sep 2022 05:26:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=TqyxH/L5MkAtosS0jWqWllGGzUIesiLdVg3wG3p7XtY=; b=SRHbIOUb4DGUSlckCuBIb5D5nY2hugScbD5wJSkj9+gz5MDYNGyreOoUjr426hDozZ eIZwnOJAR9QbuL6MBljpr+ef5VyWMq7LE+MGIMu63m/ooGwVOtY9Wymo4x+ItRsbPuMF 3x3WktXqUtSalOy5ST4UA3C+QVvz/PUjD2mQzbD9n6SlVfmNFgRe+dRr3wA+uShralze F5IELO9fjYSA2cgSGi/7JeC9JLPRXPbVVnHX/AfgjNsTAchbmP+32JYRVNVmU8xLiZ/2 5IjHNbRGxFz5C2uzmzWFN1Wc7yPI/jBEojh1hnBnCwk9zy4n2ozIysDaR+/OsHRccZvb k9Lg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=TqyxH/L5MkAtosS0jWqWllGGzUIesiLdVg3wG3p7XtY=; b=V/tkXZI/8nUM1B3wwXUgSw+6q3oEN1ncnz/rKfDqArPQ+BSZjaePS9C5uIsfUgXGgf gUGyLYrTIeof5aOMjZA/rK4vLHw+zVbFLpBgD7POYmfdJsWMIJbo4OrdMFQmEOQAC9Xd kdBkIBvFOVwsoFC33PbnnuB0LfsYwHHVRiagwxjtBsHFabPV1zC755GgDbCGzgmWwgZW Vom9rUmsYXJPPHUk63FHuN1GfqHWYDYwZycXVngAuIwAV2OfMSz7NWwOhuhi5+IyOLn/ pG9/im8ek5sWntRicPuAjIzAT9/9sPT0WCQiZiUDJKdzMe63K74iSg9L/qDscno3zLTV TlJA== X-Gm-Message-State: 
ACgBeo3mzZeodSvuNPeEMPfyu7xsHNmpESZa4qzVmrwAHcMqQ+t+QiYV fTdGxhUk3ZYUIZSfPkeTOrqW3PvDcXc= X-Google-Smtp-Source: AA6agR7gpIA9Ys+R+97x8E6SItr56wFcMnWLG4Y2aVvt4HKxaJejCTap6PHw9s2SK+DWSs3ZsGMtcLcy1oM= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a5d:584b:0:b0:220:7624:5aae with SMTP id i11-20020a5d584b000000b0022076245aaemr24101908wrf.119.1662380780114; Mon, 05 Sep 2022 05:26:20 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:38 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-31-glider@google.com> Subject: [PATCH v6 30/44] kcov: kmsan: unpoison area->list in kcov_remote_area_put() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380781; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=TqyxH/L5MkAtosS0jWqWllGGzUIesiLdVg3wG3p7XtY=; b=dQsGTi5ccHX98RVVyya9b0r45DZlGsj2a47s9gepvtsXSHStr/uH3IO4okJ16MqPbTmOfG an5PiJ0cMi6xOdLK+ltxoPO3uLx8ZUyOqqEe9z0pKpSCNUGTvXSJByfEMQ/AFAvCevFOlV f6Fe0MQn8Y1pIQZxvnV3lkBsXYyzmzw= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=SRHbIOUb; spf=pass (imf16.hostedemail.com: domain of 37OoVYwYKCDkbgdYZmbjjbgZ.Xjhgdips-hhfqVXf.jmb@flex--glider.bounces.google.com designates 209.85.221.74 as permitted sender) smtp.mailfrom=37OoVYwYKCDkbgdYZmbjjbgZ.Xjhgdips-hhfqVXf.jmb@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380781; a=rsa-sha256; cv=none; b=UbR+d6ny0RcKwGpj/wAHRnmerOMFRGiCg92T93EuY1xGnQx9p8AMxqUq1jBMVTd3GATAPT nvYO65SWmZPsqDS5vvZQdYKoKn9jCrt3wSSnXuEPFFg24rD+k0csKc6G2pN2bP+h00RDhT 6dnBLg709012EMLHEayAzwzNzG525kE= X-Rspam-User: X-Stat-Signature: 5mg6cf8dwi5kq4wehs1inspcf7743y4w X-Rspamd-Queue-Id: 6AF3C180068 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=SRHbIOUb; spf=pass (imf16.hostedemail.com: domain of 37OoVYwYKCDkbgdYZmbjjbgZ.Xjhgdips-hhfqVXf.jmb@flex--glider.bounces.google.com designates 209.85.221.74 as permitted sender) smtp.mailfrom=37OoVYwYKCDkbgdYZmbjjbgZ.Xjhgdips-hhfqVXf.jmb@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam04 X-HE-Tag: 1662380781-611030 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN does not instrument kernel/kcov.c for 
performance reasons (with CONFIG_KCOV=y virtually every place in the kernel invokes kcov instrumentation). Therefore the tool may miss writes from kcov.c that initialize memory. When CONFIG_DEBUG_LIST is enabled, list pointers from kernel/kcov.c are passed to instrumented helpers in lib/list_debug.c, resulting in false positives. To work around these reports, we unpoison the contents of area->list after initializing it. Signed-off-by: Alexander Potapenko --- v4: -- change sizeof(type) to sizeof(*ptr) -- swap kcov: and kmsan: in the subject Link: https://linux-review.googlesource.com/id/Ie17f2ee47a7af58f5cdf716d585ebf0769348a5a --- kernel/kcov.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/kernel/kcov.c b/kernel/kcov.c index e19c84b02452e..e5cd09fd8a050 100644 --- a/kernel/kcov.c +++ b/kernel/kcov.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -152,6 +153,12 @@ static void kcov_remote_area_put(struct kcov_remote_area *area, INIT_LIST_HEAD(&area->list); area->size = size; list_add(&area->list, &kcov_remote_areas); + /* + * KMSAN doesn't instrument this file, so it may not know area->list + * is initialized. Unpoison it explicitly to avoid reports in + * kcov_remote_area_get(). + */ + kmsan_unpoison_memory(&area->list, sizeof(area->list)); } static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) From patchwork Mon Sep 5 12:24:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966062 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 43BCFECAAD5 for ; Mon, 5 Sep 2022 12:26:25 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C99B68D0086; Mon, 5 Sep 2022 08:26:24 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C47ED8D0076; Mon, 5 Sep 2022 08:26:24 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AE8E48D0086; Mon, 5 Sep 2022 08:26:24 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 9E84A8D0076 for ; Mon, 5 Sep 2022 08:26:24 -0400 (EDT) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 80252140D4B for ; Mon, 5 Sep 2022 12:26:24 +0000 (UTC) X-FDA: 79877954688.09.1AF858B Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf23.hostedemail.com (Postfix) with ESMTP id 24F96140077 for ; Mon, 5 Sep 2022 12:26:23 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id s19-20020a056402521300b00448954f38c9so5719611edd.14 for ; Mon, 05 Sep 2022 05:26:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=gorYk6QZYQqsbqX3c5FquLgSIgOkVhkBb0FBAlBlnNE=; b=Mky7siHZEB3fEVWYjmYZB5975h3eFuJUOPRaVsIHhDdbe8vWODOkKOcTcIOtvRs3dA Gqdv8LYPv590UyUqvpPeLNRnb58b8H6+CChTPnchb+nHnIy2NVsJiB3QFAsSiBgH+0cz /0fdR0cMX4AUUhYwf7g2N0mru/fhRKm9K8tn96seGrdmb5AbOpnmECrbM5beMI8FHRK4 T81Be6OsaGk3aPk/fjPKtMjltQDKdpeirQzbsGa8Q+gsbhfG4G5DaFhBPL7glFIpPA+l 
cOWrmQXoJXMdDSc8atI0e+LRoBQH9bVK28k5xmQ5Q8ncHiBZj9Gk5lda2kVWvaRrngdm I5xw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=gorYk6QZYQqsbqX3c5FquLgSIgOkVhkBb0FBAlBlnNE=; b=r8katAnynvqTCNkr8Cousm1TYRyTM4BcKz+WRQQl6YcgHKxJ8qNrkqYgy1KUBPeDA3 J2lv5qN7jRcIDwaz4ABj+mAEyQ8MOY7XZn1kXnvIKEWGfVLMplsTMxCpy7UpVg1QxjnR gMK8o+cWsaaEjx+yUr02KS01eGqtwjfAyFO2sEwptKzsfFp2LivhCAjoGUDcmcesWVLB aevBkY6vIcCWRkxt3iGniSNbRBvrJ9ftnNy/evDkr2tll2KP/t1/qiwhR+nWvP2qE9zd 6l/4MaLMMtgiv/iYOh4wVIEGxCwTAOkXBWrZx6BX5EjdC6ivEw3I4aQLa3cW6B04ichH SpBg== X-Gm-Message-State: ACgBeo04G44/ztBjwhvWFer5L6qEgDMrksneKNfwNXGbOfbvVyGy3er8 tbI53NNRaw+iZdzZLRk/k9MxfP0C4ZY= X-Google-Smtp-Source: AA6agR6VMfYGvTl8597KztYX92eWUsjds97DFXhCN+pqdpQR7nCQFTv/cdSNJsVV5FjbT2j+Bij2pc3hsmw= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:17d7:b0:44e:95b0:3741 with SMTP id s23-20020a05640217d700b0044e95b03741mr2597122edy.281.1662380782901; Mon, 05 Sep 2022 05:26:22 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:39 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-32-glider@google.com> Subject: [PATCH v6 31/44] security: kmsan: fix interoperability with auto-initialization From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380784; a=rsa-sha256; cv=none; b=lvGKsMhq3Fm47v9qMYHX/xZdjFoj6B+hYzrOzB29BpSwRmkxi+g0Uc04zP2T5hB77zVlY9 +0rCQspsLFzkvF5xCkPZE3Lvxxn+Rv9WYsZ1jPesArWKcd/F7RN+jzzJpKYQlWiadl+s0O AR1NVe9Imb7XUdJ0wdQ91NCbu/SptFM= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Mky7siHZ; spf=pass (imf23.hostedemail.com: domain of 37uoVYwYKCDsdifabodlldib.Zljifkru-jjhsXZh.lod@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=37uoVYwYKCDsdifabodlldib.Zljifkru-jjhsXZh.lod@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380784; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=gorYk6QZYQqsbqX3c5FquLgSIgOkVhkBb0FBAlBlnNE=; b=UWlIN78Rw+J/PATN8RiDopwPMWd/MLdOAUu9F+lkjEjL33s3i9Q7CCxqAwzeXH0OR9cMLf qQDbbAoIH4KmqmzzVhJLc5B8VToMy5k5ynWNLTsflLSjoJgzMdw/6+GvNPkDjrO9X+OSxv nwxTqDxVH+tbwAbEczUaQrw7RXpzCN0= Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Mky7siHZ; spf=pass (imf23.hostedemail.com: domain of 37uoVYwYKCDsdifabodlldib.Zljifkru-jjhsXZh.lod@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=37uoVYwYKCDsdifabodlldib.Zljifkru-jjhsXZh.lod@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam12 X-Stat-Signature: iwsq6aoy95ydxha9irwe8p4ogadiez1y X-Rspamd-Queue-Id: 24F96140077 X-HE-Tag: 1662380783-101285 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Heap and stack initialization is great, but not when we are trying uses of uninitialized memory. When the kernel is built with KMSAN, having kernel memory initialization enabled may introduce false negatives. We disable CONFIG_INIT_STACK_ALL_PATTERN and CONFIG_INIT_STACK_ALL_ZERO under CONFIG_KMSAN, making it impossible to auto-initialize stack variables in KMSAN builds. We also disable CONFIG_INIT_ON_ALLOC_DEFAULT_ON and CONFIG_INIT_ON_FREE_DEFAULT_ON to prevent accidental use of heap auto-initialization. We however still let the users enable heap auto-initialization at boot-time (by setting init_on_alloc=1 or init_on_free=1), in which case a warning is printed. 
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I86608dd867018683a14ae1870f1928ad925f42e9 --- mm/page_alloc.c | 4 ++++ security/Kconfig.hardening | 4 ++++ 2 files changed, 8 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index b28093e3bb42a..e5eed276ee41d 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -936,6 +936,10 @@ void init_mem_debugging_and_hardening(void) else static_branch_disable(&init_on_free); + if (IS_ENABLED(CONFIG_KMSAN) && + (_init_on_alloc_enabled_early || _init_on_free_enabled_early)) + pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n"); + #ifdef CONFIG_DEBUG_PAGEALLOC if (!debug_pagealloc_enabled()) return; diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening index bd2aabb2c60f9..2739a6776454e 100644 --- a/security/Kconfig.hardening +++ b/security/Kconfig.hardening @@ -106,6 +106,7 @@ choice config INIT_STACK_ALL_PATTERN bool "pattern-init everything (strongest)" depends on CC_HAS_AUTO_VAR_INIT_PATTERN + depends on !KMSAN help Initializes everything on the stack (including padding) with a specific debug value. This is intended to eliminate @@ -124,6 +125,7 @@ choice config INIT_STACK_ALL_ZERO bool "zero-init everything (strongest and safest)" depends on CC_HAS_AUTO_VAR_INIT_ZERO + depends on !KMSAN help Initializes everything on the stack (including padding) with a zero value. This is intended to eliminate all @@ -218,6 +220,7 @@ config STACKLEAK_RUNTIME_DISABLE config INIT_ON_ALLOC_DEFAULT_ON bool "Enable heap memory zeroing on allocation by default" + depends on !KMSAN help This has the effect of setting "init_on_alloc=1" on the kernel command line. This can be disabled with "init_on_alloc=0". @@ -230,6 +233,7 @@ config INIT_ON_ALLOC_DEFAULT_ON config INIT_ON_FREE_DEFAULT_ON bool "Enable heap memory zeroing on free by default" + depends on !KMSAN help This has the effect of setting "init_on_free=1" on the kernel command line. This can be disabled with "init_on_free=0". 
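The runtime check added above follows the same pattern as the strscpy(), biovec and bio changes earlier in the series: an IS_ENABLED(CONFIG_KMSAN) guard rather than an #ifdef. A minimal sketch of that idiom (the helper name is made up; IS_ENABLED() is the real macro from include/linux/kconfig.h):

#include <linux/kconfig.h>	/* IS_ENABLED() */

/* Hypothetical helper, not part of the series. */
static inline bool word_at_a_time_allowed(void)
{
	/*
	 * IS_ENABLED(CONFIG_KMSAN) is a compile-time constant, so this
	 * folds to "return true" in normal builds and "return false" in
	 * KMSAN builds, while both variants remain visible to the compiler
	 * (unlike code hidden behind #ifdef CONFIG_KMSAN).
	 */
	return !IS_ENABLED(CONFIG_KMSAN);
}
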
From patchwork Mon Sep 5 12:24:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966063 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0C24ECAAD5 for ; Mon, 5 Sep 2022 12:26:27 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8C82D8D0087; Mon, 5 Sep 2022 08:26:27 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8503F8D0076; Mon, 5 Sep 2022 08:26:27 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6F15D8D0087; Mon, 5 Sep 2022 08:26:27 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id 5EDF48D0076 for ; Mon, 5 Sep 2022 08:26:27 -0400 (EDT) Received: from smtpin17.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 3C4B880FE2 for ; Mon, 5 Sep 2022 12:26:27 +0000 (UTC) X-FDA: 79877954814.17.82107EB Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf07.hostedemail.com (Postfix) with ESMTP id DF95B40053 for ; Mon, 5 Sep 2022 12:26:26 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id gb33-20020a170907962100b00741496e2da1so2280458ejc.1 for ; Mon, 05 Sep 2022 05:26:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=mxKrKakNES2O6WNVUjUiVAG/aqg8NEyyXmfzwvDy3M8=; b=Hll0XPDmwrA/wkE0+inHBLREQNrLcoBNQOLJv6vlBIuGAkWI+xpzOnwpEHVMNRdEFq LadRtvkYKT0rBaCGIQh+SpcYQSYJ6+D3urzxB4a+suX0OZKqBBYIRUGR9X7fE50HgxBz B2g0i4X6vQks1KpTx5jgpjFZyeuT89E6vIhppFNHjDfbX+1QPnLtmUz9mtN+7ySfB+bR PFoDRy3NBTPBWZ7NAt+/xQb5yB7Y9UmTQE95ZWJNyoXXzQAFuUS4huF8Aevejgh3HJgS M1OLT91cEQYlIRh7PZL9bQgnExoipMKZwEt+JNxRwLYODVp7nciM3moyVQjv2BbTAuZX Rk6w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=mxKrKakNES2O6WNVUjUiVAG/aqg8NEyyXmfzwvDy3M8=; b=nzzJi05en2KMut/surhvDjfIQl8CCHp/s0t2qeRsnRDuRU/FyKI5qKcuJbFQTCxeo0 4QDjzj3ha9xiGc2JITr9kUCDEctoUtjFmbKAjPYlkPqxTtnurEYQn8StVN3nWiFZU7X0 7kr3yp1wLrNh2XLxDO9C7sVJK8eXPfe/iK9JZ9PFmIroESgOMSXiWpsNbhYKTWOFuglX 94xaprCyRcmrzn9iWphxGJ4ZVuYo2+AgjceGshYA8fD/mHQgJnEFcJoSwp5QqRHODocD Kwb4B0KVS8KXKmufs/NUYs9QxrWYd+BMw/OAuuzWgoul5FEGCW2rDTaZ+IV20t9VAbAU neSw== X-Gm-Message-State: ACgBeo1Kktox7eXjchLpKjKfOtIs/Bwe7rtoDA7S8WchZknVY3Ct8Hsv 6D34iLBrW2tAH87ux1AT608DkR/i7rY= X-Google-Smtp-Source: AA6agR4pZP+Qf1sER/Bcc0sCrTDiYtuFvoFMz4Z9eHlW4ZWYhhtKJNa6WbLDpJSuTZ9T3g+moO7a0PvtFbY= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:906:845c:b0:730:bbf1:196a with SMTP id e28-20020a170906845c00b00730bbf1196amr36512046ejy.13.1662380785638; Mon, 05 Sep 2022 05:26:25 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:40 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-33-glider@google.com> 
Subject: [PATCH v6 32/44] objtool: kmsan: list KMSAN API functions as uaccess-safe From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf07.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Hll0XPDm; spf=pass (imf07.hostedemail.com: domain of 38eoVYwYKCD4lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=38eoVYwYKCD4lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380786; a=rsa-sha256; cv=none; b=7mQqeZgwC+G3nMJnrCEXd24aZtTZxZA3lZTCdSMpy24NZmbDqhFrWQwoYno/4G6vX7gkds glkYSIHY0Nm2m7D9QpmRb8OafWxY29eRK8zD9dCWAifafrY9qmXQ2uDX0S4HbJ71aYC1wo hNxk7vZY1v+Ntpxw+gjXKcbS+00lDmg= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380786; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=mxKrKakNES2O6WNVUjUiVAG/aqg8NEyyXmfzwvDy3M8=; b=iXaDkKmedGysqsx8dZClTEmJBsLMu2kvk/NvqQV9GZm6xh4cBWA5ROl5kSS6QSF3LZtNQd MQ0oPZDP7qJZX7OjO3o1zbP3oqG6YuaZeGT+KQxev86iddRiIQ3xKHH1JpGSgyOfjWe6+f 1ARHUWf5zokOq6GxGpvzRjeB6Fpdfdg= Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Hll0XPDm; spf=pass (imf07.hostedemail.com: domain of 38eoVYwYKCD4lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=38eoVYwYKCD4lqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Stat-Signature: 74s8dsh3tttwospgqg4pjcx35xjnfetn X-Rspamd-Queue-Id: DF95B40053 X-Rspamd-Server: rspam05 X-HE-Tag: 1662380786-866873 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN inserts API function calls in a lot of places (function entries and exits, local variables, memory accesses), so they may get called from the uaccess regions as well. KMSAN API functions are used to update the metadata (shadow/origin pages) for kernel memory accesses. The metadata pages for kernel pointers are also located in the kernel memory, so touching them is not a problem. For userspace pointers, no metadata is allocated. If an API function is supposed to read or modify the metadata, it does so for kernel pointers and ignores userspace pointers. 
If an API function is supposed to return a pair of metadata pointers for the instrumentation to use (like all __msan_metadata_ptr_for_TYPE_SIZE() functions do), it returns the allocated metadata for kernel pointers and special dummy buffers residing in the kernel memory for userspace pointers. As a result, none of KMSAN API functions perform userspace accesses, but since they might be called from UACCESS regions they use user_access_save/restore(). Signed-off-by: Alexander Potapenko --- v3: -- updated the patch description v4: -- add kmsan_unpoison_entry_regs() Link: https://linux-review.googlesource.com/id/I242bc9816273fecad4ea3d977393784396bb3c35 --- tools/objtool/check.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index e55fdf952a3a1..7c048c11ce7da 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1062,6 +1062,26 @@ static const char *uaccess_safe_builtin[] = { "__sanitizer_cov_trace_cmp4", "__sanitizer_cov_trace_cmp8", "__sanitizer_cov_trace_switch", + /* KMSAN */ + "kmsan_copy_to_user", + "kmsan_report", + "kmsan_unpoison_entry_regs", + "kmsan_unpoison_memory", + "__msan_chain_origin", + "__msan_get_context_state", + "__msan_instrument_asm_store", + "__msan_metadata_ptr_for_load_1", + "__msan_metadata_ptr_for_load_2", + "__msan_metadata_ptr_for_load_4", + "__msan_metadata_ptr_for_load_8", + "__msan_metadata_ptr_for_load_n", + "__msan_metadata_ptr_for_store_1", + "__msan_metadata_ptr_for_store_2", + "__msan_metadata_ptr_for_store_4", + "__msan_metadata_ptr_for_store_8", + "__msan_metadata_ptr_for_store_n", + "__msan_poison_alloca", + "__msan_warning", /* UBSAN */ "ubsan_type_mismatch_common", "__ubsan_handle_type_mismatch", From patchwork Mon Sep 5 12:24:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966064 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8E75EECAAD5 for ; Mon, 5 Sep 2022 12:26:30 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2D4508D0088; Mon, 5 Sep 2022 08:26:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 283F38D0076; Mon, 5 Sep 2022 08:26:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 124868D0088; Mon, 5 Sep 2022 08:26:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 03A5F8D0076 for ; Mon, 5 Sep 2022 08:26:30 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id DC5EF1A0D8E for ; Mon, 5 Sep 2022 12:26:29 +0000 (UTC) X-FDA: 79877954898.22.30C6FD0 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf06.hostedemail.com (Postfix) with ESMTP id 8F207180067 for ; Mon, 5 Sep 2022 12:26:29 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id i6-20020a05640242c600b00447c00a776aso5852214edc.20 for ; Mon, 05 Sep 2022 05:26:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; 
bh=ByRxUK8Sn+pDB+Gp2EKIbiaRYh/I0Prcbbdv1W00JlM=; b=bnvE8RFZd8JSsWywJZyh8WgYPEeDhTPjtf4oYjSDKgUjt+216/PXCURzhoe4UMjQSg aiwxh8SWnN/dXpWl+MrWSyJz47h95LVg0h5y5IB6zHbMFFLeSw1sPxAszFLN2LgL1Gba xWxFSd6GmJpXk5tX1ANkSM82BjNBp6x1dqSWIqEmMrbuiRGIX79y4txI7jeOkmGe/GKs p7SfNPoi35wEA3J6itsMUQc3MENxO8ZjzO2zbj9qxPTkMJYXeleclLfDKGE1hMA4rFxo jBP5Mdh3ktlni83OCWSpK8Ny7PSGEwVr2PtgkG+kHulRLThMR2PCM3xERXcQ7xL3u4yl cmWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=ByRxUK8Sn+pDB+Gp2EKIbiaRYh/I0Prcbbdv1W00JlM=; b=MO9oTV+N+xJHCrIFyG3yjD8r0FWHerYRHtN1a7DWZ9B8hHXSy0gPjp9OznkqN7jVZ1 qibVDvhAM+FVrMjX9JD2+SlOppq7FgE5O4aSRm0zvlFjBrXTHTzt//Ji3BbSk++EMVEx Ajue6ElccM0r8nDI9uwWYNSgUcCaiAVWKiHlPm6Xj6XqvsrsBi5fwarjBGmqcnIVk/JA iRScOMX89GIq0csW7CObF1A5dK8dNluMYLEtU7SGwJUKutrOL5uIJ84Rd26tNI3bRBs7 ue0vezWYSi8VHNK6wWgoko4oqncwQUsZaT/gqltu02cHk7/n8WDD3bTdb/ax9lw2vMsP pDsA== X-Gm-Message-State: ACgBeo3eSEznhSoZzOu8erAAUW/JMoh9ffTW0mKbaFCP9YlJStAazGFl U5Hxrx5ZP5Ev+sYUtMLsgi69S0YRMe4= X-Google-Smtp-Source: AA6agR6KvpDDT21GyUrgjOCikTI9CcgSEHgfgED+CFrI2pXRaiuH1IsB+XbwCFXMnDLFPc8kr/VMOT95Pko= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:906:9bf4:b0:741:4902:4e6 with SMTP id de52-20020a1709069bf400b00741490204e6mr29476988ejc.222.1662380788354; Mon, 05 Sep 2022 05:26:28 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:41 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-34-glider@google.com> Subject: [PATCH v6 33/44] x86: kmsan: disable instrumentation of unsupported code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf06.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=bnvE8RFZ; spf=pass (imf06.hostedemail.com: domain of 39OoVYwYKCEEjolghujrrjoh.frpolqx0-ppnydfn.ruj@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=39OoVYwYKCEEjolghujrrjoh.frpolqx0-ppnydfn.ruj@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380789; a=rsa-sha256; cv=none; b=A/jOW5M5MVL+tS0APvD2VZlwUetLjsS71mD5K7vpt+lXi2ICRaRdOiz6bQSTBJkAuo0GLF y6fvrDHWra62U7tB5qzPFp8nRFyZdN6wDTarPyNZ37DOQt/RM5AQhxd0/9gk4pi1EgVhHV JQoqkdi0aDIO1mP1PNxIwr1skYJWHBw= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380789; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=ByRxUK8Sn+pDB+Gp2EKIbiaRYh/I0Prcbbdv1W00JlM=; b=KB38MXwwhh/gOf8KCNG/jC/e8sj3Kt2UFxF+KNC+h1rngqlfnNAvDoe4hTT3ue3/yOAt4I 3x0YVuhptt5Rvbbj8FlBvZKPopT/CneGW4BXg+L/vKNgsbVdhH+ocwaaYIcxX6Tn/iJ2oT WEcTctPHI2V/6R+dL+JnZ/wHZtsmqPI= Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=bnvE8RFZ; spf=pass (imf06.hostedemail.com: domain of 39OoVYwYKCEEjolghujrrjoh.frpolqx0-ppnydfn.ruj@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=39OoVYwYKCEEjolghujrrjoh.frpolqx0-ppnydfn.ruj@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam06 X-Stat-Signature: ar4a73sqaoewaknj7h186st4b7kmzesm X-Rspam-User: X-Rspamd-Queue-Id: 8F207180067 X-HE-Tag: 1662380789-44984 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Instrumenting some files with KMSAN will result in kernel being unable to link, boot or crashing at runtime for various reasons (e.g. infinite recursion caused by instrumentation hooks calling instrumented code again). Completely omit KMSAN instrumentation in the following places: - arch/x86/boot and arch/x86/realmode/rm, as KMSAN doesn't work for i386; - arch/x86/entry/vdso, which isn't linked with KMSAN runtime; - three files in arch/x86/kernel - boot problems; - arch/x86/mm/cpu_entry_area.c - recursion. 
Signed-off-by: Alexander Potapenko --- v2: -- moved the patch earlier in the series so that KMSAN can compile -- split off the non-x86 part into a separate patch v3: -- added a comment to lib/Makefile v5: -- removed a comment belonging to another patch Link: https://linux-review.googlesource.com/id/Id5e5c4a9f9d53c24a35ebb633b814c414628d81b --- arch/x86/boot/Makefile | 1 + arch/x86/boot/compressed/Makefile | 1 + arch/x86/entry/vdso/Makefile | 3 +++ arch/x86/kernel/Makefile | 2 ++ arch/x86/kernel/cpu/Makefile | 1 + arch/x86/mm/Makefile | 2 ++ arch/x86/realmode/rm/Makefile | 1 + 7 files changed, 11 insertions(+) diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile index ffec8bb01ba8c..9860ca5979f8a 100644 --- a/arch/x86/boot/Makefile +++ b/arch/x86/boot/Makefile @@ -12,6 +12,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Kernel does not boot with kcov instrumentation here. diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 35ce1a64068b7..3a261abb6d158 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -20,6 +20,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile index 12f6c4d714cd6..ce4eb7e44e5b8 100644 --- a/arch/x86/entry/vdso/Makefile +++ b/arch/x86/entry/vdso/Makefile @@ -11,6 +11,9 @@ include $(srctree)/lib/vdso/Makefile # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n +KMSAN_SANITIZE_vclock_gettime.o := n +KMSAN_SANITIZE_vgetcpu.o := n + UBSAN_SANITIZE := n KCSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index a20a5ebfacd73..ac564c5d7b1f0 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -33,6 +33,8 @@ KASAN_SANITIZE_sev.o := n # With some compiler versions the generated code results in boot hangs, caused # by several compilation units. To be safe, disable all instrumentation. KCSAN_SANITIZE := n +KMSAN_SANITIZE_head$(BITS).o := n +KMSAN_SANITIZE_nmi.o := n # If instrumentation of this dir is enabled, boot hangs during first second. # Probably could be more selective here, but note that files related to irqs, diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 9661e3e802be5..f10a921ee7565 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -12,6 +12,7 @@ endif # If these files are instrumented, boot hangs during the first second. KCOV_INSTRUMENT_common.o := n KCOV_INSTRUMENT_perf_event.o := n +KMSAN_SANITIZE_common.o := n # As above, instrumenting secondary CPU boot code causes boot hangs. KCSAN_SANITIZE_common.o := n diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index f8220fd2c169a..39c0700c9955c 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -12,6 +12,8 @@ KASAN_SANITIZE_mem_encrypt_identity.o := n # Disable KCSAN entirely, because otherwise we get warnings that some functions # reference __initdata sections. KCSAN_SANITIZE := n +# Avoid recursion by not calling KMSAN hooks for CEA code. 
+KMSAN_SANITIZE_cpu_entry_area.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_mem_encrypt.o = -pg diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile index 83f1b6a56449f..f614009d3e4e2 100644 --- a/arch/x86/realmode/rm/Makefile +++ b/arch/x86/realmode/rm/Makefile @@ -10,6 +10,7 @@ # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. From patchwork Mon Sep 5 12:24:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966065 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EAC6ECAAD3 for ; Mon, 5 Sep 2022 12:26:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DAA9D8D0089; Mon, 5 Sep 2022 08:26:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D59298D0076; Mon, 5 Sep 2022 08:26:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BAC6C8D0089; Mon, 5 Sep 2022 08:26:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0010.hostedemail.com [216.40.44.10]) by kanga.kvack.org (Postfix) with ESMTP id AB15B8D0076 for ; Mon, 5 Sep 2022 08:26:32 -0400 (EDT) Received: from smtpin12.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 88BF414035D for ; Mon, 5 Sep 2022 12:26:32 +0000 (UTC) X-FDA: 79877955024.12.16FFEDD Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) by imf15.hostedemail.com (Postfix) with ESMTP id 422F9A0062 for ; Mon, 5 Sep 2022 12:26:32 +0000 (UTC) Received: by mail-wm1-f73.google.com with SMTP id a17-20020a05600c349100b003a545125f6eso7401211wmq.4 for ; Mon, 05 Sep 2022 05:26:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=VswCSWA+rV4PC5eDv8SsttMbBbTb1fmAEKvfw1RwEtM=; b=FaysfHPUHEI9xDQ+DizkafuJSz2Euxr4w8pch7JyeIF7w+mm9SpixN7VAttRnhIcf1 iHLsEV9jw/pLNwUY4C8KGw1A06689EYkbOQFe18iHyK1OO4vYcFXMfZ5ltlCGAVJ5Riu XG1siZwi0o/mlCR+V64/HPoH407x/VrsCnkDyjBQzBqc2hGL0yRU8s5mEz8P1N59BhTg hzG3VGefRSfd/eN++lt3E3Gg+z8Tl5riHtwzfLnKb+uWX3YCISAS21Fwh0ERlJq81TyF KZxD8W4yBHXpc4Lq6Y1MBvt4ESwZqiGITUw1+GV0SilDaebvQNF1YZTejeneVonkFaZs QPng== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=VswCSWA+rV4PC5eDv8SsttMbBbTb1fmAEKvfw1RwEtM=; b=h6Xu0yIw2urB0QI+p9cE/wo0SJ0TDiKGYbvfSJv5waXeM8JMlIEBExt3Q7j/Te9kpr eB+ZF7U76//2K201hYf6m+3aGTwXFLeGkQh1CdXlitoGZr4iGuCNIm25UoI06caJKTh+ xnwDh3k1TFLdhll4OAw+SX4yWF47ksEuxasxe8txCJ/f8OTwOtXacp/MSULkF7t7QTrO u05wuVaMaIMXpng6TXx7Jal7fcOLryuSwBJOVEy/U0eIxvseY2R6XJVPCDgzAFqDtU8m 6Wu0KS6H7nM7RrfgRw73dHXRUB5ZwxrfpJg5jyxWyG+PkiKTPrs35NhDb5V/6M3uyF9w xEnw== X-Gm-Message-State: ACgBeo0rtmZrYhtqD4npC/V6NUJLQcYA87VKlmRh58jnHwZYjE6ZLvL0 /DiUp2TJJY4kphb7uNLAqk0MjDDla/8= X-Google-Smtp-Source: AA6agR4gbeELzWeJ61Je42r5xKh3pLfxGhHrG+p/6kCfXsLn7Q35sPMS2Mz+9NAgIsEblPfRPD+7AHIMdIQ= X-Received: 
from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a5d:4448:0:b0:226:82ff:f3e6 with SMTP id x8-20020a5d4448000000b0022682fff3e6mr25180706wrr.115.1662380790918; Mon, 05 Sep 2022 05:26:30 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:42 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-35-glider@google.com> Subject: [PATCH v6 34/44] x86: kmsan: skip shadow checks in __switch_to() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380792; a=rsa-sha256; cv=none; b=vFMpRgj9xGJB1L1adGnApIRkTCBNUlX9b8kAfRGeQ4dDxJqbCpHH3WlwJWTMtZV5ptYY+M QPrDLUyNTmBaRYaIAypadmSmIT+jLnMqWo/b0zvQ56VgqC4N2P2rUyG5a3mvuC+KPo8doe ldPfw8lPJ0BUbSMfPMCtL0oPavj4I6w= ARC-Authentication-Results: i=1; imf15.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=FaysfHPU; spf=pass (imf15.hostedemail.com: domain of 39uoVYwYKCEMlqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.128.73 as permitted sender) smtp.mailfrom=39uoVYwYKCEMlqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380792; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=VswCSWA+rV4PC5eDv8SsttMbBbTb1fmAEKvfw1RwEtM=; b=JCZa3JZyaCvbEw1zuZWzJMWi74OzVjwzUUrvHVcb64rbJirmiZZIhFVn73R0zk3noCCy3u bZYEue6WGlTKrBIL1BsAik3vTAsDWZOTLuTubYCR0mHMdHCwGzNSjYqzF6DlQB4LRIAxFs o2s1XdEzG8sgywKi09GzUYWJ43puTb4= Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=FaysfHPU; spf=pass (imf15.hostedemail.com: domain of 39uoVYwYKCEMlqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.128.73 as permitted sender) smtp.mailfrom=39uoVYwYKCEMlqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspam-User: X-Stat-Signature: qey4y9to4jbgtg7gm1k78dxim36xppbx X-Rspamd-Queue-Id: 422F9A0062 X-HE-Tag: 1662380792-125969 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When instrumenting functions, KMSAN obtains the per-task state (mostly pointers to metadata for function arguments and return values) once per function at its beginning, using the `current` pointer. 
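Conceptually, the compiler-emitted code looks roughly like the sketch below. Only __msan_get_context_state() and struct kmsan_context_state are real KMSAN names; the surrounding functions are illustrative, and the real instrumentation is generated by the compiler rather than written by hand.

/* Simplified sketch of compiler-emitted instrumentation; illustrative only. */
struct kmsan_context_state;			/* per-task shadow/origin slots */
struct kmsan_context_state *__msan_get_context_state(void);

int instrumented_callee(int x);

int instrumented_caller(int arg)
{
	/* Looked up once at function entry, derived from `current`. */
	struct kmsan_context_state *state = __msan_get_context_state();

	/*
	 * Shadow and origin values for arguments and return values are
	 * passed through *state around every call. If `current` changes
	 * underneath us, this cached pointer still refers to the old
	 * task's state.
	 */
	(void)state;
	return instrumented_callee(arg);
}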
Every time the instrumented function calls another function, this state (`struct kmsan_context_state`) is updated with shadow/origin data of the passed and returned values. When `current` changes in the low-level arch code, instrumented code can not notice that, and will still refer to the old state, possibly corrupting it or using stale data. This may result in false positive reports. To deal with that, we need to apply __no_kmsan_checks to the functions performing context switching - this will result in skipping all KMSAN shadow checks and marking newly created values as initialized, preventing all false positive reports in those functions. False negatives are still possible, but we expect them to be rare and impersistent. Suggested-by: Marco Elver Signed-off-by: Alexander Potapenko --- v2: -- This patch was previously called "kmsan: skip shadow checks in files doing context switches". Per Mark Rutland's suggestion, we now only skip checks in low-level arch-specific code, as context switches in common code should be invisible to KMSAN. We also apply the checks to precisely the functions performing the context switch instead of the whole file. v5: -- Replace KMSAN_ENABLE_CHECKS_process_64.o with __no_kmsan_checks Link: https://linux-review.googlesource.com/id/I45e3ed9c5f66ee79b0409d1673d66ae419029bcb --- arch/x86/kernel/process_64.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index 1962008fe7437..6b3418bff3261 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -553,6 +553,7 @@ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp, bool x32) * Kprobes not supported here. Set the probe on schedule instead. * Function graph tracer not supported too. 
*/ +__no_kmsan_checks __visible __notrace_funcgraph struct task_struct * __switch_to(struct task_struct *prev_p, struct task_struct *next_p) { From patchwork Mon Sep 5 12:24:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966066 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E84D5ECAAD5 for ; Mon, 5 Sep 2022 12:26:35 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 82B928D008A; Mon, 5 Sep 2022 08:26:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7DEFF8D0076; Mon, 5 Sep 2022 08:26:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 654E28D008A; Mon, 5 Sep 2022 08:26:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 5772A8D0076 for ; Mon, 5 Sep 2022 08:26:35 -0400 (EDT) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 37E4A160777 for ; Mon, 5 Sep 2022 12:26:35 +0000 (UTC) X-FDA: 79877955150.09.2A7F18D Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf16.hostedemail.com (Postfix) with ESMTP id E86BD18006C for ; Mon, 5 Sep 2022 12:26:34 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id ds17-20020a170907725100b007419bfb5fd4so2255020ejc.4 for ; Mon, 05 Sep 2022 05:26:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=yflFKH40jmwCCGQluWMEQ//gU/gWIMkmRe3xdXByGVQ=; b=fAxph5Y6U04CXZrajhFkL+LMhlfmAqX1mOptfFGFa+34F1+59OCoqL+hyS00yE9PU3 JiQF8Q7PP63alm8LFCcaO0ZZHzZ5WxGFto4GmMcgZr9o9k0lEZRIQVzzKRElWx4oA9Ez 7ZjbmW+o8zPS23NFbXCbrLPnQaO1lqjWBLYh2MePrjtDiEAPdC3lrLsuRUbPOH3mzVEQ PgdyG8vgJN4Qnre4Ot1waLJj7+TH2C6/GbnJwovqjOPhmu9jof0uYhctQHZkHtEoZesE L4Z7jgUgDUS3X3M2bfaQgTzxuHr1pPSXpKXdHYlPC0r3legvJZAdTPNLBXX4Lymsq/aN mAFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=yflFKH40jmwCCGQluWMEQ//gU/gWIMkmRe3xdXByGVQ=; b=P3oz7C+nXwNGufb9qa53Uo6WCB/jnQs5/x9/KmnjoCk/S+gzXWrRNGCXX5C3YEXWqt DM08AYxSC4O19yuriQ2cjUS6/5ilGoivzTR3j3f5dMN3KSta0/2Dl/MIhQnQe+GU1+vD Q+N7oxwH53ELCOWYMGCKdmY6JOV75pjQpcK5QiqyeLDSAQNlEEdsXqOpZ1NCRrGctbIk VbL/+7+wRdINdeD6j7L+8a2yADlj3EthtOzpDDWORlAOXcaW5UVKCg2/OmSJjECUL1tX SowS8aRT8R+A8lhssE2ucTVa+0hSWkNuJnHtQP9GxGkQvE0qEByDqovUf96z2N6iPHJA W6vg== X-Gm-Message-State: ACgBeo1gLt4ec7rDo9bKwKeuX5Z3XMOfkxLallGbGi6rPVU8D4Xnsg4A BmhzVEZdsK4y7QAt4B3BvGKHpgfRQkg= X-Google-Smtp-Source: AA6agR7p8f0oX7Qv/mGgEi+wXgphK2AAXJizc5KwHHFoVlbMDXfcad2dvQIIPWVBGG5QIQ/9evyI4QEPVGs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:2722:b0:731:2aeb:7942 with SMTP id d2-20020a170907272200b007312aeb7942mr35645383ejl.734.1662380793723; Mon, 05 Sep 2022 05:26:33 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:43 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: 
<20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-36-glider@google.com> Subject: [PATCH v6 35/44] x86: kmsan: handle open-coded assembly in lib/iomem.c From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380794; a=rsa-sha256; cv=none; b=0dWl0wN74QK4uPLXis6zHM0/hTQrJtC53wyTspcKiLB5qus+hmkjH8My71kh/QjVTn5RV4 QkaWpk5WRbNiBnqBWjHMCPIxeaOa5CDTkj8/B0PCKPIDu4sfU1hL+jSnorVAmq5ERMSXIb BqeRyDxUKobXxqUmkOhO08OxMRRLbSs= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=fAxph5Y6; spf=pass (imf16.hostedemail.com: domain of 3-eoVYwYKCEYotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3-eoVYwYKCEYotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380794; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=yflFKH40jmwCCGQluWMEQ//gU/gWIMkmRe3xdXByGVQ=; b=QaarBXO64o5yOvhSNTZCuQtgk0vczJ9eeGRezbLZXWXY1wykdhWzUKiofP4lxU3ssjpKmc 8sowgkg/JjObUT5ZhYe6lYQjK+ThSkKuLi2Ekh8nEa7zbDn4P3oBZfwf2D2SzTk8xTsNrO /KE5TNDizGeDTQi1dzp5p+jBpPNhTeg= X-Stat-Signature: yicgco61okamk7xjnr6wyg1f5qppbgwc X-Rspamd-Queue-Id: E86BD18006C Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=fAxph5Y6; spf=pass (imf16.hostedemail.com: domain of 3-eoVYwYKCEYotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3-eoVYwYKCEYotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam03 X-HE-Tag: 1662380794-797359 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN cannot intercept memory accesses within asm() statements. That's why we add kmsan_unpoison_memory() and kmsan_check_memory() to hint it how to handle memory copied from/to I/O memory. 
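The pattern, with made-up helper names (the actual change below applies it inside the x86 string_memcpy_fromio()/string_memcpy_toio() helpers), is roughly:

#include <linux/kmsan-checks.h>
#include <linux/types.h>

/* Stand-in for an open-coded asm copy loop that KMSAN cannot instrument. */
void example_asm_copy(void *dst, const void *src, size_t n);

static void example_copy_from_io(void *to, const void *from, size_t n)
{
	example_asm_copy(to, from, n);
	/* Values read from hardware must be treated as initialized. */
	kmsan_unpoison_memory(to, n);
}

static void example_copy_to_io(void *to, const void *from, size_t n)
{
	/* Report uninitialized bytes before they are handed to hardware. */
	kmsan_check_memory(from, n);
	example_asm_copy(to, from, n);
}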
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Icb16bf17269087e475debf07a7fe7d4bebc3df23 --- arch/x86/lib/iomem.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/arch/x86/lib/iomem.c b/arch/x86/lib/iomem.c index 3e2f33fc33de2..e0411a3774d49 100644 --- a/arch/x86/lib/iomem.c +++ b/arch/x86/lib/iomem.c @@ -1,6 +1,7 @@ #include #include #include +#include #define movs(type,to,from) \ asm volatile("movs" type:"=&D" (to), "=&S" (from):"0" (to), "1" (from):"memory") @@ -37,6 +38,8 @@ static void string_memcpy_fromio(void *to, const volatile void __iomem *from, si n-=2; } rep_movs(to, (const void *)from, n); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(to, n); } static void string_memcpy_toio(volatile void __iomem *to, const void *from, size_t n) @@ -44,6 +47,8 @@ static void string_memcpy_toio(volatile void __iomem *to, const void *from, size if (unlikely(!n)) return; + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(from, n); /* Align any unaligned destination IO */ if (unlikely(1 & (unsigned long)to)) { movs("b", to, from); From patchwork Mon Sep 5 12:24:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966067 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BD68ECAAD5 for ; Mon, 5 Sep 2022 12:26:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 3BF998D008B; Mon, 5 Sep 2022 08:26:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 36ECF8D0076; Mon, 5 Sep 2022 08:26:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1E80A8D008B; Mon, 5 Sep 2022 08:26:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 108478D0076 for ; Mon, 5 Sep 2022 08:26:38 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id E2F1D160235 for ; Mon, 5 Sep 2022 12:26:37 +0000 (UTC) X-FDA: 79877955234.14.0022743 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf30.hostedemail.com (Postfix) with ESMTP id 9CF1A80083 for ; Mon, 5 Sep 2022 12:26:37 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id nb19-20020a1709071c9300b0074151953770so2298070ejc.21 for ; Mon, 05 Sep 2022 05:26:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=s0jAEbnxevXU2w5QvhShkO+su70yEEyH0HIO5IJWDY4=; b=bu0u5tzWys3aoNNOSqWMbs79DbrrHmKiGn+hL4KDlErPP+Lo7FLkHfELO/k6xRlYBe ykglug1+rXGdRt18mk9So62vJrQC5Nfam0G4xfUqu81YGX39bYWi9cdlEP/AR0hQIjI7 LhjuHlxdatOiIm2BvL1TtIwiRB+pmYK3d/lMv6E7IX7kgbpmXV2Qldgf6t5Q4tpDmTLr DC+ByxODcULZ/qsffD26Dqt/CBcxjCxhVYakt20QopL7GW2eWqNO8oWE31QVN4vRZjXt uf7C6E5ZyZl5/fMLW/+EpL6LgwqZxpJLp+xPVqll692XKyHC3OVtZcvx/SkX3nvjqXpq FF9w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; 
bh=s0jAEbnxevXU2w5QvhShkO+su70yEEyH0HIO5IJWDY4=; b=IqI35eIC9QRJJ4+3Z/OeMNBQ8R5hVVnMgomYq6YlLBIbFUUCvMhh9yatw5dDIhoZnL UGDi9lb9ULcUaRwbUVcF092KMSfKn/JUh+mZpiZ2ZDVSY7hRJ7TNEUwSKHvmAncltLFb WBvEqOHnh2ROjLpHxTVygSZtc69T2MLxNT6Ye1Z/sgLROMwWRZt4qFZMS/7CHtNO5AQr UglTEwPRr/pD0Idvt05xa5rEE+DjBB2fgKHmU5VHfrubYMLM4+ed1SG30cm+R0POYQSl j/d4eY2xd3vCYGfwCUh5CgKMUhrsGWTSBRfiZuYJNd2teFue7jXBfIboN4A3ZFhRzAJF lFAA== X-Gm-Message-State: ACgBeo0K3CDmP2LVvMJi0VsgiCr/oksZFhjCTegIpHo+X4ZDGX86wZXF xQ4JuGX+PUx8+L+UjNXQY/0z7m0JaM8= X-Google-Smtp-Source: AA6agR5260F+3gOSRJyoSXQw3pfFItb9FgNqwZTy7bpqxdCSRtFwmoz1gIihp3WzCC0P247/3anfhhixqLQ= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:906:8a5c:b0:73d:7f4a:b951 with SMTP id gx28-20020a1709068a5c00b0073d7f4ab951mr35092641ejc.481.1662380796421; Mon, 05 Sep 2022 05:26:36 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:44 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-37-glider@google.com> Subject: [PATCH v6 36/44] x86: kmsan: use __msan_ string functions where possible. From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380797; a=rsa-sha256; cv=none; b=Ux2gQI5lI2R9nsp0O83UTASsfukriOCybTybxTxXXkpgcT6rdsC6UzXu30tgC3OAMz4eOv 4DPA1pv9ikvkPrjbXoorpFIC+s+zakQEkp+0S+KLJswrCohitIrSdTbf8P7ZmyV8I/bg6Y 2iMVCSvzBKissnd1jFZsmnIQuKjvKzg= ARC-Authentication-Results: i=1; imf30.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=bu0u5tzW; spf=pass (imf30.hostedemail.com: domain of 3_OoVYwYKCEkrwtop2rzzrwp.nzxwty58-xxv6lnv.z2r@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3_OoVYwYKCEkrwtop2rzzrwp.nzxwty58-xxv6lnv.z2r@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380797; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=s0jAEbnxevXU2w5QvhShkO+su70yEEyH0HIO5IJWDY4=; b=46eeV08v0319s1sjlLvVoY5ppcpwq/eYKoQnqM6FaDKQX28aSUL/r1CK0RNjVgRC2VU83h b+JkhSVTUO25WlKtbOvpwYtd/Why0dTxCQowyR8sJcIImrFkELY7zY3xncuPDHZ1BBqGLN jEqmT73ggw7xwA1O1RVi3kxN+r9P7zk= Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=bu0u5tzW; spf=pass (imf30.hostedemail.com: domain of 3_OoVYwYKCEkrwtop2rzzrwp.nzxwty58-xxv6lnv.z2r@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) 
smtp.mailfrom=3_OoVYwYKCEkrwtop2rzzrwp.nzxwty58-xxv6lnv.z2r@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspam-User: X-Stat-Signature: zui5z88di49eq4k61qtxwasycp3th6sf X-Rspamd-Queue-Id: 9CF1A80083 X-HE-Tag: 1662380797-600155 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Unless stated otherwise (by explicitly calling __memcpy(), __memset() or __memmove()) we want all string functions to call their __msan_ versions (e.g. __msan_memcpy() instead of memcpy()), so that shadow and origin values are updated accordingly. Bootloader must still use the default string functions to avoid crashes. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I7ca9bd6b4f5c9b9816404862ae87ca7984395f33 --- arch/x86/include/asm/string_64.h | 23 +++++++++++++++++++++-- include/linux/fortify-string.h | 2 ++ 2 files changed, 23 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h index 6e450827f677a..3b87d889b6e16 100644 --- a/arch/x86/include/asm/string_64.h +++ b/arch/x86/include/asm/string_64.h @@ -11,11 +11,23 @@ function. */ #define __HAVE_ARCH_MEMCPY 1 +#if defined(__SANITIZE_MEMORY__) +#undef memcpy +void *__msan_memcpy(void *dst, const void *src, size_t size); +#define memcpy __msan_memcpy +#else extern void *memcpy(void *to, const void *from, size_t len); +#endif extern void *__memcpy(void *to, const void *from, size_t len); #define __HAVE_ARCH_MEMSET +#if defined(__SANITIZE_MEMORY__) +extern void *__msan_memset(void *s, int c, size_t n); +#undef memset +#define memset __msan_memset +#else void *memset(void *s, int c, size_t n); +#endif void *__memset(void *s, int c, size_t n); #define __HAVE_ARCH_MEMSET16 @@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n) } #define __HAVE_ARCH_MEMMOVE +#if defined(__SANITIZE_MEMORY__) +#undef memmove +void *__msan_memmove(void *dest, const void *src, size_t len); +#define memmove __msan_memmove +#else void *memmove(void *dest, const void *src, size_t count); +#endif void *__memmove(void *dest, const void *src, size_t count); int memcmp(const void *cs, const void *ct, size_t count); @@ -64,8 +82,7 @@ char *strcpy(char *dest, const char *src); char *strcat(char *dest, const char *src); int strcmp(const char *cs, const char *ct); -#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) - +#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)) /* * For files that not instrumented (e.g. mm/slub.c) we * should use not instrumented version of mem* functions. @@ -73,7 +90,9 @@ int strcmp(const char *cs, const char *ct); #undef memcpy #define memcpy(dst, src, len) __memcpy(dst, src, len) +#undef memmove #define memmove(dst, src, len) __memmove(dst, src, len) +#undef memset #define memset(s, c, n) __memset(s, c, n) #ifndef __NO_FORTIFY diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h index 3b401fa0f3746..6c8a1a29d0b63 100644 --- a/include/linux/fortify-string.h +++ b/include/linux/fortify-string.h @@ -285,8 +285,10 @@ __FORTIFY_INLINE void fortify_memset_chk(__kernel_size_t size, * __builtin_object_size() must be captured here to avoid evaluating argument * side-effects further into the macro layers. 
*/ +#ifndef CONFIG_KMSAN #define memset(p, c, s) __fortify_memset_chk(p, c, s, \ __builtin_object_size(p, 0), __builtin_object_size(p, 1)) +#endif /* * To make sure the compiler can enforce protection against buffer overflows, From patchwork Mon Sep 5 12:24:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966068 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8161AECAAD3 for ; Mon, 5 Sep 2022 12:26:41 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1C2F68D008C; Mon, 5 Sep 2022 08:26:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 174598D0076; Mon, 5 Sep 2022 08:26:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 03AC48D008C; Mon, 5 Sep 2022 08:26:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id E94F98D0076 for ; Mon, 5 Sep 2022 08:26:40 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id CC85B121405 for ; Mon, 5 Sep 2022 12:26:40 +0000 (UTC) X-FDA: 79877955360.14.B121128 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf25.hostedemail.com (Postfix) with ESMTP id 7B880A0079 for ; Mon, 5 Sep 2022 12:26:40 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id qa35-20020a17090786a300b0073d4026a97dso2257154ejc.9 for ; Mon, 05 Sep 2022 05:26:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=y4+04AEFcGK6lJisKBf+d7wj7/tgJIfosEPW0CJiM4s=; b=gBW+tZSLxGZnND4jrchBsghKl8/54pySEDlyRdlr3psJ/Hvd7VXlx2wfj6ECdvmUQi sRTttLdDmy8zOhLjlaiCYzBsWtLdTMT8cmyabpe9q0MWsHkFkNMgKrt1Kn8/asTah7ZT H6TsR0U0wbSPvK4a+Dn7G+e/iNVEda36hWolwcehaapqlon8rlllNIzqQKO9iAA0Un/8 Nz12cRDUDSP+JcyVeGaQDbPYXAdFlP+cE5B3/TKbaZ+UJjg/9BZo0cHg67GtE9TCDPWu bFXqLAs8ynH14qtiFqkJTIrl8/aPUNXCg2mJ4yBGSqjV3wD2S0fjbX1I1vr87dz8Dh6a mFvg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=y4+04AEFcGK6lJisKBf+d7wj7/tgJIfosEPW0CJiM4s=; b=JDZjMwSW6henl65PBHZ4TXcZv9umFbzQljfa0vTj4p+CfRg3n++FxoFpX0KWb0bcSC T93hiLYBTTLO8USZS74DSGkJo8bIaKGotnyW5+uHCRenOMwFVNFivnc6ZFdV51OFOqP0 Unh6/JxU/EbuNXuXqKMdYQrxATggDzwnTnPfldnwZbZUh91XU7NvMV+mluRjLwBQz7ea dlf8LpCEoafHzdbVdjDjpRJHrKoc1uLkfX5EVafVGofLXt6Y/UYkFurEtXBXtaj4BcUb UVT74SKr9RYjrXQkCr7ZsfqhR52sN7vqMoH6j7cAeXZ563jD71C1WOm56dAAjvvwTvlh bZbA== X-Gm-Message-State: ACgBeo1m4Tq4eaZV3q0YcETZ2pptpOY3bgYTuASIUxyQIjiDKx29XsMv T7epolOWVY7tQ79iCCeK9sNdQ1j2cwE= X-Google-Smtp-Source: AA6agR7T0dnY+WLIWZ049DPiC2qaPW6QsYICvGVoSrCTFAnpUI8NVaVqBkaF1S6uhuuJRKOYM2t5hyE1pCs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a05:6402:10d2:b0:445:d9ee:fc19 with SMTP id p18-20020a05640210d200b00445d9eefc19mr41641834edu.81.1662380799213; Mon, 05 Sep 2022 05:26:39 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:45 +0200 In-Reply-To: 
<20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-38-glider@google.com> Subject: [PATCH v6 37/44] x86: kmsan: sync metadata pages on page fault From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380800; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=y4+04AEFcGK6lJisKBf+d7wj7/tgJIfosEPW0CJiM4s=; b=1Dl4WbiDrVD0pKFRnI+f10ZMACCbu2ASC8kHecQpErq6qKi0I+zv9sfLl9b/fxJ/JRSHe1 Rn80wg7ynZFWINvvtGb4/lIIor4W/Mof8eo6fqsAKzRo0nD53Rc/SB5UywJw0uBTty4qUH TZvcciZFnXwwweuriLYChq4tDDNGzBo= ARC-Authentication-Results: i=1; imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=gBW+tZSL; spf=pass (imf25.hostedemail.com: domain of 3_-oVYwYKCEwuzwrs5u22uzs.q20zw18B-00y9oqy.25u@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3_-oVYwYKCEwuzwrs5u22uzs.q20zw18B-00y9oqy.25u@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380800; a=rsa-sha256; cv=none; b=zjoyKVJF7/974FHfShrvPGJLBmUQ2A8MZf8F8xmyXIMI7QJvkNgnRQMu0WcwjG6CnCKOda Cu6tf8/nuj1eNIVIkp/8ZrIVy4DqM8vG3DLuZGiCdMGQbnub6i6MsFuDOMxxQnfORrQhot tW0Zsic4FZH3SqEUBfiESXgUR99xpGY= X-Rspam-User: X-Stat-Signature: 1r5xs1t7ad6encz1mtgau68fkyu86c4z X-Rspamd-Queue-Id: 7B880A0079 Authentication-Results: imf25.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=gBW+tZSL; spf=pass (imf25.hostedemail.com: domain of 3_-oVYwYKCEwuzwrs5u22uzs.q20zw18B-00y9oqy.25u@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3_-oVYwYKCEwuzwrs5u22uzs.q20zw18B-00y9oqy.25u@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam04 X-HE-Tag: 1662380800-846367 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN assumes shadow and origin pages for every allocated page are accessible. For pages between [VMALLOC_START, VMALLOC_END] those metadata pages start at KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START, therefore we must sync a bigger memory region. 
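Spelled out, the address arithmetic is simply the following (illustrative helper names; VMALLOC_START and the KMSAN_VMALLOC_*_START constants are the ones defined elsewhere in this series and used in the diff below):

static unsigned long example_vmalloc_shadow(unsigned long addr)
{
	return addr - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START;
}

static unsigned long example_vmalloc_origin(unsigned long addr)
{
	return addr - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START;
}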
Signed-off-by: Alexander Potapenko --- v2: -- addressed reports from kernel test robot Link: https://linux-review.googlesource.com/id/Ia5bd541e54f1ecc11b86666c3ec87c62ac0bdfb8 --- arch/x86/mm/fault.c | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index fa71a5d12e872..d728791be8ace 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -260,7 +260,7 @@ static noinline int vmalloc_fault(unsigned long address) } NOKPROBE_SYMBOL(vmalloc_fault); -void arch_sync_kernel_mappings(unsigned long start, unsigned long end) +static void __arch_sync_kernel_mappings(unsigned long start, unsigned long end) { unsigned long addr; @@ -284,6 +284,27 @@ void arch_sync_kernel_mappings(unsigned long start, unsigned long end) } } +void arch_sync_kernel_mappings(unsigned long start, unsigned long end) +{ + __arch_sync_kernel_mappings(start, end); +#ifdef CONFIG_KMSAN + /* + * KMSAN maintains two additional metadata page mappings for the + * [VMALLOC_START, VMALLOC_END) range. These mappings start at + * KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START and + * have to be synced together with the vmalloc memory mapping. + */ + if (start >= VMALLOC_START && end < VMALLOC_END) { + __arch_sync_kernel_mappings( + start - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START, + end - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START); + __arch_sync_kernel_mappings( + start - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START, + end - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START); + } +#endif +} + static bool low_pfn(unsigned long pfn) { return pfn < max_low_pfn; From patchwork Mon Sep 5 12:24:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966069 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A059C6FA83 for ; Mon, 5 Sep 2022 12:26:44 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DF8C68D008D; Mon, 5 Sep 2022 08:26:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DA9B48D0076; Mon, 5 Sep 2022 08:26:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C495B8D008D; Mon, 5 Sep 2022 08:26:43 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id B1C7E8D0076 for ; Mon, 5 Sep 2022 08:26:43 -0400 (EDT) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 9A6CF14035D for ; Mon, 5 Sep 2022 12:26:43 +0000 (UTC) X-FDA: 79877955486.26.39FA3C9 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf24.hostedemail.com (Postfix) with ESMTP id 46FF518008C for ; Mon, 5 Sep 2022 12:26:43 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id l19-20020a056402255300b0043df64f9a0fso5758968edb.16 for ; Mon, 05 Sep 2022 05:26:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=UAXLi6hWes4K0I7wTwgH8ix0vMuStYu2bu/++oTzO6o=; b=IiXcuANzOXVeghkyw45jIMRJOD8tJCWr7Q1BUgEz0fm35aYdWZX4D683Pva/+zNLad 
choRXGR+vUulOeBWEDGvOCuy/XY+TgrqANSjoAj0HTZHuhSXRl0uaBoC9IohaljDqOSw lHnzDdCFB3MwKou7/Xmijnr1q4RL7SKQSLJrgFR5I0C2bUbU1QmYTqovQCM2Xy1SHffm d3owLA3ha4/6k+UDm+7p/tlCYtuTh4p4aZioY467gpQIBRQTy+ELgVXgJZk8GQRjZN4q UQUIUId5D4WLT9t/nPPBAtfQjbnc0NkSg0OyN+VUeEPZgk5kBTa88vDc+Z5HF7RV6MNo Hz1w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=UAXLi6hWes4K0I7wTwgH8ix0vMuStYu2bu/++oTzO6o=; b=B20xXcKs3s8hrSiIPawsoAQNJNH2pIVNO9ri0lLBzXqBzeW3T8m6K9mMcQYVvqd4mF aM0s1ZcgrBzGUUNociKf7h5AVwpd5uOCgs/jd02L+fLKsrmP91eIf/Mf07OZI1S6/ZmX ob2gG+WMUfpulLQiwbqdxUfda3FIS08njwQDtbVDfIdTEnomhfwr8/j3Fa9x6fUbDN80 muT7FTQjPUUJv/iiMWcHWFojduldRGbuIjuQENOlXO4vm1LlZycXXzSFtpc+N1VSRCQK wLHjDKs9ei5uA2IwwFuLjq08QnTmwlPQm6BtPT69GhuC52o7N+aFAebEWRMHKh4ADCis EAUg== X-Gm-Message-State: ACgBeo2JPYBZ+Ro6z3gf2e9Z80ox1pR5JVylmFaQEbJg5Awl+z9DuhNb B3ORkePTltI7lykFdTr+QL+I0Kt9sjs= X-Google-Smtp-Source: AA6agR7MDr6BOPDfZGI9VUpdu+EDSsR8WTCobkcNLw4fbja27oiV7sHYlc5Qaiw4Oc2k8IXLG99/Xrjflgw= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:7d8f:b0:732:9d6c:4373 with SMTP id oz15-20020a1709077d8f00b007329d6c4373mr33515704ejc.493.1662380802045; Mon, 05 Sep 2022 05:26:42 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:46 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-39-glider@google.com> Subject: [PATCH v6 38/44] x86: kasan: kmsan: support CONFIG_GENERIC_CSUM on x86, enable it for KASAN/KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380803; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=UAXLi6hWes4K0I7wTwgH8ix0vMuStYu2bu/++oTzO6o=; b=eAmx6nUOaV3HXrj6xHvnt9mAQvVJzHU21BpKsqE1jSfGgFpiP/yM4pytfa/su9u4YhY/Xi LOOlJPBGCY5q1GtysbD8I+e3Pm1UBFSuvv9a/ygFzRblDxpcXqwqfF6v8rItzQayWEmq1B QWACXMfgozJLE0b1y9gXZBg1XOmBzIo= ARC-Authentication-Results: i=1; imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=IiXcuANz; spf=pass (imf24.hostedemail.com: domain of 3AusVYwYKCE8x2zuv8x55x2v.t532z4BE-331Crt1.58x@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3AusVYwYKCE8x2zuv8x55x2v.t532z4BE-331Crt1.58x@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380803; a=rsa-sha256; cv=none; b=P1YYrOWVO+MuDpXJ27ymqYL+KOKakkJNL8BUE55vZ9VIBweRavqWhDrdiJ6UiVyQXsZfdQ 0Wekczx5HtMQfFmbxTmq9fthf+2wLrvbEKuCVTeWBoUJn9oKz+kp1WeY+vGPHJxue0SXad /sF8y8Ehwo61RiMNNHGoNu8VWwO1pbQ= X-Rspam-User: X-Stat-Signature: 54si6hcofjxex7sywfwh1rijf7bkrgnn X-Rspamd-Queue-Id: 46FF518008C Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=IiXcuANz; spf=pass (imf24.hostedemail.com: domain of 3AusVYwYKCE8x2zuv8x55x2v.t532z4BE-331Crt1.58x@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3AusVYwYKCE8x2zuv8x55x2v.t532z4BE-331Crt1.58x@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam04 X-HE-Tag: 1662380803-338689 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is needed to allow memory tools like KASAN and KMSAN see the memory accesses from the checksum code. Without CONFIG_GENERIC_CSUM the tools can't see memory accesses originating from handwritten assembly code. For KASAN it's a question of detecting more bugs, for KMSAN using the C implementation also helps avoid false positives originating from seemingly uninitialized checksum values. 
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I3e95247be55b1112af59dbba07e8cbf34e50a581 --- arch/x86/Kconfig | 4 ++++ arch/x86/include/asm/checksum.h | 16 ++++++++++------ arch/x86/lib/Makefile | 2 ++ 3 files changed, 16 insertions(+), 6 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index f9920f1341c8d..33f4d4baba079 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -324,6 +324,10 @@ config GENERIC_ISA_DMA def_bool y depends on ISA_DMA_API +config GENERIC_CSUM + bool + default y if KMSAN || KASAN + config GENERIC_BUG def_bool y depends on BUG diff --git a/arch/x86/include/asm/checksum.h b/arch/x86/include/asm/checksum.h index bca625a60186c..6df6ece8a28ec 100644 --- a/arch/x86/include/asm/checksum.h +++ b/arch/x86/include/asm/checksum.h @@ -1,9 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0 */ -#define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1 -#define HAVE_CSUM_COPY_USER -#define _HAVE_ARCH_CSUM_AND_COPY -#ifdef CONFIG_X86_32 -# include +#ifdef CONFIG_GENERIC_CSUM +# include #else -# include +# define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1 +# define HAVE_CSUM_COPY_USER +# define _HAVE_ARCH_CSUM_AND_COPY +# ifdef CONFIG_X86_32 +# include +# else +# include +# endif #endif diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile index f76747862bd2e..7ba5f61d72735 100644 --- a/arch/x86/lib/Makefile +++ b/arch/x86/lib/Makefile @@ -65,7 +65,9 @@ ifneq ($(CONFIG_X86_CMPXCHG64),y) endif else obj-y += iomap_copy_64.o +ifneq ($(CONFIG_GENERIC_CSUM),y) lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o +endif lib-y += clear_page_64.o copy_page_64.o lib-y += memmove_64.o memset_64.o lib-y += copy_user_64.o From patchwork Mon Sep 5 12:24:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966070 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 21428ECAAD5 for ; Mon, 5 Sep 2022 12:26:47 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B39918D008E; Mon, 5 Sep 2022 08:26:46 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id AE87C8D0076; Mon, 5 Sep 2022 08:26:46 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9894D8D008E; Mon, 5 Sep 2022 08:26:46 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 892158D0076 for ; Mon, 5 Sep 2022 08:26:46 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 633B6809AB for ; Mon, 5 Sep 2022 12:26:46 +0000 (UTC) X-FDA: 79877955612.15.3F06B06 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf16.hostedemail.com (Postfix) with ESMTP id 0EAAA18006F for ; Mon, 5 Sep 2022 12:26:45 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id sb14-20020a1709076d8e00b0073d48a10e10so2275971ejc.16 for ; Mon, 05 Sep 2022 05:26:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=y7GARGHGPpfPMqaRIp9ae7D7QgvYSCiDFPeV4nzQ3Rk=; 
b=ddTOCduO5x6A/vNQMTAUwanA9sA0BCMeAUh+AbwWZsHDAk9+85xCqrDPAoy/kXj/yM pZSbfcwBzuznV/e1nsCZpHlDb15U84fy2lfiqZMe8aoHY/I7P25KzOs1d3vrlFmbReSY CMgox+WoZoI864HUj6y69WtC8wXEhJsfJO1HGhmgOHeHxH+VxZafKqFKdLrvdFRpq+z3 ENymG35akVbdRkGAMGE8Z8s0OdqqWY6HbsmxVcinzKH7wWkH7p+aAzFkQtSI/s7mr3da ilHo8C+mpJ4L2avMdUVG4qeyBVv1yqdqrzQBpWWV9P63dsr6gXP1A1YwiJarV7ILSOFv WffQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=y7GARGHGPpfPMqaRIp9ae7D7QgvYSCiDFPeV4nzQ3Rk=; b=yeWeHHmyeKB1+8wnVmie+uNzbjd1Ny5k2kzeporS5XTQ9ws1xLoe7vfc5yh/pSEn4A Gx4I80s8KXqbVshgpUeJrMK0AqxlnEFgL+ARZ9KSS6usqrixD163joIrpUsnO4Ujk/Il IUKsfiWOA8UDCMtSjEs2RDHT4VhG92G3MsEyAFfG6BCdatxZCHHqf92zVdIzwjsI2cVn Rd3KNrgEf7+XkSvBMPPc4fFujssXTCR1ZhusUK28xxRmv8t+OGVCOI589TLigN9vX3Pn Sc+9clfmS4ua1AmOaYGHsvfruTwk94uO9iGPFFvrY+qM0SjBxbWKxjCausGJOtcCtUz6 t5Dg== X-Gm-Message-State: ACgBeo00YyYO/+y5zAMfGaFqYS4eylH/p6vajinokld5IZHqHki9TS44 s1iBDJIfpnhwJUaSf2jdfFy0BCYGz2o= X-Google-Smtp-Source: AA6agR4QT5xvR//Dr9lpspZ++6X8QeAx9PC0zKsZbmb97U5jvQ2WfdBoaCZHo4G5Gg4skY0cmggOzLItUto= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:906:fe46:b0:730:ca2b:cb7b with SMTP id wz6-20020a170906fe4600b00730ca2bcb7bmr37494826ejb.703.1662380804705; Mon, 05 Sep 2022 05:26:44 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:47 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-40-glider@google.com> Subject: [PATCH v6 39/44] x86: fs: kmsan: disable CONFIG_DCACHE_WORD_ACCESS From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Andrey Konovalov ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ddTOCduO; spf=pass (imf16.hostedemail.com: domain of 3BOsVYwYKCFEz41wxAz77z4x.v75416DG-553Etv3.7Az@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3BOsVYwYKCFEz41wxAz77z4x.v75416DG-553Etv3.7Az@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380806; a=rsa-sha256; cv=none; b=MyiI+IqFXx/VwYkNLrSBWj9wPYvpXtNcS94QKIvcE6jcY08XZwiU/4Drpw++pDueYNsddi fizP+JmuS2Lg6Xv0E7n6RD9I0x7GmqYrmxGRAtxZG/lpaqpSLem+SQ3VEkUj6aCzdoCz8F 9Fvgqg/Ul8GybC+M+kav5iza9zVH97U= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380806; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=y7GARGHGPpfPMqaRIp9ae7D7QgvYSCiDFPeV4nzQ3Rk=; b=Nl/SBxbcQ1B/U5KwqeWs/H0hEtj8U8XSl7OIUVPXBJXC69NDy7DSll1x/bcudY/TVLanWx uRFGlZ4hapamrinMUAcW3L/9LgWeNYBPZG2GZPVa5o6cQwMLXADHknrAzoCCF6eO1xBPYf jKfzuI8CvTyR0jMj1J4fgdpnNrBw2dc= X-Rspam-User: Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ddTOCduO; spf=pass (imf16.hostedemail.com: domain of 3BOsVYwYKCFEz41wxAz77z4x.v75416DG-553Etv3.7Az@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3BOsVYwYKCFEz41wxAz77z4x.v75416DG-553Etv3.7Az@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 0EAAA18006F X-Stat-Signature: 7skmd8xeynwmmk13bm3tpyed8euhipyk X-HE-Tag: 1662380805-256706 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: dentry_string_cmp() calls read_word_at_a_time(), which might read uninitialized bytes to optimize string comparisons. Disabling CONFIG_DCACHE_WORD_ACCESS should prohibit this optimization, as well as (probably) similar ones. Suggested-by: Andrey Konovalov Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I4c0073224ac2897cafb8c037362c49dda9cfa133 --- arch/x86/Kconfig | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 33f4d4baba079..697da8dae1418 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -128,7 +128,9 @@ config X86 select CLKEVT_I8253 select CLOCKSOURCE_VALIDATE_LAST_CYCLE select CLOCKSOURCE_WATCHDOG - select DCACHE_WORD_ACCESS + # Word-size accesses may read uninitialized data past the trailing \0 + # in strings and cause false KMSAN reports. 
+ select DCACHE_WORD_ACCESS if !KMSAN select DYNAMIC_SIGFRAME select EDAC_ATOMIC_SCRUB select EDAC_SUPPORT From patchwork Mon Sep 5 12:24:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966071 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EACCFECAAD5 for ; Mon, 5 Sep 2022 12:26:49 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 82BE88D008F; Mon, 5 Sep 2022 08:26:49 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7DAEE8D0076; Mon, 5 Sep 2022 08:26:49 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6A3798D008F; Mon, 5 Sep 2022 08:26:49 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 5C6208D0076 for ; Mon, 5 Sep 2022 08:26:49 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 4271C1C5C74 for ; Mon, 5 Sep 2022 12:26:49 +0000 (UTC) X-FDA: 79877955738.13.2B3B55C Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf28.hostedemail.com (Postfix) with ESMTP id D955BC0093 for ; Mon, 5 Sep 2022 12:26:48 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id he38-20020a1709073da600b0073d98728570so2284981ejc.11 for ; Mon, 05 Sep 2022 05:26:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=Rbxz0iuc7RXxWgIfjeSrHOuiWEHvXH2VAJuclXEfLdg=; b=THOF30VtPA03HguE2vHQscvu4ZTBh5MWS0Wvqn4CdmFennnT2MwMnV9TcDZ3iHNp+s GIFt6P3re2Xo+37u/v00Y9DHfFIunA33ANa8bO9IuXi1k0z7jcjjiC1Qs1OqFW9GRw+H NE01mSjr6C9GGNwYYEPh7ZecXNqKtBXVTf5e4tlGS8oJ11WGhPU/7O8qaTlZ7kHXGZXr nv54U87Qubi0ceB7OERr7dnfX9ZRb6Ez6oDMpF+0xoUm5aualihiUUnnVv0ssiVkIzsf p/1s9yIn41sKfV1mVxEOL51ABsyyowCV9IdnoM8lSsY7waAb7T8vnpeJ9C3teWfyXNpp hOrA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=Rbxz0iuc7RXxWgIfjeSrHOuiWEHvXH2VAJuclXEfLdg=; b=0fRuukFSViNr7j6uZRIlflvWqhE5Owc8YZE2sTFavyqlupz4zF32PCIWIkbdtHdQYl nlMdFdaRcx7idxmTkbg8hfWmRT6DsSQN80Blujk5SGBkmXgzWI7+Lhu1vuiHMNQot0kC quc2B6eJIgjlyS1eGDIWPwaA2NzTx5vPri9HmDR+tB/3btyEOlRwzFcKfoFl/+hhNG6S X+EP9bnbZlXS0Id132gHyPLZckz24BrwI3b8Zwe6SggHrRyBePcbD7v4ZLfMTwIX8qpQ 6g/EO+2gn5szXRQyPkoTq51FpENYwEbK5tPBtvsz4L5OTTsaFzV4GIFMuDoGfyIz1dRc vfTg== X-Gm-Message-State: ACgBeo2wF6wQoks3/+9Vy8peGo8L8JXMG0q+JoVmYvSi38dPiFLg3lY6 goIvuebtwZcD7FdVheJdS6eWGbP+GjY= X-Google-Smtp-Source: AA6agR4vM0zsIKH75s7RCEaW6zrpgK+jiBpEwRdOSTxPxJ6LYkLuc7HHV+OetDykyKOVYyXQay+1ci6hfI4= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:aa7:cb87:0:b0:43b:e650:6036 with SMTP id r7-20020aa7cb87000000b0043be6506036mr44091595edt.350.1662380807787; Mon, 05 Sep 2022 05:26:47 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:48 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> 
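
To see why CONFIG_DCACHE_WORD_ACCESS trips KMSAN, consider the comparison pattern it enables: strings are compared one machine word at a time, and the final load deliberately covers bytes past the terminating NUL, relying on the allocation being padded. The sketch below is a hypothetical, simplified rendition of that pattern for a little-endian 64-bit machine; it is not dentry_string_cmp() or read_word_at_a_time(), and example_name_cmp() is an invented name.

/*
 * Simplified word-at-a-time comparison sketch (hypothetical).
 * Assumes both buffers are padded to a multiple of 8 bytes, so the
 * full-word tail load stays inside the allocation; the padding bytes
 * may nevertheless be uninitialized, which is what KMSAN flags.
 */
#include <stdint.h>
#include <string.h>

static int example_name_cmp(const unsigned char *a, const unsigned char *b,
			    unsigned int len)
{
	while (len >= sizeof(uint64_t)) {
		uint64_t wa, wb;

		memcpy(&wa, a, sizeof(wa));
		memcpy(&wb, b, sizeof(wb));
		if (wa != wb)
			return 1;
		a += sizeof(uint64_t);
		b += sizeof(uint64_t);
		len -= sizeof(uint64_t);
	}
	if (len) {
		/* Keep only the low 'len' bytes (little-endian). */
		uint64_t mask = ~0ULL >> (8 * (sizeof(uint64_t) - len));
		uint64_t wa, wb;

		/*
		 * Full-word tail load: up to 7 bytes beyond the string are
		 * read and then masked off. The result is well defined, but
		 * the extra bytes are uninitialized data.
		 */
		memcpy(&wa, a, sizeof(wa));
		memcpy(&wb, b, sizeof(wb));
		if ((wa ^ wb) & mask)
			return 1;
	}
	return 0;
}

Disabling DCACHE_WORD_ACCESS under KMSAN simply avoids generating this kind of masked over-read in the first place.
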
X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-41-glider@google.com> Subject: [PATCH v6 40/44] x86: kmsan: don't instrument stack walking functions From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380808; a=rsa-sha256; cv=none; b=xPh0BnzIIKzkjOgTlctq8E+ZBVn0L0EwcAVp4S6l6yRC4JwlNyl48HLppH8CMmvG5e/t14 ZjoL4QxiV8FAxVl7TLposX7+WpVXJx3UNyCPjsYsReooi+w+eX5g/yqER2k/Xq5PGsqjR2 8ZVTCSGpO8Dr1X9LPTffjeUEjdIv1fQ= ARC-Authentication-Results: i=1; imf28.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=THOF30Vt; spf=pass (imf28.hostedemail.com: domain of 3B-sVYwYKCFQ274z0D2AA270.yA8749GJ-886Hwy6.AD2@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3B-sVYwYKCFQ274z0D2AA270.yA8749GJ-886Hwy6.AD2@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380808; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Rbxz0iuc7RXxWgIfjeSrHOuiWEHvXH2VAJuclXEfLdg=; b=N08UVcnu8k94hdje5Ba2D24zdLfV5u9upU94kK4RviQBU31X2khh+m0VxewWIlj3/7fRH9 HRr7zE3BwTWFdIPC8V2/TAiw5b3lftjYzq5rzq5PxQAANkWrNtwFeKyvqHSeNWz30/EJLB qm3XeA2ZeZlwStHJrS6O8n+cYf6/4/M= Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=THOF30Vt; spf=pass (imf28.hostedemail.com: domain of 3B-sVYwYKCFQ274z0D2AA270.yA8749GJ-886Hwy6.AD2@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3B-sVYwYKCFQ274z0D2AA270.yA8749GJ-886Hwy6.AD2@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam12 X-Stat-Signature: 547xr78piz9ht713cux3yojf95e8hngi X-Rspamd-Queue-Id: D955BC0093 X-HE-Tag: 1662380808-125426 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Upon function exit, KMSAN marks local variables as uninitialized. Further function calls may result in the compiler creating the stack frame where these local variables resided. This results in frame pointers being marked as uninitialized data, which is normally correct, because they are not stack-allocated. However stack unwinding functions are supposed to read and dereference the frame pointers, in which case KMSAN might be reporting uses of uninitialized values. To work around that, we mark update_stack_state(), unwind_next_frame() and show_trace_log_lvl() with __no_kmsan_checks, preventing all KMSAN reports inside those functions and making them return initialized values. 
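
As a minimal illustration of the annotation idiom described above, the sketch below marks a stack-reading helper with __no_kmsan_checks. The helper is invented and assumes a kernel build where the attribute is defined, as introduced earlier in this series; it is not code from this patch.

#include <linux/compiler.h>

/*
 * Sketch only: my_read_saved_bp() is a made-up helper. The attribute
 * suppresses KMSAN value checks inside the function and marks the value
 * it returns as initialized, which is how update_stack_state(),
 * unwind_next_frame() and show_trace_log_lvl() are treated in the diff
 * below.
 */
__no_kmsan_checks
static unsigned long my_read_saved_bp(unsigned long *bp)
{
	/* Reading a stale stack slot would normally trigger a report. */
	return *bp;
}

The design choice here is to annotate the few functions that legitimately inspect stale stack memory rather than unpoisoning every slot they touch.
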
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I6550563768fbb08aa60b2a96803675dcba93d802 --- arch/x86/kernel/dumpstack.c | 6 ++++++ arch/x86/kernel/unwind_frame.c | 11 +++++++++++ 2 files changed, 17 insertions(+) diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c index afae4dd774951..476eb504084e4 100644 --- a/arch/x86/kernel/dumpstack.c +++ b/arch/x86/kernel/dumpstack.c @@ -177,6 +177,12 @@ static void show_regs_if_on_stack(struct stack_info *info, struct pt_regs *regs, } } +/* + * This function reads pointers from the stack and dereferences them. The + * pointers may not have their KMSAN shadow set up properly, which may result + * in false positive reports. Disable instrumentation to avoid those. + */ +__no_kmsan_checks static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, unsigned long *stack, const char *log_lvl) { diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c index 8e1c50c86e5db..d8ba93778ae32 100644 --- a/arch/x86/kernel/unwind_frame.c +++ b/arch/x86/kernel/unwind_frame.c @@ -183,6 +183,16 @@ static struct pt_regs *decode_frame_pointer(unsigned long *bp) } #endif +/* + * While walking the stack, KMSAN may stomp on stale locals from other + * functions that were marked as uninitialized upon function exit, and + * now hold the call frame information for the current function (e.g. the frame + * pointer). Because KMSAN does not specifically mark call frames as + * initialized, false positive reports are possible. To prevent such reports, + * we mark the functions scanning the stack (here and below) with + * __no_kmsan_checks. + */ +__no_kmsan_checks static bool update_stack_state(struct unwind_state *state, unsigned long *next_bp) { @@ -250,6 +260,7 @@ static bool update_stack_state(struct unwind_state *state, return true; } +__no_kmsan_checks bool unwind_next_frame(struct unwind_state *state) { struct pt_regs *regs; From patchwork Mon Sep 5 12:24:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966072 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D470CECAAD3 for ; Mon, 5 Sep 2022 12:26:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6F3608D0090; Mon, 5 Sep 2022 08:26:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6A30C8D0076; Mon, 5 Sep 2022 08:26:52 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5432E8D0090; Mon, 5 Sep 2022 08:26:52 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 41DF88D0076 for ; Mon, 5 Sep 2022 08:26:52 -0400 (EDT) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 21735406D4 for ; Mon, 5 Sep 2022 12:26:52 +0000 (UTC) X-FDA: 79877955864.11.2E9B507 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf10.hostedemail.com (Postfix) with ESMTP id C0906C005D for ; Mon, 5 Sep 2022 12:26:51 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id z6-20020a05640240c600b0043e1d52fd98so5818949edb.22 for ; Mon, 05 Sep 2022 05:26:51 
-0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=S+kwzH5LfDyg6tc+T47KWRR4zTWxpHjqyuN2TRvGEwc=; b=S7wqzmITpHaUQQ601PmLQLF2AnmzFvH8vp75pKlPuqlSL7XFiQiV+t+CuEVS9rDc95 YTC/2jROe8XrsYRkg6GR5NJZ4gxO13PXYLci1oXhoxmtUx0U3dXDpYhzBRe8xP1bBa1m yUX8soFx5DjE2Hzv1jICZdgm1K8X05X3C3aaKK6hLn3Rb9NHbi326B4nIbBWD+rOYDv/ 5ZdbvrOGFekGi9ZJSlEgsbw5zNeE90cnWYu9B0CzyO0fYDFO05KMXAU3Eu7pvo403nYZ 6hoSikjaW4BYgn6KVYp/rm30IbcA9hcfc75g6Q8jUpc+cz7+8xIzSrkwlFb9hqN+tsc8 2DZw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=S+kwzH5LfDyg6tc+T47KWRR4zTWxpHjqyuN2TRvGEwc=; b=JvbznwZS8VFRWU2fZQj3fZTKs5s366wJ23ISpYoGkQIDpD/bWjcOO3tQxI6YHeYmq8 IzcFC9BlNtOQJQmLZg6YAIeYZJi+/kXlde4h9JutVx6+Ixevpy3mOthxU9jEKYGqHPby j1eA/WNrik1YarPf8eVlJhm6ziEDsEfgvkwE1kNfgHriA+7IqFv0T/u6OOI6W7qT42G8 xqNIiKsTO2pAr640giC+gkKy212nKo7wB78sv3EmSyQGkMHI8IF6v84/Txlpru3Viv2u JVOJkVucsImhCIOkM/GVhCCMjyjDoW0luB9Lrf5O409Bx5GuBQDRt7hPghOJsTRpaZAN ST4g== X-Gm-Message-State: ACgBeo2+hz32ToXKb60EVruj9g9ccAOdwaaQifdxYyF3CQi+ZRCvZLdW XceRXf3BNyHg5s+B9amy2+KSBNTZE88= X-Google-Smtp-Source: AA6agR4mQQNFmEgVYN7X2/LANotTJ/QIMRRFfD7L/MO50ymq0vfCSKCuMncm7i/c02baCigU27aXPo0cYqA= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a50:ff13:0:b0:43e:76d3:63e1 with SMTP id a19-20020a50ff13000000b0043e76d363e1mr42442604edu.271.1662380810584; Mon, 05 Sep 2022 05:26:50 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:49 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-42-glider@google.com> Subject: [PATCH v6 41/44] entry: kmsan: introduce kmsan_unpoison_entry_regs() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=S7wqzmIT; spf=pass (imf10.hostedemail.com: domain of 3CusVYwYKCFc5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3CusVYwYKCFc5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380811; a=rsa-sha256; cv=none; b=CSAchlNU6ZxTp3Nu/ixN66kHx5mWlVhDy/pIFUNZijjjKbaPdxsObqSoGXCj8GoG56O85v Hw0iigbKNwHBqx7qxl4yOVJ2ofXKShsOsAXrjf+Pn58BsAwhI2BY1KCsItuUHiLESAap1l F4wrvVa8T7K0INGsKCqO7Ll5i824GYM= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380811; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=S+kwzH5LfDyg6tc+T47KWRR4zTWxpHjqyuN2TRvGEwc=; b=pY3AearT/G84z4bX+cE3Rnhw2eNuZ3BmBIjCr0N1DBiGkXurVnbHr2sKu9c2ffa1lRa5VV DeivWtzzV9PSRngqhZj0d0JoMaRnenFU1XleQ+W6QOU1XMLaab12CR0+UiTJH3eghjkunh AJp15pIUJjF/fmRcrxh5rQeDrlYnjQk= X-Rspam-User: Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=S7wqzmIT; spf=pass (imf10.hostedemail.com: domain of 3CusVYwYKCFc5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3CusVYwYKCFc5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: C0906C005D X-Stat-Signature: xxrsaj8gwp4pikjgbxwxdprobcnh8u6o X-HE-Tag: 1662380811-941810 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: struct pt_regs passed into IRQ entry code is set up by uninstrumented asm functions, therefore KMSAN may not notice the registers are initialized. kmsan_unpoison_entry_regs() unpoisons the contents of struct pt_regs, preventing potential false positives. Unlike kmsan_unpoison_memory(), it can be called under kmsan_in_runtime(), which is often the case in IRQ entry code. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ibfd7018ac847fd8e5491681f508ba5d14e4669cf --- include/linux/kmsan.h | 15 +++++++++++++++ kernel/entry/common.c | 5 +++++ mm/kmsan/hooks.c | 26 ++++++++++++++++++++++++++ 3 files changed, 46 insertions(+) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index c473e0e21683c..e38ae3c346184 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -214,6 +214,17 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, */ void kmsan_handle_urb(const struct urb *urb, bool is_out); +/** + * kmsan_unpoison_entry_regs() - Handle pt_regs in low-level entry code. + * @regs: struct pt_regs pointer received from assembly code. + * + * KMSAN unpoisons the contents of the passed pt_regs, preventing potential + * false positive reports. 
Unlike kmsan_unpoison_memory(), + * kmsan_unpoison_entry_regs() can be called from the regions where + * kmsan_in_runtime() returns true, which is the case in early entry code. + */ +void kmsan_unpoison_entry_regs(const struct pt_regs *regs); + #else static inline void kmsan_init_shadow(void) @@ -310,6 +321,10 @@ static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) { } +static inline void kmsan_unpoison_entry_regs(const struct pt_regs *regs) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 063068a9ea9b3..846add8394c41 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -24,6 +25,7 @@ static __always_inline void __enter_from_user_mode(struct pt_regs *regs) user_exit_irqoff(); instrumentation_begin(); + kmsan_unpoison_entry_regs(regs); trace_hardirqs_off_finish(); instrumentation_end(); } @@ -352,6 +354,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) lockdep_hardirqs_off(CALLER_ADDR0); ct_irq_enter(); instrumentation_begin(); + kmsan_unpoison_entry_regs(regs); trace_hardirqs_off_finish(); instrumentation_end(); @@ -367,6 +370,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) */ lockdep_hardirqs_off(CALLER_ADDR0); instrumentation_begin(); + kmsan_unpoison_entry_regs(regs); rcu_irq_enter_check_tick(); trace_hardirqs_off_finish(); instrumentation_end(); @@ -452,6 +456,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs) ct_nmi_enter(); instrumentation_begin(); + kmsan_unpoison_entry_regs(regs); trace_hardirqs_off_finish(); ftrace_nmi_enter(); instrumentation_end(); diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 79d7e73e2cfd8..35f6b6e6a908c 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -348,6 +348,32 @@ void kmsan_unpoison_memory(const void *address, size_t size) } EXPORT_SYMBOL(kmsan_unpoison_memory); +/* + * Version of kmsan_unpoison_memory() that can be called from within the KMSAN + * runtime. + * + * Non-instrumented IRQ entry functions receive struct pt_regs from assembly + * code. Those regs need to be unpoisoned, otherwise using them will result in + * false positives. + * Using kmsan_unpoison_memory() is not an option in entry code, because the + * return value of in_task() is inconsistent - as a result, certain calls to + * kmsan_unpoison_memory() are ignored. kmsan_unpoison_entry_regs() ensures that + * the registers are unpoisoned even if kmsan_in_runtime() is true in the early + * entry code. 
+ */ +void kmsan_unpoison_entry_regs(const struct pt_regs *regs) +{ + unsigned long ua_flags; + + if (!kmsan_enabled) + return; + + ua_flags = user_access_save(); + kmsan_internal_unpoison_memory((void *)regs, sizeof(*regs), + KMSAN_POISON_NOCHECK); + user_access_restore(ua_flags); +} + void kmsan_check_memory(const void *addr, size_t size) { if (!kmsan_enabled) From patchwork Mon Sep 5 12:24:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966073 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id E1081C54EE9 for ; Mon, 5 Sep 2022 12:26:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7B7348D0091; Mon, 5 Sep 2022 08:26:55 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 766868D0076; Mon, 5 Sep 2022 08:26:55 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 62E538D0091; Mon, 5 Sep 2022 08:26:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 5389A8D0076 for ; Mon, 5 Sep 2022 08:26:55 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 3BA4DAB7C1 for ; Mon, 5 Sep 2022 12:26:55 +0000 (UTC) X-FDA: 79877955990.04.7C5FD7D Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf23.hostedemail.com (Postfix) with ESMTP id CFA09140077 for ; Mon, 5 Sep 2022 12:26:54 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id gn30-20020a1709070d1e00b0074144af99d1so2277299ejc.17 for ; Mon, 05 Sep 2022 05:26:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=63Lbiss0mCSaP/JatuEwb7Qda7JVHXONR7b7gS/f0zw=; b=UDsvJYgtTXPZF5a9agPh2mXyrYJrhXMTAzXPZNghSJw5ee3mNMwXIwpKa+RKY0zuMd 0oketLAQ8tAqBq9H1OztfYXsFZLC3/GjPvaRGV4Tf5cx5pY09RUcAdfIPub4N1YomTRe qN17vYLlykWW3fcYQY5yUk9fGBNhlorODfdAFeJeB/phj1xFUAdvMGDkePvsaAAnNZDZ QNcHmgGjaBYlCKkJsPpqD7vIZwk92kcg49ObV7TTQ+SLXpAZ9pO/JJZcCHGzK92euOmB wKHTk01woRx7PzSGgjyowKFGJYseYwyRla1r4RWjs1ovT7LskcrPtqUZWACg7jNiOAET dJPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=63Lbiss0mCSaP/JatuEwb7Qda7JVHXONR7b7gS/f0zw=; b=WqdslGrPYbcaPEnw0WSjbletdFb/hqrB/3y3DB1VCYof/0gDGJAkig8SCeY7oGpN3Y t4kvXjJH9bqKC0JGGHFWqqfvuvyKFTlcguJeBMaNg6khY44wPFj3xw1McqEQh6ACxBzo 1e9mKDxcZnGASK7LfB33IkUWaG2prTJsrL3c3IRLcyGj84Px7kLGIdV1fC+HkZ2On6uB 5CoaQGIqebOyk5zdqoE/Mhd4Lqg9jI2vbg0YiVmp3YWj9KElrmFeL4ZzsXIFZeNNATFQ dy0MZnsy5mbJ3nrbHqhIYqGursMoiHOp9gy4uodKQ5BhxwTlAQI67+1YV4KZoVFwjlAm q/+A== X-Gm-Message-State: ACgBeo3OpcUJ8T8NImCFUkh/Lz+Bi3KGuzdbC1H9OpigyhtjpWaNHxb3 XeszPujEmGk55W6YL1Xs36OKM5dHTjI= X-Google-Smtp-Source: AA6agR5LBRpdlR606D8rNDqY5QJ3OBC/wbbIgC3X8Ct4s02WNlqHsbJo9IxNX+CTnN40NsuDZAMm2jER36E= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:2c41:b0:741:4906:482b with SMTP id 
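
For architecture code with its own entry paths, the intended usage is presumably the same as in irqentry_enter() above: unpoison the register state as soon as instrumented code may look at it. Below is a hedged sketch; the handler name is invented, and only kmsan_unpoison_entry_regs() itself comes from this patch.

#include <linux/kmsan.h>
#include <asm/ptrace.h>

/*
 * Hypothetical arch-specific entry hook, for illustration only.
 * pt_regs was filled in by uninstrumented assembly, so its shadow is
 * stale; unpoison it before any instrumented code inspects it.
 * Unlike kmsan_unpoison_memory(), this call is safe even while
 * kmsan_in_runtime() is true.
 */
static void my_arch_enter_from_irq(struct pt_regs *regs)
{
	kmsan_unpoison_entry_regs(regs);

	/* From here on, reads of the saved registers are clean. */
}
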
hf1-20020a1709072c4100b007414906482bmr28414813ejc.239.1662380813588; Mon, 05 Sep 2022 05:26:53 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:50 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-43-glider@google.com> Subject: [PATCH v6 42/44] bpf: kmsan: initialize BPF registers with zeroes From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380814; a=rsa-sha256; cv=none; b=iQ5SfihY5EK/b5gfWljc+yWAQqEooyWXGU5XmTLqAZhpHr88Q1L1bc+nYbiJ/qqLut7GdE AbeD5HZiqRsglCC0v02A5ZaeBETHiJcyrGjJfdXoxOhAHfOxz7AmKbl6OPbTSGWNf4Gb2J lXDy5sAkYSsGS/D8TwD9X3NiZk/v0UQ= ARC-Authentication-Results: i=1; imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=UDsvJYgt; spf=pass (imf23.hostedemail.com: domain of 3DesVYwYKCFo8DA56J8GG8D6.4GEDAFMP-EECN24C.GJ8@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3DesVYwYKCFo8DA56J8GG8D6.4GEDAFMP-EECN24C.GJ8@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380814; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=63Lbiss0mCSaP/JatuEwb7Qda7JVHXONR7b7gS/f0zw=; b=2i12yxLFYlTScRkpLt0X67etLq8SCFfLr0L0CDOREjFRdkaRfhEyy/Urgjm8C/XD1aYiCN pLeNM5Cqtt7d9cBmh1rMVO72dcVl6Ii1a79+vt3/JoR5r1vrFejcTwiOq1+deRIx2DVWL3 tn50CdnsHwUelyvabrzMbFqIAsgoFoM= X-Rspam-User: X-Stat-Signature: ygfi97ks9bfa5fs61mdxb8esngkfmrkn X-Rspamd-Queue-Id: CFA09140077 X-Rspamd-Server: rspam10 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=UDsvJYgt; spf=pass (imf23.hostedemail.com: domain of 3DesVYwYKCFo8DA56J8GG8D6.4GEDAFMP-EECN24C.GJ8@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3DesVYwYKCFo8DA56J8GG8D6.4GEDAFMP-EECN24C.GJ8@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1662380814-125908 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When executing BPF programs, certain registers may get passed uninitialized to helper functions. E.g. when performing a JMP_CALL, registers BPF_R1-BPF_R5 are always passed to the helper, no matter how many of them are actually used. Passing uninitialized values as function parameters is technically undefined behavior, so we work around it by always initializing the registers. 
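
The fix is the ordinary C zero-initialization idiom: give the register array an explicit initializer so the unused slots hold zeroes rather than indeterminate values. The standalone userspace sketch below illustrates the idea; my_helper() and the five-argument convention are stand-ins for a BPF helper call, not the interpreter itself.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for a BPF helper: five arguments are always passed,
 * mirroring BPF_R1..BPF_R5 at a JMP_CALL site. */
static uint64_t my_helper(uint64_t r1, uint64_t r2, uint64_t r3,
			  uint64_t r4, uint64_t r5)
{
	return r1 + r2 + r3 + r4 + r5;
}

int main(void)
{
	/* Zeroed up front, like the interpreter's regs[] array after this
	 * patch; unused "registers" are 0 instead of indeterminate. */
	uint64_t regs[5] = { 0 };

	regs[0] = 42;	/* the program only ever writes r1 */

	printf("%llu\n",
	       (unsigned long long)my_helper(regs[0], regs[1], regs[2],
					     regs[3], regs[4]));
	return 0;
}

The `= { 0 }` form used here is plain ISO C; the `= {}` in the patch is the equivalent empty-initializer extension accepted in kernel builds.
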
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I8ef9dbe94724cee5ad1e3a162f2b805345bc0586 --- kernel/bpf/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 3d9eb3ae334ce..21c74fac5131c 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2002,7 +2002,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn) static unsigned int PROG_NAME(stack_size)(const void *ctx, const struct bpf_insn *insn) \ { \ u64 stack[stack_size / sizeof(u64)]; \ - u64 regs[MAX_BPF_EXT_REG]; \ + u64 regs[MAX_BPF_EXT_REG] = {}; \ \ FP = (u64) (unsigned long) &stack[ARRAY_SIZE(stack)]; \ ARG1 = (u64) (unsigned long) ctx; \ From patchwork Mon Sep 5 12:24:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966074 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79495C54EE9 for ; Mon, 5 Sep 2022 12:26:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 19C7E8D0092; Mon, 5 Sep 2022 08:26:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 0FE0F8D0076; Mon, 5 Sep 2022 08:26:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id F07BA8D0092; Mon, 5 Sep 2022 08:26:57 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id E080D8D0076 for ; Mon, 5 Sep 2022 08:26:57 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id B8914AB69B for ; Mon, 5 Sep 2022 12:26:57 +0000 (UTC) X-FDA: 79877956074.14.86409EA Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf07.hostedemail.com (Postfix) with ESMTP id 7492C40052 for ; Mon, 5 Sep 2022 12:26:57 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id h6-20020aa7de06000000b004483647900fso5815487edv.21 for ; Mon, 05 Sep 2022 05:26:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=RV5QTvMygwcM31S/hyrs6MnTkve28DlweGTrQchhxXo=; b=UJpdtPzQ4jg2bWxTTvLaBi+Ut8ibPGKpNFhDZh0iWTz3b+amhz6N+AZttgF2UwhDsT 0LBgQGCXwZMpWRgLy4vET4fOKKKgXxY1KuDJisX2XchJTyi2VCCX2dm68m0DAAvKGjC4 xIiTur2vx0OwVtk/WuhzSCK0dCMGsAVAL1CUqR69b/4FX/L7L9Ac2f05uDt/2u86QFT3 sZ1YamUl4cHshGvep6afoqIL2W3pcn/R7pZmOmD+ro7M2AzkNlTp3ADlUIg3hz9E6cPD /LU/ibYLRJEjq+u+M10jxNd1fjPNCkgQzBqOq14Kbt19kwHHkgk2aqR1DyBMN6VYb1BR uz3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=RV5QTvMygwcM31S/hyrs6MnTkve28DlweGTrQchhxXo=; b=q3nBI8x2W7tZ400yCqo8HhmKNj1sWZ06aPZXyc83sGh78fvMrmfm+9Y2yD3bZuvizU 4WZnKj5tsxCfVnQWeKgjbY6vgHQEXsbEYceMl/TZT7t/hRZg5lD4EVQqr+aIV4xuZ8M2 hAMWFEyU4wj6iP9/eYtlUMKSOO/X/foZLVJ0/Z0yi2W0NPFHGozNm0Pw/2JSQropGuJ1 VHPSzHlgGRLetke9K80JGuI2fEV77qK23jIq7nG8F9AmXkYIuhDpxoE0FDFgVu4XWrD3 879Q2Sg18QcjY4ERvf/bVdaE4WOOKZXZlwQaxmRDP2us4W3zN/wT63ato/hm4TBX4CdW vLHQ== X-Gm-Message-State: 
ACgBeo2/M2m1HDknH7/Rf7I7WK9SsfZJJQ/M7SXtKUi44jkFhwJWTGH8 i171UHGXelc5lLVc3qCFkefEUPie25Y= X-Google-Smtp-Source: AA6agR7PXaXryyLZJx+eRIAfaY6wz+72tUpar4Sm/m5k7H+ezMzVzMxUtwtMAFDWbsTMvDkpBpYpems7Myc= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:aa7:cb87:0:b0:43b:e650:6036 with SMTP id r7-20020aa7cb87000000b0043be6506036mr44092076edt.350.1662380816223; Mon, 05 Sep 2022 05:26:56 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:51 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-44-glider@google.com> Subject: [PATCH v6 43/44] mm: fs: initialize fsdata passed to write_begin/write_end interface From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf07.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=UJpdtPzQ; spf=pass (imf07.hostedemail.com: domain of 3EOsVYwYKCF0BGD89MBJJBG9.7JHGDIPS-HHFQ57F.JMB@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3EOsVYwYKCF0BGD89MBJJBG9.7JHGDIPS-HHFQ57F.JMB@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380817; a=rsa-sha256; cv=none; b=LHKht1Ds3ie/WpSic3/u06ueh96MoCTM5d/zC85xizJjGJDFit06PE/3k8gcEXB46VPQIU 97CT1+OqWvXD5vc1rNCyQSppaJ5HXLqg2Fd95t00Hun6wmc5AtZDGXK7eNWw861SSmUE20 OeR/hVKkUEimfEuBXC9MWz/DxpU8MV0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380817; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=RV5QTvMygwcM31S/hyrs6MnTkve28DlweGTrQchhxXo=; b=BMzd7x1K9gApMQamf8bn19XXYpEJ5aTjOg6tpKDqT8gYQnAI1eeYuHZnTzMrfJoLc9lzEz WaGXxwLcrf2YpO7lDpJyrcKjvq7XrBut08QjYk23yBoeho7P9NlWAlAhl7xWpKWVYidQ/2 E0TXOqg+lo/rkE+irtfuzPbtWdprOaY= X-Stat-Signature: xtchfsbint896oe6w9dfs1dg7g8ctrpz X-Rspamd-Queue-Id: 7492C40052 X-Rspam-User: Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=UJpdtPzQ; spf=pass (imf07.hostedemail.com: domain of 3EOsVYwYKCF0BGD89MBJJBG9.7JHGDIPS-HHFQ57F.JMB@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3EOsVYwYKCF0BGD89MBJJBG9.7JHGDIPS-HHFQ57F.JMB@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam07 X-HE-Tag: 1662380817-157362 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Functions implementing the 
a_ops->write_end() interface accept the `void *fsdata` parameter that is supposed to be initialized by the corresponding a_ops->write_begin() (which accepts `void **fsdata`). However not all a_ops->write_begin() implementations initialize `fsdata` unconditionally, so it may get passed uninitialized to a_ops->write_end(), resulting in undefined behavior. Fix this by initializing fsdata with NULL before the call to write_begin(), rather than doing so in all possible a_ops implementations. This patch covers only the following cases found by running x86 KMSAN under syzkaller: - generic_perform_write() - cont_expand_zero() and generic_cont_expand_simple() - page_symlink() Other cases of passing uninitialized fsdata may persist in the codebase. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ie300c21bbe9dea69a730745bd3c6d2720953bf41 --- fs/buffer.c | 4 ++-- fs/namei.c | 2 +- mm/filemap.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/buffer.c b/fs/buffer.c index 55e762a58eb65..e1198f4b28c8f 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2352,7 +2352,7 @@ int generic_cont_expand_simple(struct inode *inode, loff_t size) struct address_space *mapping = inode->i_mapping; const struct address_space_operations *aops = mapping->a_ops; struct page *page; - void *fsdata; + void *fsdata = NULL; int err; err = inode_newsize_ok(inode, size); @@ -2378,7 +2378,7 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, const struct address_space_operations *aops = mapping->a_ops; unsigned int blocksize = i_blocksize(inode); struct page *page; - void *fsdata; + void *fsdata = NULL; pgoff_t index, curidx; loff_t curpos; unsigned zerofrom, offset, len; diff --git a/fs/namei.c b/fs/namei.c index 53b4bc094db23..076ae96ca0b14 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -5088,7 +5088,7 @@ int page_symlink(struct inode *inode, const char *symname, int len) const struct address_space_operations *aops = mapping->a_ops; bool nofs = !mapping_gfp_constraint(mapping, __GFP_FS); struct page *page; - void *fsdata; + void *fsdata = NULL; int err; unsigned int flags; diff --git a/mm/filemap.c b/mm/filemap.c index 15800334147b3..ada25b9f45ad1 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3712,7 +3712,7 @@ ssize_t generic_perform_write(struct kiocb *iocb, struct iov_iter *i) unsigned long offset; /* Offset into pagecache page */ unsigned long bytes; /* Bytes to write to page */ size_t copied; /* Bytes copied from user */ - void *fsdata; + void *fsdata = NULL; offset = (pos & (PAGE_SIZE - 1)); bytes = min_t(unsigned long, PAGE_SIZE - offset, From patchwork Mon Sep 5 12:24:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12966075 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 831DBECAAD3 for ; Mon, 5 Sep 2022 12:27:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 211278D0093; Mon, 5 Sep 2022 08:27:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1C0E88D0076; Mon, 5 Sep 2022 08:27:01 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 061D18D0093; Mon, 5 Sep 2022 08:27:01 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com 
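
Callers that drive the same ->write_begin()/->write_end() pair elsewhere should follow the pattern adopted here. The sketch below is a hypothetical caller, loosely modeled on page_symlink() and generic_perform_write(); only the NULL initialization of fsdata reflects this patch, the rest is illustrative.

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Hypothetical caller (my_write_block() is not a kernel function).
 * fsdata starts as NULL so that a ->write_begin() implementation which
 * never touches it cannot hand an indeterminate pointer to ->write_end().
 */
static int my_write_block(struct address_space *mapping, loff_t pos,
			  unsigned int len)
{
	const struct address_space_operations *aops = mapping->a_ops;
	struct page *page;
	void *fsdata = NULL;	/* the defensive initialization added above */
	int err;

	err = aops->write_begin(NULL, mapping, pos, len, &page, &fsdata);
	if (err < 0)
		return err;

	/* ... copy the payload into @page here ... */

	return aops->write_end(NULL, mapping, pos, len, len, page, fsdata);
}
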
(smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id E8FBE8D0076 for ; Mon, 5 Sep 2022 08:27:00 -0400 (EDT) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id CD9B180645 for ; Mon, 5 Sep 2022 12:27:00 +0000 (UTC) X-FDA: 79877956200.08.30E292D Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf02.hostedemail.com (Postfix) with ESMTP id 8496D80072 for ; Mon, 5 Sep 2022 12:27:00 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id dz16-20020a0564021d5000b004489f04cc2cso5726935edb.10 for ; Mon, 05 Sep 2022 05:27:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date; bh=qWMR0SPZO0jqHHxoNfMsk5w/9uuaYeqiCUk8OeqbWGY=; b=MCXD5lbAQVcPgT/EOdvK/WVt9NS7Tehk6H0aVaS8GX3ikViotqPCRrmdoTTCXIhr5J N8bepvbPPvSbhZ8S0DmGK9sFHZlcXZ31hLA7G3xuzIyLsP7X5LFnKWatBqYu/WyuYeav k7/a9IJ5CxBv/ClbXm8jzN0Wty3J801Voam2e3glQZTv/b9+9CyLW+e+qo77yLiBc3q4 tLOthnqhOsLvn/SWDpHT5+mmNbEUNHIB8pgYXGWJNleqraHhVXBzpssmWKCcJwIEj3HJ SW6L9kIN8TBi8B5aJM7ga9767Q3PcHiJ21sor8antmuKraDCJSwCtpgLZh65GuT8gSnh 9ivg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date; bh=qWMR0SPZO0jqHHxoNfMsk5w/9uuaYeqiCUk8OeqbWGY=; b=G0XdGa6nRe5+ZW3o4+Y84aw+asUzRzwo3clV3vyzPAk59jo8atUkD8ufknbeDwceJw l3bQNF30ZHCEkvgu9lyETXGtsAohMsLRE9p6aAopJUnwKjvT4fJQF13CQAwNglIWDa8j ULw8tXeGdHfEkIqpiwNI8d12IPoh683D7MtRAhq/wzX5at0HWfk7hGY3V5bOIPCgJGfX yGQhcZNI0B9AqjhqgtMWRYKlPllwutOVS8FNjj0OcLIqGBlndPJbSt+ZvLMDxjoyTgyn wX5hJzUJGBu2r3GCR6KFv2myZSPrzBku4/TwptWiVEPTcFOezf6rsABJtT538DNvFQnH 9JCg== X-Gm-Message-State: ACgBeo27gLWJkB09vXe85o6zlRaRHf8EzWb/LS+3qAy+JXhvXxDOsDl9 cOYcgXdNiuYhF6L75afzDpYv7Akk43c= X-Google-Smtp-Source: AA6agR5SRqMCINjkMuIrD7WXHVNS75SNahzGJoh0jVbnVtc6SNEh0M27/BA2cqVS6mxm7vNN9nbOSPJjvGE= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:b808:8d07:ab4a:554c]) (user=glider job=sendgmr) by 2002:a17:907:2c74:b0:741:657a:89de with SMTP id ib20-20020a1709072c7400b00741657a89demr26824523ejc.58.1662380819269; Mon, 05 Sep 2022 05:26:59 -0700 (PDT) Date: Mon, 5 Sep 2022 14:24:52 +0200 In-Reply-To: <20220905122452.2258262-1-glider@google.com> Mime-Version: 1.0 References: <20220905122452.2258262-1-glider@google.com> X-Mailer: git-send-email 2.37.2.789.g6183377224-goog Message-ID: <20220905122452.2258262-45-glider@google.com> Subject: [PATCH v6 44/44] x86: kmsan: enable KMSAN builds for x86 From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Alexei Starovoitov , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org ARC-Authentication-Results: i=1; imf02.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=MCXD5lbA; spf=pass (imf02.hostedemail.com: domain of 3E-sVYwYKCGAEJGBCPEMMEJC.AMKJGLSV-KKIT8AI.MPE@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3E-sVYwYKCGAEJGBCPEMMEJC.AMKJGLSV-KKIT8AI.MPE@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1662380820; a=rsa-sha256; cv=none; b=OArlxn2743xO60vCPg1LaWqUgCL6xOcI1jfjsT9jEBJBtjyJYIOxMqSNDwE/aNmXD2FCp/ W2Cp+IFwXFUUrqoZuoKxhcpAZileHH7E434pnaDQYEdHOUQXvaZl72i6OwdQ1O/GBqcRLb bz3rtroPnZB7OhASwwE021mhhp0zqIk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1662380820; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=qWMR0SPZO0jqHHxoNfMsk5w/9uuaYeqiCUk8OeqbWGY=; b=rhvz1/9O3kLoS9rde56iVN6n7qKgPa7mmDt0N9bwBWeEdfumnqmE/5SzWabooY1Xzyb2dv qVKQRTeOCgIOiLiQOc5FrBeo7g1BQV63oSO5PRaTrOcHQrb3/jGcw46tf5nUvtmA0lyKEB vn4bTSn5MTUj7/9/TcROIr8kneJkrLo= X-Rspam-User: Authentication-Results: imf02.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=MCXD5lbA; spf=pass (imf02.hostedemail.com: domain of 3E-sVYwYKCGAEJGBCPEMMEJC.AMKJGLSV-KKIT8AI.MPE@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3E-sVYwYKCGAEJGBCPEMMEJC.AMKJGLSV-KKIT8AI.MPE@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 8496D80072 X-Stat-Signature: de3eimkunb41uufkwipthjch81ysjmy6 X-HE-Tag: 1662380820-918304 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make KMSAN usable by adding the necessary Kconfig bits. Also declare x86-specific functions checking address validity in arch/x86/include/asm/kmsan.h. Signed-off-by: Alexander Potapenko --- v4: -- per Marco Elver's request, create arch/x86/include/asm/kmsan.h and move arch-specific inline functions there. Link: https://linux-review.googlesource.com/id/I1d295ce8159ce15faa496d20089d953a919c125e --- arch/x86/Kconfig | 1 + arch/x86/include/asm/kmsan.h | 55 ++++++++++++++++++++++++++++++++++++ 2 files changed, 56 insertions(+) create mode 100644 arch/x86/include/asm/kmsan.h diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 697da8dae1418..bd9436cd0f29b 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -168,6 +168,7 @@ config X86 select HAVE_ARCH_KASAN if X86_64 select HAVE_ARCH_KASAN_VMALLOC if X86_64 select HAVE_ARCH_KFENCE + select HAVE_ARCH_KMSAN if X86_64 select HAVE_ARCH_KGDB select HAVE_ARCH_MMAP_RND_BITS if MMU select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h new file mode 100644 index 0000000000000..a790b865d0a68 --- /dev/null +++ b/arch/x86/include/asm/kmsan.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * x86 KMSAN support. 
+ * + * Copyright (C) 2022, Google LLC + * Author: Alexander Potapenko + */ + +#ifndef _ASM_X86_KMSAN_H +#define _ASM_X86_KMSAN_H + +#ifndef MODULE + +#include +#include + +/* + * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented version. + */ +static inline bool kmsan_phys_addr_valid(unsigned long addr) +{ + if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT)) + return !(addr >> boot_cpu_data.x86_phys_bits); + else + return true; +} + +/* + * Taken from arch/x86/mm/physaddr.c to avoid using an instrumented version. + */ +static inline bool kmsan_virt_addr_valid(void *addr) +{ + unsigned long x = (unsigned long)addr; + unsigned long y = x - __START_KERNEL_map; + + /* use the carry flag to determine if x was < __START_KERNEL_map */ + if (unlikely(x > y)) { + x = y + phys_base; + + if (y >= KERNEL_IMAGE_SIZE) + return false; + } else { + x = y + (__START_KERNEL_map - PAGE_OFFSET); + + /* carry flag will be set if starting x was >= PAGE_OFFSET */ + if ((x > y) || !kmsan_phys_addr_valid(x)) + return false; + } + + return pfn_valid(x >> PAGE_SHIFT); +} + +#endif /* !MODULE */ + +#endif /* _ASM_X86_KMSAN_H */
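
With the Kconfig bits and the address-validity helpers above, KMSAN becomes selectable on x86_64. As a reminder of what the tool is for, the following hypothetical function (not part of the patch) shows the class of bug a CONFIG_KMSAN=y kernel reports.

/*
 * Hypothetical example: a conditional branch that depends on a stack
 * variable which is only sometimes initialized.
 */
static int my_buggy_check(int flag)
{
	int threshold;		/* left uninitialized on purpose */

	if (flag)
		threshold = 10;

	/*
	 * When flag == 0, the comparison below uses an uninitialized
	 * value; KMSAN reports the use and tracks the origin of the
	 * uninitialized memory back to this stack allocation.
	 */
	return threshold > 5;
}
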