From patchwork Mon Sep 5 12:24:30 2022
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12966053
Date: Mon, 5 Sep 2022 14:24:30 +0200
In-Reply-To: <20220905122452.2258262-1-glider@google.com>
References: <20220905122452.2258262-1-glider@google.com>
Message-ID: <20220905122452.2258262-23-glider@google.com>
X-Mailer: git-send-email 2.37.2.789.g6183377224-goog
Subject: [PATCH v6 22/44] dma: kmsan: unpoison DMA mappings
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
    Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
    Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
    Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
    Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Mark Rutland,
    Matthew Wilcox, "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra,
    Petr Mladek, Steven Rostedt, Thomas Gleixner, Vasily Gorbik,
    Vegard Nossum, Vlastimil Babka, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org

KMSAN doesn't know about DMA memory writes performed by devices. We
unpoison such memory when it's mapped to avoid false positive reports.
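To make the failure mode concrete, here is a rough sketch of the pattern this
addresses (illustrative only, not code from this series; my_dev, RX_LEN and
process_frame() are made-up placeholders): a receive path where the device,
not the CPU, initializes the buffer.

/*
 * Hypothetical driver RX path. Without unpoisoning at map time, the final
 * read of rx_buf would look like a use of uninitialized memory to KMSAN.
 */
static int rx_one_frame(struct device *my_dev)
{
	void *rx_buf = kmalloc(RX_LEN, GFP_KERNEL);
	dma_addr_t handle;

	if (!rx_buf)
		return -ENOMEM;

	/*
	 * The buffer is mapped while still uninitialized from the CPU's
	 * point of view: the device is expected to fill it.
	 */
	handle = dma_map_single(my_dev, rx_buf, RX_LEN, DMA_FROM_DEVICE);
	if (dma_mapping_error(my_dev, handle)) {
		kfree(rx_buf);
		return -ENOMEM;
	}

	/* ... the device DMAs received data into rx_buf ... */

	dma_unmap_single(my_dev, handle, RX_LEN, DMA_FROM_DEVICE);

	/*
	 * Reading rx_buf here is legitimate, but KMSAN never saw a CPU
	 * write to it; unpoisoning the buffer when it is mapped keeps this
	 * from being reported as a false positive.
	 */
	process_frame(rx_buf, RX_LEN);
	kfree(rx_buf);
	return 0;
}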
Signed-off-by: Alexander Potapenko <glider@google.com>

---
v2:
 -- move implementation of kmsan_handle_dma() and kmsan_handle_dma_sg() here

v4:
 -- swap dma: and kmsan: in the subject

v5:
 -- do not export KMSAN hooks that are not called from modules

v6:
 -- add a missing #include

Link: https://linux-review.googlesource.com/id/Ia162dc4c5a92e74d4686c1be32a4dfeffc5c32cd
---
 include/linux/kmsan.h | 41 ++++++++++++++++++++++++++++++
 kernel/dma/mapping.c  | 10 +++++---
 mm/kmsan/hooks.c      | 59 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 107 insertions(+), 3 deletions(-)

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index e00de976ee438..dac296da45c55 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -9,6 +9,7 @@
 #ifndef _LINUX_KMSAN_H
 #define _LINUX_KMSAN_H
 
+#include <linux/dma-direction.h>
 #include <linux/gfp.h>
 #include <linux/kmsan-checks.h>
 #include <linux/types.h>
@@ -16,6 +17,7 @@
 struct page;
 struct kmem_cache;
 struct task_struct;
+struct scatterlist;
 
 #ifdef CONFIG_KMSAN
 
@@ -172,6 +174,35 @@ void kmsan_ioremap_page_range(unsigned long addr, unsigned long end,
  */
 void kmsan_iounmap_page_range(unsigned long start, unsigned long end);
 
+/**
+ * kmsan_handle_dma() - Handle a DMA data transfer.
+ * @page:   first page of the buffer.
+ * @offset: offset of the buffer within the first page.
+ * @size:   buffer size.
+ * @dir:    one of possible dma_data_direction values.
+ *
+ * Depending on @dir, KMSAN:
+ * * checks the buffer, if it is copied to device;
+ * * initializes the buffer, if it is copied from device;
+ * * does both, if this is a DMA_BIDIRECTIONAL transfer.
+ */
+void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+		      enum dma_data_direction dir);
+
+/**
+ * kmsan_handle_dma_sg() - Handle a DMA transfer using scatterlist.
+ * @sg:    scatterlist holding DMA buffers.
+ * @nents: number of scatterlist entries.
+ * @dir:   one of possible dma_data_direction values.
+ *
+ * Depending on @dir, KMSAN:
+ * * checks the buffers in the scatterlist, if they are copied to device;
+ * * initializes the buffers, if they are copied from device;
+ * * does both, if this is a DMA_BIDIRECTIONAL transfer.
+ */
+void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
+			 enum dma_data_direction dir);
+
 #else
 
 static inline void kmsan_init_shadow(void)
@@ -254,6 +285,16 @@ static inline void kmsan_iounmap_page_range(unsigned long start,
 {
 }
 
+static inline void kmsan_handle_dma(struct page *page, size_t offset,
+				    size_t size, enum dma_data_direction dir)
+{
+}
+
+static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
+				       enum dma_data_direction dir)
+{
+}
+
 #endif
 
 #endif /* _LINUX_KMSAN_H */
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 49cbf3e33de71..a8400aa9bcd4e 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -10,6 +10,7 @@
 #include <linux/dma-map-ops.h>
 #include <linux/export.h>
 #include <linux/gfp.h>
+#include <linux/kmsan.h>
 #include <linux/of_device.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -156,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
+	kmsan_handle_dma(page, offset, size, dir);
 	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
 
 	return addr;
@@ -194,11 +196,13 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	else
 		ents = ops->map_sg(dev, sg, nents, dir, attrs);
 
-	if (ents > 0)
+	if (ents > 0) {
+		kmsan_handle_dma_sg(sg, nents, dir);
 		debug_dma_map_sg(dev, sg, nents, ents, dir, attrs);
-	else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
-			      ents != -EIO && ents != -EREMOTEIO))
+	} else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
+				ents != -EIO && ents != -EREMOTEIO)) {
 		return -EIO;
+	}
 
 	return ents;
 }
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 5c0eb25d984d7..563c09443a37a 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -10,10 +10,12 @@
  */
 
 #include <linux/cacheflush.h>
+#include <linux/dma-direction.h>
 #include <linux/gfp.h>
 #include <linux/kmsan.h>
 #include <linux/mm.h>
 #include <linux/mm_types.h>
+#include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/uaccess.h>
 
@@ -243,6 +245,63 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy,
 }
 EXPORT_SYMBOL(kmsan_copy_to_user);
 
+static void kmsan_handle_dma_page(const void *addr, size_t size,
+				  enum dma_data_direction dir)
+{
+	switch (dir) {
+	case DMA_BIDIRECTIONAL:
+		kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
+					    REASON_ANY);
+		kmsan_internal_unpoison_memory((void *)addr, size,
+					       /*checked*/ false);
+		break;
+	case DMA_TO_DEVICE:
+		kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
+					    REASON_ANY);
+		break;
+	case DMA_FROM_DEVICE:
+		kmsan_internal_unpoison_memory((void *)addr, size,
+					       /*checked*/ false);
+		break;
+	case DMA_NONE:
+		break;
+	}
+}
+
+/* Helper function to handle DMA data transfers. */
+void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+		      enum dma_data_direction dir)
+{
+	u64 page_offset, to_go, addr;
+
+	if (PageHighMem(page))
+		return;
+	addr = (u64)page_address(page) + offset;
+	/*
+	 * The kernel may occasionally give us adjacent DMA pages not belonging
+	 * to the same allocation. Process them separately to avoid triggering
+	 * internal KMSAN checks.
+	 */
+	while (size > 0) {
+		page_offset = addr % PAGE_SIZE;
+		to_go = min(PAGE_SIZE - page_offset, (u64)size);
+		kmsan_handle_dma_page((void *)addr, to_go, dir);
+		addr += to_go;
+		size -= to_go;
+	}
+}
+
+void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
+			 enum dma_data_direction dir)
+{
+	struct scatterlist *item;
+	int i;
+
+	for_each_sg(sg, item, nents, i)
+		kmsan_handle_dma(sg_page(item), item->offset, item->length,
+				 dir);
+}
+
 /* Functions from kmsan-checks.h follow. */
 void kmsan_poison_memory(const void *address, size_t size, gfp_t flags)
 {
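
For the opposite direction, a similarly hypothetical sketch (my_dev and
struct my_tx_desc are made-up placeholders, not part of this series) of what
the DMA_TO_DEVICE case catches: mapping a partially initialized buffer for
transmission is now reported at the mapping call, because kmsan_handle_dma()
checks the whole buffer before it is handed to the device.

/* Hypothetical TX descriptor with a hole the driver forgot to initialize. */
struct my_tx_desc {
	u32 len;
	u32 flags;
	u8  pad[8];
};

static int tx_one_desc(struct device *my_dev)
{
	struct my_tx_desc *desc = kmalloc(sizeof(*desc), GFP_KERNEL);
	dma_addr_t handle;

	if (!desc)
		return -ENOMEM;

	desc->len = 42;
	desc->flags = 0;
	/* desc->pad is deliberately left uninitialized. */

	/*
	 * dma_map_single() ends up in dma_map_page_attrs(), which now calls
	 * kmsan_handle_dma(..., DMA_TO_DEVICE); the uninitialized padding is
	 * reported at this call instead of silently reaching the device.
	 */
	handle = dma_map_single(my_dev, desc, sizeof(*desc), DMA_TO_DEVICE);
	if (dma_mapping_error(my_dev, handle)) {
		kfree(desc);
		return -ENOMEM;
	}

	/* ... hand the descriptor to the hardware ... */
	return 0;
}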