From patchwork Wed Mar 25 16:12:19 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458243
Date: Wed, 25 Mar 2020 17:12:19 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-9-glider@google.com>
Mime-Version: 1.0
References: <20200325161249.55095-1-glider@google.com>
X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog
Subject: [PATCH v5 08/38] kmsan: add KMSAN hooks for kernel subsystems
From: glider@google.com
To: Jens Axboe, Andy Lutomirski, Wolfram Sang, Christoph Hellwig,
 Vegard Nossum, Dmitry Vyukov, Marco Elver, Andrey Konovalov,
 linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
 akpm@linux-foundation.org, aryabinin@virtuozzo.com,
 ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org,
 darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com,
 ebiggers@google.com, edumazet@google.com, ericvh@gmail.com,
 gregkh@linuxfoundation.org, harry.wentland@amd.com,
 herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu,
 jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com,
 martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org,
 mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com,
 cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com,
 sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com,
 tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com

This patch provides hooks that subsystems use to notify KMSAN about changes
in the kernel state. Such changes include:
 - page operations (allocation, deletion, splitting, mapping);
 - memory allocation and deallocation;
 - entering and leaving IRQ/NMI/softirq contexts;
 - copying data between kernel, userspace and hardware.

This patch has been split away from the rest of the KMSAN runtime to
simplify the review process.
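To make the API below easier to review, here is a minimal sketch (not part
of the patch) of how a subsystem might call these hooks once they are wired
up. The device and function names (my_dev_rx_complete(), my_dev_tx()) are
invented for illustration only, and the declarations are assumed to be made
available via <linux/kmsan-checks.h> together with the other annotation
helpers:

/* Hypothetical driver code, for illustration only. */
#include <linux/dma-direction.h>
#include <linux/kmsan-checks.h>	/* assumed location of the declarations */

static void my_dev_rx_complete(void *buf, size_t len)
{
        /* The device has filled @buf via DMA: mark its contents initialized. */
        kmsan_handle_dma(buf, len, DMA_FROM_DEVICE);
}

static void my_dev_tx(const void *buf, size_t len)
{
        /* Report immediately if the driver sends uninitialized bytes to HW. */
        kmsan_handle_dma(buf, len, DMA_TO_DEVICE);
}

The same pattern applies to the other hooks: callers only notify KMSAN about
state changes and never inspect the shadow/origin metadata themselves.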
Signed-off-by: Alexander Potapenko <glider@google.com>
To: Alexander Potapenko <glider@google.com>
Cc: Jens Axboe
Cc: Andy Lutomirski
Cc: Wolfram Sang
Cc: Christoph Hellwig
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---

v4:
 - address comments by Marco Elver and Andrey Konovalov:
   - clean up headers and #defines, remove debugging code
   - simplify the KMSAN entry hooks
   - fix kmsan_check_skb()

Change-Id: I99d1f34f26bef122897cb840dac8d5b34d2b6a80
---
 arch/x86/include/asm/kmsan.h |  93 ++++++++
 mm/kmsan/kmsan_entry.c       |  38 ++++
 mm/kmsan/kmsan_hooks.c       | 416 +++++++++++++++++++++++++++++++++++
 3 files changed, 547 insertions(+)
 create mode 100644 arch/x86/include/asm/kmsan.h
 create mode 100644 mm/kmsan/kmsan_entry.c
 create mode 100644 mm/kmsan/kmsan_hooks.c

diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h
new file mode 100644
index 0000000000000..f924f29f90f97
--- /dev/null
+++ b/arch/x86/include/asm/kmsan.h
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Assembly bits to safely invoke KMSAN hooks from .S files.
+ *
+ * Copyright (C) 2017-2019 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#ifndef _ASM_X86_KMSAN_H
+#define _ASM_X86_KMSAN_H
+
+#ifdef CONFIG_KMSAN
+
+#ifdef __ASSEMBLY__
+.macro KMSAN_PUSH_REGS
+        pushq %rax
+        pushq %rcx
+        pushq %rdx
+        pushq %rdi
+        pushq %rsi
+        pushq %r8
+        pushq %r9
+        pushq %r10
+        pushq %r11
+.endm
+
+.macro KMSAN_POP_REGS
+        popq %r11
+        popq %r10
+        popq %r9
+        popq %r8
+        popq %rsi
+        popq %rdi
+        popq %rdx
+        popq %rcx
+        popq %rax
+
+.endm
+
+.macro KMSAN_CALL_HOOK fname
+        KMSAN_PUSH_REGS
+        call \fname
+        KMSAN_POP_REGS
+.endm
+
+.macro KMSAN_CONTEXT_ENTER
+        KMSAN_CALL_HOOK kmsan_context_enter
+.endm
+
+.macro KMSAN_CONTEXT_EXIT
+        KMSAN_CALL_HOOK kmsan_context_exit
+.endm
+
+#define KMSAN_INTERRUPT_ENTER KMSAN_CONTEXT_ENTER
+#define KMSAN_INTERRUPT_EXIT KMSAN_CONTEXT_EXIT
+
+#define KMSAN_SOFTIRQ_ENTER KMSAN_CONTEXT_ENTER
+#define KMSAN_SOFTIRQ_EXIT KMSAN_CONTEXT_EXIT
+
+#define KMSAN_NMI_ENTER KMSAN_CONTEXT_ENTER
+#define KMSAN_NMI_EXIT KMSAN_CONTEXT_EXIT
+
+#define KMSAN_IST_ENTER(shift_ist) KMSAN_CONTEXT_ENTER
+#define KMSAN_IST_EXIT(shift_ist) KMSAN_CONTEXT_EXIT
+
+.macro KMSAN_UNPOISON_PT_REGS
+        KMSAN_CALL_HOOK kmsan_unpoison_pt_regs
+.endm
+
+#else
+#error this header must be included into an assembly file
+#endif
+
+#else /* ifdef CONFIG_KMSAN */
+
+#define KMSAN_INTERRUPT_ENTER
+#define KMSAN_INTERRUPT_EXIT
+#define KMSAN_SOFTIRQ_ENTER
+#define KMSAN_SOFTIRQ_EXIT
+#define KMSAN_NMI_ENTER
+#define KMSAN_NMI_EXIT
+#define KMSAN_SYSCALL_ENTER
+#define KMSAN_SYSCALL_EXIT
+#define KMSAN_IST_ENTER(shift_ist)
+#define KMSAN_IST_EXIT(shift_ist)
+#define KMSAN_UNPOISON_PT_REGS
+
+#endif /* ifdef CONFIG_KMSAN */
+#endif /* ifndef _ASM_X86_KMSAN_H */
diff --git a/mm/kmsan/kmsan_entry.c b/mm/kmsan/kmsan_entry.c
new file mode 100644
index 0000000000000..7af31642cd451
--- /dev/null
+++ b/mm/kmsan/kmsan_entry.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN hooks for entry_64.S
+ *
+ * Copyright (C) 2018-2019 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include "kmsan.h"
+
+void kmsan_context_enter(void)
+{
+        int level = this_cpu_inc_return(kmsan_context_level);
+
+        BUG_ON(level >= KMSAN_NESTED_CONTEXT_MAX);
+}
+EXPORT_SYMBOL(kmsan_context_enter);
+
+void kmsan_context_exit(void)
+{
+        int level = this_cpu_dec_return(kmsan_context_level);
+
+        BUG_ON(level < 0);
+}
+EXPORT_SYMBOL(kmsan_context_exit);
+
+void kmsan_unpoison_pt_regs(struct pt_regs *regs)
+{
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+        kmsan_internal_unpoison_shadow(regs, sizeof(*regs), /*checked*/true);
+}
+EXPORT_SYMBOL(kmsan_unpoison_pt_regs);
diff --git a/mm/kmsan/kmsan_hooks.c b/mm/kmsan/kmsan_hooks.c
new file mode 100644
index 0000000000000..8ddfd91b1d115
--- /dev/null
+++ b/mm/kmsan/kmsan_hooks.c
@@ -0,0 +1,416 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN hooks for kernel subsystems.
+ *
+ * These functions handle creation of KMSAN metadata for memory allocations.
+ *
+ * Copyright (C) 2018-2019 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "../slab.h"
+#include "kmsan.h"
+
+/*
+ * The functions may call back to instrumented code, which, in turn, may call
+ * these hooks again. To avoid re-entrancy, we use __GFP_NO_KMSAN_SHADOW.
+ * Instrumented functions shouldn't be called under
+ * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to
+ * skipping effects of functions like memset() inside instrumented code.
+ */
+
+/* Called from kernel/kthread.c, kernel/fork.c */
+void kmsan_task_create(struct task_struct *task)
+{
+        unsigned long irq_flags;
+
+        if (!task)
+                return;
+        irq_flags = kmsan_enter_runtime();
+        kmsan_internal_task_create(task);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_task_create);
+
+/* Called from kernel/exit.c */
+void kmsan_task_exit(struct task_struct *task)
+{
+        struct kmsan_task_state *state = &task->kmsan;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+
+        state->allow_reporting = false;
+}
+EXPORT_SYMBOL(kmsan_task_exit);
+
+/* Called from mm/slub.c */
+void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
+{
+        unsigned long irq_flags;
+
+        if (unlikely(object == NULL))
+                return;
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+        /*
+         * There's a ctor or this is an RCU cache - do nothing. The memory
+         * status hasn't changed since last use.
+         */
+        if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU))
+                return;
+
+        irq_flags = kmsan_enter_runtime();
+        if (flags & __GFP_ZERO)
+                kmsan_internal_unpoison_shadow(object, s->object_size,
+                                               KMSAN_POISON_CHECK);
+        else
+                kmsan_internal_poison_shadow(object, s->object_size, flags,
+                                             KMSAN_POISON_CHECK);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_slab_alloc);
+
+/* Called from mm/slub.c */
+void kmsan_slab_free(struct kmem_cache *s, void *object)
+{
+        unsigned long irq_flags;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+
+        /* RCU slabs could be legally used after free within the RCU period */
+        if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
+                return;
+        /*
+         * If there's a constructor, freed memory must remain in the same state
+         * till the next allocation. We cannot save its state to detect
+         * use-after-free bugs, instead we just keep it unpoisoned.
+         */
+        if (s->ctor)
+                return;
+        irq_flags = kmsan_enter_runtime();
+        kmsan_internal_poison_shadow(object, s->object_size,
+                                     GFP_KERNEL,
+                                     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_slab_free);
+
+/* Called from mm/slub.c */
+void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+{
+        unsigned long irq_flags;
+
+        if (unlikely(ptr == NULL))
+                return;
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+        irq_flags = kmsan_enter_runtime();
+        if (flags & __GFP_ZERO)
+                kmsan_internal_unpoison_shadow((void *)ptr, size,
+                                               /*checked*/true);
+        else
+                kmsan_internal_poison_shadow((void *)ptr, size, flags,
+                                             KMSAN_POISON_CHECK);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_kmalloc_large);
+
+/* Called from mm/slub.c */
+void kmsan_kfree_large(const void *ptr)
+{
+        struct page *page;
+        unsigned long irq_flags;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+        irq_flags = kmsan_enter_runtime();
+        page = virt_to_head_page((void *)ptr);
+        BUG_ON(ptr != page_address(page));
+        kmsan_internal_poison_shadow(
+                (void *)ptr, PAGE_SIZE << compound_order(page), GFP_KERNEL,
+                KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_kfree_large);
+
+static unsigned long vmalloc_shadow(unsigned long addr)
+{
+        return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_SHADOW);
+}
+
+static unsigned long vmalloc_origin(unsigned long addr)
+{
+        return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_ORIGIN);
+}
+
+/* Called from mm/vmalloc.c */
+void kmsan_vunmap_page_range(unsigned long start, unsigned long end)
+{
+        __vunmap_page_range(vmalloc_shadow(start), vmalloc_shadow(end));
+        __vunmap_page_range(vmalloc_origin(start), vmalloc_origin(end));
+}
+EXPORT_SYMBOL(kmsan_vunmap_page_range);
+
+/* Called from lib/ioremap.c */
+/*
+ * This function creates new shadow/origin pages for the physical pages mapped
+ * into the virtual memory. If those physical pages already had shadow/origin,
+ * those are ignored.
+ */
+void kmsan_ioremap_page_range(unsigned long start, unsigned long end,
+                              phys_addr_t phys_addr, pgprot_t prot)
+{
+        unsigned long irq_flags;
+        struct page *shadow, *origin;
+        int i, nr;
+        unsigned long off = 0;
+        gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO | __GFP_NO_KMSAN_SHADOW;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+
+        nr = (end - start) / PAGE_SIZE;
+        irq_flags = kmsan_enter_runtime();
+        for (i = 0; i < nr; i++, off += PAGE_SIZE) {
+                shadow = alloc_pages(gfp_mask, 1);
+                origin = alloc_pages(gfp_mask, 1);
+                __vmap_page_range_noflush(vmalloc_shadow(start + off),
+                                          vmalloc_shadow(start + off + PAGE_SIZE),
+                                          prot, &shadow);
+                __vmap_page_range_noflush(vmalloc_origin(start + off),
+                                          vmalloc_origin(start + off + PAGE_SIZE),
+                                          prot, &origin);
+        }
+        flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end));
+        flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end));
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_ioremap_page_range);
+
+void kmsan_iounmap_page_range(unsigned long start, unsigned long end)
+{
+        int i, nr;
+        struct page *shadow, *origin;
+        unsigned long v_shadow, v_origin;
+        unsigned long irq_flags;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+
+        nr = (end - start) / PAGE_SIZE;
+        irq_flags = kmsan_enter_runtime();
+        v_shadow = (unsigned long)vmalloc_shadow(start);
+        v_origin = (unsigned long)vmalloc_origin(start);
+        for (i = 0; i < nr; i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) {
+                shadow = vmalloc_to_page_or_null((void *)v_shadow);
+                origin = vmalloc_to_page_or_null((void *)v_origin);
+                __vunmap_page_range(v_shadow, v_shadow + PAGE_SIZE);
+                __vunmap_page_range(v_origin, v_origin + PAGE_SIZE);
+                if (shadow)
+                        __free_pages(shadow, 1);
+                if (origin)
+                        __free_pages(origin, 1);
+        }
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_iounmap_page_range);
+
+/* Called from include/linux/uaccess.h */
+void kmsan_copy_to_user(const void *to, const void *from,
+                        size_t to_copy, size_t left)
+{
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+        /*
+         * At this point we've copied the memory already. It's hard to check it
+         * before copying, as the size of the actually copied buffer is unknown.
+         */
+
+        /* copy_to_user() may copy zero bytes. No need to check. */
+        if (!to_copy)
+                return;
+        /* Or maybe copy_to_user() failed to copy anything. */
+        if (to_copy == left)
+                return;
+        if ((u64)to < TASK_SIZE) {
+                /* This is a user memory access, check it. */
+                kmsan_internal_check_memory((void *)from, to_copy - left, to,
+                                            REASON_COPY_TO_USER);
+                return;
+        }
+        /* Otherwise this is a kernel memory access. This happens when a compat
+         * syscall passes an argument allocated on the kernel stack to a real
+         * syscall.
+         * Don't check anything, just copy the shadow of the copied bytes.
+         */
+        kmsan_memcpy_metadata((void *)to, (void *)from, to_copy - left);
+}
+EXPORT_SYMBOL(kmsan_copy_to_user);
+
+void kmsan_gup_pgd_range(struct page **pages, int nr)
+{
+        int i;
+        void *page_addr;
+
+        /*
+         * gup_pgd_range() has just created a number of new pages that KMSAN
+         * treats as uninitialized. In the case they belong to the userspace
+         * memory, unpoison the corresponding kernel pages.
+         */
+        for (i = 0; i < nr; i++) {
+                page_addr = page_address(pages[i]);
+                if (((u64)page_addr < TASK_SIZE) &&
+                    ((u64)page_addr + PAGE_SIZE < TASK_SIZE))
+                        kmsan_unpoison_shadow(page_addr, PAGE_SIZE);
+        }
+
+}
+EXPORT_SYMBOL(kmsan_gup_pgd_range);
+
+/* Helper function to check an SKB. */
+void kmsan_check_skb(const struct sk_buff *skb)
+{
+        struct sk_buff *frag_iter;
+        int i;
+        skb_frag_t *f;
+        u32 p_off, p_len, copied;
+        struct page *p;
+        u8 *vaddr;
+
+        if (!skb || !skb->len)
+                return;
+
+        kmsan_internal_check_memory(skb->data, skb_headlen(skb), 0, REASON_ANY);
+        if (skb_is_nonlinear(skb)) {
+                for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+                        f = &skb_shinfo(skb)->frags[i];
+
+                        skb_frag_foreach_page(f, skb_frag_off(f),
+                                              skb_frag_size(f),
+                                              p, p_off, p_len, copied) {
+
+                                vaddr = kmap_atomic(p);
+                                kmsan_internal_check_memory(vaddr + p_off,
+                                                p_len, /*user_addr*/ 0,
+                                                REASON_ANY);
+                                kunmap_atomic(vaddr);
+                        }
+                }
+        }
+        skb_walk_frags(skb, frag_iter)
+                kmsan_check_skb(frag_iter);
+}
+EXPORT_SYMBOL(kmsan_check_skb);
+
+/* Helper function to check an URB. */
+void kmsan_handle_urb(const struct urb *urb, bool is_out)
+{
+        if (!urb)
+                return;
+        if (is_out)
+                kmsan_internal_check_memory(urb->transfer_buffer,
+                                            urb->transfer_buffer_length,
+                                            /*user_addr*/ 0, REASON_SUBMIT_URB);
+        else
+                kmsan_internal_unpoison_shadow(urb->transfer_buffer,
+                                               urb->transfer_buffer_length,
+                                               /*checked*/false);
+}
+EXPORT_SYMBOL(kmsan_handle_urb);
+
+static void kmsan_handle_dma_page(const void *addr, size_t size,
+                                  enum dma_data_direction dir)
+{
+        switch (dir) {
+        case DMA_BIDIRECTIONAL:
+                kmsan_internal_check_memory((void *)addr, size, /*user_addr*/0,
+                                            REASON_ANY);
+                kmsan_internal_unpoison_shadow((void *)addr, size,
+                                               /*checked*/false);
+                break;
+        case DMA_TO_DEVICE:
+                kmsan_internal_check_memory((void *)addr, size, /*user_addr*/0,
+                                            REASON_ANY);
+                break;
+        case DMA_FROM_DEVICE:
+                kmsan_internal_unpoison_shadow((void *)addr, size,
+                                               /*checked*/false);
+                break;
+        case DMA_NONE:
+                break;
+        }
+}
+
+/* Helper function to handle DMA data transfers. */
+void kmsan_handle_dma(const void *addr, size_t size,
+                      enum dma_data_direction dir)
+{
+        u64 page_offset, to_go, uaddr = (u64)addr;
+
+        /*
+         * The kernel may occasionally give us adjacent DMA pages not belonging
+         * to the same allocation. Process them separately to avoid triggering
+         * internal KMSAN checks.
+         */
+        while (size > 0) {
+                page_offset = uaddr % PAGE_SIZE;
+                to_go = min(PAGE_SIZE - page_offset, (u64)size);
+                kmsan_handle_dma_page((void *)uaddr, to_go, dir);
+                uaddr += to_go;
+                size -= to_go;
+        }
+}
+EXPORT_SYMBOL(kmsan_handle_dma);
+
+/* Functions from kmsan-checks.h follow. */
+void kmsan_poison_shadow(const void *address, size_t size, gfp_t flags)
+{
+        unsigned long irq_flags;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+        irq_flags = kmsan_enter_runtime();
+        /* The users may want to poison/unpoison random memory. */
+        kmsan_internal_poison_shadow((void *)address, size, flags,
+                                     KMSAN_POISON_NOCHECK);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_poison_shadow);
+
+void kmsan_unpoison_shadow(const void *address, size_t size)
+{
+        unsigned long irq_flags;
+
+        if (!kmsan_ready || kmsan_in_runtime())
+                return;
+
+        irq_flags = kmsan_enter_runtime();
+        /* The users may want to poison/unpoison random memory. */
+        kmsan_internal_unpoison_shadow((void *)address, size,
+                                       KMSAN_POISON_NOCHECK);
+        kmsan_leave_runtime(irq_flags);
+}
+EXPORT_SYMBOL(kmsan_unpoison_shadow);
+
+void kmsan_check_memory(const void *addr, size_t size)
+{
+        return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0,
+                                           REASON_ANY);
+}
+EXPORT_SYMBOL(kmsan_check_memory);
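
A short usage note on the kmsan-checks.h helpers implemented at the end of
this file: they are intended for code that initializes or consumes memory in
ways the compiler instrumentation cannot see. A minimal sketch (not part of
the patch), where my_copy_from_device() is an invented name:

#include <linux/kmsan-checks.h>

static void my_copy_from_device(void *buf, size_t size)
{
        /*
         * Assume a device or hand-written assembly initialized @buf behind
         * KMSAN's back; tell KMSAN the contents are now valid so later reads
         * are not reported as uses of uninitialized memory.
         */
        kmsan_unpoison_shadow(buf, size);

        /* Conversely, an immediate check reports any remaining poison. */
        kmsan_check_memory(buf, size);
}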