From patchwork Wed Mar 25 16:12:44 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458297
Date: Wed, 25 Mar 2020 17:12:44 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-34-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 33/38] kmsan: add iomap support
From: glider@google.com
To: Christoph Hellwig, "Darrick J. Wong", Vegard Nossum, Dmitry Vyukov,
 Marco Elver, Andrey Konovalov, linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
 akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org,
 ard.biesheuvel@linaro.org, arnd@arndb.de, hch@lst.de, davem@davemloft.net,
 dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com,
 ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com,
 herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu,
 jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com,
 mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com,
 willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu,
 pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com,
 sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com,
 tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de

Functions from lib/iomap.c interact with hardware, so KMSAN must ensure that:
 - every read function returns an initialized value;
 - every write function checks values before sending them to hardware.
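As an illustration of these two rules (a minimal sketch, not part of the
patch: example_write()/example_read() are hypothetical callers, while
kmsan_check_memory() and kmsan_unpoison_shadow() are the KMSAN hooks from
<linux/kmsan-checks.h> that the hunks below rely on):

    #include <linux/io.h>
    #include <linux/kmsan-checks.h>

    /* Write path: check the value's shadow before it reaches the device. */
    static void example_write(u32 val, void __iomem *addr)
    {
            /* KMSAN reports here if any bit of val is uninitialized. */
            kmsan_check_memory(&val, sizeof(val));
            writel(val, addr);
    }

    /* Read path: data coming from the device must count as initialized. */
    static void example_read(void __iomem *addr, void *dst, size_t bytes)
    {
            memcpy_fromio(dst, addr, bytes);
            /* Clear the shadow so the caller is not falsely reported. */
            kmsan_unpoison_shadow(dst, bytes);
    }

The scalar ioreadN() helpers below take a different route: they are annotated
__no_sanitize_memory, so KMSAN does not flag the values they return from
device memory as uninitialized.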
Signed-off-by: Alexander Potapenko
Cc: Christoph Hellwig
Cc: Darrick J. Wong
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org
Reviewed-by: Andrey Konovalov

---
v4:
 - adjust sizes of checked memory buffers as requested by Marco Elver

Change-Id: Iacd96265e56398d8c111637ddad3cad727e48c8d
---
 lib/iomap.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/lib/iomap.c b/lib/iomap.c
index e909ab71e995d..3582e8d1ca34e 100644
--- a/lib/iomap.c
+++ b/lib/iomap.c
@@ -6,6 +6,7 @@
  */
 #include <linux/pci.h>
 #include <linux/io.h>
+#include <linux/kmsan-checks.h>

 #include <linux/export.h>
@@ -70,26 +71,31 @@ static void bad_io_access(unsigned long port, const char *access)
 #define mmio_read64be(addr) swab64(readq(addr))
 #endif

+__no_sanitize_memory
 unsigned int ioread8(void __iomem *addr)
 {
 	IO_COND(addr, return inb(port), return readb(addr));
 	return 0xff;
 }
+__no_sanitize_memory
 unsigned int ioread16(void __iomem *addr)
 {
 	IO_COND(addr, return inw(port), return readw(addr));
 	return 0xffff;
 }
+__no_sanitize_memory
 unsigned int ioread16be(void __iomem *addr)
 {
 	IO_COND(addr, return pio_read16be(port), return mmio_read16be(addr));
 	return 0xffff;
 }
+__no_sanitize_memory
 unsigned int ioread32(void __iomem *addr)
 {
 	IO_COND(addr, return inl(port), return readl(addr));
 	return 0xffffffff;
 }
+__no_sanitize_memory
 unsigned int ioread32be(void __iomem *addr)
 {
 	IO_COND(addr, return pio_read32be(port), return mmio_read32be(addr));
@@ -142,18 +148,21 @@ static u64 pio_read64be_hi_lo(unsigned long port)
 	return lo | (hi << 32);
 }

+__no_sanitize_memory
 u64 ioread64_lo_hi(void __iomem *addr)
 {
 	IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr));
 	return 0xffffffffffffffffULL;
 }

+__no_sanitize_memory
 u64 ioread64_hi_lo(void __iomem *addr)
 {
 	IO_COND(addr, return pio_read64_hi_lo(port), return readq(addr));
 	return 0xffffffffffffffffULL;
 }

+__no_sanitize_memory
 u64 ioread64be_lo_hi(void __iomem *addr)
 {
 	IO_COND(addr, return pio_read64be_lo_hi(port),
@@ -161,6 +170,7 @@ u64 ioread64be_lo_hi(void __iomem *addr)
 	return 0xffffffffffffffffULL;
 }

+__no_sanitize_memory
 u64 ioread64be_hi_lo(void __iomem *addr)
 {
 	IO_COND(addr, return pio_read64be_hi_lo(port),
@@ -188,22 +198,32 @@ EXPORT_SYMBOL(ioread64be_hi_lo);

 void iowrite8(u8 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, outb(val,port), writeb(val, addr));
 }
 void iowrite16(u16 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, outw(val,port), writew(val, addr));
 }
 void iowrite16be(u16 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, pio_write16be(val,port), mmio_write16be(val, addr));
 }
 void iowrite32(u32 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, outl(val,port), writel(val, addr));
 }
 void iowrite32be(u32 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, pio_write32be(val,port), mmio_write32be(val, addr));
 }
 EXPORT_SYMBOL(iowrite8);
@@ -239,24 +259,32 @@ static void pio_write64be_hi_lo(u64 val, unsigned long port)

 void iowrite64_lo_hi(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, pio_write64_lo_hi(val, port),
 		writeq(val, addr));
 }

 void iowrite64_hi_lo(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, pio_write64_hi_lo(val, port),
 		writeq(val, addr));
 }

 void iowrite64be_lo_hi(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, pio_write64be_lo_hi(val, port),
 		mmio_write64be(val, addr));
 }

 void iowrite64be_hi_lo(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
 	IO_COND(addr, pio_write64be_hi_lo(val, port),
 		mmio_write64be(val, addr));
 }
@@ -328,14 +356,20 @@ static inline void mmio_outsl(void __iomem *addr, const u32 *src, int count)
 void ioread8_rep(void __iomem *addr, void *dst, unsigned long count)
 {
 	IO_COND(addr, insb(port,dst,count), mmio_insb(addr, dst, count));
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_shadow(dst, count);
 }
 void ioread16_rep(void __iomem *addr, void *dst, unsigned long count)
 {
 	IO_COND(addr, insw(port,dst,count), mmio_insw(addr, dst, count));
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_shadow(dst, count * 2);
 }
 void ioread32_rep(void __iomem *addr, void *dst, unsigned long count)
 {
 	IO_COND(addr, insl(port,dst,count), mmio_insl(addr, dst, count));
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_shadow(dst, count * 4);
 }
 EXPORT_SYMBOL(ioread8_rep);
 EXPORT_SYMBOL(ioread16_rep);
@@ -343,14 +377,20 @@ EXPORT_SYMBOL(ioread32_rep);

 void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(src, count);
 	IO_COND(addr, outsb(port, src, count), mmio_outsb(addr, src, count));
 }
 void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(src, count * 2);
 	IO_COND(addr, outsw(port, src, count), mmio_outsw(addr, src, count));
 }
 void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(src, count * 4);
 	IO_COND(addr, outsl(port, src,count), mmio_outsl(addr, src, count));
 }
 EXPORT_SYMBOL(iowrite8_rep);
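
Note (illustration only, not part of the patch): ioreadN_rep()/iowriteN_rep()
above move `count' elements of N bits each, so the KMSAN hooks cover
count * element-size bytes; that is the buffer-size adjustment mentioned in
the v4 changelog. A hypothetical caller, where `regs' is assumed to be an
ioremap()ed register window:

    u16 buf[16];                               /* 16 elements, 32 bytes  */
    ioread16_rep(regs, buf, ARRAY_SIZE(buf));  /* unpoisons 16 * 2 bytes */
    iowrite16_rep(regs, buf, ARRAY_SIZE(buf)); /* checks 16 * 2 bytes    */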