From patchwork Thu Apr 13 13:12:21 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13210282
Date: Thu, 13 Apr 2023 15:12:21 +0200
In-Reply-To: <20230413131223.4135168-1-glider@google.com>
Mime-Version: 1.0
References: <20230413131223.4135168-1-glider@google.com>
X-Mailer: git-send-email 2.40.0.577.gac1e443424-goog
Message-ID: <20230413131223.4135168-2-glider@google.com>
Subject: [PATCH v2 2/4] mm: kmsan: handle alloc failures in kmsan_ioremap_page_range()
From: Alexander Potapenko
To: glider@google.com
Cc: urezki@gmail.com, hch@infradead.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, akpm@linux-foundation.org, elver@google.com,
 dvyukov@google.com, kasan-dev@googlegroups.com, Dipanjan Das
Similarly to kmsan_vmap_pages_range_noflush(), kmsan_ioremap_page_range()
must also properly handle allocation/mapping failures. In the case of such
failures, it must clean up the already created metadata mappings and return
an error code, so that the error can be propagated to ioremap_page_range().
Without doing so, KMSAN may silently fail to bring the metadata for the page
range into a consistent state, which will result in user-visible crashes when
trying to access that metadata.

Reported-by: Dipanjan Das
Link: https://lore.kernel.org/linux-mm/CANX2M5ZRrRA64k0hOif02TjmY9kbbO2aCBPyq79es34RXZ=cAw@mail.gmail.com/
Fixes: b073d7f8aee4 ("mm: kmsan: maintain KMSAN metadata for page operations")
Signed-off-by: Alexander Potapenko
Reviewed-by: Marco Elver

---
v2:
 -- updated patch description as requested by Andrew Morton
 -- check the return value of __vmap_pages_range_noflush(), as suggested
    by Dipanjan Das
 -- return 0 from the inline version of kmsan_ioremap_page_range()
    (spotted by kernel test robot)
---
 include/linux/kmsan.h | 19 ++++++++-------
 mm/kmsan/hooks.c      | 55 ++++++++++++++++++++++++++++++++++++-------
 mm/vmalloc.c          |  4 ++--
 3 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index c7ff3aefc5a13..30b17647ce3c7 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -160,11 +160,12 @@ void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end);
  * @page_shift: page_shift argument passed to vmap_range_noflush().
  *
  * KMSAN creates new metadata pages for the physical pages mapped into the
- * virtual memory.
+ * virtual memory. Returns 0 on success, callers must check for non-zero return
+ * value.
  */
-void kmsan_ioremap_page_range(unsigned long addr, unsigned long end,
-			      phys_addr_t phys_addr, pgprot_t prot,
-			      unsigned int page_shift);
+int kmsan_ioremap_page_range(unsigned long addr, unsigned long end,
+			     phys_addr_t phys_addr, pgprot_t prot,
+			     unsigned int page_shift);
 
 /**
  * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call.
@@ -296,12 +297,12 @@ static inline void kmsan_vunmap_range_noflush(unsigned long start,
 {
 }
 
-static inline void kmsan_ioremap_page_range(unsigned long start,
-					    unsigned long end,
-					    phys_addr_t phys_addr,
-					    pgprot_t prot,
-					    unsigned int page_shift)
+static inline int kmsan_ioremap_page_range(unsigned long start,
+					   unsigned long end,
+					   phys_addr_t phys_addr, pgprot_t prot,
+					   unsigned int page_shift)
 {
+	return 0;
 }
 
 static inline void kmsan_iounmap_page_range(unsigned long start,
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 3807502766a3e..ec0da72e65aa0 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -148,35 +148,74 @@ void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end)
  * into the virtual memory. If those physical pages already had shadow/origin,
  * those are ignored.
  */
-void kmsan_ioremap_page_range(unsigned long start, unsigned long end,
-			      phys_addr_t phys_addr, pgprot_t prot,
-			      unsigned int page_shift)
+int kmsan_ioremap_page_range(unsigned long start, unsigned long end,
+			     phys_addr_t phys_addr, pgprot_t prot,
+			     unsigned int page_shift)
 {
 	gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO;
 	struct page *shadow, *origin;
 	unsigned long off = 0;
-	int nr;
+	int nr, err = 0, clean = 0, mapped;
 
 	if (!kmsan_enabled || kmsan_in_runtime())
-		return;
+		return 0;
 
 	nr = (end - start) / PAGE_SIZE;
 	kmsan_enter_runtime();
-	for (int i = 0; i < nr; i++, off += PAGE_SIZE) {
+	for (int i = 0; i < nr; i++, off += PAGE_SIZE, clean = i) {
 		shadow = alloc_pages(gfp_mask, 1);
 		origin = alloc_pages(gfp_mask, 1);
-		__vmap_pages_range_noflush(
+		if (!shadow || !origin) {
+			err = -ENOMEM;
+			goto ret;
+		}
+		mapped = __vmap_pages_range_noflush(
 			vmalloc_shadow(start + off),
 			vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow,
 			PAGE_SHIFT);
-		__vmap_pages_range_noflush(
+		if (mapped) {
+			err = mapped;
+			goto ret;
+		}
+		shadow = NULL;
+		mapped = __vmap_pages_range_noflush(
 			vmalloc_origin(start + off),
 			vmalloc_origin(start + off + PAGE_SIZE), prot, &origin,
 			PAGE_SHIFT);
+		if (mapped) {
+			__vunmap_range_noflush(
+				vmalloc_shadow(start + off),
+				vmalloc_shadow(start + off + PAGE_SIZE));
+			err = mapped;
+			goto ret;
+		}
+		origin = NULL;
 	}
+	/* Page mapping loop finished normally, nothing to clean up. */
+	clean = 0;
+
+ret:
+	if (clean > 0) {
+		/*
+		 * Something went wrong. Clean up shadow/origin pages allocated
+		 * on the last loop iteration, then delete mappings created
+		 * during the previous iterations.
+		 */
+		if (shadow)
+			__free_pages(shadow, 1);
+		if (origin)
+			__free_pages(origin, 1);
+		__vunmap_range_noflush(
+			vmalloc_shadow(start),
+			vmalloc_shadow(start + clean * PAGE_SIZE));
+		__vunmap_range_noflush(
+			vmalloc_origin(start),
+			vmalloc_origin(start + clean * PAGE_SIZE));
+	}
 	flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end));
 	flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end));
 	kmsan_leave_runtime();
+	return err;
 }
 
 void kmsan_iounmap_page_range(unsigned long start, unsigned long end)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 1355d95cce1ca..31ff782d368b0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -313,8 +313,8 @@ int ioremap_page_range(unsigned long addr, unsigned long end,
 				 ioremap_max_page_shift);
 	flush_cache_vmap(addr, end);
 	if (!err)
-		kmsan_ioremap_page_range(addr, end, phys_addr, prot,
-					 ioremap_max_page_shift);
+		err = kmsan_ioremap_page_range(addr, end, phys_addr, prot,
+					       ioremap_max_page_shift);
 	return err;
 }
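
For readers who want to see the error-handling pattern in isolation, here is a
minimal standalone C sketch (plain userspace code, not the kernel
implementation; map_one(), unmap_one() and map_all() are hypothetical
stand-ins for the shadow/origin mapping helpers). It mirrors how the patch
uses a "clean" counter: the counter records how many loop iterations fully
succeeded, so on failure only those iterations are rolled back before the
error is returned to the caller, which then propagates it just like
ioremap_page_range() does after this change.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for one shadow/origin mapping step; fails on page 2. */
static int map_one(int i)
{
	return (i == 2) ? -ENOMEM : 0;
}

/* Hypothetical stand-in for tearing one mapping down again. */
static void unmap_one(int i)
{
	printf("rolled back page %d\n", i);
}

/*
 * Mirrors the structure of kmsan_ioremap_page_range() after the patch:
 * "clean" counts fully completed iterations, so the cleanup path only
 * unwinds work that actually succeeded before the failure.
 */
static int map_all(int nr)
{
	int err = 0, clean = 0;

	for (int i = 0; i < nr; i++, clean = i) {
		err = map_one(i);
		if (err)
			goto ret;
	}
	/* Loop finished normally, nothing to clean up. */
	clean = 0;

ret:
	for (int i = 0; i < clean; i++)
		unmap_one(i);
	return err;
}

int main(void)
{
	int err = map_all(5);

	/* The caller checks and propagates the error instead of ignoring it. */
	if (err)
		fprintf(stderr, "map_all() failed: %d\n", err);
	return err ? EXIT_FAILURE : EXIT_SUCCESS;
}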