From patchwork Tue Dec 14 16:20:11 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Potapenko <glider@google.com>
X-Patchwork-Id: 12676331
Date: Tue, 14 Dec 2021 17:20:11 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-5-glider@google.com>
References: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 04/43] instrumented.h: allow instrumenting both sides of
 copy_from_user()
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
 Ard Biesheuvel, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
 Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
 Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
 Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Matthew Wilcox,
 "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
 Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
 Vlastimil Babka, linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-kernel@vger.kernel.org

Introduce instrument_copy_from_user_before() and
instrument_copy_from_user_after() hooks to be invoked before and after
the call to copy_from_user().

KASAN and KCSAN will only use instrument_copy_from_user_before(), but
for KMSAN we'll need to insert code after copy_from_user().

Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I855034578f0b0f126734cbd734fb4ae1d3a6af99
---
 include/linux/instrumented.h | 21 +++++++++++++++++++--
 include/linux/uaccess.h      | 19 ++++++++++++++-----
 lib/iov_iter.c               |  9 ++++++---
 lib/usercopy.c               |  3 ++-
 4 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 42faebbaa202a..ee8f7d17d34f5 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -120,7 +120,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /**
- * instrument_copy_from_user - instrument writes of copy_from_user
+ * instrument_copy_from_user_before - add instrumentation before copy_from_user
  *
  * Instrument writes to kernel memory, that are due to copy_from_user (and
  * variants). The instrumentation should be inserted before the accesses.
@@ -130,10 +130,27 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
  * @n number of bytes to copy
  */
 static __always_inline void
-instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+instrument_copy_from_user_before(const void *to, const void __user *from, unsigned long n)
 {
 	kasan_check_write(to, n);
 	kcsan_check_write(to, n);
 }
 
+/**
+ * instrument_copy_from_user_after - add instrumentation after copy_from_user
+ *
+ * Instrument writes to kernel memory, that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted after the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ * @left number of bytes not copied (as returned by copy_from_user)
+ */
+static __always_inline void
+instrument_copy_from_user_after(const void *to, const void __user *from,
+				unsigned long n, unsigned long left)
+{
+}
+
 #endif /* _LINUX_INSTRUMENTED_H */
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d4..8dadd8642afbb 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -98,20 +98,28 @@ static inline void force_uaccess_end(mm_segment_t oldfs)
 static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
-	instrument_copy_from_user(to, from, n);
+	unsigned long res;
+
+	instrument_copy_from_user_before(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
+	instrument_copy_from_user_before(to, from, n);
 	if (should_fail_usercopy())
 		return n;
-	instrument_copy_from_user(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }
 
 /**
@@ -155,8 +163,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 66a740e6e153c..28c033cb9e803 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -161,13 +161,16 @@ static int copyout(void __user *to, const void *from, size_t n)
 
 static int copyin(void *to, const void __user *from, size_t n)
 {
+	size_t res = n;
+
 	if (should_fail_usercopy())
 		return n;
 	if (access_ok(from, n)) {
-		instrument_copy_from_user(to, from, n);
-		n = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
+		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
-	return n;
+	return res;
 }
 
 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516e..1505a52f23a01 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,8 +12,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
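
To make the calling convention easier to follow, here is a minimal
userspace sketch of the before/after hook pattern used above. It is
purely illustrative and not part of the patch: demo_copy_from_user(),
the printf-based hook bodies, and the simulated partial copy are all
hypothetical stand-ins. The point it demonstrates is that the _after()
hook receives the number of bytes left uncopied (the return value of
the raw copy), so a tool such as KMSAN can treat only the copied prefix
of the destination as initialized.

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the "before" hook (sketch only). */
static void instrument_copy_from_user_before(const void *to, const void *from,
					     unsigned long n)
{
	/* A KASAN/KCSAN-style check of the destination would go here. */
	printf("before: about to write %lu bytes\n", n);
	(void)to;
	(void)from;
}

/* Hypothetical stand-in for the "after" hook (sketch only). */
static void instrument_copy_from_user_after(const void *to, const void *from,
					    unsigned long n, unsigned long left)
{
	/* A KMSAN-style tool would mark only the copied bytes as initialized. */
	printf("after: %lu of %lu bytes actually copied\n", n - left, n);
	(void)to;
	(void)from;
}

/* Toy copy routine that "faults" halfway, like a failing raw_copy_from_user(). */
static unsigned long demo_copy_from_user(void *to, const void *from, unsigned long n)
{
	unsigned long copied = n / 2;

	memcpy(to, from, copied);
	return n - copied;	/* number of bytes NOT copied */
}

int main(void)
{
	char src[8] = "usrdata";
	char dst[8];
	unsigned long n = sizeof(dst);
	unsigned long left;

	instrument_copy_from_user_before(dst, src, n);
	left = demo_copy_from_user(dst, src, n);
	instrument_copy_from_user_after(dst, src, n, left);
	return 0;
}

This mirrors how the patch wires the hooks around each
raw_copy_from_user() call site: the after-hook is always passed the raw
copy's return value, i.e. the count of bytes that were not written.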