From patchwork Mon Nov 30 23:35:04 2020
Date: Mon, 30 Nov 2020 15:35:04 -0800
Message-Id: <20201130233504.3725241-1-axelrasmussen@google.com>
Subject: [PATCH] mm: mmap_lock: fix use-after-free race and css ref leak in tracepoints
From: Axel Rasmussen <axelrasmussen@google.com>
To: Andrew Morton, Chinwen Chang, Daniel Jordan, David Rientjes,
 Davidlohr Bueso, Ingo Molnar, Jann Horn, Laurent Dufour,
 Michel Lespinasse, Stephen Rothwell, Steven Rostedt, Vlastimil Babka
Cc: Yafang Shao, davem@davemloft.net, dsahern@kernel.org,
 gregkh@linuxfoundation.org, kuba@kernel.org, liuhangbin@gmail.com,
 tj@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Axel Rasmussen

syzbot reported[1] a use-after-free introduced in 0f818c4bc1f3. The bug
is that an ongoing trace event might race with the tracepoint being
disabled (and therefore the _unreg() callback being called). Consider
this ordering:

T1: trace event fires, get_mm_memcg_path() is called
T1: get_memcg_path_buf() returns a buffer pointer
T2: trace_mmap_lock_unreg() is called, buffers are freed
T1: cgroup_path() is called with the now-freed buffer

The solution in this commit is to modify trace_mmap_lock_unreg() to
first stop new buffers from being handed out, and then to wait (spin)
until any existing buffer references are dropped (i.e., those trace
events complete).

I have a simple reproducer program which spins up two pools of threads,
doing the following in a tight loop (see the sketch at the end of this
description):

  Pool 1:
    mmap(NULL, 4096, PROT_READ | PROT_WRITE,
         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
    munmap()

  Pool 2:
    echo 1 > /sys/kernel/debug/tracing/events/mmap_lock/enable
    echo 0 > /sys/kernel/debug/tracing/events/mmap_lock/enable

This triggers the use-after-free very quickly. With this patch, I let
it run for an hour without any BUGs.

While fixing this, I also noticed and fixed a css ref leak. Previously
we called get_mem_cgroup_from_mm(), but we never called css_put() to
release that reference. get_mm_memcg_path() now does this properly.
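
For illustration, a minimal userspace sketch of such a reproducer
follows. This is a reconstruction from the description above, not the
exact program used; the pool sizes, function names, and (lack of) error
handling are assumptions:

	#include <fcntl.h>
	#include <pthread.h>
	#include <sys/mman.h>
	#include <unistd.h>

	/* Pool 1: hammer mmap()/munmap() so the tracepoints fire constantly. */
	static void *map_unmap_loop(void *arg)
	{
		for (;;) {
			void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

			if (p != MAP_FAILED)
				munmap(p, 4096);
		}
		return NULL;
	}

	/* Pool 2: toggle the mmap_lock trace events in a tight loop. */
	static void *toggle_loop(void *arg)
	{
		static const char ctl[] =
			"/sys/kernel/debug/tracing/events/mmap_lock/enable";

		for (;;) {
			int fd = open(ctl, O_WRONLY);

			if (fd < 0)
				continue;
			write(fd, "1", 1);
			write(fd, "0", 1);
			close(fd);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t;
		int i;

		for (i = 0; i < 8; i++)	/* pool sizes are arbitrary */
			pthread_create(&t, NULL, map_unmap_loop, NULL);
		for (i = 0; i < 2; i++)
			pthread_create(&t, NULL, toggle_loop, NULL);
		pause();
		return 0;
	}
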
[1]: https://syzkaller.appspot.com/bug?extid=19e6dd9943972fa1c58a

Fixes: 0f818c4bc1f3 ("mm: mmap_lock: add tracepoints around lock acquisition")
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 mm/mmap_lock.c | 100 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 85 insertions(+), 15 deletions(-)

diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
index 12af8f1b8a14..be38dc58278b 100644
--- a/mm/mmap_lock.c
+++ b/mm/mmap_lock.c
@@ -3,6 +3,7 @@
 #include <trace/events/mmap_lock.h>
 
 #include <linux/mm.h>
+#include <linux/atomic.h>
 #include <linux/cgroup.h>
 #include <linux/memcontrol.h>
 #include <linux/mmap_lock.h>
@@ -18,13 +19,28 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
 #ifdef CONFIG_MEMCG
 
 /*
- * Our various events all share the same buffer (because we don't want or need
- * to allocate a set of buffers *per event type*), so we need to protect against
- * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
- * been made.
+ * This is unfortunately complicated... _reg() and _unreg() may be called
+ * in parallel, separately for each of our three event types. To save memory,
+ * all of the event types share the same buffers. Furthermore, trace events
+ * might happen in parallel with _unreg(); we need to ensure we don't free the
+ * buffers before all inflights have finished. Because these events happen
+ * "frequently", we also want to prevent new inflights from starting once the
+ * _unreg() process begins. And, for performance reasons, we want to avoid any
+ * locking in the trace event path.
+ *
+ * So:
+ *
+ * - Use a spinlock to serialize _reg() and _unreg() calls.
+ * - Keep track of nested _reg() calls with a lock-protected counter.
+ * - Define a flag indicating whether or not unregistration has begun (and
+ *   therefore that there should be no new buffer uses going forward).
+ * - Keep track of inflight buffer users with a reference count.
  */
 static DEFINE_SPINLOCK(reg_lock);
-static int reg_refcount;
+static int reg_types_rc; /* Protected by reg_lock. */
+static bool unreg_started; /* Doesn't need synchronization. */
+/* atomic_t instead of refcount_t, as we want ordered inc without locks. */
+static atomic_t inflight_rc = ATOMIC_INIT(0);
 
 /*
  * Size of the buffer for memcg path names. Ignoring stack trace support,
@@ -46,9 +62,14 @@ int trace_mmap_lock_reg(void)
 	unsigned long flags;
 	int cpu;
 
+	/*
+	 * Serialize _reg() and _unreg(). Without this, e.g. _unreg() might
+	 * start cleaning up while _reg() is only partially completed.
+	 */
 	spin_lock_irqsave(&reg_lock, flags);
 
-	if (reg_refcount++)
+	/* If the refcount is going 0->1, proceed with allocating buffers. */
+	if (reg_types_rc++)
 		goto out;
 
 	for_each_possible_cpu(cpu) {
@@ -62,6 +83,11 @@ int trace_mmap_lock_reg(void)
 		per_cpu(memcg_path_buf_idx, cpu) = 0;
 	}
 
+	/* Reset unreg_started flag, allowing new trace events. */
+	WRITE_ONCE(unreg_started, false);
+	/* Add the registration +1 to the inflight refcount. */
+	atomic_inc(&inflight_rc);
+
 out:
 	spin_unlock_irqrestore(&reg_lock, flags);
 	return 0;
@@ -74,7 +100,8 @@ int trace_mmap_lock_reg(void)
 		break;
 	}
 
-	--reg_refcount;
+	/* Since we failed, undo the earlier increment. */
+	--reg_types_rc;
 
 	spin_unlock_irqrestore(&reg_lock, flags);
 	return -ENOMEM;
@@ -87,9 +114,23 @@ void trace_mmap_lock_unreg(void)
 
 	spin_lock_irqsave(&reg_lock, flags);
 
-	if (--reg_refcount)
+	/* If the refcount is going 1->0, proceed with freeing buffers. */
+	if (--reg_types_rc)
 		goto out;
 
+	/* This was the last registration; start preventing new events... */
+	WRITE_ONCE(unreg_started, true);
+	/* Remove the registration +1 from the inflight refcount. */
+	atomic_dec(&inflight_rc);
+	/*
+	 * Wait for inflight refcount to be zero (all inflights stopped). Since
+	 * we have a spinlock we can't sleep, so just spin. Because trace events
+	 * are "fast", and because we stop new inflights from starting at this
+	 * point with unreg_started, this should be a short spin.
+	 */
+	while (atomic_read(&inflight_rc))
+		barrier();
+
 	for_each_possible_cpu(cpu) {
 		kfree(per_cpu(memcg_path_buf, cpu));
 	}
@@ -102,6 +143,20 @@ static inline char *get_memcg_path_buf(void)
 {
 	int idx;
 
+	/*
+	 * If unregistration is happening, stop. Yes, this check is racy;
+	 * that's fine. It just means _unreg() might spin waiting for an extra
+	 * event or two. Use-after-free is actually prevented by the refcount.
+	 */
+	if (READ_ONCE(unreg_started))
+		return NULL;
+	/*
+	 * Take a reference, unless the registration +1 has been released
+	 * and there aren't already existing inflights (refcount is zero).
+	 */
+	if (!atomic_inc_not_zero(&inflight_rc))
+		return NULL;
+
 	idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
 	      MEMCG_PATH_BUF_SIZE;
 	return &this_cpu_read(memcg_path_buf)[idx];
@@ -110,27 +165,42 @@ static inline char *get_memcg_path_buf(void)
 static inline void put_memcg_path_buf(void)
 {
 	this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
+	/* We're done with this buffer; drop the reference. */
+	atomic_dec(&inflight_rc);
 }
 
 /*
  * Write the given mm_struct's memcg path to a percpu buffer, and return a
- * pointer to it. If the path cannot be determined, NULL is returned.
+ * pointer to it. If the path cannot be determined, or no buffer was available
+ * (because the trace event is being unregistered), NULL is returned.
  *
  * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
  * disabled by the caller before calling us, and re-enabled only after the
  * caller is done with the pointer.
+ *
+ * The caller must call put_memcg_path_buf() once the buffer is no longer
+ * needed. This must be done while preemption is still disabled.
  */
 static const char *get_mm_memcg_path(struct mm_struct *mm)
 {
+	char *buf = NULL;
 	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
 
-	if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
-		char *buf = get_memcg_path_buf();
+	if (memcg == NULL)
+		goto out;
+	if (unlikely(memcg->css.cgroup == NULL))
+		goto out_put;
 
-		cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
-		return buf;
-	}
-	return NULL;
+	buf = get_memcg_path_buf();
+	if (buf == NULL)
+		goto out_put;
+
+	cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
+
+out_put:
+	css_put(&memcg->css);
+out:
+	return buf;
 }
 
 #define TRACE_MMAP_LOCK_EVENT(type, mm, ...)				       \
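
Aside for reviewers: the get/put scheme above may be easier to see in
isolation. Below is a self-contained userspace analogue of the
unreg_started flag plus inflight_rc refcount pattern, written with C11
atomics purely for illustration; the names and the CAS loop standing in
for atomic_inc_not_zero() are assumptions, not the kernel code itself:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdlib.h>

	static atomic_bool unreg_started;	/* analogue of the WRITE_ONCE/READ_ONCE flag */
	static atomic_int inflight_rc = 1;	/* starts with the registration's +1 */
	static char *shared_buf;		/* stands in for the per-CPU buffers */

	/* Trace-event side: take a reference, unless teardown has begun. */
	static char *get_buf(void)
	{
		int old;

		if (atomic_load(&unreg_started))
			return NULL;
		/* CAS loop emulating atomic_inc_not_zero(): never revive 0. */
		old = atomic_load(&inflight_rc);
		do {
			if (old == 0)
				return NULL;
		} while (!atomic_compare_exchange_weak(&inflight_rc, &old, old + 1));
		return shared_buf;
	}

	/* Trace-event side: done with the buffer, drop the reference. */
	static void put_buf(void)
	{
		atomic_fetch_sub(&inflight_rc, 1);
	}

	/* Teardown side: block new users, then spin until inflights drain. */
	static void unreg(void)
	{
		atomic_store(&unreg_started, true);
		atomic_fetch_sub(&inflight_rc, 1);	/* drop the registration's +1 */
		while (atomic_load(&inflight_rc))
			;	/* brief spin: users are short-lived and gated off */
		free(shared_buf);	/* no one can still hold a pointer here */
		shared_buf = NULL;
	}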