From patchwork Tue Apr 25 22:51:04 2023
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 1/4] tracing/user_events: Ensure write index cannot be negative
Date: Tue, 25 Apr 2023 15:51:04 -0700
Message-Id: <20230425225107.8525-2-beaub@linux.microsoft.com>

The write index indicates which event the data is for and accesses a
per-file array. The index is passed by user processes during write()
calls as the first 4 bytes. Ensure that it cannot be negative by
returning -EINVAL, preventing out-of-bounds accesses.

Update the ftrace self-test to ensure this occurs properly.
Fixes: 7f5a08c79df3 ("user_events: Add minimal support for trace_event into ftrace")
Reported-by: Doug Cook
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c                  | 3 +++
 tools/testing/selftests/user_events/ftrace_test.c | 5 +++++
 2 files changed, 8 insertions(+)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index cc8c6d8b69b5..e7dff24aa724 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -1821,6 +1821,9 @@ static ssize_t user_events_write_core(struct file *file, struct iov_iter *i)
 	if (unlikely(copy_from_iter(&idx, sizeof(idx), i) != sizeof(idx)))
 		return -EFAULT;
 
+	if (idx < 0)
+		return -EINVAL;
+
 	rcu_read_lock_sched();
 
 	refs = rcu_dereference_sched(info->refs);
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index aceafacfb126..91272f9d6fce 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -296,6 +296,11 @@ TEST_F(user, write_events) {
 	ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 3));
 	after = trace_bytes();
 	ASSERT_GT(after, before);
+
+	/* Negative index should fail with EINVAL */
+	reg.write_index = -1;
+	ASSERT_EQ(-1, writev(self->data_fd, (const struct iovec *)io, 3));
+	ASSERT_EQ(EINVAL, errno);
 }
 
 TEST_F(user, write_fault) {
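For context, the write() ABI this patch hardens looks roughly like the
following from userspace. This is a minimal sketch, not code from the
series: it assumes an event was already registered via DIAG_IOCSREG on
/sys/kernel/tracing/user_events_data and that write_index came back from
that ioctl; the helper name emit_event is invented for illustration.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

/*
 * Emit one user_event payload. The first 4 bytes of every write()
 * must be the write index returned at registration; after this patch
 * a negative index is rejected with -EINVAL instead of being used to
 * index the per-file array.
 */
static int emit_event(int data_fd, int write_index,
		      const void *payload, size_t len)
{
	struct iovec io[2];

	io[0].iov_base = &write_index;		/* index, exactly 4 bytes */
	io[0].iov_len = sizeof(write_index);
	io[1].iov_base = (void *)payload;	/* event data follows */
	io[1].iov_len = len;

	if (writev(data_fd, io, 2) == -1) {
		fprintf(stderr, "writev: %s\n", strerror(errno));
		return -1;
	}

	return 0;
}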
From patchwork Tue Apr 25 22:51:05 2023
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 2/4] tracing/user_events: Ensure bit is cleared on unregister
Date: Tue, 25 Apr 2023 15:51:05 -0700
Message-Id: <20230425225107.8525-3-beaub@linux.microsoft.com>

If an event is enabled and a user process unregisters it, the enable
bit is left set in that process. Fix this by always clearing the bit
in the user process when unregister is successful.

Update the abi self-test to ensure this occurs properly.

Suggested-by: Doug Cook
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c              | 34 +++++++++++++++++++
 .../testing/selftests/user_events/abi_test.c  |  9 +++--
 2 files changed, 40 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e7dff24aa724..eb195d697177 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -2146,6 +2146,35 @@ static long user_unreg_get(struct user_unreg __user *ureg,
 	return ret;
 }
 
+static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
+				   unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler enabler;
+	int result;
+
+	memset(&enabler, 0, sizeof(enabler));
+	enabler.addr = uaddr;
+	enabler.values = bit;
+retry:
+	/* Prevents state changes from racing with new enablers */
+	mutex_lock(&event_mutex);
+
+	/* Force the bit to be cleared, since no event is attached */
+	mmap_read_lock(user_mm->mm);
+	result = user_event_enabler_write(user_mm, &enabler, false);
+	mmap_read_unlock(user_mm->mm);
+
+	mutex_unlock(&event_mutex);
+
+	if (result) {
+		/* Attempt to fault-in and retry if it worked */
+		if (!user_event_mm_fault_in(user_mm, uaddr))
+			goto retry;
+	}
+
+	return result;
+}
+
 /*
  * Unregisters an enablement address/bit within a task/user mm.
  */
@@ -2190,6 +2219,11 @@ static long user_events_ioctl_unreg(unsigned long uarg)
 
 	mutex_unlock(&event_mutex);
 
+	/* Ensure bit is now cleared for user, regardless of event status */
+	if (!ret)
+		ret = user_event_mm_clear_bit(mm, reg.disable_addr,
+					      reg.disable_bit);
+
 	return ret;
 }
 
diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c
index e0323d3777a7..5125c42efe65 100644
--- a/tools/testing/selftests/user_events/abi_test.c
+++ b/tools/testing/selftests/user_events/abi_test.c
@@ -109,13 +109,16 @@ TEST_F(user, enablement) {
 	ASSERT_EQ(0, change_event(false));
 	ASSERT_EQ(0, self->check);
 
-	/* Should not change after disable */
+	/* Ensure kernel clears bit after disable */
 	ASSERT_EQ(0, change_event(true));
 	ASSERT_EQ(1, self->check);
 	ASSERT_EQ(0, reg_disable(&self->check, 0));
+	ASSERT_EQ(0, self->check);
+
+	/* Ensure doesn't change after unreg */
+	ASSERT_EQ(0, change_event(true));
+	ASSERT_EQ(0, self->check);
 	ASSERT_EQ(0, change_event(false));
-	ASSERT_EQ(1, self->check);
-	self->check = 0;
 }
 
 TEST_F(user, bit_sizes) {
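To make the fixed behavior concrete, the sketch below registers an
enabler and then unregisters it; with this patch the kernel force-clears
bit 31 of `enabled` once DIAG_IOCSUNREG succeeds. This is a minimal
sketch assuming the <linux/user_events.h> UAPI header from this series,
a kernel with user_events enabled, and sufficient privileges to open the
tracing data file; the event name "example_event" is invented and error
handling is reduced to early returns.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

static __u32 enabled;	/* word the kernel sets/clears for this process */

int main(void)
{
	int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);
	struct user_reg reg = {0};
	struct user_unreg unreg = {0};

	if (fd == -1)
		return 1;

	reg.size = sizeof(reg);
	reg.enable_bit = 31;
	reg.enable_size = sizeof(enabled);
	reg.enable_addr = (__u64)&enabled;
	reg.name_args = (__u64)"example_event u32 value";

	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1)
		return 1;

	/* If a tracer enables the event here, bit 31 of `enabled` is set */

	unreg.size = sizeof(unreg);
	unreg.disable_bit = 31;
	unreg.disable_addr = (__u64)&enabled;

	if (ioctl(fd, DIAG_IOCSUNREG, &unreg) == -1)
		return 1;

	/* With this patch the kernel guarantees bit 31 is clear again */
	printf("enabled after unregister: 0x%x\n", enabled);

	return 0;
}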
From patchwork Tue Apr 25 22:51:06 2023
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 3/4] tracing/user_events: Prevent same address and bit per process
Date: Tue, 25 Apr 2023 15:51:06 -0700
Message-Id: <20230425225107.8525-4-beaub@linux.microsoft.com>

User processes register an address and bit pair for events. If the same
address and bit pair are registered multiple times in the same process,
it can cause undefined behavior when events are enabled/disabled: the
bit could be turned off by one event being disabled while the original
event is still enabled.

Prevent this by checking the current mm to see if any event has already
been registered for the address and bit pair. Return EADDRINUSE to the
user process if the pair is already in use.

Update the ftrace self-test to ensure this occurs properly.
Suggested-by: Doug Cook
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c              | 41 +++++++++++++++++++
 .../selftests/user_events/ftrace_test.c       |  9 +++-
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index eb195d697177..4fc099fc7637 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -419,6 +419,21 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	return 0;
 }
 
+static bool user_event_enabler_exists(struct user_event_mm *mm,
+				      unsigned long uaddr, unsigned char bit)
+{
+	struct user_event_enabler *enabler;
+	struct user_event_enabler *next;
+
+	list_for_each_entry_safe(enabler, next, &mm->enablers, link) {
+		if (enabler->addr == uaddr &&
+		    (enabler->values & ENABLE_VAL_BIT_MASK) == bit)
+			return true;
+	}
+
+	return false;
+}
+
 static void user_event_enabler_update(struct user_event *user)
 {
 	struct user_event_enabler *enabler;
@@ -657,6 +672,22 @@ void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm)
 	user_event_mm_remove(t);
 }
 
+static bool current_user_event_enabler_exists(unsigned long uaddr,
+					      unsigned char bit)
+{
+	struct user_event_mm *user_mm = current_user_event_mm();
+	bool exists;
+
+	if (!user_mm)
+		return false;
+
+	exists = user_event_enabler_exists(user_mm, uaddr, bit);
+
+	user_event_mm_put(user_mm);
+
+	return exists;
+}
+
 static struct user_event_enabler
 *user_event_enabler_create(struct user_reg *reg, struct user_event *user,
 			   int *write_result)
@@ -2045,6 +2076,16 @@ static long user_events_ioctl_reg(struct user_event_file_info *info,
 	if (ret)
 		return ret;
 
+	/*
+	 * Prevent users from using the same address and bit multiple times
+	 * within the same mm address space. This can cause unexpected behavior
+	 * for user processes that is far easier to debug if this is explicitly
+	 * an error upon registering.
+	 */
+	if (current_user_event_enabler_exists((unsigned long)reg.enable_addr,
+					      reg.enable_bit))
+		return -EADDRINUSE;
+
 	name = strndup_user((const char __user *)(uintptr_t)reg.name_args,
 			    MAX_EVENT_DESC);
 
diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c
index 91272f9d6fce..7c99cef94a65 100644
--- a/tools/testing/selftests/user_events/ftrace_test.c
+++ b/tools/testing/selftests/user_events/ftrace_test.c
@@ -219,7 +219,12 @@ TEST_F(user, register_events) {
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
 	ASSERT_EQ(0, reg.write_index);
 
-	/* Multiple registers should result in same index */
+	/* Multiple registers to the same addr + bit should fail */
+	ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
+	ASSERT_EQ(EADDRINUSE, errno);
+
+	/* Multiple registers to same name should result in same index */
+	reg.enable_bit = 30;
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, &reg));
 	ASSERT_EQ(0, reg.write_index);
 
@@ -242,6 +247,8 @@ TEST_F(user, register_events) {
 
 	/* Unregister */
 	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg));
+	unreg.disable_bit = 30;
+	ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg));
 
 	/* Delete should work only after close and unregister */
 	close(self->data_fd);
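The duplicate check is easy to see from userspace. The sketch below
mirrors what the updated selftest exercises: the second DIAG_IOCSREG
against the same enable_addr/enable_bit pair now fails with EADDRINUSE,
while re-registering the same name with a different bit still succeeds.
Same assumptions as the earlier sketches; the event name "dup_example"
is invented.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

static __u32 enabled;

int main(void)
{
	int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);
	struct user_reg reg = {0};

	if (fd == -1)
		return 1;

	reg.size = sizeof(reg);
	reg.enable_bit = 31;
	reg.enable_size = sizeof(enabled);
	reg.enable_addr = (__u64)&enabled;
	reg.name_args = (__u64)"dup_example u32 value";

	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1)
		return 1;

	/* Same address + bit again: rejected after this patch */
	if (ioctl(fd, DIAG_IOCSREG, &reg) == -1 && errno == EADDRINUSE)
		printf("duplicate rejected: %s\n", strerror(errno));

	/* A different bit for the same event name is still fine */
	reg.enable_bit = 30;
	if (ioctl(fd, DIAG_IOCSREG, &reg) == 0)
		printf("second enabler ok, index %u\n", reg.write_index);

	return 0;
}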
From patchwork Tue Apr 25 22:51:07 2023
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Subject: [PATCH v2 4/4] tracing/user_events: Limit max fault-in attempts
Date: Tue, 25 Apr 2023 15:51:07 -0700
Message-Id: <20230425225107.8525-5-beaub@linux.microsoft.com>

When event enablement changes, user_events attempts to update a bit in
the user process. If a fault is hit, an attempt is made to fault-in the
page, and the write is retried if the page made it in. While this
normally requires only a couple of attempts, a bad user process could
try to cause infinite loops.

Ensure fault-in attempts, whether sync or async, are limited to a
maximum of 10 per update. When the max is hit, return -EFAULT so that
no further attempts are made.

Suggested-by: Steven Rostedt (Google)
Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 49 +++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 4fc099fc7637..cab2c5891758 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -123,6 +123,7 @@ struct user_event_enabler_fault {
 	struct work_struct work;
 	struct user_event_mm *mm;
 	struct user_event_enabler *enabler;
+	int attempt;
 };
 
 static struct kmem_cache *fault_cache;
@@ -266,11 +267,19 @@ static void user_event_enabler_destroy(struct user_event_enabler *enabler)
 	kfree(enabler);
 }
 
-static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
+static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr,
+				  int attempt)
 {
 	bool unlocked;
 	int ret;
 
+	/*
+	 * Normally this is low, ensure that it cannot be taken advantage of by
+	 * bad user processes to cause excessive looping.
+	 */
+	if (attempt > 10)
+		return -EFAULT;
+
 	mmap_read_lock(mm->mm);
 
 	/* Ensure MM has tasks, cannot use after exit_mm() */
@@ -289,7 +298,7 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
 
 static int user_event_enabler_write(struct user_event_mm *mm,
 				    struct user_event_enabler *enabler,
-				    bool fixup_fault);
+				    bool fixup_fault, int *attempt);
 
 static void user_event_enabler_fault_fixup(struct work_struct *work)
 {
@@ -298,9 +307,10 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 	struct user_event_enabler *enabler = fault->enabler;
 	struct user_event_mm *mm = fault->mm;
 	unsigned long uaddr = enabler->addr;
+	int attempt = fault->attempt;
 	int ret;
 
-	ret = user_event_mm_fault_in(mm, uaddr);
+	ret = user_event_mm_fault_in(mm, uaddr, attempt);
 
 	if (ret && ret != -ENOENT) {
 		struct user_event *user = enabler->event;
@@ -329,7 +339,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 
 	if (!ret) {
 		mmap_read_lock(mm->mm);
-		user_event_enabler_write(mm, enabler, true);
+		user_event_enabler_write(mm, enabler, true, &attempt);
 		mmap_read_unlock(mm->mm);
 	}
 out:
@@ -341,7 +351,8 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 }
 
 static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
-					   struct user_event_enabler *enabler)
+					   struct user_event_enabler *enabler,
+					   int attempt)
 {
 	struct user_event_enabler_fault *fault;
 
@@ -353,6 +364,7 @@ static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
 	INIT_WORK(&fault->work, user_event_enabler_fault_fixup);
 	fault->mm = user_event_mm_get(mm);
 	fault->enabler = enabler;
+	fault->attempt = attempt;
 
 	/* Don't try to queue in again while we have a pending fault */
 	set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
@@ -372,7 +384,7 @@ static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
 
 static int user_event_enabler_write(struct user_event_mm *mm,
 				    struct user_event_enabler *enabler,
-				    bool fixup_fault)
+				    bool fixup_fault, int *attempt)
 {
 	unsigned long uaddr = enabler->addr;
 	unsigned long *ptr;
@@ -383,6 +395,8 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	lockdep_assert_held(&event_mutex);
 	mmap_assert_locked(mm->mm);
 
+	*attempt += 1;
+
 	/* Ensure MM has tasks, cannot use after exit_mm() */
 	if (refcount_read(&mm->tasks) == 0)
 		return -ENOENT;
@@ -398,7 +412,7 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 		if (!fixup_fault)
 			return -EFAULT;
 
-		if (!user_event_enabler_queue_fault(mm, enabler))
+		if (!user_event_enabler_queue_fault(mm, enabler, *attempt))
 			pr_warn("user_events: Unable to queue fault handler\n");
 
 		return -EFAULT;
@@ -439,15 +453,19 @@ static void user_event_enabler_update(struct user_event *user)
 	struct user_event_enabler *enabler;
 	struct user_event_mm *mm = user_event_mm_get_all(user);
 	struct user_event_mm *next;
+	int attempt;
 
 	while (mm) {
 		next = mm->next;
 		mmap_read_lock(mm->mm);
 		rcu_read_lock();
 
-		list_for_each_entry_rcu(enabler, &mm->enablers, link)
-			if (enabler->event == user)
-				user_event_enabler_write(mm, enabler, true);
+		list_for_each_entry_rcu(enabler, &mm->enablers, link) {
+			if (enabler->event == user) {
+				attempt = 0;
+				user_event_enabler_write(mm, enabler, true,
+							 &attempt);
+			}
+		}
 
 		rcu_read_unlock();
 		mmap_read_unlock(mm->mm);
@@ -695,6 +713,7 @@ static struct user_event_enabler
 	struct user_event_enabler *enabler;
 	struct user_event_mm *user_mm;
 	unsigned long uaddr = (unsigned long)reg->enable_addr;
+	int attempt = 0;
 
 	user_mm = current_user_event_mm();
 
@@ -715,7 +734,8 @@ static struct user_event_enabler
 
 	/* Attempt to reflect the current state within the process */
 	mmap_read_lock(user_mm->mm);
-	*write_result = user_event_enabler_write(user_mm, enabler, false);
+	*write_result = user_event_enabler_write(user_mm, enabler, false,
+						 &attempt);
 	mmap_read_unlock(user_mm->mm);
 
 	/*
@@ -735,7 +755,7 @@ static struct user_event_enabler
 
 	if (*write_result) {
 		/* Attempt to fault-in and retry if it worked */
-		if (!user_event_mm_fault_in(user_mm, uaddr))
+		if (!user_event_mm_fault_in(user_mm, uaddr, attempt))
 			goto retry;
 
 		kfree(enabler);
@@ -2192,6 +2212,7 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 {
 	struct user_event_enabler enabler;
 	int result;
+	int attempt = 0;
 
 	memset(&enabler, 0, sizeof(enabler));
 	enabler.addr = uaddr;
@@ -2202,14 +2223,14 @@ static int user_event_mm_clear_bit(struct user_event_mm *user_mm,
 
 	/* Force the bit to be cleared, since no event is attached */
 	mmap_read_lock(user_mm->mm);
-	result = user_event_enabler_write(user_mm, &enabler, false);
+	result = user_event_enabler_write(user_mm, &enabler, false, &attempt);
 	mmap_read_unlock(user_mm->mm);
 
 	mutex_unlock(&event_mutex);
 
 	if (result) {
 		/* Attempt to fault-in and retry if it worked */
-		if (!user_event_mm_fault_in(user_mm, uaddr))
+		if (!user_event_mm_fault_in(user_mm, uaddr, attempt))
 			goto retry;
 	}
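The shape of the bound can be seen in isolation. The toy below is not
the kernel code itself, just an illustrative model of the retry pattern
this patch introduces: every write bumps a shared attempt counter, and
the fault-in helper refuses once the counter passes a fixed cap, so a
process that keeps faulting the same page can no longer drive an
infinite retry loop. All names here are invented stand-ins.

#include <stdio.h>

#define MAX_FAULT_ATTEMPTS 10

/* Stands in for user_event_mm_fault_in(); the cap check lives here */
static int fault_in(int attempt)
{
	if (attempt > MAX_FAULT_ATTEMPTS)
		return -1;	/* kernel returns -EFAULT at this point */
	return 0;		/* pretend the page was faulted in */
}

/* Stands in for user_event_enabler_write(); simulates endless faults */
static int try_write(int *attempt)
{
	*attempt += 1;
	return -1;		/* a write that faults on every attempt */
}

int main(void)
{
	int attempt = 0;
	int ret;
retry:
	ret = try_write(&attempt);

	if (ret && !fault_in(attempt))
		goto retry;	/* page is supposedly in; try once more */

	printf("gave up after %d attempts\n", attempt);
	return 0;
}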