fanotify: fix race between fanotify_release() and fanotify_get_response()

Message ID 1472032539-30256-1-git-send-email-mszeredi@redhat.com (mailing list archive)
State New, archived

Commit Message

Miklos Szeredi Aug. 24, 2016, 9:55 a.m. UTC
List corruption was reported with a fanotify stress test.

The bug turned out to be due to fsnotify_remove_event() being called on an
event that was on the fanotify_data.access_list, which is protected by
fanotify_data.access_lock rather than by notification_mutex.  This resulted
in list_del_init() being run concurrently on the same list entry.
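
For illustration only (this is not part of the patch): a minimal user-space
sketch of how two unsynchronized list_del_init()-style removals of the same
entry corrupt the list.  The helpers below merely mimic the kernel's doubly
linked list, and the interleaving of the two CPUs is simulated sequentially.

/* Toy stand-ins for the kernel's struct list_head and list_del(). */
#include <stdio.h>

struct node { struct node *prev, *next; };

static void list_init(struct node *n) { n->prev = n->next = n; }

static void list_add_tail(struct node *n, struct node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

int main(void)
{
	struct node head, a, b, c;

	list_init(&head);
	list_add_tail(&a, &head);
	list_add_tail(&b, &head);
	list_add_tail(&c, &head);		/* head <-> a <-> b <-> c */

	/*
	 * CPU1 (fanotify_get_response() side) and CPU2 (fanotify_release()
	 * side) both start deleting 'b' without a common lock; each first
	 * snapshots b's neighbours, then rewrites their pointers.
	 */
	struct node *p1 = b.prev, *n1 = b.next;	/* CPU1 sees a, c */
	struct node *p2 = b.prev, *n2 = b.next;	/* CPU2 sees a, c */

	/* CPU1 finishes its list_del_init(&b): head <-> a <-> c */
	n1->prev = p1;
	p1->next = n1;
	list_init(&b);

	/* 'a' is then removed (e.g. flushed) and could be freed: head <-> c */
	a.next->prev = a.prev;
	a.prev->next = a.next;
	list_init(&a);

	/* Only now does CPU2 apply its stale list_del_init(&b)... */
	n2->prev = p2;		/* c.prev now points at the removed 'a' */
	p2->next = n2;
	list_init(&b);

	printf("c.prev == &a (a removed entry): %s\n",
	       c.prev == &a ? "yes -- list corrupted" : "no");
	return 0;
}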

This was introduced by commit 09e5f14e57c7 ("fanotify: on group destroy
allow all waiters to bypass permission check") which made
fanotify_get_response() flush out events when bypass_perm was set.  The
flush doesn't normally happen, since the wake_up() is called after the
access_list has been cleaned in fanotify_release().  But the two are not
synchronized, so fanotify_get_response() could still be processing a
previous wakeup by the time bypass_perm becomes true.  This was seen in the
crashdumps in the report.

This bug can be solved in multiple ways; maybe the simplest is moving the
setting of bypass_perm to after the list has been processed.

In theory there's also a memory ordering problem here.  atomic_inc() in
itself doesn't imply a memory barrier, and spin_unlock() is only a
semi-permeable barrier, so we need an explicit memory barrier to make sure
the condition is perceived after the list has been cleared.

Similarly we need barriers for the case when event->response is set
(i.e. non-zero): fsnotify_destroy_event() might destroy the event while
it's still on the access_list, since nothing guarantees that the store of
the response value in event->response will be perceived after the list
manipulation.  So add the necessary barriers there as well.
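
For reference, the wmb/rmb pairing the patch relies on is just the usual
publish pattern.  Here is a small user-space analogue using C11
release/acquire atomics (illustrative names only, not the fanotify code):
the writer must make the list removal visible before the response, and the
reader must not act on the response until it also sees the removal.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Illustrative stand-ins: 'on_list' plays the role of access_list
 * membership, 'response' the role of event->response. */
static int on_list = 1;			/* plain data, published below */
static _Atomic int response;		/* 0 == no response yet */

static void *responder(void *arg)
{
	on_list = 0;			/* list_del_init() analogue */
	/* Release store: the removal above is ordered before the response
	 * becomes visible, like smp_wmb() + the plain store in the patch. */
	atomic_store_explicit(&response, 1, memory_order_release);
	return arg;
}

static void *waiter(void *arg)
{
	/* Acquire load: once the response is seen, the removal is seen too,
	 * like the smp_rmb() after wait_event() in the patch. */
	while (!atomic_load_explicit(&response, memory_order_acquire))
		;			/* spin instead of wait_event() */
	printf("response seen, on_list = %d (guaranteed to be 0)\n", on_list);
	return arg;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, waiter, NULL);
	pthread_create(&r, NULL, responder, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}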

PS not sure why bypass_perm is an atomic_t, it could just as well be a
boolean flag.

PPS all this subtlety could be removed if the waitq was per-event, which
would also allow better performance.
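
Purely as a sketch of what the PPS suggests (hypothetical field and function
names, not the current fanotify code): with a per-event waitqueue the
responder wakes exactly the task sleeping on that event, so the shared
access_waitq and the bypass_perm condition juggling go away.

/* Hypothetical sketch, not the existing fanotify structures. */
struct fanotify_perm_event_sketch {
	struct fsnotify_event fae;
	int response;			/* FAN_ALLOW / FAN_DENY, 0 = unset */
	wait_queue_head_t wq;		/* per-event, replaces the group-wide
					 * fanotify_data.access_waitq */
};

static int fanotify_get_response_sketch(struct fanotify_perm_event_sketch *event)
{
	wait_event(event->wq, event->response);
	return event->response == FAN_ALLOW ? 0 : -EPERM;
}

static void fanotify_answer_event_sketch(struct fanotify_perm_event_sketch *event,
					 int response)
{
	/* A barrier pairing is still needed if a list removal has to be
	 * visible before the response, but the wakeup itself is now
	 * unambiguous: only the owner of this event sleeps on wq. */
	event->response = response;
	wake_up(&event->wq);
}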

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 09e5f14e57c7 ("fanotify: on group destroy allow all waiters to bypass permission check")
Cc: <stable@vger.kernel.org> #v2.6.37+
Cc: Jan Kara <jack@suse.cz>
Cc: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Cc: Eric Paris <eparis@redhat.com>
---
 fs/notify/fanotify/fanotify.c      |  5 +++++
 fs/notify/fanotify/fanotify_user.c | 25 +++++++++++++++++++++----
 2 files changed, 26 insertions(+), 4 deletions(-)

Comments

Jan Kara Sept. 8, 2016, 10:55 a.m. UTC | #1
On Wed 24-08-16 11:55:39, Miklos Szeredi wrote:
> List corruption was reported with a fanotify stress test.
> 
> The bug turned out to be due to fsnotify_remove_event() being called on an
> event that was on the fanotify_data.access_list, which is protected by
> fanotify_data.access_lock rather than by notification_mutex.  This resulted
> in list_del_init() being run concurrently on the same list entry.
> 
> This was introduced by commit 09e5f14e57c7 ("fanotify: on group destroy
> allow all waiters to bypass permission check") which made
> fanotify_get_response() flush out events when bypass_perm was set.  The
> flush doesn't normally happen, since the wake_up() is called after the
> access_list has been cleaned in fanotify_release().  But the two are not
> synchronized, so fanotify_get_response() could still be processing a
> previous wakeup by the time bypass_perm becomes true.  This was seen in the
> crashdumps in the report.

Thanks for the analysis and the patch! I agree the bug you describe is
there; I just somewhat disagree with the solution.

> This bug can be solved in multiple ways; maybe the simplest is moving the
> setting of bypass_perm to after the list has been processed.
> 
> In theory there's also a memory ordering problem here.  atomic_inc() in
> itself doesn't imply a memory barrier, and spin_unlock() is only a
> semi-permeable barrier, so we need an explicit memory barrier to make sure
> the condition is perceived after the list has been cleared.

Well, the culprit of the problem seems to be that fanotify_get_response()
does not use the proper function to remove the event from the list in all
cases. The event can be in two states:

1) User didn't yet read the event at the time fanotify_release() is called -
note that a group can still have new events queued while fanotify_release()
is running until fsnotify_destroy_group() kills all the marks. That is the
main reason why we have that bypass_perm thing: it avoids blocking new
processes in fanotify_get_response(), which could otherwise miss a wakeup
and hang there indefinitely. In this state calling fsnotify_remove_event()
is correct, and that's what commit 5838d4442bd5 (fanotify: fix double free
of pending permission events) had in mind.

2) User has read the event (thus the permission event was moved to
access_list) but didn't write the response yet. In this state calling
fsnotify_remove_event() from fanotify_get_response() is just wrong - as you
noted, it uses the wrong lock to protect the list removal, and it also
wrongly updates group->q_len. In this situation we should be calling
dequeue_event().

The problem is that there's no easy way to distinguish these two cases in
fanotify_get_response(). We could flag this somehow inside the event
structure, but I think it's cleaner to remove all permission events in
fanotify_release() (the same way we already handle the permission events on
access_list). I'll send a patch shortly.
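
Roughly, the approach described above would look something like this (a
sketch only, with one hypothetical helper for the unread part of the queue;
not the actual patch):

/* Sketch only -- not the real patch.  fanotify_release() answers every
 * pending permission event itself, so fanotify_get_response() never has
 * to remove anything from a list. */
static void fanotify_release_perm_events_sketch(struct fsnotify_group *group)
{
	struct fanotify_perm_event_info *event, *next;

	/* State 2: events already read by userspace but not answered yet. */
	spin_lock(&group->fanotify_data.access_lock);
	list_for_each_entry_safe(event, next,
				 &group->fanotify_data.access_list,
				 fae.fse.list) {
		list_del_init(&event->fae.fse.list);
		event->response = FAN_ALLOW;
	}
	spin_unlock(&group->fanotify_data.access_lock);

	/* State 1: permission events still sitting unread on the
	 * notification queue.  Hypothetical helper: it would have to walk
	 * the queue under the notification lock, skip non-permission
	 * events and answer the rest with FAN_ALLOW as above. */
	fanotify_flush_unread_perm_events_sketch(group);

	/* Only now is it safe to let sleepers bypass the permission check. */
	atomic_inc(&group->fanotify_data.bypass_perm);
	wake_up(&group->fanotify_data.access_waitq);
}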

> Similarly we need barriers for the case when event->response is set
> (i.e. non-zero): fsnotify_destroy_event() might destroy the event while
> it's still on the access_list, since nothing guarantees that the store of
> the response value in event->response will be perceived after the list
> manipulation.  So add the necessary barriers there as well.
> 
> PS not sure why bypass_perm is an atomic_t, it could just as well be a
> boolean flag.

Agreed. I was just never bothered enough to fix this.

> PPS all this subtlety could be removed if the waitq was per-event, which
> would also allow better performance.

I'm not sure how much this problem would be helped by a per-event
waitqueue.  Also it would add a significant memory cost to permission
events and I prefer them small as there can be a lot of them queued...

								Honza

Patch

diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
index d2f97ecca6a5..0d0cabd946e0 100644
--- a/fs/notify/fanotify/fanotify.c
+++ b/fs/notify/fanotify/fanotify.c
@@ -70,6 +70,11 @@  static int fanotify_get_response(struct fsnotify_group *group,
 	wait_event(group->fanotify_data.access_waitq, event->response ||
 				atomic_read(&group->fanotify_data.bypass_perm));
 
+	/*
+	 * Pairs with smp_wmb() before storing event->response.  This makes sure
+	 * that the list_del_init() done on the event is perceived after this.
+	 */
+	smp_rmb();
 	if (!event->response) {	/* bypass_perm set */
 		/*
 		 * Event was canceled because group is being destroyed. Remove
diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index 8e8e6bcd1d43..af57e75772a0 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -193,6 +193,10 @@  static int process_access_response(struct fsnotify_group *group,
 	if (!event)
 		return -ENOENT;
 
+	/*
+	 * Make sure the dequeue is perceived before the store of "response"
+	 */
+	smp_wmb();
 	event->response = response;
 	wake_up(&group->fanotify_data.access_waitq);
 
@@ -305,6 +309,11 @@  static ssize_t fanotify_read(struct file *file, char __user *buf,
 		} else {
 #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
 			if (ret < 0) {
+				/*
+				 * Make sure the dequeue is perceived before
+				 * the store of "response"
+				 */
+				smp_wmb();
 				FANOTIFY_PE(kevent)->response = FAN_DENY;
 				wake_up(&group->fanotify_data.access_waitq);
 				break;
@@ -365,26 +374,34 @@  static int fanotify_release(struct inode *ignored, struct file *file)
 	 * enter or leave access_list by now.
 	 */
 	spin_lock(&group->fanotify_data.access_lock);
-
-	atomic_inc(&group->fanotify_data.bypass_perm);
-
 	list_for_each_entry_safe(event, next, &group->fanotify_data.access_list,
 				 fae.fse.list) {
 		pr_debug("%s: found group=%p event=%p\n", __func__, group,
 			 event);
 
 		list_del_init(&event->fae.fse.list);
+		/*
+		 * Make sure the dequeue is perceived before the store of
+		 * "response"
+		 */
+		smp_wmb();
 		event->response = FAN_ALLOW;
 	}
 	spin_unlock(&group->fanotify_data.access_lock);
 
 	/*
-	 * Since bypass_perm is set, newly queued events will not wait for
+	 * After bypass_perm is set, newly queued events will not wait for
 	 * access response. Wake up the already sleeping ones now.
+	 *
+	 * Make sure we do this only *after* all events were taken off
+	 * group->fanotify_data.access_list, otherwise the entry might be
+	 * deleted concurrently by two entities, resulting in list corruption.
+	 *
 	 * synchronize_srcu() in fsnotify_destroy_group() will wait for all
 	 * processes sleeping in fanotify_handle_event() waiting for access
 	 * response and thus also for all permission events to be freed.
 	 */
+	atomic_inc(&group->fanotify_data.bypass_perm);
 	wake_up(&group->fanotify_data.access_waitq);
 #endif