[v2,1/2] mmap locking API: Order lock of nascent mm outside lock of live mm

Message ID 20201006225450.751742-2-jannh@google.com (mailing list archive)
State New, archived
Series Broad write-locking of nascent mm in execve

Commit Message

Jann Horn Oct. 6, 2020, 10:54 p.m. UTC
Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
of the old mm (in dup_mmap() and in UML's activate_mm()).
A following patch will change the exec path to very broadly lock the
nascent mm, but fine-grained locking should still work at the same time for
the old mm.

In particular, mmap locking calls are hidden behind the copy_from_user()
calls and such that are reached through functions like copy_strings() -
when a page fault occurs on a userspace memory access, the mmap lock
will be taken.

To do this in a way that lockdep is happy about, let's turn around the lock
ordering in both places that currently nest the locks.
Since SINGLE_DEPTH_NESTING is normally used for the inner nesting layer,
make up our own lock subclass MMAP_LOCK_SUBCLASS_NASCENT and use that
instead.

The added locking calls in exec_mmap() are temporary; the following patch
will move the locking out of exec_mmap().

Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/um/include/asm/mmu_context.h |  3 +--
 fs/exec.c                         |  4 ++++
 include/linux/mmap_lock.h         | 23 +++++++++++++++++++++--
 kernel/fork.c                     |  7 ++-----
 4 files changed, 28 insertions(+), 9 deletions(-)

Comments

Johannes Berg Oct. 7, 2020, 7:42 a.m. UTC | #1
On Wed, 2020-10-07 at 00:54 +0200, Jann Horn wrote:
> Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
> of the old mm (in dup_mmap() and in UML's activate_mm()).
> A following patch will change the exec path to very broadly lock the
> nascent mm, but fine-grained locking should still work at the same time for
> the old mm.
> 
> In particular, mmap locking calls are hidden behind the copy_from_user()
> calls and such that are reached through functions like copy_strings() -
> when a page fault occurs on a userspace memory access, the mmap lock
> will be taken.
> 
> To do this in a way that lockdep is happy about, let's turn around the lock
> ordering in both places that currently nest the locks.
> Since SINGLE_DEPTH_NESTING is normally used for the inner nesting layer,
> make up our own lock subclass MMAP_LOCK_SUBCLASS_NASCENT and use that
> instead.
> 
> The added locking calls in exec_mmap() are temporary; the following patch
> will move the locking out of exec_mmap().
> 
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
>  arch/um/include/asm/mmu_context.h |  3 +--
>  fs/exec.c                         |  4 ++++
>  include/linux/mmap_lock.h         | 23 +++++++++++++++++++++--
>  kernel/fork.c                     |  7 ++-----
>  4 files changed, 28 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
> index 17ddd4edf875..c13bc5150607 100644
> --- a/arch/um/include/asm/mmu_context.h
> +++ b/arch/um/include/asm/mmu_context.h
> @@ -48,9 +48,8 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
>  	 * when the new ->mm is used for the first time.
>  	 */
>  	__switch_mm(&new->context.id);
> -	mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
> +	mmap_assert_write_locked(new);
>  	uml_setup_stubs(new);
> -	mmap_write_unlock(new);
>  }

FWIW, this was I believe causing lockdep issues.

I think I had previously determined that this was pointless, since it's
still nascent and cannot be used yet? But I didn't really know for sure,
and this patch was never applied:

https://patchwork.ozlabs.org/project/linux-um/patch/20200604133752.397dedea0758.I7a24aaa26794eb3fa432003c1bf55cbb816489e2@changeid/

I guess your patches will also fix the lockdep complaints in UML in this
area, I hope I'll be able to test it soon.

johannes

Jann Horn Oct. 7, 2020, 8:28 a.m. UTC | #2
On Wed, Oct 7, 2020 at 9:42 AM Johannes Berg <johannes@sipsolutions.net> wrote:
> On Wed, 2020-10-07 at 00:54 +0200, Jann Horn wrote:
> > Until now, the mmap lock of the nascent mm was ordered inside the mmap lock
> > of the old mm (in dup_mmap() and in UML's activate_mm()).
> > A following patch will change the exec path to very broadly lock the
> > nascent mm, but fine-grained locking should still work at the same time for
> > the old mm.
> >
> > In particular, mmap locking calls are hidden behind the copy_from_user()
> > calls and such that are reached through functions like copy_strings() -
> > when a page fault occurs on a userspace memory access, the mmap lock
> > will be taken.
> >
> > To do this in a way that lockdep is happy about, let's turn around the lock
> > ordering in both places that currently nest the locks.
> > Since SINGLE_DEPTH_NESTING is normally used for the inner nesting layer,
> > make up our own lock subclass MMAP_LOCK_SUBCLASS_NASCENT and use that
> > instead.
> >
> > The added locking calls in exec_mmap() are temporary; the following patch
> > will move the locking out of exec_mmap().
> >
> > Signed-off-by: Jann Horn <jannh@google.com>
> > ---
> >  arch/um/include/asm/mmu_context.h |  3 +--
> >  fs/exec.c                         |  4 ++++
> >  include/linux/mmap_lock.h         | 23 +++++++++++++++++++++--
> >  kernel/fork.c                     |  7 ++-----
> >  4 files changed, 28 insertions(+), 9 deletions(-)
> >
> > diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
> > index 17ddd4edf875..c13bc5150607 100644
> > --- a/arch/um/include/asm/mmu_context.h
> > +++ b/arch/um/include/asm/mmu_context.h
> > @@ -48,9 +48,8 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
> >        * when the new ->mm is used for the first time.
> >        */
> >       __switch_mm(&new->context.id);
> > -     mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
> > +     mmap_assert_write_locked(new);
> >       uml_setup_stubs(new);
> > -     mmap_write_unlock(new);
> >  }
>
> FWIW, this was I believe causing lockdep issues.
>
> I think I had previously determined that this was pointless, since it's
> still nascent and cannot be used yet?

Well.. the thing is that with patch 2/2, I'm not just protecting the
mm while it hasn't been installed yet, but also after it's been
installed, until setup_arg_pages() is done (which still uses a VMA
pointer that we obtained really early in the nascent phase). With the
recent rework Eric Biederman has done to clean up the locking around
execve, operations like process_vm_writev() and (currently only in the
MM tree, not mainline yet) process_madvise() can remotely occur on our
new mm after setup_new_exec(), before we've reached setup_arg_pages().
While AFAIK all those operations *currently* only read the VMA tree,
that would change as soon as someone e.g. changes the list of allowed
operations for process_madvise() to include something like
MADV_MERGEABLE. In that case, we'd get a UAF if the madvise code
merges away our VMA while we still hold and use a dangling pointer to
it.
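
To make that window concrete, here's a rough sketch of the ordering - this
is a simplified, hypothetical condensation of what load_elf_binary() ends up
doing (sketch_exec_flow() is a made-up name and error handling is mostly
dropped), not the real code:

static int sketch_exec_flow(struct linux_binprm *bprm)
{
  int ret;

  /*
   * bprm->vma was created very early (in __bprm_mm_init()), while
   * bprm->mm was still nascent and invisible to everyone else.
   */

  ret = begin_new_exec(bprm);  /* installs bprm->mm via exec_mmap() */
  if (ret)
    return ret;
  setup_new_exec(bprm);

  /*
   * From here until setup_arg_pages() returns, the new mm is reachable
   * remotely (process_vm_writev(), process_madvise(), ...). If one of
   * those ever merged or split VMAs - say a future MADV_MERGEABLE via
   * process_madvise() - the VMA behind bprm->vma could be freed here...
   */

  /* ...and this still uses the VMA pointer captured back then: */
  return setup_arg_pages(bprm, randomize_stack_top(STACK_TOP),
                         EXSTACK_DEFAULT);
}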

So in summary, I think the code currently is not (visibly) buggy in
the sense that you can make it do something bad, but it's extremely
fragile and probably only safe by chance. This patchset is partly my
attempt to make this a bit more future-proof before someone comes
along and turns it into an actual memory corruption bug with some
innocuous little change. (Because I've had a situation before where I
thought "oh, this looks really fragile and only works by chance, but
eh, it's not really worth changing that code" and then the next time I
looked, it had turned into a security bug that had already made its
way into kernel releases people were using.)

> But I didn't really know for sure,
> and this patch was never applied:
>
> https://patchwork.ozlabs.org/project/linux-um/patch/20200604133752.397dedea0758.I7a24aaa26794eb3fa432003c1bf55cbb816489e2@changeid/

Eeeh... with all the kernel debugging infrastructure *disabled*,
down_write_nested() is defined as:

# define down_write_nested(sem, subclass) down_write(sem)

and then down_write() is:

void __sched down_write(struct rw_semaphore *sem)
{
  might_sleep();
  rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
  LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
}

and that might_sleep() there is not just used for atomic sleep
debugging, but actually also creates an explicit preemption point
(independent of CONFIG_DEBUG_ATOMIC_SLEEP; here's the version with
atomic sleep debugging *disabled*):

# define might_sleep() do { might_resched(); } while (0)

where might_resched() is:

#ifdef CONFIG_PREEMPT_VOLUNTARY
extern int _cond_resched(void);
# define might_resched() _cond_resched()
#else
# define might_resched() do { } while (0)
#endif

_cond_resched() has a check for preempt_count before triggering the
scheduler, but on PREEMPT_VOLUNTARY without debugging, taking a
spinlock currently won't increment that, I think. And even if
preempt_count was active for PREEMPT_VOLUNTARY (which I think the x86
folks were discussing?), you'd still hit a call into the RCU core,
which probably shouldn't be happening under a spinlock either.
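
For reference, with CONFIG_PREEMPTION disabled, the v5.9-era _cond_resched()
in kernel/sched/core.c looks roughly like this:

int __sched _cond_resched(void)
{
  if (should_resched(0)) {  /* preempt_count == 0 && need_resched set */
    preempt_schedule_common();
    return 1;
  }
  rcu_all_qs();  /* the call into the RCU core mentioned above */
  return 0;
}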

Now, arch/um/ sets ARCH_NO_PREEMPT, so we can't actually be configured
with PREEMPT_VOLUNTARY, so this can't actually happen. But it feels
like we're on pretty thin ice here.

> I guess your patches will also fix the lockdep complaints in UML in this
> area, I hope I'll be able to test it soon.

That would be a nice side effect. :)

Johannes Berg Oct. 7, 2020, 11:35 a.m. UTC | #3
Hi Jann,

> > > +++ b/arch/um/include/asm/mmu_context.h
> > > @@ -48,9 +48,8 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
> > >        * when the new ->mm is used for the first time.
> > >        */
> > >       __switch_mm(&new->context.id);
> > > -     mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
> > > +     mmap_assert_write_locked(new);
> > >       uml_setup_stubs(new);
> > > -     mmap_write_unlock(new);
> > >  }
> > 
> > FWIW, this was I believe causing lockdep issues.
> > 
> > I think I had previously determined that this was pointless, since it's
> > still nascent and cannot be used yet?
> 
> Well.. the thing is that with patch 2/2, I'm not just protecting the
> mm while it hasn't been installed yet, but also after it's been
> installed, until setup_arg_pages() is done (which still uses a VMA
> pointer that we obtained really early in the nascent phase). 

Oh, sure. I was referring only to the locking in UML's activate_mm(),
quoted above. Sorry for not making that clear.

> So in summary, I think the code currently is not (visibly) buggy in
> the sense that you can make it do something bad, but it's extremely
> fragile and probably only safe by chance. This patchset is partly my
> attempt to make this a bit more future-proof before someone comes
> along and turns it into an actual memory corruption bug with some
> innocuous little change. (Because I've had a situation before where I
> thought "oh, this looks really fragile and only works by chance, but
> eh, it's not really worth changing that code" and then the next time I
> looked, it had turned into a security bug that had already made its
> way into kernel releases people were using.)

Right.

> > But I didn't really know for sure,
> > and this patch was never applied:
> > 
> > https://patchwork.ozlabs.org/project/linux-um/patch/20200604133752.397dedea0758.I7a24aaa26794eb3fa432003c1bf55cbb816489e2@changeid/
> 
> Eeeh... with all the kernel debugging infrastructure *disabled*,

but I didn't have it disabled, I had lockdep enabled, and lockdep was
complaining (now granted, I was still on 5.8 for that patch):

=============================
[ BUG: Invalid wait context ]
5.8.0-00006-gef4b340c886a #23 Not tainted
-----------------------------
swapper/1 is trying to lock:
000000006e54c160 (&mm->mmap_lock/1){....}-{3:3}, at: begin_new_exec+0x6c5/0xb26
other info that might help us debug this:
context-{4:4}
3 locks held by swapper/1:
 #0: 00000000705f4548 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: __do_execve_file+0x12c/0x7ea
 #1: 00000000705f45e0 (&sig->exec_update_mutex){+.+.}-{3:3}, at: begin_new_exec+0x5db/0xb26
 #2: 00000000705e05a8 (&p->alloc_lock){+.+.}-{2:2}, at: begin_new_exec+0x66b/0xb26
stack backtrace:
CPU: 0 PID: 1 Comm: swapper Not tainted 5.8.0-00006-gef4b340c886a #23
Stack:
 6057fa2d 705e0760 705ebbb0 00000133
 6008d289 705e0760 705e0040 00000003
 705ebbc0 6028e02f 705ebc50 60080b29
Call Trace:
 [<6008d289>] ? printk+0x0/0x94
 [<60024a1a>] show_stack+0x153/0x174
 [<6008d289>] ? printk+0x0/0x94
 [<6028e02f>] dump_stack+0x34/0x36
 [<60080b29>] __lock_acquire+0x515/0x15f5
 [<6007c593>] ? hlock_class+0x0/0xa1
 [<6007fd90>] lock_acquire+0x347/0x42d
 [<6013def5>] ? begin_new_exec+0x6c5/0xb26
 [<60039b51>] ? set_signals+0x29/0x3f
 [<600835c1>] ? lock_acquired+0x310/0x320
 [<6013b5ce>] ? would_dump+0x0/0x8a
 [<600798fd>] down_write_nested+0x2f/0x83
 [<6013def5>] ? begin_new_exec+0x6c5/0xb26
 [<600798ce>] ? down_write_nested+0x0/0x83
 [<6013def5>] begin_new_exec+0x6c5/0xb26
 [<6019593b>] ? load_elf_phdrs+0x6f/0x9d
 [<60298d55>] ? memcmp+0x0/0x20
 [<60196612>] load_elf_binary+0x2cb/0xc49
 [...]

but it really looks just about the same on v5.9-rc8.

> > I guess your patches will also fix the lockdep complaints in UML in this
> > area, I hope I'll be able to test it soon.
> 
> That would be a nice side effect. :)

It does indeed fix it :)

johannes

Patch

diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
index 17ddd4edf875..c13bc5150607 100644
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -48,9 +48,8 @@  static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
 	 * when the new ->mm is used for the first time.
 	 */
 	__switch_mm(&new->context.id);
-	mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
+	mmap_assert_write_locked(new);
 	uml_setup_stubs(new);
-	mmap_write_unlock(new);
 }
 
 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, 
diff --git a/fs/exec.c b/fs/exec.c
index a91003e28eaa..229dbc7aa61a 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1114,6 +1114,8 @@  static int exec_mmap(struct mm_struct *mm)
 	if (ret)
 		return ret;
 
+	mmap_write_lock_nascent(mm);
+
 	if (old_mm) {
 		/*
 		 * Make sure that if there is a core dump in progress
@@ -1125,6 +1127,7 @@  static int exec_mmap(struct mm_struct *mm)
 		if (unlikely(old_mm->core_state)) {
 			mmap_read_unlock(old_mm);
 			mutex_unlock(&tsk->signal->exec_update_mutex);
+			mmap_write_unlock(mm);
 			return -EINTR;
 		}
 	}
@@ -1138,6 +1141,7 @@  static int exec_mmap(struct mm_struct *mm)
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
 	task_unlock(tsk);
+	mmap_write_unlock(mm);
 	if (old_mm) {
 		mmap_read_unlock(old_mm);
 		BUG_ON(active_mm != old_mm);
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 0707671851a8..24de1fe99ee4 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -3,6 +3,18 @@ 
 
 #include <linux/mmdebug.h>
 
+/*
+ * Lock subclasses for the mmap_lock.
+ *
+ * MMAP_LOCK_SUBCLASS_NASCENT is for core kernel code that wants to lock an mm
+ * that is still being constructed and wants to be able to access the active mm
+ * normally at the same time. It nests outside MMAP_LOCK_SUBCLASS_NORMAL.
+ */
+enum {
+	MMAP_LOCK_SUBCLASS_NORMAL = 0,
+	MMAP_LOCK_SUBCLASS_NASCENT
+};
+
 #define MMAP_LOCK_INITIALIZER(name) \
 	.mmap_lock = __RWSEM_INITIALIZER((name).mmap_lock),
 
@@ -16,9 +28,16 @@  static inline void mmap_write_lock(struct mm_struct *mm)
 	down_write(&mm->mmap_lock);
 }
 
-static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
+/*
+ * Lock an mm_struct that is still being set up (during fork or exec).
+ * This nests outside the mmap locks of live mm_struct instances.
+ * No interruptible/killable versions exist because at the points where you're
+ * supposed to use this helper, the mm isn't visible to anything else, so we
+ * expect the mmap_lock to be uncontended.
+ */
+static inline void mmap_write_lock_nascent(struct mm_struct *mm)
 {
-	down_write_nested(&mm->mmap_lock, subclass);
+	down_write_nested(&mm->mmap_lock, MMAP_LOCK_SUBCLASS_NASCENT);
 }
 
 static inline int mmap_write_lock_killable(struct mm_struct *mm)
diff --git a/kernel/fork.c b/kernel/fork.c
index da8d360fb032..db67eb4ac7bd 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -474,6 +474,7 @@  static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	unsigned long charge;
 	LIST_HEAD(uf);
 
+	mmap_write_lock_nascent(mm);
 	uprobe_start_dup_mmap();
 	if (mmap_write_lock_killable(oldmm)) {
 		retval = -EINTR;
@@ -481,10 +482,6 @@  static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	}
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
-	/*
-	 * Not linked in yet - no deadlock potential:
-	 */
-	mmap_write_lock_nested(mm, SINGLE_DEPTH_NESTING);
 
 	/* No ordering required: file already has been exposed. */
 	RCU_INIT_POINTER(mm->exe_file, get_mm_exe_file(oldmm));
@@ -600,12 +597,12 @@  static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	/* a new mm has just been created */
 	retval = arch_dup_mmap(oldmm, mm);
 out:
-	mmap_write_unlock(mm);
 	flush_tlb_mm(oldmm);
 	mmap_write_unlock(oldmm);
 	dup_userfaultfd_complete(&uf);
 fail_uprobe_end:
 	uprobe_end_dup_mmap();
+	mmap_write_unlock(mm);
 	return retval;
 fail_nomem_anon_vma_fork:
 	mpol_put(vma_policy(tmp));