
[v5] x86/sgx: Fix use-after-free in sgx_mmu_notifier_release()

Message ID 20210128125823.18660-1-jarkko@kernel.org (mailing list archive)
State New, archived
Series [v5] x86/sgx: Fix use-after-free in sgx_mmu_notifier_release()

Commit Message

Jarkko Sakkinen Jan. 28, 2021, 12:58 p.m. UTC
The most trivial example of a race condition can be demonstrated by this
sequence where mm_list contains just one entry:

CPU A                           CPU B
-> sgx_release()
                                -> sgx_mmu_notifier_release()
                                -> list_del_rcu()
                                <- list_del_rcu()
-> kref_put()
-> sgx_encl_release()
                                -> synchronize_srcu()
-> cleanup_srcu_struct()

A sequence similar to this has also been spotted in tests under high
stress:

[  +0.000008] WARNING: CPU: 3 PID: 7620 at kernel/rcu/srcutree.c:374 cleanup_srcu_struct+0xed/0x100

Albeit not spotted in the tests, it's also entirely possible that the
following scenario could happen:

CPU A                           CPU B
-> sgx_release()
                                -> sgx_mmu_notifier_release()
                                -> list_del_rcu()
-> kref_put()
-> sgx_encl_release()
-> cleanup_srcu_struct()
<- cleanup_srcu_struct()
                                -> synchronize_srcu()

This scenario would lead to a use-after-free in cleanup_srcu_struct().

Fix this by taking a reference to the enclave in
sgx_mmu_notifier_release().

Cc: stable@vger.kernel.org
Fixes: 1728ab54b4be ("x86/sgx: Add a page reclaimer")
Suggested-by: Sean Christopherson <seanjc@google.com>
Reported-by: Haitao Huang <haitao.huang@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
---
v5:
- To make sure that the instance does not get deleted, use kref_get()
  and kref_put(). This also removes the need for an additional
  synchronize_srcu().
v4:
- Rewrite the commit message.
- Just change the call order. *_expedited() is out of scope for this
  bug fix.
v3: Fine-tuned tags, and added missing change log for v2.
v2: Switch to synchronize_srcu_expedited().
 arch/x86/kernel/cpu/sgx/encl.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Dave Hansen Jan. 28, 2021, 4:33 p.m. UTC | #1
On 1/28/21 4:58 AM, Jarkko Sakkinen wrote:
> The most trivial example of a race condition can be demonstrated by this
> sequence where mm_list contains just one entry:
> 
> CPU A                           CPU B
> -> sgx_release()
>                                 -> sgx_mmu_notifier_release()
>                                 -> list_del_rcu()
>                                 <- list_del_rcu()
> -> kref_put()
> -> sgx_encl_release()
>                                 -> synchronize_srcu()
> -> cleanup_srcu_struct()

This is missing some key details, including a clear, unambiguous problem
statement.  To me, the patch should concentrate on the SRCU warning
since that's where we started.  Here's the detail that needs to be added
about the issue and the locking in general in this path:

sgx_release() also does this:

	mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm);

which does another synchronize_srcu() on the mmu_notifier's srcu_struct.
 *But*, it only does this if its own list_del_rcu() is successful.  It
does all of this before the kref_put().

In other words, sgx_release() can *only* get to this buggy path if
sgx_mmu_notifier_release() races with sgx_release() and does a
list_del_rcu() first.
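
To make that concrete, the teardown loop in sgx_release() looks roughly
like this (a simplified sketch from memory, so details may not match the
tree this patch applies to exactly):

	static int sgx_release(struct inode *inode, struct file *file)
	{
		struct sgx_encl *encl = file->private_data;
		struct sgx_encl_mm *encl_mm;

		for ( ; ; ) {
			spin_lock(&encl->mm_lock);

			/*
			 * If sgx_mmu_notifier_release() already did its
			 * list_del_rcu(), the entry is gone and the
			 * unregister/synchronize below never runs for it.
			 */
			if (list_empty(&encl->mm_list)) {
				encl_mm = NULL;
			} else {
				encl_mm = list_first_entry(&encl->mm_list,
						struct sgx_encl_mm, list);
				list_del_rcu(&encl_mm->list);
			}

			spin_unlock(&encl->mm_lock);

			/* The enclave is no longer mapped by any mm. */
			if (!encl_mm)
				break;

			/* Only reached for entries this loop removed. */
			synchronize_srcu(&encl->srcu);
			mmu_notifier_unregister(&encl_mm->mmu_notifier,
						encl_mm->mm);
			kfree(encl_mm);
		}

		kref_put(&encl->refcount, sgx_encl_release);
		return 0;
	}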

The key to this patch is that sgx_mmu_notifier_release() will now
take an 'encl' reference in that case, which prevents kref_put() from
calling sgx_encl_release(), which cleans up and frees 'encl'.

I was actually also hoping to see some better comments about the new
refcount, and the locking in general.  There are *TWO* srcu_structs in
play, plus a spinlock and a refcount.  It took me several days, with Sean's
and your help, to identify the actual path and get a proper fix (versions 1-4
did *not* fix the race).

Also, the use-after-free is *fixed* in sgx_mmu_notifier_release() but
does not *occur* in sgx_mmu_notifier_release().  The subject here is a
bit misleading in that regard.
Jarkko Sakkinen Jan. 30, 2021, 7:20 p.m. UTC | #2
On Thu, 2021-01-28 at 08:33 -0800, Dave Hansen wrote:
> On 1/28/21 4:58 AM, Jarkko Sakkinen wrote:
> > The most trivial example of a race condition can be demonstrated by this
> > sequence where mm_list contains just one entry:
> > 
> > CPU A                           CPU B
> > -> sgx_release()
> >                                 -> sgx_mmu_notifier_release()
> >                                 -> list_del_rcu()
> >                                 <- list_del_rcu()
> > -> kref_put()
> > -> sgx_encl_release()
> >                                 -> synchronize_srcu()
> > -> cleanup_srcu_struct()
> 
> This is missing some key details including a clear, unambiguous, problem
> statement.  To me, the patch should concentrate on the SRCU warning
> since that's where we started.  Here's the detail that needs to be added
> about the issue and the locking in general in this path:
> 
> sgx_release() also does this:
> 
>         mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm);
> 
> which does another synchronize_srcu() on the mmu_notifier's srcu_struct.
>  *But*, it only does this if its own list_del_rcu() is successful.  It
> does all of this before the kref_put().
> 
> In other words, sgx_release() can *only* get to this buggy path if
> sgx_mmu_notifier_release() races with sgx_release and does a
> list_del_rcu() first.
> 
> The key to this patch is that the sgx_mmu_notifier_release() will now
> take an 'encl' reference in that case, which prevents kref_put() from
> calling sgx_release() which cleans up and frees 'encl'.
> 
> I was actually also hoping to see some better comments about the new
> refcount, and the locking in general.  There are *TWO* struct_srcu's in
> play, a spinlock and a refcount.  I took me several days with Sean and
> your help to identify the actual path and get a proper fix (versions 1-4
> did *not* fix the race).

This was really good input, thank you. It made me realize something, but
now I need a sanity check.

I think that this bug fix is *not* a legit one either :-)

An example scenario would be one where all removals "side-channel"
through the notifier callback. Then mmu_notifier_unregister() gets
called exactly zero times, and no MMU notifier SRCU sync happens at all.

NOTE: There's a bunch of other examples, I'm just giving one.

How I think this should be actually fixed is:

1. Whenever the MMU notifier is *registered*, kref_get() should be called
   for the enclave reference count.
2. *BOTH* sgx_release() and sgx_mmu_notifier_release() should
   decrease the refcount when they process an entry.
   
I.e. the fix that I sent does kref_get() in the wrong location. Please
sanity check my conclusion; a rough sketch of what I mean is below.
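
Something like this (completely untested sketch, just to show where the
get/put pair would go, not an actual patch):

	/* sgx_encl_mm_add(): pin the enclave for the new mm_list entry. */
	kref_get(&encl->refcount);

	spin_lock(&encl->mm_lock);
	list_add_rcu(&encl_mm->list, &encl->mm_list);
	spin_unlock(&encl->mm_lock);

	/*
	 * Whichever of sgx_release() or sgx_mmu_notifier_release() ends up
	 * removing the entry from mm_list drops that reference, but only
	 * after it is done with encl->srcu:
	 */
	synchronize_srcu(&encl->srcu);
	kref_put(&encl->refcount, sgx_encl_release);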
 
> Also, the use-after-free is *fixed* in sgx_mmu_notifier_release() but
> does not *occur* in sgx_mmu_notifier_release().  The subject here is a
> bit misleading in that regard.

Right, this is a valid point. It's incorrect. So what if I just change
the short summary by substituting sgx_mmu_notifier_release() with
sgx_release()?

/Jarkko
Jarkko Sakkinen Jan. 30, 2021, 7:26 p.m. UTC | #3
On Sat, 2021-01-30 at 21:20 +0200, Jarkko Sakkinen wrote:
> On Thu, 2021-01-28 at 08:33 -0800, Dave Hansen wrote:
> > On 1/28/21 4:58 AM, Jarkko Sakkinen wrote:
> > > The most trivial example of a race condition can be demonstrated by this
> > > sequence where mm_list contains just one entry:
> > > 
> > > CPU A                           CPU B
> > > -> sgx_release()
> > >                                 -> sgx_mmu_notifier_release()
> > >                                 -> list_del_rcu()
> > >                                 <- list_del_rcu()
> > > -> kref_put()
> > > -> sgx_encl_release()
> > >                                 -> synchronize_srcu()
> > > -> cleanup_srcu_struct()
> > 
> > This is missing some key details including a clear, unambiguous, problem
> > statement.  To me, the patch should concentrate on the SRCU warning
> > since that's where we started.  Here's the detail that needs to be added
> > about the issue and the locking in general in this path:
> > 
> > sgx_release() also does this:
> > 
> >         mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm);
> > 
> > which does another synchronize_srcu() on the mmu_notifier's srcu_struct.
> >  *But*, it only does this if its own list_del_rcu() is successful.  It
> > does all of this before the kref_put().
> > 
> > In other words, sgx_release() can *only* get to this buggy path if
> > sgx_mmu_notifier_release() races with sgx_release and does a
> > list_del_rcu() first.
> > 
> > The key to this patch is that the sgx_mmu_notifier_release() will now
> > take an 'encl' reference in that case, which prevents kref_put() from
> > calling sgx_release() which cleans up and frees 'encl'.
> > 
> > I was actually also hoping to see some better comments about the new
> > refcount, and the locking in general.  There are *TWO* struct_srcu's in
> > play, a spinlock and a refcount.  I took me several days with Sean and
> > your help to identify the actual path and get a proper fix (versions 1-4
> > did *not* fix the race).
> 
> This was really good input, thank you. It made realize something but
> now I need a sanity check.
> 
> I think that this bug fix is *neither* a legit one :-)
> 
> Example scenario would such that all removals "side-channel" through
> the notifier callback. Then mmu_notifier_unregister() gets called
> exactly zero times. No MMU notifier srcu sync would be then happening.
> 
> NOTE: There's bunch of other examples, I'm just giving one.
> 
> How I think this should be actually fixed is:
> 
> 1. Whenever MMU notifier is *registered* kref_get() should be called for
>    the enclave reference count.
> 2. *BOTH* sgx_release() and sgx_mmu_notifier_release() should
>    decrease the refcount when they process an entry.
>    
> I.e. the fix that I sent does kref_get() in wrong location. Please
> sanity check my conclusion. 
>  
> > Also, the use-after-free is *fixed* in sgx_mmu_notifier_release() but
> > does not *occur* in sgx_mmu_notifier_release().  The subject here is a
> > bit misleading in that regard.
> 
> Right, this is a valid point. It's incorrect. So if I just change the
> short summary by substituting sgx_mmu_notifier_release() with
> sgx_release()?

I.e. the refcount should be increased in sgx_encl_mm_add(). That way the
whole thing should be somewhat stable.

/Jarkko
Dave Hansen Feb. 3, 2021, 3:46 p.m. UTC | #4
On 1/30/21 11:20 AM, Jarkko Sakkinen wrote:
...
> Example scenario would such that all removals "side-channel" through
> the notifier callback. Then mmu_notifier_unregister() gets called
> exactly zero times. No MMU notifier srcu sync would be then happening.
> 
> NOTE: There's bunch of other examples, I'm just giving one.

Could you flesh this out a bit?  I don't quite understand the scenario
from what you describe above.

In any case, I'm open to other implementations that fix the race we know
about.  If you think you have a better fix, I'm happy to review it and
make sure it closes the other race.
Jarkko Sakkinen Feb. 3, 2021, 9:54 p.m. UTC | #5
On Wed, Feb 03, 2021 at 07:46:48AM -0800, Dave Hansen wrote:
> On 1/30/21 11:20 AM, Jarkko Sakkinen wrote:
> ...
> > Example scenario would such that all removals "side-channel" through
> > the notifier callback. Then mmu_notifier_unregister() gets called
> > exactly zero times. No MMU notifier srcu sync would be then happening.
> > 
> > NOTE: There's bunch of other examples, I'm just giving one.
> 
> Could you flesh this out a bit?  I don't quite understand the scenario
> from what you describe above.
> 
> In any case, I'm open to other implementations that fix the race we know
> about.  If you think you have a better fix, I'm happy to review it and
> make sure it closes the other race.

I'll bake up a new patch. Generally speaking, I think the reason this has
been so difficult is a chicken-and-egg problem. The whole issue should be
sorted out when a new entry is first added to the mm_list, i.e. by
increasing the refcount for each added entry.

/Jarkko

Patch

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index ee50a5010277..5ecbcf94ec2a 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -465,6 +465,7 @@  static void sgx_mmu_notifier_release(struct mmu_notifier *mn,
 	spin_lock(&encl_mm->encl->mm_lock);
 	list_for_each_entry(tmp, &encl_mm->encl->mm_list, list) {
 		if (tmp == encl_mm) {
+			kref_get(&encl_mm->encl->refcount);
 			list_del_rcu(&encl_mm->list);
 			break;
 		}
@@ -474,6 +475,7 @@  static void sgx_mmu_notifier_release(struct mmu_notifier *mn,
 	if (tmp == encl_mm) {
 		synchronize_srcu(&encl_mm->encl->srcu);
 		mmu_notifier_put(mn);
+		kref_put(&encl_mm->encl->refcount, sgx_encl_release);
 	}
 }