
[PATCH-next] ipc: Fix race condition in ipc_idr_alloc()

Message ID 20190311012536.21747-1-longman@redhat.com (mailing list archive)
State New, archived
Series [PATCH-next] ipc: Fix race condition in ipc_idr_alloc()

Commit Message

Waiman Long March 11, 2019, 1:25 a.m. UTC
In ipc_idr_alloc(), the sequence number of the kern_ipc_perm object
used to be set before calling idr_alloc(). That was changed recently
in order to conserve the sequence number space. This opens up a race
condition: another thread may call ipc_obtain_object_check() with a
recently deleted IPC id concurrently with the allocation. If
idr_alloc() happens to hand out that deleted index value, the other
thread will incorrectly get a handle to the new IPC id.

However, whether seq needs to be incremented is not known until after
the index value has been allocated and compared with the previously
allocated one. To resolve this dilemma, always store a new sequence
number in the kern_ipc_perm object before calling idr_alloc(). If it
turns out that the sequence number doesn't need to change, write the
right value back afterward. This ensures that a concurrent
ipc_obtain_object_check() will not incorrectly match a deleted IPC id
to a new one.
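
To make the race concrete, here is an illustrative sketch of the check
that goes wrong. The helper names and SEQ_SHIFT are hypothetical
stand-ins for the kernel's real id-layout macros and ipc_checkid()
logic, not code from ipc/util.h:

#define SEQ_SHIFT	15	/* assumed idx width for the example */

int make_ipc_id(int idx, int seq)
{
	return (seq << SEQ_SHIFT) | idx;
}

/* A handle is valid only if its embedded seq matches the object's seq. */
int handle_still_valid(int id, int object_seq)
{
	return (id >> SEQ_SHIFT) == object_seq;
}

/*
 * Race: an object at idx 42 with seq 7 is deleted while some task still
 * holds its handle A = make_ipc_id(42, 7).  A new object then reuses
 * idx 42.  If the new object becomes visible with seq still 7,
 * handle_still_valid(A, 7) is true and the stale handle resolves to the
 * new, unrelated object.  Publishing it with seq already bumped to 8
 * makes the check fail instead, which is what this patch enforces.
 */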

Reported-by: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 ipc/util.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

Comments

Matthew Wilcox March 11, 2019, 3:35 a.m. UTC | #1
On Sun, Mar 10, 2019 at 09:25:36PM -0400, Waiman Long wrote:
> @@ -221,15 +221,34 @@ static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
>  	 */
>  
>  	if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
> +		/*
> +		 * It is possible that another thread of the same
> +		 * kern_ipc_perm may have called ipc_obtain_object_check()
> +		 * concurrently with a recently deleted IPC id (idx|seq).
> +		 * If idr_alloc() happens to allocate this deleted idx value,
> +		 * the other thread may incorrectly get a handle to the new
> +		 * IPC id.
> +		 *
> +		 * To prevent this race condition from happening, we will
> +		 * always store a new sequence number into the kern_ipc_perm
> +		 * object before calling idr_alloc(). If we find out that we
> +		 * don't need to change seq, we write back the right value.
> +		 */
> +		new->seq = ids->seq + 1;
> +		if (new->seq > IPCID_SEQ_MAX)
> +			new->seq = 0;
> +
>  		if (ipc_mni_extended)
>  			idx = idr_alloc_cyclic(&ids->ipcs_idr, new, 0, ipc_mni,
>  						GFP_NOWAIT);
>  		else
>  			idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
>  
> -		if ((idx <= ids->last_idx) && (++ids->seq > IPCID_SEQ_MAX))
> -			ids->seq = 0;
> -		new->seq = ids->seq;
> +		/* Make ids->seq and new->seq stay in sync */
> +		if (idx <= ids->last_idx)
> +			ids->seq = new->seq;
> +		else
> +			new->seq = ids->seq;

This can't possibly be right.  It's no better to occasionally find the
wrong ID than to find an uninitialised ID.

The normal pattern for solving this kind of problem is to idr_alloc()
a NULL pointer, initialise new->seq, then call idr_replace() to turn
that NULL pointer into the actual pointer you want.
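
As a concrete reference, a rough sketch of that pattern applied to the
hunk quoted above (illustrative only, not the posted patch; it assumes
that lookups treat a NULL idr slot as a missing entry, which is how an
idr_find()-based lookup behaves):

	if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
		/* Reserve an index only; concurrent lookups see NULL. */
		if (ipc_mni_extended)
			idx = idr_alloc_cyclic(&ids->ipcs_idr, NULL, 0,
					       ipc_mni, GFP_NOWAIT);
		else
			idx = idr_alloc(&ids->ipcs_idr, NULL, 0, 0,
					GFP_NOWAIT);

		if (idx >= 0) {
			/* seq is final before the object becomes visible. */
			if ((idx <= ids->last_idx) &&
			    (++ids->seq > IPCID_SEQ_MAX))
				ids->seq = 0;
			new->seq = ids->seq;
			ids->last_idx = idx;

			/* Publish the fully initialised object. */
			idr_replace(&ids->ipcs_idr, new, idx);
		}
	}

The extra idr_replace() call is the overhead concern Waiman raises in
his reply below.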
Waiman Long March 11, 2019, 2:14 p.m. UTC | #2
On 03/10/2019 11:35 PM, Matthew Wilcox wrote:
> On Sun, Mar 10, 2019 at 09:25:36PM -0400, Waiman Long wrote:
>> @@ -221,15 +221,34 @@ static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
>>  	 */
>>  
>>  	if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
>> +		/*
>> +		 * It is possible that another thread of the same
>> +		 * kern_ipc_perm may have called ipc_obtain_object_check()
>> +		 * concurrently with a recently deleted IPC id (idx|seq).
>> +		 * If idr_alloc() happens to allocate this deleted idx value,
>> +		 * the other thread may incorrectly get a handle to the new
>> +		 * IPC id.
>> +		 *
>> +		 * To prevent this race condition from happening, we will
>> +		 * always store a new sequence number into the kern_ipc_perm
>> +		 * object before calling idr_alloc(). If we find out that we
>> +		 * don't need to change seq, we write back the right value.
>> +		 */
>> +		new->seq = ids->seq + 1;
>> +		if (new->seq > IPCID_SEQ_MAX)
>> +			new->seq = 0;
>> +
>>  		if (ipc_mni_extended)
>>  			idx = idr_alloc_cyclic(&ids->ipcs_idr, new, 0, ipc_mni,
>>  						GFP_NOWAIT);
>>  		else
>>  			idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
>>  
>> -		if ((idx <= ids->last_idx) && (++ids->seq > IPCID_SEQ_MAX))
>> -			ids->seq = 0;
>> -		new->seq = ids->seq;
>> +		/* Make ids->seq and new->seq stay in sync */
>> +		if (idx <= ids->last_idx)
>> +			ids->seq = new->seq;
>> +		else
>> +			new->seq = ids->seq;
> This can't possibly be right.  It's no better to occasionally find the
> wrong ID than to find an uninitialised ID.

The kern_ipc_perm object isn't uninitialized. The seq value, however, is
tentative rather than final.

> The normal pattern for solving this kind of problem is to idr_alloc()
> a NULL pointer, initialise new->seq, then call idr_replace() to turn
> that NULL pointer into the actual pointer you want.

That will work too, I think. My only concern is that it will slow down
the ipc_idr_alloc() path, as each idr_*() call can be pretty expensive.
Even though this is in the slow path, we still don't want to introduce
unnecessary overhead.

Actually, it is no different from what ipc_idr_alloc() used to do. The
seq value in kern_ipc_perm was updated every time ipc_idr_alloc() was
called, so an older IPC id would fail ipc_checkid(). This patch
guarantees that this still happens. The new IPC id won't be returned
until the kern_ipc_perm object is correctly set up, so there is no
danger of the new IPC id failing the test. I will update my patch to
better explain the rationale for this change.

Cheers,
Longman
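
Restating that last argument as a small self-contained example
(simplified, hypothetical helpers as in the sketch after the commit
message above, not kernel code):

#include <assert.h>

#define SEQ_SHIFT	15	/* assumed idx width for the example */

static int make_ipc_id(int idx, int seq)
{
	return (seq << SEQ_SHIFT) | idx;
}

static int id_seq_matches(int id, int seq)
{
	return (id >> SEQ_SHIFT) == seq;
}

int main(void)
{
	/* Handle to an object that was deleted at idx 42, seq 7. */
	int stale_handle = make_ipc_id(42, 7);

	/*
	 * With the patch applied, a reused idx becomes visible with seq
	 * already bumped (7 -> 8), so the stale handle fails the seq
	 * check even in the window before the final seq value is
	 * written back.
	 */
	assert(!id_seq_matches(stale_handle, 8));
	return 0;
}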

Patch

diff --git a/ipc/util.c b/ipc/util.c
index 78e14acb51a7..6cb4129a2649 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -221,15 +221,34 @@  static inline int ipc_idr_alloc(struct ipc_ids *ids, struct kern_ipc_perm *new)
 	 */
 
 	if (next_id < 0) { /* !CHECKPOINT_RESTORE or next_id is unset */
+		/*
+		 * It is possible that another thread of the same
+		 * kern_ipc_perm may have called ipc_obtain_object_check()
+		 * concurrently with a recently deleted IPC id (idx|seq).
+		 * If idr_alloc() happens to allocate this deleted idx value,
+		 * the other thread may incorrectly get a handle to the new
+		 * IPC id.
+		 *
+		 * To prevent this race condition from happening, we will
+		 * always store a new sequence number into the kern_ipc_perm
+		 * object before calling idr_alloc(). If we find out that we
+		 * don't need to change seq, we write back the right value.
+		 */
+		new->seq = ids->seq + 1;
+		if (new->seq > IPCID_SEQ_MAX)
+			new->seq = 0;
+
 		if (ipc_mni_extended)
 			idx = idr_alloc_cyclic(&ids->ipcs_idr, new, 0, ipc_mni,
 						GFP_NOWAIT);
 		else
 			idx = idr_alloc(&ids->ipcs_idr, new, 0, 0, GFP_NOWAIT);
 
-		if ((idx <= ids->last_idx) && (++ids->seq > IPCID_SEQ_MAX))
-			ids->seq = 0;
-		new->seq = ids->seq;
+		/* Make ids->seq and new->seq stay in sync */
+		if (idx <= ids->last_idx)
+			ids->seq = new->seq;
+		else
+			new->seq = ids->seq;
 		ids->last_idx = idx;
 	} else {
 		new->seq = ipcid_to_seqx(next_id);