From patchwork Fri Mar 23 19:16:14 2018
Subject: [REVIEW][PATCH 11/11] ipc/sem: Fix semctl(..., GETPID, ...) between pid namespaces
From: "Eric W. Biederman" <ebiederm@xmission.com>
X-Patchwork-Id: 10305471
Message-Id: <20180323191614.32489-11-ebiederm@xmission.com>
To: Linux Containers
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, khlebnikov@yandex-team.ru, prakash.sangappa@oracle.com, luto@kernel.org, akpm@linux-foundation.org, oleg@redhat.com, serge.hallyn@ubuntu.com, esyr@redhat.com, jannh@google.com, linux-security-module@vger.kernel.org, Pavel Emelyanov, Nagarathnam Muthusamy, "Eric W. Biederman" <ebiederm@xmission.com>
Date: Fri, 23 Mar 2018 14:16:14 -0500
In-Reply-To: <87vadmobdw.fsf_-_@xmission.com>
References: <87vadmobdw.fsf_-_@xmission.com>

Today the last process to update a semaphore is remembered and reported in the pid namespace of that process. If a process in any other pid namespace queries that process id with GETPID, the result is unusable nonsense: the returned number does not correspond to any process in the caller's pid namespace.
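To make the failure mode concrete, here is a minimal userspace sketch (an illustration only, not part of the patch; the key 0x1234, the permissions, and the error handling are made up for this example) that performs the query in question:

/* getpid_demo.c: ask which process last updated semaphore 0 of a set. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
	/* 0x1234 is a made-up key for this illustration only. */
	int semid = semget(0x1234, 1, IPC_CREAT | 0600);
	if (semid < 0) {
		perror("semget");
		return 1;
	}

	/*
	 * Before this patch the kernel returns the last updater's pid as
	 * seen in the *updater's* pid namespace; if this process lives in
	 * a different pid namespace the number is meaningless here.
	 */
	int last = semctl(semid, 0, GETPID);
	if (last < 0) {
		perror("semctl(GETPID)");
		return 1;
	}
	printf("last updater of semaphore 0: %d\n", last);
	return 0;
}

Run from a pid namespace other than the last updater's, the value printed before this patch matches no process visible to the caller.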
Due to ipc_update_pid I don't think you will be able to get System V ipc semaphores into a troublesome cache line ping-pong. Using struct pids from separate processes is not a problem because they do not share a cache line. Using struct pid from different threads of the same process is unlikely to be a problem as the reference count update can be avoided. (An illustrative sketch of the ipc_update_pid helper assumed here follows the patch below.)

Further, Linux futexes are a much better tool for the job of mutual exclusion between processes than System V semaphores, so I expect programs that are performance limited by their interprocess mutual exclusion primitive will be using futexes. So while it is possible that enhancing the storage of the last process of a System V semaphore from an integer to a struct pid will cause a performance regression because of the effect of frequently updating the pid reference count, I don't expect that to happen in practice.

This change updates semctl(..., GETPID, ...) to return the process id of the last process to update a semaphore in the pid namespace of the calling process.

Fixes: b488893a390e ("pid namespaces: changes to show virtual ids to user")
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
---
 ipc/sem.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index d661c491b0a5..47b263960524 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -98,7 +98,7 @@ struct sem {
 	 *  - semctl, via SETVAL and SETALL.
 	 *  - at task exit when performing undo adjustments (see exit_sem).
 	 */
-	int	sempid;
+	struct pid *sempid;
 	spinlock_t	lock;	/* spinlock for fine-grained semtimedop */
 	struct list_head pending_alter; /* pending single-sop operations */
 					/* that alter the semaphore */
@@ -128,7 +128,7 @@ struct sem_queue {
 	struct list_head	list;	 /* queue of pending operations */
 	struct task_struct	*sleeper; /* this process */
 	struct sem_undo		*undo;	 /* undo structure */
-	int			pid;	 /* process id of requesting process */
+	struct pid		*pid;	 /* process id of requesting process */
 	int			status;	 /* completion status of operation */
 	struct sembuf		*sops;	 /* array of pending operations */
 	struct sembuf		*blocking; /* the operation that blocked */
@@ -628,7 +628,8 @@ SYSCALL_DEFINE3(semget, key_t, key, int, nsems, int, semflg)
  */
 static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 {
-	int result, sem_op, nsops, pid;
+	int result, sem_op, nsops;
+	struct pid *pid;
 	struct sembuf *sop;
 	struct sem *curr;
 	struct sembuf *sops;
@@ -666,7 +667,7 @@ static int perform_atomic_semop_slow(struct sem_array *sma, struct sem_queue *q)
 	sop--;
 	pid = q->pid;
 	while (sop >= sops) {
-		sma->sems[sop->sem_num].sempid = pid;
+		ipc_update_pid(&sma->sems[sop->sem_num].sempid, pid);
 		sop--;
 	}
 
@@ -753,7 +754,7 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
 			un->semadj[sop->sem_num] = undo;
 		}
 		curr->semval += sem_op;
-		curr->sempid = q->pid;
+		ipc_update_pid(&curr->sempid, q->pid);
 	}
 
 	return 0;
@@ -1160,6 +1161,7 @@ static void freeary(struct ipc_namespace *ns, struct kern_ipc_perm *ipcp)
 			unlink_queue(sma, q);
 			wake_up_sem_queue_prepare(q, -EIDRM, &wake_q);
 		}
+		ipc_update_pid(&sem->sempid, NULL);
 	}
 
 	/* Remove the semaphore set from the IDR */
@@ -1352,7 +1354,7 @@ static int semctl_setval(struct ipc_namespace *ns, int semid, int semnum,
 		un->semadj[semnum] = 0;
 
 	curr->semval = val;
-	curr->sempid = task_tgid_vnr(current);
+	ipc_update_pid(&curr->sempid, task_tgid(current));
 	sma->sem_ctime = ktime_get_real_seconds();
 	/* maybe some queued-up processes were waiting for this */
 	do_smart_update(sma, NULL, 0, 0, &wake_q);
@@ -1473,7 +1475,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 
 		for (i = 0; i < nsems; i++) {
 			sma->sems[i].semval = sem_io[i];
-			sma->sems[i].sempid = task_tgid_vnr(current);
+			ipc_update_pid(&sma->sems[i].sempid, task_tgid(current));
 		}
 
 		ipc_assert_locked_object(&sma->sem_perm);
@@ -1505,7 +1507,7 @@ static int semctl_main(struct ipc_namespace *ns, int semid, int semnum,
 		err = curr->semval;
 		goto out_unlock;
 	case GETPID:
-		err = curr->sempid;
+		err = pid_vnr(curr->sempid);
 		goto out_unlock;
 	case GETNCNT:
 		err = count_semcnt(sma, semnum, 0);
@@ -2024,7 +2026,7 @@ static long do_semtimedop(int semid, struct sembuf __user *tsops,
 	queue.sops = sops;
 	queue.nsops = nsops;
 	queue.undo = un;
-	queue.pid = task_tgid_vnr(current);
+	queue.pid = task_tgid(current);
 	queue.alter = alter;
 	queue.dupsop = dupsop;
 
@@ -2318,7 +2320,7 @@ void exit_sem(struct task_struct *tsk)
 				semaphore->semval = 0;
 			if (semaphore->semval > SEMVMX)
 				semaphore->semval = SEMVMX;
-			semaphore->sempid = task_tgid_vnr(current);
+			ipc_update_pid(&semaphore->sempid, task_tgid(current));
 		}
 	}
 	/* maybe some queued-up processes were waiting for this */
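For readers following the series: the patch relies on the ipc_update_pid() helper introduced by an earlier patch in this series in ipc/util.h. As a rough, non-authoritative sketch of the assumed behaviour, it only touches the pid reference counts when the stored pid actually changes, which is why repeated updates by the same process avoid the reference count traffic discussed in the commit message above:

/*
 * Sketch of the helper assumed above; see the earlier patch in this
 * series for the real definition in ipc/util.h.  get_pid()/put_pid()
 * come from <linux/pid.h>.
 */
static inline void ipc_update_pid(struct pid **pos, struct pid *pid)
{
	struct pid *old = *pos;

	/* Only adjust reference counts when the stored pid changes. */
	if (pid != old) {
		*pos = get_pid(pid);
		put_pid(old);
	}
}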