From patchwork Thu Oct 15 00:00:40 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11838317
From: Jann Horn <jannh@google.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, "Eric W. Biederman", Michel Lespinasse,
    Mauro Carvalho Chehab, Sakari Ailus, Jeff Dike, Richard Weinberger,
    Anton Ivanov, linux-um@lists.infradead.org, Jason Gunthorpe,
    John Hubbard, Johannes Berg
Subject: [PATCH v3 1/2] mmap locking API: Order lock of nascent mm outside lock of live mm
Date: Thu, 15 Oct 2020 02:00:40 +0200
Message-Id: <20201015000041.1734214-2-jannh@google.com>
In-Reply-To: <20201015000041.1734214-1-jannh@google.com>
References: <20201015000041.1734214-1-jannh@google.com>

Until now, the mmap lock of the nascent mm was ordered inside the mmap
lock of the old mm (in dup_mmap() and in UML's activate_mm()).

A following patch will change the exec path to lock the nascent mm very
broadly, but fine-grained locking should still work on the old mm at
the same time. In particular, mmap locking calls are hidden behind
copy_from_user() and similar calls reached through functions like
copy_strings(): when a page fault occurs on a userspace memory access,
the mmap lock of the old mm is taken.

To do this in a way that lockdep is happy about, let's turn around the
lock ordering in both places that currently nest the locks. Since
SINGLE_DEPTH_NESTING is normally used for the inner nesting layer, make
up our own lock subclass MMAP_LOCK_SUBCLASS_NASCENT and use that
instead.

The added locking calls in exec_mmap() are temporary; the following
patch will move the locking out of exec_mmap().
As Johannes Berg pointed out[1][2], moving the mmap locking of
arch/um's activate_mm() up into the execve code also fixes an issue
that would have caused a scheduling-while-atomic bug (taking
mmap_write_lock_nested() while holding a spinlock) if UML had support
for voluntary preemption. (Even when a semaphore is uncontended,
locking it can still trigger rescheduling via might_sleep().)

[1] https://lore.kernel.org/linux-mm/115d17aa221b73a479e26ffee52899ddb18b1f53.camel@sipsolutions.net/
[2] https://lore.kernel.org/linux-mm/7b7d6954b74e109e653539d880173fa9cb5c5ddf.camel@sipsolutions.net/

Signed-off-by: Jann Horn <jannh@google.com>
---
 arch/um/include/asm/mmu_context.h |  3 +--
 fs/exec.c                         |  4 ++++
 include/linux/mmap_lock.h         | 23 +++++++++++++++++++++--
 kernel/fork.c                     |  7 ++-----
 4 files changed, 28 insertions(+), 9 deletions(-)
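To make the inverted ordering concrete, here is a minimal sketch of a
dup_mmap()-style caller after this change. This is illustrative only
(the function name copy_mm_sketch() is invented and error handling is
trimmed); it is not code from the patch:

/* Sketch: the nascent mm is locked first, the live mm nests inside. */
#include <linux/errno.h>
#include <linux/mm_types.h>
#include <linux/mmap_lock.h>

static int copy_mm_sketch(struct mm_struct *new_mm, struct mm_struct *old_mm)
{
	/* Outer lock: the nascent mm, via MMAP_LOCK_SUBCLASS_NASCENT. */
	mmap_write_lock_nascent(new_mm);

	/* Inner lock: the live mm, at MMAP_LOCK_SUBCLASS_NORMAL. */
	if (mmap_write_lock_killable(old_mm)) {
		mmap_write_unlock(new_mm);
		return -EINTR;
	}

	/* ... copy VMAs from old_mm into new_mm here ... */

	mmap_write_unlock(old_mm);
	mmap_write_unlock(new_mm);
	return 0;
}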
diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
index 17ddd4edf875..c13bc5150607 100644
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -48,9 +48,8 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new)
 	 * when the new ->mm is used for the first time.
 	 */
 	__switch_mm(&new->context.id);
-	mmap_write_lock_nested(new, SINGLE_DEPTH_NESTING);
+	mmap_assert_write_locked(new);
 	uml_setup_stubs(new);
-	mmap_write_unlock(new);
 }
 
 static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
diff --git a/fs/exec.c b/fs/exec.c
index a91003e28eaa..229dbc7aa61a 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1114,6 +1114,8 @@ static int exec_mmap(struct mm_struct *mm)
 	if (ret)
 		return ret;
 
+	mmap_write_lock_nascent(mm);
+
 	if (old_mm) {
 		/*
 		 * Make sure that if there is a core dump in progress
@@ -1125,6 +1127,7 @@ static int exec_mmap(struct mm_struct *mm)
 		if (unlikely(old_mm->core_state)) {
 			mmap_read_unlock(old_mm);
 			mutex_unlock(&tsk->signal->exec_update_mutex);
+			mmap_write_unlock(mm);
 			return -EINTR;
 		}
 	}
@@ -1138,6 +1141,7 @@ static int exec_mmap(struct mm_struct *mm)
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
 	task_unlock(tsk);
+	mmap_write_unlock(mm);
 	if (old_mm) {
 		mmap_read_unlock(old_mm);
 		BUG_ON(active_mm != old_mm);
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 0707671851a8..24de1fe99ee4 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -3,6 +3,18 @@
 
 #include <linux/mmdebug.h>
 
+/*
+ * Lock subclasses for the mmap_lock.
+ *
+ * MMAP_LOCK_SUBCLASS_NASCENT is for core kernel code that wants to lock an mm
+ * that is still being constructed and wants to be able to access the active mm
+ * normally at the same time. It nests outside MMAP_LOCK_SUBCLASS_NORMAL.
+ */
+enum {
+	MMAP_LOCK_SUBCLASS_NORMAL = 0,
+	MMAP_LOCK_SUBCLASS_NASCENT
+};
+
 #define MMAP_LOCK_INITIALIZER(name) \
 	.mmap_lock = __RWSEM_INITIALIZER((name).mmap_lock),
 
@@ -16,9 +28,16 @@ static inline void mmap_write_lock(struct mm_struct *mm)
 	down_write(&mm->mmap_lock);
 }
 
-static inline void mmap_write_lock_nested(struct mm_struct *mm, int subclass)
+/*
+ * Lock an mm_struct that is still being set up (during fork or exec).
+ * This nests outside the mmap locks of live mm_struct instances.
+ * No interruptible/killable versions exist because at the points where you're
+ * supposed to use this helper, the mm isn't visible to anything else, so we
+ * expect the mmap_lock to be uncontended.
+ */
+static inline void mmap_write_lock_nascent(struct mm_struct *mm)
 {
-	down_write_nested(&mm->mmap_lock, subclass);
+	down_write_nested(&mm->mmap_lock, MMAP_LOCK_SUBCLASS_NASCENT);
 }
 
 static inline int mmap_write_lock_killable(struct mm_struct *mm)
diff --git a/kernel/fork.c b/kernel/fork.c
index da8d360fb032..db67eb4ac7bd 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -474,6 +474,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	unsigned long charge;
 	LIST_HEAD(uf);
 
+	mmap_write_lock_nascent(mm);
 	uprobe_start_dup_mmap();
 	if (mmap_write_lock_killable(oldmm)) {
 		retval = -EINTR;
@@ -481,10 +482,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	}
 	flush_cache_dup_mm(oldmm);
 	uprobe_dup_mmap(oldmm, mm);
-	/*
-	 * Not linked in yet - no deadlock potential:
-	 */
-	mmap_write_lock_nested(mm, SINGLE_DEPTH_NESTING);
 
 	/* No ordering required: file already has been exposed. */
 	RCU_INIT_POINTER(mm->exe_file, get_mm_exe_file(oldmm));
@@ -600,12 +597,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 	/* a new mm has just been created */
 	retval = arch_dup_mmap(oldmm, mm);
 out:
-	mmap_write_unlock(mm);
 	flush_tlb_mm(oldmm);
 	mmap_write_unlock(oldmm);
 	dup_userfaultfd_complete(&uf);
 fail_uprobe_end:
 	uprobe_end_dup_mmap();
+	mmap_write_unlock(mm);
 	return retval;
 fail_nomem_anon_vma_fork:
 	mpol_put(vma_policy(tmp));
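The subclass trick above leans on lockdep's nesting annotations. As a
stand-alone illustration (a hypothetical demo module, not part of this
series): two rwsems initialized at the same call site share one lockdep
class, so holding both at once would normally be flagged as a possible
recursive deadlock; passing a nonzero subclass to down_write_nested()
declares a deliberate, consistently ordered nesting level, which is the
role MMAP_LOCK_SUBCLASS_NASCENT plays for the mmap_lock:

/* Hypothetical lockdep nesting demo; not from this series. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/rwsem.h>

static struct rw_semaphore sems[2];

static int __init nesting_demo_init(void)
{
	int i;

	/* One init_rwsem() call site => both rwsems share a lockdep class. */
	for (i = 0; i < 2; i++)
		init_rwsem(&sems[i]);

	/*
	 * Acquire the outer lock at subclass 1 (like the nascent mm), then
	 * the inner lock at the default subclass 0 (like the live mm).
	 * Without the annotation, lockdep would report a same-class
	 * recursion on the second down_write().
	 */
	down_write_nested(&sems[0], 1);
	down_write(&sems[1]);

	up_write(&sems[1]);
	up_write(&sems[0]);
	return 0;
}

static void __exit nesting_demo_exit(void)
{
}

module_init(nesting_demo_init);
module_exit(nesting_demo_exit);
MODULE_LICENSE("GPL");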
From patchwork Thu Oct 15 00:00:41 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11838319
From: Jann Horn <jannh@google.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, "Eric W. Biederman", Michel Lespinasse,
    Mauro Carvalho Chehab, Sakari Ailus, Jeff Dike, Richard Weinberger,
    Anton Ivanov, linux-um@lists.infradead.org, Jason Gunthorpe,
    John Hubbard, Johannes Berg
Subject: [PATCH v3 2/2] exec: Broadly lock nascent mm until setup_arg_pages()
Date: Thu, 15 Oct 2020 02:00:41 +0200
Message-Id: <20201015000041.1734214-3-jannh@google.com>
In-Reply-To: <20201015000041.1734214-1-jannh@google.com>
References: <20201015000041.1734214-1-jannh@google.com>

While, as far as I know, there is currently nothing that can modify the
VMA tree of a new mm until userspace has started running under it, we
should properly lock the mm here anyway, both to keep lockdep happy
when locking assertions are added and to be safe in the future in case
someone e.g. decides to permit VMA-tree-mutating operations in
process_madvise_behavior_valid().

The goal of this patch is to broadly lock the nascent mm in the exec
path, from around the time it is created all the way to the end of
setup_arg_pages() (because setup_arg_pages() accesses bprm->vma). As
long as the mm is write-locked, keep it around in bprm->mm, even after
it has been installed on the task (with an extra reference on the mm,
to reduce complexity in free_bprm()). After setup_arg_pages(), we have
to unlock the mm so that APIs such as copy_to_user() will work in the
following binfmt-specific setup code.

Suggested-by: Jason Gunthorpe
Suggested-by: Michel Lespinasse
Signed-off-by: Jann Horn <jannh@google.com>
---
 fs/exec.c               | 68 ++++++++++++++++++++---------------------
 include/linux/binfmts.h |  2 +-
 2 files changed, 35 insertions(+), 35 deletions(-)
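The invariant this patch establishes is that bprm->mm, whenever it is
non-NULL, points to a nascent mm that the exec path holds write-locked.
A hypothetical assertion helper (not part of the patch) would state it
like this:

/* Hypothetical helper expressing the bprm->mm invariant. */
#include <linux/binfmts.h>
#include <linux/mmap_lock.h>

static inline void bprm_assert_mm_locked(struct linux_binprm *bprm)
{
	if (bprm->mm)
		mmap_assert_write_locked(bprm->mm);
}

Each path that drops the lock in the diff below (setup_arg_pages() on
success, setup_new_exec() on !MMU, free_bprm() on cleanup) also drops
its mm reference, and the first two clear bprm->mm immediately, so the
invariant holds everywhere in between.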
diff --git a/fs/exec.c b/fs/exec.c
index 229dbc7aa61a..00edf833781f 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -254,11 +254,6 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
 		return -ENOMEM;
 	vma_set_anonymous(vma);
 
-	if (mmap_write_lock_killable(mm)) {
-		err = -EINTR;
-		goto err_free;
-	}
-
 	/*
 	 * Place the stack at the largest stack address the architecture
 	 * supports. Later, we'll move this to an appropriate place. We don't
@@ -276,12 +271,9 @@ static int __bprm_mm_init(struct linux_binprm *bprm)
 		goto err;
 
 	mm->stack_vm = mm->total_vm = 1;
-	mmap_write_unlock(mm);
 	bprm->p = vma->vm_end - sizeof(void *);
 	return 0;
 err:
-	mmap_write_unlock(mm);
-err_free:
 	bprm->vma = NULL;
 	vm_area_free(vma);
 	return err;
@@ -364,9 +356,9 @@ static int bprm_mm_init(struct linux_binprm *bprm)
 	struct mm_struct *mm = NULL;
 
 	bprm->mm = mm = mm_alloc();
-	err = -ENOMEM;
 	if (!mm)
-		goto err;
+		return -ENOMEM;
+	mmap_write_lock_nascent(mm);
 
 	/* Save current stack limit for all calculations made during exec. */
 	task_lock(current->group_leader);
@@ -374,17 +366,12 @@ static int bprm_mm_init(struct linux_binprm *bprm)
 	task_unlock(current->group_leader);
 
 	err = __bprm_mm_init(bprm);
-	if (err)
-		goto err;
-
-	return 0;
-
-err:
-	if (mm) {
-		bprm->mm = NULL;
-		mmdrop(mm);
-	}
+	if (!err)
+		return 0;
 
+	bprm->mm = NULL;
+	mmap_write_unlock(mm);
+	mmdrop(mm);
 	return err;
 }
 
@@ -735,6 +722,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 /*
  * Finalizes the stack vm_area_struct. The flags and permissions are updated,
  * the stack is optionally relocated, and some extra space is added.
+ * At the end of this, the mm_struct will be unlocked on success.
  */
 int setup_arg_pages(struct linux_binprm *bprm,
 		    unsigned long stack_top,
@@ -787,9 +775,6 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	bprm->loader -= stack_shift;
 	bprm->exec -= stack_shift;
 
-	if (mmap_write_lock_killable(mm))
-		return -EINTR;
-
 	vm_flags = VM_STACK_FLAGS;
 
 	/*
@@ -807,7 +792,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
 			vm_flags);
 	if (ret)
-		goto out_unlock;
+		return ret;
 	BUG_ON(prev != vma);
 
 	if (unlikely(vm_flags & VM_EXEC)) {
@@ -819,7 +804,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	if (stack_shift) {
 		ret = shift_arg_pages(vma, stack_shift);
 		if (ret)
-			goto out_unlock;
+			return ret;
 	}
 
 	/* mprotect_fixup is overkill to remove the temporary stack flags */
@@ -846,11 +831,17 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	current->mm->start_stack = bprm->p;
 	ret = expand_stack(vma, stack_base);
 	if (ret)
-		ret = -EFAULT;
+		return -EFAULT;
 
-out_unlock:
+	/*
+	 * From this point on, anything that wants to poke around in the
+	 * mm_struct must lock it by itself.
+	 */
+	bprm->vma = NULL;
 	mmap_write_unlock(mm);
-	return ret;
+	mmput(mm);
+	bprm->mm = NULL;
+	return 0;
 }
 EXPORT_SYMBOL(setup_arg_pages);
 
@@ -1114,8 +1105,6 @@ static int exec_mmap(struct mm_struct *mm)
 	if (ret)
 		return ret;
 
-	mmap_write_lock_nascent(mm);
-
 	if (old_mm) {
 		/*
 		 * Make sure that if there is a core dump in progress
@@ -1127,11 +1116,12 @@ static int exec_mmap(struct mm_struct *mm)
 		if (unlikely(old_mm->core_state)) {
 			mmap_read_unlock(old_mm);
 			mutex_unlock(&tsk->signal->exec_update_mutex);
-			mmap_write_unlock(mm);
 			return -EINTR;
 		}
 	}
 
+	/* bprm->mm stays refcounted, current->mm takes an extra reference */
+	mmget(mm);
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
 	membarrier_exec_mmap(mm);
@@ -1141,7 +1131,6 @@ static int exec_mmap(struct mm_struct *mm)
 	tsk->mm->vmacache_seqnum = 0;
 	vmacache_flush(tsk);
 	task_unlock(tsk);
-	mmap_write_unlock(mm);
 	if (old_mm) {
 		mmap_read_unlock(old_mm);
 		BUG_ON(active_mm != old_mm);
@@ -1397,8 +1386,6 @@ int begin_new_exec(struct linux_binprm * bprm)
 	if (retval)
 		goto out;
 
-	bprm->mm = NULL;
-
 #ifdef CONFIG_POSIX_TIMERS
 	exit_itimers(me->signal);
 	flush_itimer_signals();
@@ -1545,6 +1532,18 @@ void setup_new_exec(struct linux_binprm * bprm)
 	me->mm->task_size = TASK_SIZE;
 	mutex_unlock(&me->signal->exec_update_mutex);
 	mutex_unlock(&me->signal->cred_guard_mutex);
+
+	if (!IS_ENABLED(CONFIG_MMU)) {
+		/*
+		 * On MMU, setup_arg_pages() wants to access bprm->vma after
+		 * this point, so we can't drop the mmap lock yet.
+		 * On !MMU, we have neither setup_arg_pages() nor bprm->vma,
+		 * so we should drop the lock here.
+		 */
+		mmap_write_unlock(bprm->mm);
+		mmput(bprm->mm);
+		bprm->mm = NULL;
+	}
 }
 EXPORT_SYMBOL(setup_new_exec);
 
@@ -1581,6 +1580,7 @@ static void free_bprm(struct linux_binprm *bprm)
 {
 	if (bprm->mm) {
 		acct_arg_size(bprm, 0);
+		mmap_write_unlock(bprm->mm);
 		mmput(bprm->mm);
 	}
 	free_arg_pages(bprm);
diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
index 0571701ab1c5..3bf06212fbae 100644
--- a/include/linux/binfmts.h
+++ b/include/linux/binfmts.h
@@ -22,7 +22,7 @@ struct linux_binprm {
 # define MAX_ARG_PAGES	32
 	struct page *page[MAX_ARG_PAGES];
 #endif
-	struct mm_struct *mm;
+	struct mm_struct *mm; /* nascent mm, write-locked */
 	unsigned long p; /* current top of mem */
 	unsigned long argmin; /* rlimit marker for copy_strings() */
 	unsigned int