From patchwork Wed Apr 29 21:49:50 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518587
Date: Wed, 29 Apr 2020 23:49:50 +0200
Message-Id: <20200429214954.44866-2-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 1/5] binfmt_elf_fdpic: Stop using dump_emit() on user pointers on !MMU
From: Jann Horn
To: Andrew Morton
Cc: Rich Felker, linux-c6x-dev@linux-c6x.org, Yoshinori Sato, linux-sh@vger.kernel.org,
    linux-kernel@vger.kernel.org, Oleg Nesterov, linux-mm@kvack.org, Alexander Viro,
    Mark Salter, linux-fsdevel@vger.kernel.org, Russell King, Aurelien Jacquiot,
    Linus Torvalds, Christoph Hellwig, linux-arm-kernel@lists.infradead.org,
    "Eric W. Biederman"

dump_emit() is for kernel pointers, and VMAs describe userspace memory. Let's
be tidy here and avoid accessing userspace pointers under KERNEL_DS, even if
it probably doesn't matter much on !MMU systems - especially given that it
looks like we can just use the same get_dump_page() as on MMU if we move it
out of the CONFIG_MMU block.
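With the !MMU special case gone, both configurations end up in the same
per-page loop. Roughly (a simplified sketch of the MMU-path code that is
retained below, with the error handling compressed):

        for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
                /* get_dump_page() returns NULL where a hole should be left */
                struct page *page = get_dump_page(addr);
                bool res;

                if (page) {
                        void *kaddr = kmap(page);

                        /* dump_emit() now only ever sees a kernel mapping */
                        res = dump_emit(cprm, kaddr, PAGE_SIZE);
                        kunmap(page);
                        put_page(page);
                } else {
                        res = dump_skip(cprm, PAGE_SIZE);
                }
                if (!res)
                        return false;
        }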
Signed-off-by: Jann Horn
---
 fs/binfmt_elf_fdpic.c |  8 ------
 mm/gup.c              | 58 +++++++++++++++++++++----------------------
 2 files changed, 29 insertions(+), 37 deletions(-)

diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index c62c17a5c34a9..f5b47076fa762 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1495,14 +1495,11 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
         struct vm_area_struct *vma;
 
         for (vma = current->mm->mmap; vma; vma = vma->vm_next) {
-#ifdef CONFIG_MMU
                 unsigned long addr;
-#endif
 
                 if (!maydump(vma, cprm->mm_flags))
                         continue;
 
-#ifdef CONFIG_MMU
                 for (addr = vma->vm_start; addr < vma->vm_end;
                      addr += PAGE_SIZE) {
                         bool res;
@@ -1518,11 +1515,6 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
                         if (!res)
                                 return false;
                 }
-#else
-                if (!dump_emit(cprm, (void *) vma->vm_start,
-                               vma->vm_end - vma->vm_start))
-                        return false;
-#endif
         }
         return true;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 50681f0286ded..76080c4dbff05 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1490,35 +1490,6 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
         up_read(&mm->mmap_sem);
         return ret;     /* 0 or negative error code */
 }
-
-/**
- * get_dump_page() - pin user page in memory while writing it to core dump
- * @addr: user address
- *
- * Returns struct page pointer of user page pinned for dump,
- * to be freed afterwards by put_page().
- *
- * Returns NULL on any kind of failure - a hole must then be inserted into
- * the corefile, to preserve alignment with its headers; and also returns
- * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
- * allowing a hole to be left in the corefile to save diskspace.
- *
- * Called without mmap_sem, but after all other threads have been killed.
- */
-#ifdef CONFIG_ELF_CORE
-struct page *get_dump_page(unsigned long addr)
-{
-        struct vm_area_struct *vma;
-        struct page *page;
-
-        if (__get_user_pages(current, current->mm, addr, 1,
-                             FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
-                             NULL) < 1)
-                return NULL;
-        flush_cache_page(vma, addr, page_to_pfn(page));
-        return page;
-}
-#endif /* CONFIG_ELF_CORE */
 #else /* CONFIG_MMU */
 static long __get_user_pages_locked(struct task_struct *tsk,
                 struct mm_struct *mm, unsigned long start,
@@ -1565,6 +1536,35 @@ static long __get_user_pages_locked(struct task_struct *tsk,
 }
 #endif /* !CONFIG_MMU */
 
+/**
+ * get_dump_page() - pin user page in memory while writing it to core dump
+ * @addr: user address
+ *
+ * Returns struct page pointer of user page pinned for dump,
+ * to be freed afterwards by put_page().
+ *
+ * Returns NULL on any kind of failure - a hole must then be inserted into
+ * the corefile, to preserve alignment with its headers; and also returns
+ * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
+ * allowing a hole to be left in the corefile to save diskspace.
+ *
+ * Called without mmap_sem, but after all other threads have been killed.
+ */
+#ifdef CONFIG_ELF_CORE
+struct page *get_dump_page(unsigned long addr)
+{
+        struct vm_area_struct *vma;
+        struct page *page;
+
+        if (__get_user_pages(current, current->mm, addr, 1,
+                             FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
+                             NULL) < 1)
+                return NULL;
+        flush_cache_page(vma, addr, page_to_pfn(page));
+        return page;
+}
+#endif /* CONFIG_ELF_CORE */
+
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 {

From patchwork Wed Apr 29 21:49:51 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518591
Date: Wed, 29 Apr 2020 23:49:51 +0200
Message-Id: <20200429214954.44866-3-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 2/5] coredump: Let dump_emit() bail out on short writes
From: Jann Horn
To: Andrew Morton

dump_emit() has a retry loop, but there seems to be no way for that retry
logic to actually be used; and it was also buggy, writing the same data
repeatedly after a short write. Let's just bail out on a short write.
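To make the failure mode concrete: the old loop shortened `nr` after a short
write but never advanced `addr`, so the next iteration re-wrote the same
bytes. The patched function makes a single attempt and treats anything other
than a full write as failure; the core of it is just (sketch of the change
below):

        if (dump_interrupted())
                return 0;
        n = __kernel_write(file, addr, nr, &pos);
        if (n != nr)            /* short or failed write: give up */
                return 0;
        file->f_pos = pos;
        cprm->written += n;
        cprm->pos += n;
        return 1;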
Suggested-by: Linus Torvalds
Signed-off-by: Jann Horn
---
 fs/coredump.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/coredump.c b/fs/coredump.c
index 408418e6aa131..d6fcc36a7db1f 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -823,17 +823,17 @@ int dump_emit(struct coredump_params *cprm, const void *addr, int nr)
         ssize_t n;
         if (cprm->written + nr > cprm->limit)
                 return 0;
-        while (nr) {
-                if (dump_interrupted())
-                        return 0;
-                n = __kernel_write(file, addr, nr, &pos);
-                if (n <= 0)
-                        return 0;
-                file->f_pos = pos;
-                cprm->written += n;
-                cprm->pos += n;
-                nr -= n;
-        }
+
+
+        if (dump_interrupted())
+                return 0;
+        n = __kernel_write(file, addr, nr, &pos);
+        if (n != nr)
+                return 0;
+        file->f_pos = pos;
+        cprm->written += n;
+        cprm->pos += n;
+
         return 1;
 }
 EXPORT_SYMBOL(dump_emit);

From patchwork Wed Apr 29 21:49:52 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518593
Date: Wed, 29 Apr 2020 23:49:52 +0200
Message-Id: <20200429214954.44866-4-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 3/5] coredump: Refactor page range dumping into common helper
From: Jann Horn
To: Andrew Morton

Both fs/binfmt_elf.c and fs/binfmt_elf_fdpic.c need to dump ranges of pages
into the coredump file. Extract that logic into a common helper. Any other
binfmt that actually wants to create coredumps will probably need the same
function, so stop making get_dump_page() depend on CONFIG_ELF_CORE.

Signed-off-by: Jann Horn
---
 fs/binfmt_elf.c          | 22 ++--------------------
 fs/binfmt_elf_fdpic.c    | 18 +++---------------
 fs/coredump.c            | 33 +++++++++++++++++++++++++++++++++
 include/linux/coredump.h |  2 ++
 mm/gup.c                 |  2 --
 5 files changed, 40 insertions(+), 37 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index b29b84595b09f..fb36469848323 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -2323,26 +2323,8 @@ static int elf_core_dump(struct coredump_params *cprm)
 
         for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
                         vma = next_vma(vma, gate_vma)) {
-                unsigned long addr;
-                unsigned long end;
-
-                end = vma->vm_start + vma_filesz[i++];
-
-                for (addr = vma->vm_start; addr < end; addr += PAGE_SIZE) {
-                        struct page *page;
-                        int stop;
-
-                        page = get_dump_page(addr);
-                        if (page) {
-                                void *kaddr = kmap(page);
-                                stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
-                                kunmap(page);
-                                put_page(page);
-                        } else
-                                stop = !dump_skip(cprm, PAGE_SIZE);
-                        if (stop)
-                                goto cleanup;
-                }
+                if (!dump_user_range(cprm, vma->vm_start, vma_filesz[i++]))
+                        goto cleanup;
         }
         dump_truncate(cprm);
 
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index f5b47076fa762..938f66f4de9b2 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1500,21 +1500,9 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
                 if (!maydump(vma, cprm->mm_flags))
                         continue;
 
-                for (addr = vma->vm_start; addr < vma->vm_end;
-                     addr += PAGE_SIZE) {
-                        bool res;
-                        struct page *page = get_dump_page(addr);
-                        if (page) {
-                                void *kaddr = kmap(page);
-                                res = dump_emit(cprm, kaddr, PAGE_SIZE);
-                                kunmap(page);
-                                put_page(page);
-                        } else {
-                                res = dump_skip(cprm, PAGE_SIZE);
-                        }
-                        if (!res)
-                                return false;
-                }
+                if (!dump_user_range(cprm, vma->vm_start,
+                                     vma->vm_end - vma->vm_start))
+                        return false;
         }
         return true;
 }
diff --git a/fs/coredump.c b/fs/coredump.c
index d6fcc36a7db1f..88f625eecaac1 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -859,6 +859,39 @@ int dump_skip(struct coredump_params *cprm, size_t nr)
 }
 EXPORT_SYMBOL(dump_skip);
 
+#ifdef CONFIG_ELF_CORE
+int dump_user_range(struct coredump_params *cprm, unsigned long start,
+                    unsigned long len)
+{
+        unsigned long addr;
+
+        for (addr = start; addr < start + len; addr += PAGE_SIZE) {
+                struct page *page;
+                int stop;
+
+                /*
+                 * To avoid having to allocate page tables for virtual address
+                 * ranges that have never been used yet, use a helper that
+                 * returns NULL when encountering an empty page table entry that
+                 * would otherwise have been filled with the zero page.
+                 */
+                page = get_dump_page(addr);
+                if (page) {
+                        void *kaddr = kmap(page);
+
+                        stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
+                        kunmap(page);
+                        put_page(page);
+                } else {
+                        stop = !dump_skip(cprm, PAGE_SIZE);
+                }
+                if (stop)
+                        return 0;
+        }
+        return 1;
+}
+#endif
+
 int dump_align(struct coredump_params *cprm, int align)
 {
         unsigned mod = cprm->pos & (align - 1);
diff --git a/include/linux/coredump.h b/include/linux/coredump.h
index abf4b4e65dbb9..4289dc21c04ff 100644
--- a/include/linux/coredump.h
+++ b/include/linux/coredump.h
@@ -16,6 +16,8 @@ extern int dump_skip(struct coredump_params *cprm, size_t nr);
 extern int dump_emit(struct coredump_params *cprm, const void *addr, int nr);
 extern int dump_align(struct coredump_params *cprm, int align);
 extern void dump_truncate(struct coredump_params *cprm);
+int dump_user_range(struct coredump_params *cprm, unsigned long start,
+                    unsigned long len);
 #ifdef CONFIG_COREDUMP
 extern void do_coredump(const kernel_siginfo_t *siginfo);
 #else
diff --git a/mm/gup.c b/mm/gup.c
index 76080c4dbff05..9a7e83772f1fe 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1550,7 +1550,6 @@ static long __get_user_pages_locked(struct task_struct *tsk,
  *
  * Called without mmap_sem, but after all other threads have been killed.
  */
-#ifdef CONFIG_ELF_CORE
 struct page *get_dump_page(unsigned long addr)
 {
         struct vm_area_struct *vma;
@@ -1563,7 +1562,6 @@ struct page *get_dump_page(unsigned long addr)
         flush_cache_page(vma, addr, page_to_pfn(page));
         return page;
 }
-#endif /* CONFIG_ELF_CORE */
 
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)

From patchwork Wed Apr 29 21:49:53 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518595
Date: Wed, 29 Apr 2020 23:49:53 +0200
Message-Id: <20200429214954.44866-5-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 4/5] binfmt_elf, binfmt_elf_fdpic: Use a VMA list snapshot
From: Jann Horn
To: Andrew Morton

In both binfmt_elf and binfmt_elf_fdpic, use a new helper dump_vma_snapshot()
to take a snapshot of the VMA list (including the gate VMA, if we have one)
while protected by the mmap_sem, and then use that snapshot instead of walking
the VMA list without locking.

An alternative approach would be to keep the mmap_sem held across the entire
core dumping operation; however, keeping the mmap_sem locked while we may be
blocked for an unbounded amount of time (e.g. because we're dumping to a FUSE
filesystem or so) isn't really optimal; the mmap_sem blocks things like the
->release handler of userfaultfd, and we don't really want critical system
daemons to grind to a halt just because someone "gifted" them SCM_RIGHTS to an
eternally-locked userfaultfd, or something like that.

Since both the normal ELF code and the FDPIC ELF code need this functionality
(and if any other binfmt wants to add coredump support in the future, they'd
probably need it, too), implement this with a common helper in fs/coredump.c.

A downside of this approach is that we now need a bigger amount of kernel
memory per userspace VMA in the normal ELF case, and that we need O(n) kernel
memory in the FDPIC ELF case at all; but 40 bytes per VMA shouldn't be
terribly bad.

Signed-off-by: Jann Horn
---
 fs/binfmt_elf.c          | 152 +++++++++++++--------------------------
 fs/binfmt_elf_fdpic.c    |  86 ++++++++++------------
 fs/coredump.c            |  68 ++++++++++++++++++
 include/linux/coredump.h |  10 +++
 4 files changed, 168 insertions(+), 148 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb36469848323..dffe9dc8497ca 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1292,8 +1292,12 @@ static bool always_dump_vma(struct vm_area_struct *vma)
         return false;
 }
 
+#define DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER 1
+
 /*
  * Decide what to dump of a segment, part, all or none.
+ * The result must be fixed up via vma_dump_size_fixup() once we're in a context
+ * that's allowed to sleep arbitrarily long.
  */
 static unsigned long vma_dump_size(struct vm_area_struct *vma,
                                    unsigned long mm_flags)
@@ -1348,30 +1352,15 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
 
         /*
          * If this looks like the beginning of a DSO or executable mapping,
-         * check for an ELF header.  If we find one, dump the first page to
-         * aid in determining what was mapped here.
+         * we'll check for an ELF header.  If we find one, we'll dump the first
+         * page to aid in determining what was mapped here.
+         * However, we shouldn't sleep on userspace reads while holding the
+         * mmap_sem, so we just return a placeholder for now that will be fixed
+         * up later in vma_dump_size_fixup().
          */
         if (FILTER(ELF_HEADERS) &&
-            vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ)) {
-                u32 __user *header = (u32 __user *) vma->vm_start;
-                u32 word;
-                /*
-                 * Doing it this way gets the constant folded by GCC.
-                 */
-                union {
-                        u32 cmp;
-                        char elfmag[SELFMAG];
-                } magic;
-                BUILD_BUG_ON(SELFMAG != sizeof word);
-                magic.elfmag[EI_MAG0] = ELFMAG0;
-                magic.elfmag[EI_MAG1] = ELFMAG1;
-                magic.elfmag[EI_MAG2] = ELFMAG2;
-                magic.elfmag[EI_MAG3] = ELFMAG3;
-                if (unlikely(get_user(word, header)))
-                        word = 0;
-                if (word == magic.cmp)
-                        return PAGE_SIZE;
-        }
+            vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ))
+                return DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER;
 
 #undef FILTER
 
@@ -1381,6 +1370,22 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
         return vma->vm_end - vma->vm_start;
 }
 
+/* Fix up the result from vma_dump_size(), now that we're allowed to sleep. */
+static void vma_dump_size_fixup(struct core_vma_metadata *meta)
+{
+        char elfmag[SELFMAG];
+
+        if (meta->dump_size != DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER)
+                return;
+
+        if (copy_from_user(elfmag, (void __user *)meta->start, SELFMAG)) {
+                meta->dump_size = 0;
+                return;
+        }
+        meta->dump_size =
+                (memcmp(elfmag, ELFMAG, SELFMAG) == 0) ? PAGE_SIZE : 0;
+}
+
 /* An ELF note in memory */
 struct memelfnote
 {
@@ -2124,32 +2129,6 @@ static void free_note_info(struct elf_note_info *info)
 
 #endif
 
-static struct vm_area_struct *first_vma(struct task_struct *tsk,
-                                        struct vm_area_struct *gate_vma)
-{
-        struct vm_area_struct *ret = tsk->mm->mmap;
-
-        if (ret)
-                return ret;
-        return gate_vma;
-}
-/*
- * Helper function for iterating across a vma list. It ensures that the caller
- * will visit `gate_vma' prior to terminating the search.
- */
-static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
-                                       struct vm_area_struct *gate_vma)
-{
-        struct vm_area_struct *ret;
-
-        ret = this_vma->vm_next;
-        if (ret)
-                return ret;
-        if (this_vma == gate_vma)
-                return NULL;
-        return gate_vma;
-}
-
 static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
                              elf_addr_t e_shoff, int segs)
 {
@@ -2176,9 +2155,8 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
 static int elf_core_dump(struct coredump_params *cprm)
 {
         int has_dumped = 0;
-        int segs, i;
+        int vma_count, segs, i;
         size_t vma_data_size = 0;
-        struct vm_area_struct *vma, *gate_vma;
         struct elfhdr elf;
         loff_t offset = 0, dataoff;
         struct elf_note_info info = { };
@@ -2186,30 +2164,21 @@ static int elf_core_dump(struct coredump_params *cprm)
         struct elf_shdr *shdr4extnum = NULL;
         Elf_Half e_phnum;
         elf_addr_t e_shoff;
-        elf_addr_t *vma_filesz = NULL;
+        struct core_vma_metadata *vma_meta;
+
+        if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, vma_dump_size))
+                return 0;
+
+        for (i = 0; i < vma_count; i++) {
+                vma_dump_size_fixup(vma_meta + i);
+                vma_data_size += vma_meta[i].dump_size;
+        }
 
-        /*
-         * We no longer stop all VM operations.
-         *
-         * This is because those proceses that could possibly change map_count
-         * or the mmap / vma pages are now blocked in do_exit on current
-         * finishing this core dump.
-         *
-         * Only ptrace can touch these memory addresses, but it doesn't change
-         * the map_count or the pages allocated. So no possibility of crashing
-         * exists while dumping the mm->vm_next areas to the core file.
-         */
-
         /*
          * The number of segs are recored into ELF header as 16bit value.
          * Please check DEFAULT_MAX_MAP_COUNT definition when you modify here.
          */
-        segs = current->mm->map_count;
-        segs += elf_core_extra_phdrs();
-
-        gate_vma = get_gate_vma(current->mm);
-        if (gate_vma != NULL)
-                segs++;
+        segs = vma_count + elf_core_extra_phdrs();
 
         /* for notes section */
         segs++;
@@ -2247,24 +2216,6 @@ static int elf_core_dump(struct coredump_params *cprm)
 
         dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
 
-        /*
-         * Zero vma process will get ZERO_SIZE_PTR here.
-         * Let coredump continue for register state at least.
-         */
-        vma_filesz = kvmalloc(array_size(sizeof(*vma_filesz), (segs - 1)),
-                              GFP_KERNEL);
-        if (!vma_filesz)
-                goto cleanup;
-
-        for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
-                        vma = next_vma(vma, gate_vma)) {
-                unsigned long dump_size;
-
-                dump_size = vma_dump_size(vma, cprm->mm_flags);
-                vma_filesz[i++] = dump_size;
-                vma_data_size += dump_size;
-        }
-
         offset += vma_data_size;
         offset += elf_core_extra_data_size();
         e_shoff = offset;
@@ -2285,22 +2236,20 @@ static int elf_core_dump(struct coredump_params *cprm)
                 goto cleanup;
 
         /* Write program headers for segments dump */
-        for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
-                        vma = next_vma(vma, gate_vma)) {
+        for (i = 0; i < vma_count; i++) {
+                struct core_vma_metadata *meta = vma_meta + i;
                 struct elf_phdr phdr;
 
                 phdr.p_type = PT_LOAD;
                 phdr.p_offset = offset;
-                phdr.p_vaddr = vma->vm_start;
+                phdr.p_vaddr = meta->start;
                 phdr.p_paddr = 0;
-                phdr.p_filesz = vma_filesz[i++];
-                phdr.p_memsz = vma->vm_end - vma->vm_start;
+                phdr.p_filesz = meta->dump_size;
+                phdr.p_memsz = meta->end - meta->start;
                 offset += phdr.p_filesz;
-                phdr.p_flags = vma->vm_flags & VM_READ ? PF_R : 0;
-                if (vma->vm_flags & VM_WRITE)
-                        phdr.p_flags |= PF_W;
-                if (vma->vm_flags & VM_EXEC)
-                        phdr.p_flags |= PF_X;
+                phdr.p_flags = meta->flags & VM_READ ? PF_R : 0;
+                phdr.p_flags |= meta->flags & VM_WRITE ? PF_W : 0;
+                phdr.p_flags |= meta->flags & VM_EXEC ? PF_X : 0;
                 phdr.p_align = ELF_EXEC_PAGESIZE;
 
                 if (!dump_emit(cprm, &phdr, sizeof(phdr)))
@@ -2321,9 +2270,10 @@ static int elf_core_dump(struct coredump_params *cprm)
         if (!dump_skip(cprm, dataoff - cprm->pos))
                 goto cleanup;
 
-        for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
-                        vma = next_vma(vma, gate_vma)) {
-                if (!dump_user_range(cprm, vma->vm_start, vma_filesz[i++]))
+        for (i = 0; i < vma_count; i++) {
+                struct core_vma_metadata *meta = vma_meta + i;
+
+                if (!dump_user_range(cprm, meta->start, meta->dump_size))
                         goto cleanup;
         }
         dump_truncate(cprm);
@@ -2339,7 +2289,7 @@ static int elf_core_dump(struct coredump_params *cprm)
 cleanup:
         free_note_info(&info);
         kfree(shdr4extnum);
-        kvfree(vma_filesz);
+        kvfree(vma_meta);
         kfree(phdr4note);
         return has_dumped;
 }
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index 938f66f4de9b2..bde51f40085b9 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1190,7 +1190,8 @@ static int elf_fdpic_map_file_by_direct_mmap(struct elf_fdpic_params *params,
  *
  * I think we should skip something. But I am not sure how. H.J.
  */
-static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
+static unsigned long vma_dump_size(struct vm_area_struct *vma,
+                                   unsigned long mm_flags)
 {
         int dump_ok;
 
@@ -1219,7 +1220,7 @@ static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
                         kdcore("%08lx: %08lx: %s (DAX private)", vma->vm_start,
                                vma->vm_flags, dump_ok ? "yes" : "no");
                 }
-                return dump_ok;
+                goto out;
         }
 
         /* By default, dump shared memory if mapped from an anonymous file.
          */
@@ -1228,13 +1229,13 @@ static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
                         dump_ok = test_bit(MMF_DUMP_ANON_SHARED, &mm_flags);
                         kdcore("%08lx: %08lx: %s (share)", vma->vm_start,
                                vma->vm_flags, dump_ok ? "yes" : "no");
-                        return dump_ok;
+                        goto out;
                 }
 
                 dump_ok = test_bit(MMF_DUMP_MAPPED_SHARED, &mm_flags);
                 kdcore("%08lx: %08lx: %s (share)", vma->vm_start,
                        vma->vm_flags, dump_ok ? "yes" : "no");
-                return dump_ok;
+                goto out;
         }
 
 #ifdef CONFIG_MMU
@@ -1243,14 +1244,16 @@ static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
                 dump_ok = test_bit(MMF_DUMP_MAPPED_PRIVATE, &mm_flags);
                 kdcore("%08lx: %08lx: %s (!anon)", vma->vm_start,
                        vma->vm_flags, dump_ok ? "yes" : "no");
-                return dump_ok;
+                goto out;
         }
 #endif
 
         dump_ok = test_bit(MMF_DUMP_ANON_PRIVATE, &mm_flags);
         kdcore("%08lx: %08lx: %s", vma->vm_start, vma->vm_flags,
                dump_ok ? "yes" : "no");
-        return dump_ok;
+
+out:
+        return dump_ok ? vma->vm_end - vma->vm_start : 0;
 }
 
 /* An ELF note in memory */
@@ -1490,31 +1493,30 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
 /*
  * dump the segments for an MMU process
  */
-static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
+static bool elf_fdpic_dump_segments(struct coredump_params *cprm,
+                                    struct core_vma_metadata *vma_meta,
+                                    int vma_count)
 {
-        struct vm_area_struct *vma;
+        int i;
 
-        for (vma = current->mm->mmap; vma; vma = vma->vm_next) {
-                unsigned long addr;
+        for (i = 0; i < vma_count; i++) {
+                struct core_vma_metadata *meta = vma_meta + i;
 
-                if (!maydump(vma, cprm->mm_flags))
-                        continue;
-
-                if (!dump_user_range(cprm, vma->vm_start,
-                                     vma->vm_end - vma->vm_start))
+                if (!dump_user_range(cprm, meta->start, meta->dump_size))
                         return false;
         }
         return true;
 }
 
-static size_t elf_core_vma_data_size(unsigned long mm_flags)
+static size_t elf_core_vma_data_size(unsigned long mm_flags,
+                                     struct core_vma_metadata *vma_meta,
+                                     int vma_count)
 {
-        struct vm_area_struct *vma;
         size_t size = 0;
+        int i;
 
-        for (vma = current->mm->mmap; vma; vma = vma->vm_next)
-                if (maydump(vma, mm_flags))
-                        size += vma->vm_end - vma->vm_start;
+        for (i = 0; i < vma_count; i++)
+                size += vma_meta[i].dump_size;
         return size;
 }
 
@@ -1529,9 +1531,8 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 {
 #define NUM_NOTES 6
         int has_dumped = 0;
-        int segs;
+        int vma_count, segs;
         int i;
-        struct vm_area_struct *vma;
         struct elfhdr *elf = NULL;
         loff_t offset = 0, dataoff;
         int numnote;
@@ -1552,18 +1553,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
         elf_addr_t e_shoff;
         struct core_thread *ct;
         struct elf_thread_status *tmp;
-
-        /*
-         * We no longer stop all VM operations.
-         *
-         * This is because those proceses that could possibly change map_count
-         * or the mmap / vma pages are now blocked in do_exit on current
-         * finishing this core dump.
-         *
-         * Only ptrace can touch these memory addresses, but it doesn't change
-         * the map_count or the pages allocated. So no possibility of crashing
-         * exists while dumping the mm->vm_next areas to the core file.
-         */
+        struct core_vma_metadata *vma_meta = NULL;
 
         /* alloc memory for large data structures: too large to be on stack */
         elf = kmalloc(sizeof(*elf), GFP_KERNEL);
@@ -1588,6 +1578,9 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
                 goto cleanup;
 #endif
 
+        if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, vma_dump_size))
+                goto cleanup;
+
         for (ct = current->mm->core_state->dumper.next;
                                         ct; ct = ct->next) {
                 tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
@@ -1611,8 +1604,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
         fill_prstatus(prstatus, current, cprm->siginfo->si_signo);
         elf_core_copy_regs(&prstatus->pr_reg, cprm->regs);
 
-        segs = current->mm->map_count;
-        segs += elf_core_extra_phdrs();
+        segs = vma_count + elf_core_extra_phdrs();
 
         /* for notes section */
         segs++;
@@ -1680,7 +1672,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
         /* Page-align dumped data */
         dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
 
-        offset += elf_core_vma_data_size(cprm->mm_flags);
+        offset += elf_core_vma_data_size(cprm->mm_flags, vma_meta, vma_count);
         offset += elf_core_extra_data_size();
         e_shoff = offset;
 
@@ -1700,24 +1692,23 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
                 goto cleanup;
 
         /* write program headers for segments dump */
-        for (vma = current->mm->mmap; vma; vma = vma->vm_next) {
+        for (i = 0; i < vma_count; i++) {
+                struct core_vma_metadata *meta = vma_meta + i;
                 struct elf_phdr phdr;
                 size_t sz;
 
-                sz = vma->vm_end - vma->vm_start;
+                sz = meta->end - meta->start;
 
                 phdr.p_type = PT_LOAD;
                 phdr.p_offset = offset;
-                phdr.p_vaddr = vma->vm_start;
+                phdr.p_vaddr = meta->start;
                 phdr.p_paddr = 0;
-                phdr.p_filesz = maydump(vma, cprm->mm_flags) ? sz : 0;
+                phdr.p_filesz = meta->dump_size;
                 phdr.p_memsz = sz;
                 offset += phdr.p_filesz;
-                phdr.p_flags = vma->vm_flags & VM_READ ? PF_R : 0;
-                if (vma->vm_flags & VM_WRITE)
-                        phdr.p_flags |= PF_W;
-                if (vma->vm_flags & VM_EXEC)
-                        phdr.p_flags |= PF_X;
+                phdr.p_flags = meta->flags & VM_READ ? PF_R : 0;
+                phdr.p_flags |= meta->flags & VM_WRITE ? PF_W : 0;
+                phdr.p_flags |= meta->flags & VM_EXEC ? PF_X : 0;
                 phdr.p_align = ELF_EXEC_PAGESIZE;
 
                 if (!dump_emit(cprm, &phdr, sizeof(phdr)))
@@ -1745,7 +1736,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
         if (!dump_skip(cprm, dataoff - cprm->pos))
                 goto cleanup;
 
-        if (!elf_fdpic_dump_segments(cprm))
+        if (!elf_fdpic_dump_segments(cprm, vma_meta, vma_count))
                 goto cleanup;
 
         if (!elf_core_write_extra_data(cprm))
@@ -1769,6 +1760,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
                 list_del(tmp);
                 kfree(list_entry(tmp, struct elf_thread_status, list));
         }
+        kvfree(vma_meta);
         kfree(phdr4note);
         kfree(elf);
         kfree(prstatus);
diff --git a/fs/coredump.c b/fs/coredump.c
index 88f625eecaac1..4213eab89190f 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -918,3 +918,71 @@ void dump_truncate(struct coredump_params *cprm)
         }
 }
 EXPORT_SYMBOL(dump_truncate);
+
+static struct vm_area_struct *first_vma(struct task_struct *tsk,
+                                        struct vm_area_struct *gate_vma)
+{
+        struct vm_area_struct *ret = tsk->mm->mmap;
+
+        if (ret)
+                return ret;
+        return gate_vma;
+}
+/*
+ * Helper function for iterating across a vma list. It ensures that the caller
+ * will visit `gate_vma' prior to terminating the search.
+ */
+static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+                                       struct vm_area_struct *gate_vma)
+{
+        struct vm_area_struct *ret;
+
+        ret = this_vma->vm_next;
+        if (ret)
+                return ret;
+        if (this_vma == gate_vma)
+                return NULL;
+        return gate_vma;
+}
+
+/*
+ * Under the mmap_sem, take a snapshot of relevant information about the task's
+ * VMAs.
+ */
+int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+                      struct core_vma_metadata **vma_meta,
+                      unsigned long (*dump_size_cb)(struct vm_area_struct *, unsigned long))
+{
+        struct vm_area_struct *vma, *gate_vma;
+        struct mm_struct *mm = current->mm;
+        int i;
+
+        if (down_read_killable(&mm->mmap_sem))
+                return -EINTR;
+
+        gate_vma = get_gate_vma(mm);
+        *vma_count = mm->map_count + (gate_vma ? 1 : 0);
+
+        *vma_meta = kvmalloc_array(*vma_count, sizeof(**vma_meta), GFP_KERNEL);
+        if (!*vma_meta) {
+                up_read(&mm->mmap_sem);
+                return -ENOMEM;
+        }
+
+        for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
+             vma = next_vma(vma, gate_vma)) {
+                (*vma_meta)[i++] = (struct core_vma_metadata) {
+                        .start = vma->vm_start,
+                        .end = vma->vm_end,
+                        .flags = vma->vm_flags,
+                        .dump_size = dump_size_cb(vma, cprm->mm_flags)
+                };
+        }
+
+        up_read(&mm->mmap_sem);
+
+        if (WARN_ON(i != *vma_count))
+                return -EFAULT;
+
+        return 0;
+}
diff --git a/include/linux/coredump.h b/include/linux/coredump.h
index 4289dc21c04ff..d3387866dce7b 100644
--- a/include/linux/coredump.h
+++ b/include/linux/coredump.h
@@ -7,6 +7,13 @@
 #include
 #include
 
+struct core_vma_metadata {
+        unsigned long start, end;
+        unsigned long filesize;
+        unsigned long flags;
+        unsigned long dump_size;
+};
+
 /*
  * These are the only things you should do on a core-file: use only these
  * functions to write out all the necessary info.
@@ -18,6 +25,9 @@ extern int dump_align(struct coredump_params *cprm, int align);
 extern void dump_truncate(struct coredump_params *cprm);
 int dump_user_range(struct coredump_params *cprm, unsigned long start,
                     unsigned long len);
+int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+                      struct core_vma_metadata **vma_meta,
+                      unsigned long (*dump_size_cb)(struct vm_area_struct *, unsigned long));
 #ifdef CONFIG_COREDUMP
 extern void do_coredump(const kernel_siginfo_t *siginfo);
 #else

From patchwork Wed Apr 29 21:49:54 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518597
Date: Wed, 29 Apr 2020 23:49:54 +0200
Message-Id: <20200429214954.44866-6-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 5/5] mm/gup: Take mmap_sem in get_dump_page()
From: Jann Horn
To: Andrew Morton

Properly take the mmap_sem before calling into the GUP code from
get_dump_page(); and play nice, allowing the GUP code to drop the mmap_sem if
it has to sleep.

As Linus pointed out, we don't actually need the VMA because
__get_user_pages() will flush the dcache for us if necessary.

Signed-off-by: Jann Horn
---
 mm/gup.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 9a7e83772f1fe..03f659ddd830a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1548,19 +1548,23 @@ static long __get_user_pages_locked(struct task_struct *tsk,
  * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
  * allowing a hole to be left in the corefile to save diskspace.
  *
- * Called without mmap_sem, but after all other threads have been killed.
+ * Called without mmap_sem (takes and releases the mmap_sem by itself).
  */
 struct page *get_dump_page(unsigned long addr)
 {
-        struct vm_area_struct *vma;
+        struct mm_struct *mm = current->mm;
         struct page *page;
+        int locked = 1;
+        int ret;
 
-        if (__get_user_pages(current, current->mm, addr, 1,
-                             FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
-                             NULL) < 1)
+        if (down_read_killable(&mm->mmap_sem))
                 return NULL;
-        flush_cache_page(vma, addr, page_to_pfn(page));
-        return page;
+        ret = __get_user_pages_locked(current, mm, addr, 1, &page, NULL,
+                                      &locked,
+                                      FOLL_FORCE | FOLL_DUMP | FOLL_GET);
+        if (locked)
+                up_read(&mm->mmap_sem);
+        return (ret == 1) ? page : NULL;
 }
 
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
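Taken together, the series leaves a binfmt coredump handler with a small
interface: snapshot the VMAs once, then dump each recorded range. A rough
illustration of how the pieces compose after all five patches (hypothetical
caller, not a literal excerpt from any of the patches above; header writing
and error paths elided):

        static int example_core_dump(struct coredump_params *cprm)
        {
                struct core_vma_metadata *vma_meta;
                int vma_count, i;

                /* Snapshot start/end/flags/dump_size under mmap_sem, then drop it. */
                if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, vma_dump_size))
                        return 0;

                /* ... emit ELF headers and notes based on vma_meta[] ... */

                for (i = 0; i < vma_count; i++) {
                        /*
                         * dump_user_range() walks the range page by page via
                         * get_dump_page(), which now takes mmap_sem itself.
                         */
                        if (!dump_user_range(cprm, vma_meta[i].start,
                                             vma_meta[i].dump_size))
                                break;
                }

                kvfree(vma_meta);
                return 1;
        }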