From patchwork Wed Apr 29 21:49:50 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518553
Date: Wed, 29 Apr 2020 23:49:50 +0200
Message-Id: <20200429214954.44866-2-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
References: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 1/5] binfmt_elf_fdpic: Stop using dump_emit() on user
 pointers on !MMU
From: Jann Horn
To: Andrew Morton
Cc: Linus Torvalds, Christoph Hellwig, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Alexander Viro,
 "Eric W. Biederman", Oleg Nesterov, Russell King,
 linux-arm-kernel@lists.infradead.org, Mark Salter, Aurelien Jacquiot,
 linux-c6x-dev@linux-c6x.org, Yoshinori Sato, Rich Felker,
 linux-sh@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org
Biederman" , Oleg Nesterov , Russell King , linux-arm-kernel@lists.infradead.org, Mark Salter , Aurelien Jacquiot , linux-c6x-dev@linux-c6x.org, Yoshinori Sato , Rich Felker , linux-sh@vger.kernel.org Sender: linux-sh-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org dump_emit() is for kernel pointers, and VMAs describe userspace memory. Let's be tidy here and avoid accessing userspace pointers under KERNEL_DS, even if it probably doesn't matter much on !MMU systems - especially given that it looks like we can just use the same get_dump_page() as on MMU if we move it out of the CONFIG_MMU block. Signed-off-by: Jann Horn --- fs/binfmt_elf_fdpic.c | 8 ------ mm/gup.c | 58 +++++++++++++++++++++---------------------- 2 files changed, 29 insertions(+), 37 deletions(-) diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c index c62c17a5c34a9..f5b47076fa762 100644 --- a/fs/binfmt_elf_fdpic.c +++ b/fs/binfmt_elf_fdpic.c @@ -1495,14 +1495,11 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm) struct vm_area_struct *vma; for (vma = current->mm->mmap; vma; vma = vma->vm_next) { -#ifdef CONFIG_MMU unsigned long addr; -#endif if (!maydump(vma, cprm->mm_flags)) continue; -#ifdef CONFIG_MMU for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) { bool res; @@ -1518,11 +1515,6 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm) if (!res) return false; } -#else - if (!dump_emit(cprm, (void *) vma->vm_start, - vma->vm_end - vma->vm_start)) - return false; -#endif } return true; } diff --git a/mm/gup.c b/mm/gup.c index 50681f0286ded..76080c4dbff05 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -1490,35 +1490,6 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors) up_read(&mm->mmap_sem); return ret; /* 0 or negative error code */ } - -/** - * get_dump_page() - pin user page in memory while writing it to core dump - * @addr: user address - * - * Returns struct page pointer of user page pinned for dump, - * to be freed afterwards by put_page(). - * - * Returns NULL on any kind of failure - a hole must then be inserted into - * the corefile, to preserve alignment with its headers; and also returns - * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found - - * allowing a hole to be left in the corefile to save diskspace. - * - * Called without mmap_sem, but after all other threads have been killed. - */ -#ifdef CONFIG_ELF_CORE -struct page *get_dump_page(unsigned long addr) -{ - struct vm_area_struct *vma; - struct page *page; - - if (__get_user_pages(current, current->mm, addr, 1, - FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma, - NULL) < 1) - return NULL; - flush_cache_page(vma, addr, page_to_pfn(page)); - return page; -} -#endif /* CONFIG_ELF_CORE */ #else /* CONFIG_MMU */ static long __get_user_pages_locked(struct task_struct *tsk, struct mm_struct *mm, unsigned long start, @@ -1565,6 +1536,35 @@ static long __get_user_pages_locked(struct task_struct *tsk, } #endif /* !CONFIG_MMU */ +/** + * get_dump_page() - pin user page in memory while writing it to core dump + * @addr: user address + * + * Returns struct page pointer of user page pinned for dump, + * to be freed afterwards by put_page(). + * + * Returns NULL on any kind of failure - a hole must then be inserted into + * the corefile, to preserve alignment with its headers; and also returns + * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found - + * allowing a hole to be left in the corefile to save diskspace. 

From patchwork Wed Apr 29 21:49:51 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518589
Date: Wed, 29 Apr 2020 23:49:51 +0200
Message-Id: <20200429214954.44866-3-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
References: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 2/5] coredump: Let dump_emit() bail out on short writes
From: Jann Horn
To: Andrew Morton
Cc: Linus Torvalds, Christoph Hellwig, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Alexander Viro,
 "Eric W. Biederman", Oleg Nesterov, Russell King,
 linux-arm-kernel@lists.infradead.org, Mark Salter, Aurelien Jacquiot,
 linux-c6x-dev@linux-c6x.org, Yoshinori Sato, Rich Felker,
 linux-sh@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

dump_emit() has a retry loop, but there seems to be no way for that
retry logic to actually be used; and it was also buggy: after a short
write, the loop never advanced `addr` while it decremented `nr`, so it
would write the same data repeatedly. Let's just bail out on a short
write.

Suggested-by: Linus Torvalds
Signed-off-by: Jann Horn
---
 fs/coredump.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/coredump.c b/fs/coredump.c
index 408418e6aa131..d6fcc36a7db1f 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -823,17 +823,17 @@ int dump_emit(struct coredump_params *cprm, const void *addr, int nr)
 	ssize_t n;
 	if (cprm->written + nr > cprm->limit)
 		return 0;
-	while (nr) {
-		if (dump_interrupted())
-			return 0;
-		n = __kernel_write(file, addr, nr, &pos);
-		if (n <= 0)
-			return 0;
-		file->f_pos = pos;
-		cprm->written += n;
-		cprm->pos += n;
-		nr -= n;
-	}
+
+	if (dump_interrupted())
+		return 0;
+	n = __kernel_write(file, addr, nr, &pos);
+	if (n != nr)
+		return 0;
+	file->f_pos = pos;
+	cprm->written += n;
+	cprm->pos += n;
+
 	return 1;
 }
 EXPORT_SYMBOL(dump_emit);
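
For contrast, a retry loop that really supported short writes would have
to advance the source pointer together with the residual count - roughly
like this (a hypothetical corrected loop, shown only to illustrate the
bug; the patch instead drops the loop entirely):

	/* Sketch: what a progress-making retry loop would look like.
	 * The removed loop kept `addr` fixed while shrinking `nr`, so a
	 * short write made it re-send the start of the buffer. */
	while (nr > 0) {
		ssize_t n = __kernel_write(file, addr, nr, &pos);

		if (n <= 0)
			return 0;	/* error or no progress: give up */
		addr += n;		/* advance the data... */
		nr -= n;		/* ...in step with the remaining count */
	}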
From patchwork Wed Apr 29 21:49:52 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518583
Date: Wed, 29 Apr 2020 23:49:52 +0200
Message-Id: <20200429214954.44866-4-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
References: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 3/5] coredump: Refactor page range dumping into common
 helper
From: Jann Horn
To: Andrew Morton
Cc: Linus Torvalds, Christoph Hellwig, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Alexander Viro,
 "Eric W. Biederman", Oleg Nesterov, Russell King,
 linux-arm-kernel@lists.infradead.org, Mark Salter, Aurelien Jacquiot,
 linux-c6x-dev@linux-c6x.org, Yoshinori Sato, Rich Felker,
 linux-sh@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

Both fs/binfmt_elf.c and fs/binfmt_elf_fdpic.c need to dump ranges of
pages into the coredump file. Extract that logic into a common helper.

Any other binfmt that actually wants to create coredumps will probably
need the same function, so stop making get_dump_page() depend on
CONFIG_ELF_CORE.
Signed-off-by: Jann Horn
---
 fs/binfmt_elf.c          | 22 ++--------------------
 fs/binfmt_elf_fdpic.c    | 18 +++---------------
 fs/coredump.c            | 33 +++++++++++++++++++++++++++++++++
 include/linux/coredump.h |  2 ++
 mm/gup.c                 |  2 --
 5 files changed, 40 insertions(+), 37 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index b29b84595b09f..fb36469848323 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -2323,26 +2323,8 @@ static int elf_core_dump(struct coredump_params *cprm)
 
 	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
 			vma = next_vma(vma, gate_vma)) {
-		unsigned long addr;
-		unsigned long end;
-
-		end = vma->vm_start + vma_filesz[i++];
-
-		for (addr = vma->vm_start; addr < end; addr += PAGE_SIZE) {
-			struct page *page;
-			int stop;
-
-			page = get_dump_page(addr);
-			if (page) {
-				void *kaddr = kmap(page);
-				stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
-				kunmap(page);
-				put_page(page);
-			} else
-				stop = !dump_skip(cprm, PAGE_SIZE);
-			if (stop)
-				goto cleanup;
-		}
+		if (!dump_user_range(cprm, vma->vm_start, vma_filesz[i++]))
+			goto cleanup;
 	}
 	dump_truncate(cprm);
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index f5b47076fa762..938f66f4de9b2 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1500,21 +1500,9 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
 		if (!maydump(vma, cprm->mm_flags))
 			continue;
 
-		for (addr = vma->vm_start; addr < vma->vm_end;
-		     addr += PAGE_SIZE) {
-			bool res;
-			struct page *page = get_dump_page(addr);
-			if (page) {
-				void *kaddr = kmap(page);
-				res = dump_emit(cprm, kaddr, PAGE_SIZE);
-				kunmap(page);
-				put_page(page);
-			} else {
-				res = dump_skip(cprm, PAGE_SIZE);
-			}
-			if (!res)
-				return false;
-		}
+		if (!dump_user_range(cprm, vma->vm_start,
+				     vma->vm_end - vma->vm_start))
+			return false;
 	}
 	return true;
 }
diff --git a/fs/coredump.c b/fs/coredump.c
index d6fcc36a7db1f..88f625eecaac1 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -859,6 +859,39 @@ int dump_skip(struct coredump_params *cprm, size_t nr)
 }
 EXPORT_SYMBOL(dump_skip);
 
+#ifdef CONFIG_ELF_CORE
+int dump_user_range(struct coredump_params *cprm, unsigned long start,
+		    unsigned long len)
+{
+	unsigned long addr;
+
+	for (addr = start; addr < start + len; addr += PAGE_SIZE) {
+		struct page *page;
+		int stop;
+
+		/*
+		 * To avoid having to allocate page tables for virtual address
+		 * ranges that have never been used yet, use a helper that
+		 * returns NULL when encountering an empty page table entry
+		 * that would otherwise have been filled with the zero page.
+		 */
+		page = get_dump_page(addr);
+		if (page) {
+			void *kaddr = kmap(page);
+
+			stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
+			kunmap(page);
+			put_page(page);
+		} else {
+			stop = !dump_skip(cprm, PAGE_SIZE);
+		}
+		if (stop)
+			return 0;
+	}
+	return 1;
+}
+#endif
+
 int dump_align(struct coredump_params *cprm, int align)
 {
 	unsigned mod = cprm->pos & (align - 1);
diff --git a/include/linux/coredump.h b/include/linux/coredump.h
index abf4b4e65dbb9..4289dc21c04ff 100644
--- a/include/linux/coredump.h
+++ b/include/linux/coredump.h
@@ -16,6 +16,8 @@ extern int dump_skip(struct coredump_params *cprm, size_t nr);
 extern int dump_emit(struct coredump_params *cprm, const void *addr, int nr);
 extern int dump_align(struct coredump_params *cprm, int align);
 extern void dump_truncate(struct coredump_params *cprm);
+int dump_user_range(struct coredump_params *cprm, unsigned long start,
+		    unsigned long len);
 #ifdef CONFIG_COREDUMP
 extern void do_coredump(const kernel_siginfo_t *siginfo);
 #else
diff --git a/mm/gup.c b/mm/gup.c
index 76080c4dbff05..9a7e83772f1fe 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1550,7 +1550,6 @@ static long __get_user_pages_locked(struct task_struct *tsk,
  *
  * Called without mmap_sem, but after all other threads have been killed.
  */
-#ifdef CONFIG_ELF_CORE
 struct page *get_dump_page(unsigned long addr)
 {
 	struct vm_area_struct *vma;
@@ -1563,7 +1562,6 @@ struct page *get_dump_page(unsigned long addr)
 	flush_cache_page(vma, addr, page_to_pfn(page));
 	return page;
 }
-#endif /* CONFIG_ELF_CORE */
 
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
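
With the helper in place, each binfmt's inner dump loop collapses to a
single call per segment; e.g. the binfmt_elf caller now reads roughly:

	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
			vma = next_vma(vma, gate_vma)) {
		if (!dump_user_range(cprm, vma->vm_start, vma_filesz[i++]))
			goto cleanup;	/* short write, signal, or over limit */
	}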

From patchwork Wed Apr 29 21:49:53 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518581
Date: Wed, 29 Apr 2020 23:49:53 +0200
Message-Id: <20200429214954.44866-5-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
References: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 4/5] binfmt_elf, binfmt_elf_fdpic: Use a VMA list snapshot
From: Jann Horn
To: Andrew Morton
Cc: Linus Torvalds, Christoph Hellwig, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Alexander Viro,
 "Eric W. Biederman", Oleg Nesterov, Russell King,
 linux-arm-kernel@lists.infradead.org, Mark Salter, Aurelien Jacquiot,
 linux-c6x-dev@linux-c6x.org, Yoshinori Sato, Rich Felker,
 linux-sh@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

In both binfmt_elf and binfmt_elf_fdpic, use a new helper
dump_vma_snapshot() to take a snapshot of the VMA list (including the
gate VMA, if we have one) while protected by the mmap_sem, and then use
that snapshot instead of walking the VMA list without locking.

An alternative approach would be to keep the mmap_sem held across the
entire core dumping operation; however, keeping the mmap_sem locked
while we may be blocked for an unbounded amount of time (e.g. because
we're dumping to a FUSE filesystem or so) isn't really optimal. The
mmap_sem blocks things like the ->release handler of userfaultfd, and we
don't really want critical system daemons to grind to a halt just
because someone "gifted" them SCM_RIGHTS to an eternally-locked
userfaultfd, or something like that.

Since both the normal ELF code and the FDPIC ELF code need this
functionality (and if any other binfmt wants to add coredump support in
the future, it would probably need it, too), implement this with a
common helper in fs/coredump.c.

A downside of this approach is that we now need a bigger amount of
kernel memory per userspace VMA in the normal ELF case, and that we need
O(n) kernel memory in the FDPIC ELF case at all; but 40 bytes per VMA
shouldn't be terribly bad.
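
For the record, the per-VMA record that the snapshot stores is small;
the "40 bytes per VMA" figure corresponds to the five unsigned longs of
the struct added below in include/linux/coredump.h:

	struct core_vma_metadata {
		unsigned long start, end;	/* VMA bounds at snapshot time */
		unsigned long filesize;
		unsigned long flags;		/* copy of vm_flags */
		unsigned long dump_size;	/* bytes to write to the corefile */
	};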

Signed-off-by: Jann Horn
---
 fs/binfmt_elf.c          | 152 +++++++++++++--------------------------
 fs/binfmt_elf_fdpic.c    |  86 ++++++++++------------
 fs/coredump.c            |  68 ++++++++++++++++++
 include/linux/coredump.h |  10 +++
 4 files changed, 168 insertions(+), 148 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb36469848323..dffe9dc8497ca 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1292,8 +1292,12 @@ static bool always_dump_vma(struct vm_area_struct *vma)
 	return false;
 }
 
+#define DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER 1
+
 /*
  * Decide what to dump of a segment, part, all or none.
+ * The result must be fixed up via vma_dump_size_fixup() once we're in a
+ * context that's allowed to sleep arbitrarily long.
  */
 static unsigned long vma_dump_size(struct vm_area_struct *vma,
 				   unsigned long mm_flags)
@@ -1348,30 +1352,15 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
 	/*
 	 * If this looks like the beginning of a DSO or executable mapping,
-	 * check for an ELF header. If we find one, dump the first page to
-	 * aid in determining what was mapped here.
+	 * we'll check for an ELF header. If we find one, we'll dump the first
+	 * page to aid in determining what was mapped here.
+	 * However, we shouldn't sleep on userspace reads while holding the
+	 * mmap_sem, so we just return a placeholder for now that will be fixed
+	 * up later in vma_dump_size_fixup().
 	 */
 	if (FILTER(ELF_HEADERS) &&
-	    vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ)) {
-		u32 __user *header = (u32 __user *) vma->vm_start;
-		u32 word;
-		/*
-		 * Doing it this way gets the constant folded by GCC.
-		 */
-		union {
-			u32 cmp;
-			char elfmag[SELFMAG];
-		} magic;
-		BUILD_BUG_ON(SELFMAG != sizeof word);
-		magic.elfmag[EI_MAG0] = ELFMAG0;
-		magic.elfmag[EI_MAG1] = ELFMAG1;
-		magic.elfmag[EI_MAG2] = ELFMAG2;
-		magic.elfmag[EI_MAG3] = ELFMAG3;
-		if (unlikely(get_user(word, header)))
-			word = 0;
-		if (word == magic.cmp)
-			return PAGE_SIZE;
-	}
+	    vma->vm_pgoff == 0 && (vma->vm_flags & VM_READ))
+		return DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER;
 
 #undef FILTER
@@ -1381,6 +1370,22 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
 	return vma->vm_end - vma->vm_start;
 }
 
+/* Fix up the result from vma_dump_size(), now that we're allowed to sleep. */
+static void vma_dump_size_fixup(struct core_vma_metadata *meta)
+{
+	char elfmag[SELFMAG];
+
+	if (meta->dump_size != DUMP_SIZE_MAYBE_ELFHDR_PLACEHOLDER)
+		return;
+
+	if (copy_from_user(elfmag, (void __user *)meta->start, SELFMAG)) {
+		meta->dump_size = 0;
+		return;
+	}
+	meta->dump_size =
+		(memcmp(elfmag, ELFMAG, SELFMAG) == 0) ? PAGE_SIZE : 0;
+}
+
 /* An ELF note in memory */
 struct memelfnote {
@@ -2124,32 +2129,6 @@ static void free_note_info(struct elf_note_info *info)
 #endif
 
-static struct vm_area_struct *first_vma(struct task_struct *tsk,
-					struct vm_area_struct *gate_vma)
-{
-	struct vm_area_struct *ret = tsk->mm->mmap;
-
-	if (ret)
-		return ret;
-	return gate_vma;
-}
-
-/*
- * Helper function for iterating across a vma list. It ensures that the caller
- * will visit `gate_vma' prior to terminating the search.
- */
-static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
-				       struct vm_area_struct *gate_vma)
-{
-	struct vm_area_struct *ret;
-
-	ret = this_vma->vm_next;
-	if (ret)
-		return ret;
-	if (this_vma == gate_vma)
-		return NULL;
-	return gate_vma;
-}
-
 static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
 			     elf_addr_t e_shoff, int segs)
 {
@@ -2176,9 +2155,8 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
 static int elf_core_dump(struct coredump_params *cprm)
 {
 	int has_dumped = 0;
-	int segs, i;
+	int vma_count, segs, i;
 	size_t vma_data_size = 0;
-	struct vm_area_struct *vma, *gate_vma;
 	struct elfhdr elf;
 	loff_t offset = 0, dataoff;
 	struct elf_note_info info = { };
@@ -2186,30 +2164,21 @@ static int elf_core_dump(struct coredump_params *cprm)
 	struct elf_shdr *shdr4extnum = NULL;
 	Elf_Half e_phnum;
 	elf_addr_t e_shoff;
-	elf_addr_t *vma_filesz = NULL;
+	struct core_vma_metadata *vma_meta;
+
+	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, vma_dump_size))
+		return 0;
+
+	for (i = 0; i < vma_count; i++) {
+		vma_dump_size_fixup(vma_meta + i);
+		vma_data_size += vma_meta[i].dump_size;
+	}
 
-	/*
-	 * We no longer stop all VM operations.
-	 *
-	 * This is because those proceses that could possibly change map_count
-	 * or the mmap / vma pages are now blocked in do_exit on current
-	 * finishing this core dump.
-	 *
-	 * Only ptrace can touch these memory addresses, but it doesn't change
-	 * the map_count or the pages allocated. So no possibility of crashing
-	 * exists while dumping the mm->vm_next areas to the core file.
-	 */
-
 	/*
 	 * The number of segs are recored into ELF header as 16bit value.
 	 * Please check DEFAULT_MAX_MAP_COUNT definition when you modify here.
 	 */
-	segs = current->mm->map_count;
-	segs += elf_core_extra_phdrs();
-
-	gate_vma = get_gate_vma(current->mm);
-	if (gate_vma != NULL)
-		segs++;
+	segs = vma_count + elf_core_extra_phdrs();
 
 	/* for notes section */
 	segs++;
@@ -2247,24 +2216,6 @@ static int elf_core_dump(struct coredump_params *cprm)
 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
 
-	/*
-	 * Zero vma process will get ZERO_SIZE_PTR here.
-	 * Let coredump continue for register state at least.
-	 */
-	vma_filesz = kvmalloc(array_size(sizeof(*vma_filesz), (segs - 1)),
-			      GFP_KERNEL);
-	if (!vma_filesz)
-		goto cleanup;
-
-	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
-			vma = next_vma(vma, gate_vma)) {
-		unsigned long dump_size;
-
-		dump_size = vma_dump_size(vma, cprm->mm_flags);
-		vma_filesz[i++] = dump_size;
-		vma_data_size += dump_size;
-	}
-
 	offset += vma_data_size;
 	offset += elf_core_extra_data_size();
 	e_shoff = offset;
@@ -2285,22 +2236,20 @@ static int elf_core_dump(struct coredump_params *cprm)
 		goto cleanup;
 
 	/* Write program headers for segments dump */
-	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
-			vma = next_vma(vma, gate_vma)) {
+	for (i = 0; i < vma_count; i++) {
+		struct core_vma_metadata *meta = vma_meta + i;
 		struct elf_phdr phdr;
 
 		phdr.p_type = PT_LOAD;
 		phdr.p_offset = offset;
-		phdr.p_vaddr = vma->vm_start;
+		phdr.p_vaddr = meta->start;
 		phdr.p_paddr = 0;
-		phdr.p_filesz = vma_filesz[i++];
-		phdr.p_memsz = vma->vm_end - vma->vm_start;
+		phdr.p_filesz = meta->dump_size;
+		phdr.p_memsz = meta->end - meta->start;
 		offset += phdr.p_filesz;
-		phdr.p_flags = vma->vm_flags & VM_READ ? PF_R : 0;
-		if (vma->vm_flags & VM_WRITE)
-			phdr.p_flags |= PF_W;
-		if (vma->vm_flags & VM_EXEC)
-			phdr.p_flags |= PF_X;
+		phdr.p_flags = meta->flags & VM_READ ? PF_R : 0;
+		phdr.p_flags |= meta->flags & VM_WRITE ? PF_W : 0;
+		phdr.p_flags |= meta->flags & VM_EXEC ? PF_X : 0;
 		phdr.p_align = ELF_EXEC_PAGESIZE;
 
 		if (!dump_emit(cprm, &phdr, sizeof(phdr)))
@@ -2321,9 +2270,10 @@ static int elf_core_dump(struct coredump_params *cprm)
 	if (!dump_skip(cprm, dataoff - cprm->pos))
 		goto cleanup;
 
-	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
-			vma = next_vma(vma, gate_vma)) {
-		if (!dump_user_range(cprm, vma->vm_start, vma_filesz[i++]))
+	for (i = 0; i < vma_count; i++) {
+		struct core_vma_metadata *meta = vma_meta + i;
+
+		if (!dump_user_range(cprm, meta->start, meta->dump_size))
 			goto cleanup;
 	}
 	dump_truncate(cprm);
@@ -2339,7 +2289,7 @@ static int elf_core_dump(struct coredump_params *cprm)
 cleanup:
 	free_note_info(&info);
 	kfree(shdr4extnum);
-	kvfree(vma_filesz);
+	kvfree(vma_meta);
 	kfree(phdr4note);
 	return has_dumped;
 }
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index 938f66f4de9b2..bde51f40085b9 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1190,7 +1190,8 @@ static int elf_fdpic_map_file_by_direct_mmap(struct elf_fdpic_params *params,
  *
  * I think we should skip something. But I am not sure how. H.J.
  */
-static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
+static unsigned long vma_dump_size(struct vm_area_struct *vma,
+				   unsigned long mm_flags)
 {
 	int dump_ok;
 
@@ -1219,7 +1220,7 @@ static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
 			kdcore("%08lx: %08lx: %s (DAX private)", vma->vm_start,
 			       vma->vm_flags, dump_ok ? "yes" : "no");
 		}
-		return dump_ok;
+		goto out;
 	}
 
 	/* By default, dump shared memory if mapped from an anonymous file. */
@@ -1228,13 +1229,13 @@ static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
 			dump_ok = test_bit(MMF_DUMP_ANON_SHARED, &mm_flags);
 			kdcore("%08lx: %08lx: %s (share)", vma->vm_start,
 			       vma->vm_flags, dump_ok ? "yes" : "no");
-			return dump_ok;
+			goto out;
 		}
 
 		dump_ok = test_bit(MMF_DUMP_MAPPED_SHARED, &mm_flags);
 		kdcore("%08lx: %08lx: %s (share)", vma->vm_start,
 		       vma->vm_flags, dump_ok ? "yes" : "no");
-		return dump_ok;
+		goto out;
 	}
 
 #ifdef CONFIG_MMU
@@ -1243,14 +1244,16 @@ static int maydump(struct vm_area_struct *vma, unsigned long mm_flags)
 		dump_ok = test_bit(MMF_DUMP_MAPPED_PRIVATE, &mm_flags);
 		kdcore("%08lx: %08lx: %s (!anon)", vma->vm_start,
 		       vma->vm_flags, dump_ok ? "yes" : "no");
-		return dump_ok;
+		goto out;
 	}
 #endif
 
 	dump_ok = test_bit(MMF_DUMP_ANON_PRIVATE, &mm_flags);
 	kdcore("%08lx: %08lx: %s", vma->vm_start, vma->vm_flags,
 	       dump_ok ? "yes" : "no");
-	return dump_ok;
+
+out:
+	return dump_ok ? vma->vm_end - vma->vm_start : 0;
 }
 
 /* An ELF note in memory */
@@ -1490,31 +1493,30 @@ static void fill_extnum_info(struct elfhdr *elf, struct elf_shdr *shdr4extnum,
 /*
  * dump the segments for an MMU process
  */
-static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
+static bool elf_fdpic_dump_segments(struct coredump_params *cprm,
+				    struct core_vma_metadata *vma_meta,
+				    int vma_count)
 {
-	struct vm_area_struct *vma;
+	int i;
 
-	for (vma = current->mm->mmap; vma; vma = vma->vm_next) {
-		if (!maydump(vma, cprm->mm_flags))
-			continue;
+	for (i = 0; i < vma_count; i++) {
+		struct core_vma_metadata *meta = vma_meta + i;
 
-		if (!dump_user_range(cprm, vma->vm_start,
-				     vma->vm_end - vma->vm_start))
+		if (!dump_user_range(cprm, meta->start, meta->dump_size))
 			return false;
 	}
 	return true;
 }
 
-static size_t elf_core_vma_data_size(unsigned long mm_flags)
+static size_t elf_core_vma_data_size(unsigned long mm_flags,
+				     struct core_vma_metadata *vma_meta,
+				     int vma_count)
 {
-	struct vm_area_struct *vma;
 	size_t size = 0;
+	int i;
 
-	for (vma = current->mm->mmap; vma; vma = vma->vm_next)
-		if (maydump(vma, mm_flags))
-			size += vma->vm_end - vma->vm_start;
+	for (i = 0; i < vma_count; i++)
+		size += vma_meta[i].dump_size;
+
 	return size;
 }
 
@@ -1529,9 +1531,8 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 {
 #define	NUM_NOTES	6
 	int has_dumped = 0;
-	int segs;
+	int vma_count, segs;
 	int i;
-	struct vm_area_struct *vma;
 	struct elfhdr *elf = NULL;
 	loff_t offset = 0, dataoff;
 	int numnote;
@@ -1552,18 +1553,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 	elf_addr_t e_shoff;
 	struct core_thread *ct;
 	struct elf_thread_status *tmp;
-
-	/*
-	 * We no longer stop all VM operations.
-	 *
-	 * This is because those proceses that could possibly change map_count
-	 * or the mmap / vma pages are now blocked in do_exit on current
-	 * finishing this core dump.
-	 *
-	 * Only ptrace can touch these memory addresses, but it doesn't change
-	 * the map_count or the pages allocated. So no possibility of crashing
-	 * exists while dumping the mm->vm_next areas to the core file.
-	 */
+	struct core_vma_metadata *vma_meta = NULL;
 
 	/* alloc memory for large data structures: too large to be on stack */
 	elf = kmalloc(sizeof(*elf), GFP_KERNEL);
@@ -1588,6 +1578,9 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 		goto cleanup;
 #endif
 
+	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, vma_dump_size))
+		goto cleanup;
+
 	for (ct = current->mm->core_state->dumper.next;
 					ct; ct = ct->next) {
 		tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
@@ -1611,8 +1604,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 	fill_prstatus(prstatus, current, cprm->siginfo->si_signo);
 	elf_core_copy_regs(&prstatus->pr_reg, cprm->regs);
 
-	segs = current->mm->map_count;
-	segs += elf_core_extra_phdrs();
+	segs = vma_count + elf_core_extra_phdrs();
 
 	/* for notes section */
 	segs++;
@@ -1680,7 +1672,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 	/* Page-align dumped data */
 	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
 
-	offset += elf_core_vma_data_size(cprm->mm_flags);
+	offset += elf_core_vma_data_size(cprm->mm_flags, vma_meta, vma_count);
 	offset += elf_core_extra_data_size();
 	e_shoff = offset;
 
@@ -1700,24 +1692,23 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 		goto cleanup;
 
 	/* write program headers for segments dump */
-	for (vma = current->mm->mmap; vma; vma = vma->vm_next) {
+	for (i = 0; i < vma_count; i++) {
+		struct core_vma_metadata *meta = vma_meta + i;
 		struct elf_phdr phdr;
 		size_t sz;
 
-		sz = vma->vm_end - vma->vm_start;
+		sz = meta->end - meta->start;
 
 		phdr.p_type = PT_LOAD;
 		phdr.p_offset = offset;
-		phdr.p_vaddr = vma->vm_start;
+		phdr.p_vaddr = meta->start;
 		phdr.p_paddr = 0;
-		phdr.p_filesz = maydump(vma, cprm->mm_flags) ? sz : 0;
+		phdr.p_filesz = meta->dump_size;
 		phdr.p_memsz = sz;
 		offset += phdr.p_filesz;
-		phdr.p_flags = vma->vm_flags & VM_READ ? PF_R : 0;
-		if (vma->vm_flags & VM_WRITE)
-			phdr.p_flags |= PF_W;
-		if (vma->vm_flags & VM_EXEC)
-			phdr.p_flags |= PF_X;
+		phdr.p_flags = meta->flags & VM_READ ? PF_R : 0;
+		phdr.p_flags |= meta->flags & VM_WRITE ? PF_W : 0;
+		phdr.p_flags |= meta->flags & VM_EXEC ? PF_X : 0;
 		phdr.p_align = ELF_EXEC_PAGESIZE;
 
 		if (!dump_emit(cprm, &phdr, sizeof(phdr)))
@@ -1745,7 +1736,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 	if (!dump_skip(cprm, dataoff - cprm->pos))
 		goto cleanup;
 
-	if (!elf_fdpic_dump_segments(cprm))
+	if (!elf_fdpic_dump_segments(cprm, vma_meta, vma_count))
 		goto cleanup;
 
 	if (!elf_core_write_extra_data(cprm))
@@ -1769,6 +1760,7 @@ static int elf_fdpic_core_dump(struct coredump_params *cprm)
 		list_del(tmp);
 		kfree(list_entry(tmp, struct elf_thread_status, list));
 	}
+	kvfree(vma_meta);
 	kfree(phdr4note);
 	kfree(elf);
 	kfree(prstatus);
diff --git a/fs/coredump.c b/fs/coredump.c
index 88f625eecaac1..4213eab89190f 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -918,3 +918,71 @@ void dump_truncate(struct coredump_params *cprm)
 	}
 }
 EXPORT_SYMBOL(dump_truncate);
+
+static struct vm_area_struct *first_vma(struct task_struct *tsk,
+					struct vm_area_struct *gate_vma)
+{
+	struct vm_area_struct *ret = tsk->mm->mmap;
+
+	if (ret)
+		return ret;
+	return gate_vma;
+}
+
+/*
+ * Helper function for iterating across a vma list. It ensures that the caller
+ * will visit `gate_vma' prior to terminating the search.
+ */
+static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+				       struct vm_area_struct *gate_vma)
+{
+	struct vm_area_struct *ret;
+
+	ret = this_vma->vm_next;
+	if (ret)
+		return ret;
+	if (this_vma == gate_vma)
+		return NULL;
+	return gate_vma;
+}
+
+/*
+ * Under the mmap_sem, take a snapshot of relevant information about the
+ * task's VMAs.
+ */
+int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+		      struct core_vma_metadata **vma_meta,
+		      unsigned long (*dump_size_cb)(struct vm_area_struct *,
+						    unsigned long))
+{
+	struct vm_area_struct *vma, *gate_vma;
+	struct mm_struct *mm = current->mm;
+	int i;
+
+	if (down_read_killable(&mm->mmap_sem))
+		return -EINTR;
+
+	gate_vma = get_gate_vma(mm);
+	*vma_count = mm->map_count + (gate_vma ? 1 : 0);
+
+	*vma_meta = kvmalloc_array(*vma_count, sizeof(**vma_meta), GFP_KERNEL);
+	if (!*vma_meta) {
+		up_read(&mm->mmap_sem);
+		return -ENOMEM;
+	}
+
+	for (i = 0, vma = first_vma(current, gate_vma); vma != NULL;
+			vma = next_vma(vma, gate_vma)) {
+		(*vma_meta)[i++] = (struct core_vma_metadata) {
+			.start = vma->vm_start,
+			.end = vma->vm_end,
+			.flags = vma->vm_flags,
+			.dump_size = dump_size_cb(vma, cprm->mm_flags)
+		};
+	}
+
+	up_read(&mm->mmap_sem);
+
+	if (WARN_ON(i != *vma_count))
+		return -EFAULT;
+
+	return 0;
+}
diff --git a/include/linux/coredump.h b/include/linux/coredump.h
index 4289dc21c04ff..d3387866dce7b 100644
--- a/include/linux/coredump.h
+++ b/include/linux/coredump.h
@@ -7,6 +7,13 @@
 #include <linux/fs.h>
 #include <asm/siginfo.h>
 
+struct core_vma_metadata {
+	unsigned long start, end;
+	unsigned long filesize;
+	unsigned long flags;
+	unsigned long dump_size;
+};
+
 /*
  * These are the only things you should do on a core-file: use only these
  * functions to write out all the necessary info.
 */
@@ -18,6 +25,9 @@ extern int dump_align(struct coredump_params *cprm, int align);
 extern void dump_truncate(struct coredump_params *cprm);
 int dump_user_range(struct coredump_params *cprm, unsigned long start,
 		    unsigned long len);
+int dump_vma_snapshot(struct coredump_params *cprm, int *vma_count,
+		      struct core_vma_metadata **vma_meta,
+		      unsigned long (*dump_size_cb)(struct vm_area_struct *,
+						    unsigned long));
 #ifdef CONFIG_COREDUMP
 extern void do_coredump(const kernel_siginfo_t *siginfo);
 #else
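
Distilled from the two call sites above, the control flow a binfmt
dumper now follows is roughly (a sketch; header writing and error paths
abbreviated):

	struct core_vma_metadata *vma_meta;
	int vma_count, i;

	/* Takes and releases mmap_sem internally; copies out VMA metadata. */
	if (dump_vma_snapshot(cprm, &vma_count, &vma_meta, vma_dump_size))
		return 0;

	/* From here on, only the snapshot is used - no live VMA pointers. */
	for (i = 0; i < vma_count; i++) {
		if (!dump_user_range(cprm, vma_meta[i].start,
				     vma_meta[i].dump_size))
			break;
	}
	kvfree(vma_meta);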

From patchwork Wed Apr 29 21:49:54 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11518577
Date: Wed, 29 Apr 2020 23:49:54 +0200
Message-Id: <20200429214954.44866-6-jannh@google.com>
In-Reply-To: <20200429214954.44866-1-jannh@google.com>
References: <20200429214954.44866-1-jannh@google.com>
Subject: [PATCH v2 5/5] mm/gup: Take mmap_sem in get_dump_page()
From: Jann Horn
To: Andrew Morton
Cc: Linus Torvalds, Christoph Hellwig, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Alexander Viro,
 "Eric W. Biederman", Oleg Nesterov, Russell King,
 linux-arm-kernel@lists.infradead.org, Mark Salter, Aurelien Jacquiot,
 linux-c6x-dev@linux-c6x.org, Yoshinori Sato, Rich Felker,
 linux-sh@vger.kernel.org
X-Mailing-List: linux-sh@vger.kernel.org

Properly take the mmap_sem before calling into the GUP code from
get_dump_page(); and play nice, allowing the GUP code to drop the
mmap_sem if it has to sleep.

As Linus pointed out, we don't actually need the VMA because
__get_user_pages() will flush the dcache for us if necessary.

Signed-off-by: Jann Horn
---
 mm/gup.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 9a7e83772f1fe..03f659ddd830a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1548,19 +1548,23 @@ static long __get_user_pages_locked(struct task_struct *tsk,
  * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
  * allowing a hole to be left in the corefile to save diskspace.
  *
- * Called without mmap_sem, but after all other threads have been killed.
+ * Called without mmap_sem (takes and releases the mmap_sem by itself).
  */
 struct page *get_dump_page(unsigned long addr)
 {
-	struct vm_area_struct *vma;
+	struct mm_struct *mm = current->mm;
 	struct page *page;
+	int locked = 1;
+	int ret;
 
-	if (__get_user_pages(current, current->mm, addr, 1,
-			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
-			     NULL) < 1)
+	if (down_read_killable(&mm->mmap_sem))
 		return NULL;
-	flush_cache_page(vma, addr, page_to_pfn(page));
-	return page;
+	ret = __get_user_pages_locked(current, mm, addr, 1, &page, NULL,
+				      &locked,
+				      FOLL_FORCE | FOLL_DUMP | FOLL_GET);
+	if (locked)
+		up_read(&mm->mmap_sem);
+	return (ret == 1) ? page : NULL;
 }
 
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
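
The `locked` handshake used above deserves a note: __get_user_pages_locked()
may drop the mmap_sem while it sleeps and reports that back through the
flag, so the caller must only unlock if it still owns the lock. The
caller-side contract, roughly (a sketch extracted from the new
get_dump_page() body above):

	int locked = 1;			/* we hold mmap_sem on entry */

	down_read(&mm->mmap_sem);
	ret = __get_user_pages_locked(current, mm, addr, 1, &page, NULL,
				      &locked, gup_flags);
	if (locked)			/* still ours to release */
		up_read(&mm->mmap_sem);
	/* else: the GUP code already dropped it on our behalf */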