From patchwork Tue Apr 28 03:27:41 2020
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 11513701
Date: Tue, 28 Apr 2020 05:27:41 +0200
Message-Id: <20200428032745.133556-2-jannh@google.com>
In-Reply-To: <20200428032745.133556-1-jannh@google.com>
References: <20200428032745.133556-1-jannh@google.com>
Subject: [PATCH 1/5] binfmt_elf_fdpic: Stop using dump_emit() on user pointers on !MMU
From: Jann Horn
To: Andrew Morton
Cc: Linus Torvalds, Christoph Hellwig, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Alexander Viro,
    "Eric W. Biederman", Oleg Nesterov, Russell King,
    linux-arm-kernel@lists.infradead.org, Mark Salter, Aurelien Jacquiot,
    linux-c6x-dev@linux-c6x.org, Yoshinori Sato, Rich Felker,
    linux-sh@vger.kernel.org

dump_emit() is for kernel pointers, and VMAs describe userspace memory. Let's
be tidy here and avoid accessing userspace pointers under KERNEL_DS, even if
it probably doesn't matter much on !MMU systems - especially given that it
looks like we can just use the same get_dump_page() as on MMU if we move it
out of the CONFIG_MMU block.
Signed-off-by: Jann Horn
---
 fs/binfmt_elf_fdpic.c |  8 ------
 mm/gup.c              | 58 +++++++++++++++++++++----------------
 2 files changed, 29 insertions(+), 37 deletions(-)

diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index c62c17a5c34a9..f5b47076fa762 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1495,14 +1495,11 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
 	struct vm_area_struct *vma;
 
 	for (vma = current->mm->mmap; vma; vma = vma->vm_next) {
-#ifdef CONFIG_MMU
 		unsigned long addr;
-#endif
 
 		if (!maydump(vma, cprm->mm_flags))
 			continue;
 
-#ifdef CONFIG_MMU
 		for (addr = vma->vm_start; addr < vma->vm_end;
 		     addr += PAGE_SIZE) {
 			bool res;
@@ -1518,11 +1515,6 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
 			if (!res)
 				return false;
 		}
-#else
-		if (!dump_emit(cprm, (void *) vma->vm_start,
-			       vma->vm_end - vma->vm_start))
-			return false;
-#endif
 	}
 	return true;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 50681f0286ded..76080c4dbff05 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1490,35 +1490,6 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 		up_read(&mm->mmap_sem);
 	return ret;	/* 0 or negative error code */
 }
-
-/**
- * get_dump_page() - pin user page in memory while writing it to core dump
- * @addr: user address
- *
- * Returns struct page pointer of user page pinned for dump,
- * to be freed afterwards by put_page().
- *
- * Returns NULL on any kind of failure - a hole must then be inserted into
- * the corefile, to preserve alignment with its headers; and also returns
- * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
- * allowing a hole to be left in the corefile to save diskspace.
- *
- * Called without mmap_sem, but after all other threads have been killed.
- */
-#ifdef CONFIG_ELF_CORE
-struct page *get_dump_page(unsigned long addr)
-{
-	struct vm_area_struct *vma;
-	struct page *page;
-
-	if (__get_user_pages(current, current->mm, addr, 1,
-			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
-			     NULL) < 1)
-		return NULL;
-	flush_cache_page(vma, addr, page_to_pfn(page));
-	return page;
-}
-#endif /* CONFIG_ELF_CORE */
 #else /* CONFIG_MMU */
 static long __get_user_pages_locked(struct task_struct *tsk,
 		struct mm_struct *mm, unsigned long start,
@@ -1565,6 +1536,35 @@ static long __get_user_pages_locked(struct task_struct *tsk,
 }
 #endif /* !CONFIG_MMU */
 
+/**
+ * get_dump_page() - pin user page in memory while writing it to core dump
+ * @addr: user address
+ *
+ * Returns struct page pointer of user page pinned for dump,
+ * to be freed afterwards by put_page().
+ *
+ * Returns NULL on any kind of failure - a hole must then be inserted into
+ * the corefile, to preserve alignment with its headers; and also returns
+ * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
+ * allowing a hole to be left in the corefile to save diskspace.
+ *
+ * Called without mmap_sem, but after all other threads have been killed.
+ */
+#ifdef CONFIG_ELF_CORE
+struct page *get_dump_page(unsigned long addr)
+{
+	struct vm_area_struct *vma;
+	struct page *page;
+
+	if (__get_user_pages(current, current->mm, addr, 1,
+			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
+			     NULL) < 1)
+		return NULL;
+	flush_cache_page(vma, addr, page_to_pfn(page));
+	return page;
+}
+#endif /* CONFIG_ELF_CORE */
+
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 {
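
For context, a minimal sketch of what the unified per-VMA dump loop in
elf_fdpic_dump_segments() roughly looks like once both MMU and !MMU go
through get_dump_page() (illustrative only, not copied verbatim from the
patched fs/binfmt_elf_fdpic.c; the local name "kaddr" is just used here
for clarity): each user page is pinned with get_dump_page(), written via
dump_emit() through a kernel mapping, and unresolvable pages become
aligned holes via dump_skip().

	/* Sketch of the per-VMA page loop after this patch (illustrative). */
	for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
		bool res;
		struct page *page = get_dump_page(addr);

		if (page) {
			/* dump_emit() now only ever sees kernel pointers */
			void *kaddr = kmap(page);

			res = dump_emit(cprm, kaddr, PAGE_SIZE);
			kunmap(page);
			put_page(page);
		} else {
			/* hole: keep the core file aligned with its headers
			 * without writing any data */
			res = dump_skip(cprm, PAGE_SIZE);
		}
		if (!res)
			return false;
	}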