From patchwork Mon Nov 2 12:31:48 2020
X-Patchwork-Submitter: Arnd Bergmann <arnd@kernel.org>
X-Patchwork-Id: 11873809
From: Arnd Bergmann <arnd@kernel.org>
To: linux-arch@vger.kernel.org
Cc: Arnd Bergmann <arnd@kernel.org>, Alexander Viro, Andrew Morton,
    Andy Lutomirski, Borislav Petkov, Brian Gerst, Christoph Hellwig,
    Eric Biederman, Ingo Molnar, "H. Peter Anvin", Thomas Gleixner,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, kexec@lists.infradead.org
Subject: [PATCH v2 1/4] kexec: simplify compat_sys_kexec_load
Date: Mon, 2 Nov 2020 13:31:48 +0100
Message-Id: <20201102123151.2860165-2-arnd@kernel.org>
In-Reply-To: <20201102123151.2860165-1-arnd@kernel.org>
References: <20201102123151.2860165-1-arnd@kernel.org>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0

From: Arnd Bergmann <arnd@kernel.org>

The compat version of sys_kexec_load() uses compat_alloc_user_space() to
convert the user-provided arguments into the native format. Move the
conversion into the regular implementation with an in_compat_syscall()
check to simplify it and avoid the compat_alloc_user_space() call.

compat_sys_kexec_load() now behaves the same as sys_kexec_load().

Signed-off-by: Arnd Bergmann <arnd@kernel.org>
---
 include/linux/kexec.h |  2 -
 kernel/kexec.c        | 95 +++++++++++++++++++------------------------
 2 files changed, 42 insertions(+), 55 deletions(-)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 9e93bef52968..7b6717cd5c4a 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -88,14 +88,12 @@ struct kexec_segment {
 	size_t memsz;
 };
 
-#ifdef CONFIG_COMPAT
 struct compat_kexec_segment {
 	compat_uptr_t buf;
 	compat_size_t bufsz;
 	compat_ulong_t mem;	/* User space sees this as a (void *) ... */
 	compat_size_t memsz;
 };
-#endif
 
 #ifdef CONFIG_KEXEC_FILE
 struct purgatory_info {
diff --git a/kernel/kexec.c b/kernel/kexec.c
index c82c6c06f051..ec04791eea3e 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -19,21 +19,46 @@
 
 #include "kexec_internal.h"
 
+static int copy_user_compat_segment_list(struct kimage *image,
+					 unsigned long nr_segments,
+					 void __user *segments)
+{
+	struct compat_kexec_segment __user *cs = segments;
+	struct compat_kexec_segment segment;
+	int i;
+
+	for (i = 0; i < nr_segments; i++) {
+		if (copy_from_user(&segment, &cs[i], sizeof(segment)))
+			return -EFAULT;
+
+		image->segment[i] = (struct kexec_segment) {
+			.buf = compat_ptr(segment.buf),
+			.bufsz = segment.bufsz,
+			.mem = segment.mem,
+			.memsz = segment.memsz,
+		};
+	}
+
+	return 0;
+}
+
+
 static int copy_user_segment_list(struct kimage *image,
 				  unsigned long nr_segments,
 				  struct kexec_segment __user *segments)
 {
-	int ret;
 	size_t segment_bytes;
 
 	/* Read in the segments */
 	image->nr_segments = nr_segments;
 	segment_bytes = nr_segments * sizeof(*segments);
-	ret = copy_from_user(image->segment, segments, segment_bytes);
-	if (ret)
-		ret = -EFAULT;
+	if (in_compat_syscall())
+		return copy_user_compat_segment_list(image, nr_segments, segments);
 
-	return ret;
+	if (copy_from_user(image->segment, segments, segment_bytes))
+		return -EFAULT;
+
+	return 0;
 }
 
 static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
@@ -233,8 +258,9 @@ static inline int kexec_load_check(unsigned long nr_segments,
 	return 0;
 }
 
-SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
-		struct kexec_segment __user *, segments, unsigned long, flags)
+static int kernel_kexec_load(unsigned long entry, unsigned long nr_segments,
+			     struct kexec_segment __user *segments,
+			     unsigned long flags)
 {
 	int result;
 
@@ -265,57 +291,20 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 	return result;
 }
 
+SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
+		struct kexec_segment __user *, segments, unsigned long, flags)
+{
+	return kernel_kexec_load(entry, nr_segments, segments, flags);
+}
+
 #ifdef CONFIG_COMPAT
 COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
 		       compat_ulong_t, nr_segments,
 		       struct compat_kexec_segment __user *, segments,
 		       compat_ulong_t, flags)
 {
-	struct compat_kexec_segment in;
-	struct kexec_segment out, __user *ksegments;
-	unsigned long i, result;
-
-	result = kexec_load_check(nr_segments, flags);
-	if (result)
-		return result;
-
-	/* Don't allow clients that don't understand the native
-	 * architecture to do anything.
-	 */
-	if ((flags & KEXEC_ARCH_MASK) == KEXEC_ARCH_DEFAULT)
-		return -EINVAL;
-
-	ksegments = compat_alloc_user_space(nr_segments * sizeof(out));
-	for (i = 0; i < nr_segments; i++) {
-		result = copy_from_user(&in, &segments[i], sizeof(in));
-		if (result)
-			return -EFAULT;
-
-		out.buf = compat_ptr(in.buf);
-		out.bufsz = in.bufsz;
-		out.mem = in.mem;
-		out.memsz = in.memsz;
-
-		result = copy_to_user(&ksegments[i], &out, sizeof(out));
-		if (result)
-			return -EFAULT;
-	}
-
-	/* Because we write directly to the reserved memory
-	 * region when loading crash kernels we need a mutex here to
-	 * prevent multiple crash kernels from attempting to load
-	 * simultaneously, and to prevent a crash kernel from loading
-	 * over the top of a in use crash kernel.
-	 *
-	 * KISS: always take the mutex.
-	 */
-	if (!mutex_trylock(&kexec_mutex))
-		return -EBUSY;
-
-	result = do_kexec_load(entry, nr_segments, ksegments, flags);
-
-	mutex_unlock(&kexec_mutex);
-
-	return result;
+	return kernel_kexec_load(entry, nr_segments,
+				 (struct kexec_segment __user *)segments,
+				 flags);
 }
 #endif
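
For reference (illustrative only, not part of the patch): a minimal userspace
sketch of how kexec_load() is invoked. After this change a 32-bit caller
entering through compat_sys_kexec_load() and a native caller both reach the
same copy_user_segment_list() path, distinguished only by in_compat_syscall().
The payload buffer, target address and entry point below are placeholders;
glibc provides no kexec_load() wrapper, so syscall(2) is used directly, and
the call needs CAP_SYS_BOOT, so this is expected to fail with -EPERM or
-EINVAL unless a real image is supplied.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kexec.h>	/* struct kexec_segment, KEXEC_ARCH_DEFAULT */

int main(void)
{
	static char buf[4096];			/* placeholder payload */
	struct kexec_segment seg = {
		.buf   = buf,			/* source buffer in user memory */
		.bufsz = sizeof(buf),
		.mem   = (void *)0x100000,	/* placeholder physical target */
		.memsz = sizeof(buf),
	};

	/* Same syscall for native and compat callers; the kernel converts
	 * the segment list as needed based on in_compat_syscall(). */
	if (syscall(SYS_kexec_load, 0UL /* entry */, 1UL, &seg,
		    (unsigned long)KEXEC_ARCH_DEFAULT) < 0)
		perror("kexec_load");

	return 0;
}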