From patchwork Mon May 17 20:33:40 2021
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 12263131
From: Arnd Bergmann
To: linux-arch@vger.kernel.org
Cc: Arnd Bergmann, Christoph Hellwig, Alexander Viro, Andrew Morton,
    Borislav Petkov, Brian Gerst, Eric Biederman, Ingo Molnar,
    "H. Peter Anvin", Thomas Gleixner, Linux ARM,
    linux-kernel@vger.kernel.org, Linux-MM, kexec@lists.infradead.org
Subject: [PATCH v3 1/4] kexec: simplify compat_sys_kexec_load
Date: Mon, 17 May 2021 22:33:40 +0200
Message-Id: <20210517203343.3941777-2-arnd@kernel.org>
X-Mailer: git-send-email 2.29.2
In-Reply-To: <20210517203343.3941777-1-arnd@kernel.org>
References: <20210517203343.3941777-1-arnd@kernel.org>

From: Arnd Bergmann

The compat version of sys_kexec_load() uses compat_alloc_user_space to
convert the user-provided arguments into the native format. Move the
conversion into the regular implementation with an in_compat_syscall()
check to simplify it and avoid the compat_alloc_user_space() call.

compat_sys_kexec_load() now behaves the same as sys_kexec_load().

Signed-off-by: Arnd Bergmann
Reviewed-by: Christoph Hellwig
Nacked-by: "Eric W. Biederman"
---
 include/linux/kexec.h |  2 -
 kernel/kexec.c        | 95 +++++++++++++++++++------------------------
 2 files changed, 42 insertions(+), 55 deletions(-)
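For reference, the conversion that the new copy_user_compat_segment_list()
performs can be sketched in plain user space as below. This is an
illustrative sketch only, not the kernel code: the fixed-width typedefs and
the local compat_ptr() stand in for the kernel's compat_* types and helper
on a 64-bit kernel, and the values are placeholders.

/* Standalone model of the compat -> native segment conversion. */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t compat_uptr_t;
typedef uint32_t compat_size_t;
typedef uint32_t compat_ulong_t;

struct kexec_segment {          /* native layout on a 64-bit kernel */
	void *buf;
	size_t bufsz;
	unsigned long mem;
	size_t memsz;
};

struct compat_kexec_segment {   /* what a 32-bit caller passes in */
	compat_uptr_t buf;
	compat_size_t bufsz;
	compat_ulong_t mem;
	compat_size_t memsz;
};

/* Stand-in for the kernel's compat_ptr(): widen a 32-bit pointer. */
static void *compat_ptr(compat_uptr_t ptr)
{
	return (void *)(uintptr_t)ptr;
}

int main(void)
{
	struct compat_kexec_segment cs = {
		.buf = 0x10000, .bufsz = 4096, .mem = 0x100000, .memsz = 4096,
	};
	/* Same compound-literal style as the patch uses per segment. */
	struct kexec_segment s = (struct kexec_segment) {
		.buf   = compat_ptr(cs.buf),
		.bufsz = cs.bufsz,
		.mem   = cs.mem,
		.memsz = cs.memsz,
	};

	printf("buf=%p bufsz=%zu mem=%#lx memsz=%zu\n",
	       s.buf, s.bufsz, s.mem, s.memsz);
	return 0;
}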
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 0c994ae37729..f61e310d7a85 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -88,14 +88,12 @@ struct kexec_segment {
 	size_t memsz;
 };
 
-#ifdef CONFIG_COMPAT
 struct compat_kexec_segment {
 	compat_uptr_t buf;
 	compat_size_t bufsz;
 	compat_ulong_t mem;	/* User space sees this as a (void *) ... */
 	compat_size_t memsz;
 };
-#endif
 
 #ifdef CONFIG_KEXEC_FILE
 struct purgatory_info {
diff --git a/kernel/kexec.c b/kernel/kexec.c
index c82c6c06f051..6618b1d9f00b 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -19,21 +19,46 @@
 
 #include "kexec_internal.h"
 
+static int copy_user_compat_segment_list(struct kimage *image,
+					 unsigned long nr_segments,
+					 void __user *segments)
+{
+	struct compat_kexec_segment __user *cs = segments;
+	struct compat_kexec_segment segment;
+	int i;
+
+	for (i = 0; i < nr_segments; i++) {
+		if (copy_from_user(&segment, &cs[i], sizeof(segment)))
+			return -EFAULT;
+
+		image->segment[i] = (struct kexec_segment) {
+			.buf   = compat_ptr(segment.buf),
+			.bufsz = segment.bufsz,
+			.mem   = segment.mem,
+			.memsz = segment.memsz,
+		};
+	}
+
+	return 0;
+}
+
+
 static int copy_user_segment_list(struct kimage *image,
 				  unsigned long nr_segments,
 				  struct kexec_segment __user *segments)
 {
-	int ret;
 	size_t segment_bytes;
 
 	/* Read in the segments */
 	image->nr_segments = nr_segments;
 	segment_bytes = nr_segments * sizeof(*segments);
-	ret = copy_from_user(image->segment, segments, segment_bytes);
-	if (ret)
-		ret = -EFAULT;
+	if (in_compat_syscall())
+		return copy_user_compat_segment_list(image, nr_segments, segments);
 
-	return ret;
+	if (copy_from_user(image->segment, segments, segment_bytes))
+		return -EFAULT;
+
+	return 0;
 }
 
 static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
@@ -233,8 +258,9 @@ static inline int kexec_load_check(unsigned long nr_segments,
 	return 0;
 }
 
-SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
-		struct kexec_segment __user *, segments, unsigned long, flags)
+static int kernel_kexec_load(unsigned long entry, unsigned long nr_segments,
+			     struct kexec_segment __user * segments,
+			     unsigned long flags)
 {
 	int result;
 
@@ -265,57 +291,20 @@ SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 	return result;
 }
 
+SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
+		struct kexec_segment __user *, segments, unsigned long, flags)
+{
+	return kernel_kexec_load(entry, nr_segments, segments, flags);
+}
+
 #ifdef CONFIG_COMPAT
 COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
 		       compat_ulong_t, nr_segments,
 		       struct compat_kexec_segment __user *, segments,
 		       compat_ulong_t, flags)
 {
-	struct compat_kexec_segment in;
-	struct kexec_segment out, __user *ksegments;
-	unsigned long i, result;
-
-	result = kexec_load_check(nr_segments, flags);
-	if (result)
-		return result;
-
-	/* Don't allow clients that don't understand the native
-	 * architecture to do anything.
-	 */
-	if ((flags & KEXEC_ARCH_MASK) == KEXEC_ARCH_DEFAULT)
-		return -EINVAL;
-
-	ksegments = compat_alloc_user_space(nr_segments * sizeof(out));
-	for (i = 0; i < nr_segments; i++) {
-		result = copy_from_user(&in, &segments[i], sizeof(in));
-		if (result)
-			return -EFAULT;
-
-		out.buf = compat_ptr(in.buf);
-		out.bufsz = in.bufsz;
-		out.mem = in.mem;
-		out.memsz = in.memsz;
-
-		result = copy_to_user(&ksegments[i], &out, sizeof(out));
-		if (result)
-			return -EFAULT;
-	}
-
-	/* Because we write directly to the reserved memory
-	 * region when loading crash kernels we need a mutex here to
-	 * prevent multiple crash kernels from attempting to load
-	 * simultaneously, and to prevent a crash kernel from loading
-	 * over the top of a in use crash kernel.
-	 *
-	 * KISS: always take the mutex.
-	 */
-	if (!mutex_trylock(&kexec_mutex))
-		return -EBUSY;
-
-	result = do_kexec_load(entry, nr_segments, ksegments, flags);
-
-	mutex_unlock(&kexec_mutex);
-
-	return result;
+	return kernel_kexec_load(entry, nr_segments,
+				 (struct kexec_segment __user *)segments,
+				 flags);
 }
 #endif
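For completeness, the user-space calling convention is unchanged by this
refactoring: native and compat binaries both still enter through
kexec_load(2), and with this patch a 32-bit binary on a 64-bit kernel is
funneled through the same kernel_kexec_load() as a native one. Roughly, a
native caller looks like the sketch below. Treat it as illustrative only:
there is no glibc wrapper for this syscall, the call requires CAP_SYS_BOOT,
and the buffer and load address here are placeholders, not a loadable image.

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kexec.h>	/* struct kexec_segment, KEXEC_ARCH_DEFAULT */

int main(void)
{
	static char buf[4096];	/* placeholder, not a real kernel image */
	struct kexec_segment seg = {
		.buf   = buf,
		.bufsz = sizeof(buf),
		.mem   = (void *)0x100000,	/* placeholder load address */
		.memsz = sizeof(buf),
	};

	/* No glibc wrapper exists, so invoke the syscall directly. */
	if (syscall(SYS_kexec_load, 0x100000UL, 1UL, &seg,
		    KEXEC_ARCH_DEFAULT) == -1)
		perror("kexec_load");
	return 0;
}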