From patchwork Sat Aug 13 03:18:24 2016
X-Patchwork-Submitter: Thiago Jung Bauermann
X-Patchwork-Id: 9278311
From: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
To: kexec@lists.infradead.org
Cc: linux-security-module@vger.kernel.org,
	linux-ima-devel@lists.sourceforge.net, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org, x86@kernel.org, Eric Biederman,
	Dave Young, Vivek Goyal, Baoquan He, Michael Ellerman,
	Benjamin Herrenschmidt, Paul Mackerras, Stewart Smith,
	Samuel Mendoza-Jonas, Mimi Zohar, Eric Richter, Thomas Gleixner,
	Ingo Molnar,
Peter Anvin" , Andrew Morton , Petko Manolov , David Laight , Balbir Singh , Thiago Jung Bauermann Subject: [PATCH v2 5/6] kexec: Share logic to copy segment page contents. Date: Sat, 13 Aug 2016 00:18:24 -0300 X-Mailer: git-send-email 1.9.1 In-Reply-To: <1471058305-30198-1-git-send-email-bauerman@linux.vnet.ibm.com> References: <1471058305-30198-1-git-send-email-bauerman@linux.vnet.ibm.com> X-TM-AS-MML: disable X-Content-Scanned: Fidelis XPS MAILER x-cbid: 16081303-0032-0000-0000-00000274D3D3 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 16081303-0033-0000-0000-00000EBA571C Message-Id: <1471058305-30198-6-git-send-email-bauerman@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2016-08-12_10:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=13 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1604210000 definitions=main-1608130039 Sender: owner-linux-security-module@vger.kernel.org Precedence: bulk List-ID: X-Virus-Scanned: ClamAV using ClamSMTP Make kimage_load_normal_segment and kexec_update_segment share code which they currently duplicate. Signed-off-by: Thiago Jung Bauermann --- kernel/kexec_core.c | 159 +++++++++++++++++++++++++++++++--------------------- 1 file changed, 95 insertions(+), 64 deletions(-) diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index 806735201de6..68b5b245e457 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -721,6 +721,65 @@ static struct page *kimage_alloc_page(struct kimage *image, return page; } +struct kimage_update_buffer_state { + /* Destination memory address currently being copied to. */ + unsigned long maddr; + + /* Bytes in buffer still left to copy. */ + size_t ubytes; + + /* Bytes in memory still left to copy. */ + size_t mbytes; + + /* If true, copy from kbuf. */ + bool from_kernel; + + /* Clear pages before copying? */ + bool clear_pages; + + /* Buffer position to continue copying from. */ + const unsigned char *kbuf; + const unsigned char __user *buf; +}; + +static int kimage_update_page(struct page *page, + struct kimage_update_buffer_state *state) +{ + char *ptr; + int result = 0; + size_t uchunk, mchunk; + + ptr = kmap(page); + + /* Start with a clear page */ + if (state->clear_pages) + clear_page(ptr); + + ptr += state->maddr & ~PAGE_MASK; + mchunk = min_t(size_t, state->mbytes, + PAGE_SIZE - (state->maddr & ~PAGE_MASK)); + uchunk = min(state->ubytes, mchunk); + + if (state->from_kernel) + memcpy(ptr, state->kbuf, uchunk); + else + result = copy_from_user(ptr, state->buf, uchunk); + + kunmap(page); + if (result) + return -EFAULT; + + state->ubytes -= uchunk; + state->maddr += mchunk; + if (state->from_kernel) + state->kbuf += mchunk; + else + state->buf += mchunk; + state->mbytes -= mchunk; + + return 0; +} + /** * kexec_update_segment - update the contents of a kimage segment * @buffer: New contents of the segment. 
@@ -739,6 +798,7 @@ int kexec_update_segment(const char *buffer, unsigned long bufsz,
 	unsigned long entry;
 	unsigned long *ptr = NULL;
 	void *dest = NULL;
+	struct kimage_update_buffer_state state;
 
 	if (kexec_image == NULL) {
 		pr_err("Can't update segment: no kexec image loaded.\n");
@@ -768,8 +828,15 @@ int kexec_update_segment(const char *buffer, unsigned long bufsz,
 		return -EINVAL;
 	}
 
-	for (entry = kexec_image->head; !(entry & IND_DONE) && memsz;
-	     entry = *ptr++) {
+	state.maddr = load_addr;
+	state.ubytes = bufsz;
+	state.mbytes = memsz;
+	state.kbuf = buffer;
+	state.from_kernel = true;
+	state.clear_pages = false;
+
+	for (entry = kexec_image->head; !(entry & IND_DONE) &&
+	     state.mbytes; entry = *ptr++) {
 		void *addr = (void *) (entry & PAGE_MASK);
 
 		switch (entry & IND_FLAGS) {
@@ -786,26 +853,13 @@ int kexec_update_segment(const char *buffer, unsigned long bufsz,
 			return -EINVAL;
 		}
 
-		if (dest == (void *) load_addr) {
-			struct page *page;
-			char *ptr;
-			size_t uchunk, mchunk;
-
-			page = kmap_to_page(addr);
-
-			ptr = kmap(page);
-			ptr += load_addr & ~PAGE_MASK;
-			mchunk = min_t(size_t, memsz,
-				       PAGE_SIZE - (load_addr & ~PAGE_MASK));
-			uchunk = min(bufsz, mchunk);
-			memcpy(ptr, buffer, uchunk);
-
-			kunmap(page);
+		if (dest == (void *) state.maddr) {
+			int ret;
 
-			bufsz -= uchunk;
-			load_addr += mchunk;
-			buffer += mchunk;
-			memsz -= mchunk;
+			ret = kimage_update_page(kmap_to_page(addr),
+						 &state);
+			if (ret)
+				return ret;
 		}
 		dest += PAGE_SIZE;
 	}
@@ -823,31 +877,30 @@ int kexec_update_segment(const char *buffer, unsigned long bufsz,
 static int kimage_load_normal_segment(struct kimage *image,
 				      struct kexec_segment *segment)
 {
-	unsigned long maddr;
-	size_t ubytes, mbytes;
-	int result;
-	unsigned char __user *buf = NULL;
-	unsigned char *kbuf = NULL;
-
-	result = 0;
-	if (image->file_mode)
-		kbuf = segment->kbuf;
-	else
-		buf = segment->buf;
-	ubytes = segment->bufsz;
-	mbytes = segment->memsz;
-	maddr = segment->mem;
+	int result = 0;
+	struct kimage_update_buffer_state state;
+
+	/* For file based kexec, source pages are in kernel memory */
+	if (image->file_mode) {
+		state.kbuf = segment->kbuf;
+		state.from_kernel = true;
+	} else {
+		state.buf = segment->buf;
+		state.from_kernel = false;
+	}
+	state.ubytes = segment->bufsz;
+	state.mbytes = segment->memsz;
+	state.maddr = segment->mem;
+	state.clear_pages = true;
 
-	result = kimage_set_destination(image, maddr);
+	result = kimage_set_destination(image, state.maddr);
 	if (result < 0)
 		goto out;
 
-	while (mbytes) {
+	while (state.mbytes) {
 		struct page *page;
-		char *ptr;
-		size_t uchunk, mchunk;
 
-		page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
+		page = kimage_alloc_page(image, GFP_HIGHUSER, state.maddr);
 		if (!page) {
 			result = -ENOMEM;
 			goto out;
@@ -857,31 +910,9 @@ static int kimage_load_normal_segment(struct kimage *image,
 		if (result < 0)
 			goto out;
 
-		ptr = kmap(page);
-		/* Start with a clear page */
-		clear_page(ptr);
-		ptr += maddr & ~PAGE_MASK;
-		mchunk = min_t(size_t, mbytes,
-				PAGE_SIZE - (maddr & ~PAGE_MASK));
-		uchunk = min(ubytes, mchunk);
-
-		/* For file based kexec, source pages are in kernel memory */
-		if (image->file_mode)
-			memcpy(ptr, kbuf, uchunk);
-		else
-			result = copy_from_user(ptr, buf, uchunk);
-		kunmap(page);
-		if (result) {
-			result = -EFAULT;
+		result = kimage_update_page(page, &state);
+		if (result)
 			goto out;
-		}
-		ubytes -= uchunk;
-		maddr += mchunk;
-		if (image->file_mode)
-			kbuf += mchunk;
-		else
-			buf += mchunk;
-		mbytes -= mchunk;
 	}
 out:
 	return result;
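
[Editor's note, not part of the patch] For readers who want to see the chunking
arithmetic of kimage_update_page() in isolation, below is a minimal userspace
sketch of the same per-page copy: clear the page, copy min(ubytes, mchunk)
source bytes into it, then advance the cursors by mchunk. It is not kernel
code; PAGE_SIZE, struct update_state, update_one_page() and the main() driver
are illustrative stand-ins, and only the arithmetic mirrors the patch.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Illustrative stand-in for struct kimage_update_buffer_state. */
struct update_state {
	unsigned long maddr;	/* destination address being copied to */
	size_t ubytes;		/* source bytes still left to copy */
	size_t mbytes;		/* destination bytes still left to fill */
	const char *buf;	/* source position to continue copying from */
};

/* Copy at most one page worth of data, mirroring kimage_update_page(). */
static void update_one_page(char *page, struct update_state *s)
{
	size_t offset = s->maddr & ~PAGE_MASK;
	size_t mchunk = s->mbytes < PAGE_SIZE - offset ?
			s->mbytes : PAGE_SIZE - offset;
	size_t uchunk = s->ubytes < mchunk ? s->ubytes : mchunk;

	memset(page, 0, PAGE_SIZE);		/* "start with a clear page" */
	memcpy(page + offset, s->buf, uchunk);	/* only uchunk source bytes exist */

	s->ubytes -= uchunk;
	s->maddr += mchunk;
	s->buf += mchunk;	/* cursors advance by mchunk, as in the patch */
	s->mbytes -= mchunk;
}

int main(void)
{
	static char dest[3 * PAGE_SIZE];	/* stands in for the kexec pages */
	static const char src[sizeof(dest)] =
		"segment contents shorter than memsz";
	struct update_state s = {
		.maddr = 0,
		.ubytes = strlen(src) + 1,	/* bufsz: smaller than memsz */
		.mbytes = sizeof(dest),		/* memsz */
		.buf = src,
	};

	while (s.mbytes)
		update_one_page(dest + (s.maddr & PAGE_MASK), &s);

	printf("dest starts with \"%s\"; the remaining pages stay zeroed\n", dest);
	return 0;
}

As in the existing kimage_load_normal_segment() code, the source cursor moves
by mchunk rather than uchunk, so once the source buffer runs out the remaining
destination pages are simply left cleared (the clear_pages = true case above).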