From patchwork Wed Jan 22 11:25:15 2014
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 3522641
From: Wang Nan
Subject: [PATCH 2/3] ARM: kexec: copying code to ioremapped area
Date: Wed, 22 Jan 2014 19:25:15 +0800
Message-ID: <1390389916-8711-3-git-send-email-wangnan0@huawei.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1390389916-8711-1-git-send-email-wangnan0@huawei.com>
References: <1390389916-8711-1-git-send-email-wangnan0@huawei.com>
Cc: Wang Nan, stable@vger.kernel.org, linux-kernel@vger.kernel.org, Geng Hui, linux-mm@kvack.org, Eric Biederman, Russell King, Andrew Morton, linux-arm-kernel@lists.infradead.org

ARM's kdump is currently broken (at least on omap4460), mainly because of a cache problem: flush_icache_range() cannot reliably ensure that the copied data actually reaches RAM. After the MMU is turned off and control jumps to the trampoline, kexec consistently fails with random undefined-instruction faults.
This patch uses ioremap to make sure the destination of every memcpy() is
uncachable memory, covering both the copy of the target kernel and the
copy of the trampoline.

Signed-off-by: Wang Nan
Cc: stable@vger.kernel.org # 3.4+
Cc: Eric Biederman
Cc: Russell King
Cc: Andrew Morton
Cc: Geng Hui
---
 arch/arm/kernel/machine_kexec.c | 18 ++++++++++++++++--
 kernel/kexec.c                  | 40 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 51 insertions(+), 7 deletions(-)

diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
index f0d180d..ba0a5a8 100644
--- a/arch/arm/kernel/machine_kexec.c
+++ b/arch/arm/kernel/machine_kexec.c
@@ -144,6 +144,7 @@ void machine_kexec(struct kimage *image)
 	unsigned long page_list;
 	unsigned long reboot_code_buffer_phys;
 	unsigned long reboot_entry = (unsigned long)relocate_new_kernel;
+	void __iomem *reboot_entry_remap;
 	unsigned long reboot_entry_phys;
 	void *reboot_code_buffer;
@@ -171,9 +172,22 @@ void machine_kexec(struct kimage *image)
 
 	/* copy our kernel relocation code to the control code page */
-	reboot_entry = fncpy(reboot_code_buffer,
-			     reboot_entry,
+	reboot_entry_remap = ioremap_nocache(reboot_code_buffer_phys,
+					     relocate_new_kernel_size);
+	if (reboot_entry_remap == NULL) {
+		pr_warn("startup code may not be reliably flushed\n");
+		reboot_entry_remap = (void __iomem *)reboot_code_buffer;
+	}
+
+	reboot_entry = fncpy(reboot_entry_remap, reboot_entry,
 			     relocate_new_kernel_size);
+	reboot_entry = (unsigned long)reboot_code_buffer +
+		       (reboot_entry -
+			(unsigned long)reboot_entry_remap);
+
+	if (reboot_entry_remap != reboot_code_buffer)
+		iounmap(reboot_entry_remap);
+
 	reboot_entry_phys = (unsigned long)reboot_entry +
 		(reboot_code_buffer_phys - (unsigned long)reboot_code_buffer);

diff --git a/kernel/kexec.c b/kernel/kexec.c
index 9c97016..3e92999 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -806,6 +806,7 @@ static int kimage_load_normal_segment(struct kimage *image,
 	while (mbytes) {
 		struct page *page;
 		char *ptr;
+		void __iomem *ioptr;
 		size_t uchunk, mchunk;
 
 		page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
@@ -818,7 +819,17 @@ static int kimage_load_normal_segment(struct kimage *image,
 		if (result < 0)
 			goto out;
 
-		ptr = kmap(page);
+		/*
+		 * Try ioremap to make sure the copied data goes into RAM
+		 * reliably. If failed (some archs don't allow ioremap RAM),
+		 * use kmap instead.
+		 */
+		ioptr = ioremap(page_to_pfn(page) << PAGE_SHIFT,
+				PAGE_SIZE);
+		if (ioptr != NULL)
+			ptr = ioptr;
+		else
+			ptr = kmap(page);
 		/* Start with a clear page */
 		clear_page(ptr);
 		ptr += maddr & ~PAGE_MASK;
@@ -827,7 +838,10 @@ static int kimage_load_normal_segment(struct kimage *image,
 		uchunk = min(ubytes, mchunk);
 
 		result = copy_from_user(ptr, buf, uchunk);
-		kunmap(page);
+		if (ioptr != NULL)
+			iounmap(ioptr);
+		else
+			kunmap(page);
 		if (result) {
 			result = -EFAULT;
 			goto out;
 		}
@@ -846,7 +860,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 {
 	/* For crash dumps kernels we simply copy the data from
 	 * user space to it's destination.
-	 * We do things a page at a time for the sake of kmap.
+	 * We do things a page at a time for the sake of ioremap/kmap.
 	 */
 	unsigned long maddr;
 	size_t ubytes, mbytes;
@@ -861,6 +875,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 	while (mbytes) {
 		struct page *page;
 		char *ptr;
+		void __iomem *ioptr;
 		size_t uchunk, mchunk;
 
 		page = pfn_to_page(maddr >> PAGE_SHIFT);
@@ -868,7 +883,18 @@ static int kimage_load_crash_segment(struct kimage *image,
 			result = -ENOMEM;
 			goto out;
 		}
-		ptr = kmap(page);
+		/*
+		 * Try ioremap to make sure the copied data goes into RAM
+		 * reliably. If failed (some archs don't allow ioremap RAM),
+		 * use kmap instead.
+		 */
+		ioptr = ioremap_nocache(page_to_pfn(page) << PAGE_SHIFT,
+					PAGE_SIZE);
+		if (ioptr != NULL)
+			ptr = ioptr;
+		else
+			ptr = kmap(page);
+
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
 				PAGE_SIZE - (maddr & ~PAGE_MASK));
@@ -879,7 +905,11 @@ static int kimage_load_crash_segment(struct kimage *image,
 		}
 		result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
-		kunmap(page);
+		if (ioptr != NULL)
+			iounmap(ioptr);
+		else
+			kunmap(page);
+
 		if (result) {
 			result = -EFAULT;
 			goto out;