From patchwork Wed Mar 18 05:36:05 2009
X-Patchwork-Submitter: Magnus Damm
X-Patchwork-Id: 12764
From: Magnus Damm
To: linux-sh@vger.kernel.org
Cc: francesco.virlinzi@st.com, Magnus Damm, lethal@linux-sh.org
Date: Wed, 18 Mar 2009 14:36:05 +0900
Message-Id: <20090318053605.25269.14801.sendpatchset@rx1.opensource.se>
Subject: [PATCH] sh: add kexec jump support
X-Mailing-List: linux-sh@vger.kernel.org

From: Magnus Damm

Add kexec jump support to the SuperH architecture. The implementation
is similar to the x86 one, with the following exceptions:

- Instead of separating the assembly code flow into two parts for
  regular kexec and kexec jump, a single code path is used. In the
  assembly snippet, regular kexec is simply a kexec jump that never
  comes back.

- Instead of using a swap page when moving data between pages, the
  page copy assembly routine has been modified to exchange the data
  between the pages using registers.

Signed-off-by: Magnus Damm
---

Apply on top of the previously posted vbr and p1 patches.

Tested with regular kexec, with kexec jump without return, and with
kexec jump where the kernel returns early from head_32.S. Kexec-tools
needs a few patches for kexec jump support on SuperH.
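The register-based page exchange described above can be sketched in C. This is an illustrative model only (the helper name `swap_page_words` is hypothetical); the authoritative implementation is the SH assembly in relocate_kernel.S below, which unrolls the loop four 32-bit words (16 bytes) per iteration:

```c
#include <stddef.h>

/* Illustrative model of the patch's idea: instead of staging data
 * through a third "swap page", exchange the source and destination
 * pages directly, using only a temporary (registers, in the assembly
 * version).  Running the same routine a second time restores the
 * original contents, which is what lets kexec jump return to the
 * old kernel. */
void swap_page_words(unsigned long *dst, unsigned long *src,
                     size_t page_size)
{
    size_t i;

    for (i = 0; i < page_size / sizeof(unsigned long); i++) {
        unsigned long tmp = dst[i]; /* plays the role of r8/r1 */
        dst[i] = src[i];
        src[i] = tmp;
    }
}
```

Because the operation is an exchange rather than a copy, the regular kexec path can share it unchanged: it simply never runs the second swap.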
 arch/sh/Kconfig                  |    7 +
 arch/sh/kernel/machine_kexec.c   |   18 ++-
 arch/sh/kernel/relocate_kernel.S |  197 ++++++++++++++++++++++++++++++++------
 3 files changed, 190 insertions(+), 32 deletions(-)

--- 0004/arch/sh/Kconfig
+++ work/arch/sh/Kconfig	2009-03-18 12:41:55.000000000 +0900
@@ -558,6 +558,13 @@ config CRASH_DUMP
 
 	  For more details see Documentation/kdump/kdump.txt
 
+config KEXEC_JUMP
+	bool "kexec jump (EXPERIMENTAL)"
+	depends on SUPERH32 && KEXEC && HIBERNATION && EXPERIMENTAL
+	help
+	  Jump between original kernel and kexeced kernel and invoke
+	  code via KEXEC
+
 config SECCOMP
 	bool "Enable seccomp to safely compute untrusted bytecode"
 	depends on PROC_FS
--- 0006/arch/sh/kernel/machine_kexec.c
+++ work/arch/sh/kernel/machine_kexec.c	2009-03-18 12:41:55.000000000 +0900
@@ -14,16 +14,16 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
 #include
 
-typedef NORET_TYPE void (*relocate_new_kernel_t)(
-				unsigned long indirection_page,
-				unsigned long reboot_code_buffer,
-				unsigned long start_address) ATTRIB_NORET;
+typedef void (*relocate_new_kernel_t)(unsigned long indirection_page,
+				      unsigned long reboot_code_buffer,
+				      unsigned long start_address);
 
 extern const unsigned char relocate_new_kernel[];
 extern const unsigned int relocate_new_kernel_size;
@@ -77,6 +77,11 @@ void machine_kexec(struct kimage *image)
 	unsigned long reboot_code_buffer;
 	relocate_new_kernel_t rnk;
 
+#ifdef CONFIG_KEXEC_JUMP
+	if (image->preserve_context)
+		save_processor_state();
+#endif
+
 	/* Interrupts aren't acceptable while we reboot */
 	local_irq_disable();
 
@@ -101,6 +106,11 @@ void machine_kexec(struct kimage *image)
 	/* now call it */
 	rnk = (relocate_new_kernel_t) reboot_code_buffer;
 	(*rnk)(page_list, reboot_code_buffer, P1SEGADDR(image->start));
+
+#ifdef CONFIG_KEXEC_JUMP
+	if (image->preserve_context)
+		restore_processor_state();
+#endif
 }
 
 void arch_crash_save_vmcoreinfo(void)
--- 0006/arch/sh/kernel/relocate_kernel.S
+++ work/arch/sh/kernel/relocate_kernel.S	2009-03-18 13:35:54.000000000 +0900
@@ -4,6 +4,8 @@
  *
  * LANDISK/sh4 is supported. Maybe, SH archtecture works well.
  *
+ * 2009-03-18 Magnus Damm - Added Kexec Jump support
+ *
  * This source code is licensed under the GNU General Public License,
  * Version 2. See the file COPYING for more details.
  */
@@ -17,15 +19,136 @@ relocate_new_kernel:
 	/* r5 = reboot_code_buffer */
 	/* r6 = start_address */
 
-	mov.l	10f,r8	/* PAGE_SIZE */
-	mov.l	11f,r9	/* P1SEG */
+	mov.l	10f, r0		/* PAGE_SIZE */
+	add	r5, r0		/* setup new stack at end of control page */
 
-	/* stack setting */
-	add	r8,r5
-	mov	r5,r15
+	/* save r15->r8 to new stack */
+	mov.l	r15, @-r0
+	mov	r0, r15
+	mov.l	r14, @-r15
+	mov.l	r13, @-r15
+	mov.l	r12, @-r15
+	mov.l	r11, @-r15
+	mov.l	r10, @-r15
+	mov.l	r9, @-r15
+	mov.l	r8, @-r15
+
+	/* save other random registers */
+	sts.l	macl, @-r15
+	sts.l	mach, @-r15
+	stc.l	gbr, @-r15
+	stc.l	ssr, @-r15
+	stc.l	sr, @-r15
+	sts.l	pr, @-r15
+	stc.l	spc, @-r15
+
+	/* switch to bank1 and save r7->r0 */
+	mov.l	13f, r9
+	stc	sr, r8
+	or	r9, r8
+	ldc	r8, sr
+	mov.l	r7, @-r15
+	mov.l	r6, @-r15
+	mov.l	r5, @-r15
+	mov.l	r4, @-r15
+	mov.l	r3, @-r15
+	mov.l	r2, @-r15
+	mov.l	r1, @-r15
+	mov.l	r0, @-r15
+
+	/* switch to bank0 and save r7->r0 */
+	mov.l	13f, r9
+	not	r9, r9
+	stc	sr, r8
+	and	r9, r8
+	ldc	r8, sr
+	mov.l	r7, @-r15
+	mov.l	r6, @-r15
+	mov.l	r5, @-r15
+	mov.l	r4, @-r15
+	mov.l	r3, @-r15
+	mov.l	r2, @-r15
+	mov.l	r1, @-r15
+	mov.l	r0, @-r15
+
+	mov.l	r4, @-r15	/* save indirection page again */
+
+	bsr	swap_pages	/* swap pages before jumping to new kernel */
+	 nop
+
+	mova	12f, r0
+	mov.l	r15, @r0	/* save pointer to stack */
+
+	jsr	@r6		/* hand over control to new kernel */
+	 nop
+
+	mov.l	12f, r15	/* get pointer to stack */
+	mov.l	@r15+, r4	/* restore r4 to get indirection page */
+
+	bsr	swap_pages	/* swap pages back to previous state */
+	 nop
+
+	/* make sure bank0 is active and restore r0->r7 */
+	mov.l	13f, r9
+	not	r9, r9
+	stc	sr, r8
+	and	r9, r8
+	ldc	r8, sr
+	mov.l	@r15+, r0
+	mov.l	@r15+, r1
+	mov.l	@r15+, r2
+	mov.l	@r15+, r3
+	mov.l	@r15+, r4
+	mov.l	@r15+, r5
+	mov.l	@r15+, r6
+	mov.l	@r15+, r7
+
+	/* switch to bank1 and restore r0->r7 */
+	mov.l	13f, r9
+	stc	sr, r8
+	or	r9, r8
+	ldc	r8, sr
+	mov.l	@r15+, r0
+	mov.l	@r15+, r1
+	mov.l	@r15+, r2
+	mov.l	@r15+, r3
+	mov.l	@r15+, r4
+	mov.l	@r15+, r5
+	mov.l	@r15+, r6
+	mov.l	@r15+, r7
+
+	/* switch back to bank0 */
+	mov.l	13f, r9
+	not	r9, r9
+	stc	sr, r8
+	and	r9, r8
+	ldc	r8, sr
+
+	/* restore other random registers */
+	ldc.l	@r15+, spc
+	lds.l	@r15+, pr
+	ldc.l	@r15+, sr
+	ldc.l	@r15+, ssr
+	ldc.l	@r15+, gbr
+	lds.l	@r15+, mach
+	lds.l	@r15+, macl
+
+	/* restore r8->r15 */
+	mov.l	@r15+, r8
+	mov.l	@r15+, r9
+	mov.l	@r15+, r10
+	mov.l	@r15+, r11
+	mov.l	@r15+, r12
+	mov.l	@r15+, r13
+	mov.l	@r15+, r14
+	mov.l	@r15+, r15
+	rts
+	 nop
+
+swap_pages:
+	mov.l	11f,r9	/* P1SEG */
 	bra	1f
-	mov	r4,r0	/* cmd = indirection_page */
+	 mov	r4,r0	/* cmd = indirection_page */
 
 0:
 	mov.l	@r4+,r0	/* cmd = *ind++ */
@@ -39,54 +162,72 @@ relocate_new_kernel:
 	tst	#1,r0
 	bt	2f
 	bra	0b
-	mov	r2,r5
+	 mov	r2,r5
 
 2: /* else if(cmd & IND_INDIRECTION) ind = addr */
 	tst	#2,r0
 	bt	3f
 	bra	0b
-	mov	r2,r4
+	 mov	r2,r4
 
-3: /* else if(cmd & IND_DONE) goto 6 */
+3: /* else if(cmd & IND_DONE) return */
 	tst	#4,r0
 	bt	4f
-	bra	6f
-	nop
+	rts
+	 nop
 
 4: /* else if(cmd & IND_SOURCE) memcpy(dst,addr,PAGE_SIZE) */
 	tst	#8,r0
 	bt	0b
-	mov	r8,r3
+	mov.l	10f,r3	/* PAGE_SIZE */
 	shlr2	r3
 	shlr2	r3
 5:
 	dt	r3
-	mov.l	@r2+,r1	/* 16n+0 */
-	mov.l	r1,@r5
-	add	#4,r5
-	mov.l	@r2+,r1	/* 16n+4 */
-	mov.l	r1,@r5
-	add	#4,r5
-	mov.l	@r2+,r1	/* 16n+8 */
-	mov.l	r1,@r5
-	add	#4,r5
-	mov.l	@r2+,r1	/* 16n+12 */
-	mov.l	r1,@r5
-	add	#4,r5
+
+	/* regular kexec just overwrites the destination page
+	 * with the contents of the source page.
+	 * for the kexec jump case we need to swap the contents
+	 * of the pages.
+	 * to keep it simple swap the contents for both cases.
+	 */
+	mov.l	@(0, r2), r8
+	mov.l	@(0, r5), r1
+	mov.l	r8, @(0, r5)
+	mov.l	r1, @(0, r2)
+
+	mov.l	@(4, r2), r8
+	mov.l	@(4, r5), r1
+	mov.l	r8, @(4, r5)
+	mov.l	r1, @(4, r2)
+
+	mov.l	@(8, r2), r8
+	mov.l	@(8, r5), r1
+	mov.l	r8, @(8, r5)
+	mov.l	r1, @(8, r2)
+
+	mov.l	@(12, r2), r8
+	mov.l	@(12, r5), r1
+	mov.l	r8, @(12, r5)
+	mov.l	r1, @(12, r2)
+
+	add	#16,r5
+	add	#16,r2
 	bf	5b
 
 	bra	0b
-	nop
-6:
-	jmp	@r6
-	nop
+	 nop
 
 	.align 2
 10:
 	.long	PAGE_SIZE
 11:
 	.long	P1SEG
+12:
+	.long	0
+13:
+	.long	0x20000000	! RB=1
 
 relocate_new_kernel_end:
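The command stream that swap_pages walks is the generic kexec indirection page; the IND_* bit values below match include/linux/kexec.h and the tst #1/#2/#4/#8 tests in the assembly. The C model itself is hypothetical (the names model_swap_pages and MODEL_PAGE_WORDS, and the 16-byte-aligned toy "pages", are illustration only; real kexec uses PAGE_SIZE-aligned pages):

```c
#include <stddef.h>

/* Command bits stored in the low bits of each page-aligned address
 * in kexec's indirection page. */
#define IND_DESTINATION 0x1  /* set destination page */
#define IND_INDIRECTION 0x2  /* switch to a new indirection page */
#define IND_DONE        0x4  /* stop processing */
#define IND_SOURCE      0x8  /* exchange source page with destination */

#define MODEL_PAGE_WORDS 4   /* toy "page" size for this model */

/* Hypothetical model of the swap_pages loop: walk the command list
 * and exchange (not just copy) each source page with its current
 * destination, so running the loop a second time with the same list
 * undoes the relocation. */
void model_swap_pages(unsigned long *ind)
{
    unsigned long *dst = NULL;

    for (;;) {
        unsigned long cmd = *ind++;

        if (cmd & IND_DONE)
            break;
        if (cmd & IND_DESTINATION) {
            dst = (unsigned long *)(cmd & ~0xfUL);
        } else if (cmd & IND_INDIRECTION) {
            ind = (unsigned long *)(cmd & ~0xfUL);
        } else if (cmd & IND_SOURCE) {
            unsigned long *src = (unsigned long *)(cmd & ~0xfUL);
            size_t i;

            for (i = 0; i < MODEL_PAGE_WORDS; i++) {
                unsigned long tmp = dst[i];
                dst[i] = src[i];
                src[i] = tmp;
            }
            dst += MODEL_PAGE_WORDS; /* next destination follows */
        }
    }
}
```

This is why the assembly's label 3 returns with rts instead of jumping to the new kernel: the caller decides whether to hand over control (first pass) or to keep unwinding saved state (return pass).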