Message ID | 7a562ef6f77ff83c1fbdc6a2ecc7af387ce1fd71.1537275915.git.yu.c.chen@intel.com |
---|---|
State | Changes Requested, archived |
Series | Backport several fixes from 64bits to 32bits hibernation |
On Wed 2018-09-19 15:43:12, Chen Yu wrote:

> From: Zhimin Gu <kookoo.gu@intel.com>
>
> Code should be executed in a safe page during page
> restoring, as the page where the instruction is running
> during resume might be scribbled and cause issues.
>
> Backport the code from the 64-bit system to fix this bug.

On 32 bit, we only support resuming by the same kernel that did the
suspend. 64 bit does not have that restriction. So the 32-bit code
should not actually be buggy.

> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Signed-off-by: Zhimin Gu <kookoo.gu@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>

But we'd like to remove that restriction in the future, so:

Acked-by: Pavel Machek <pavel@ucw.cz>
diff --git a/arch/x86/power/hibernate_32.c b/arch/x86/power/hibernate_32.c
index a44bdada4e4e..a9861095fbb8 100644
--- a/arch/x86/power/hibernate_32.c
+++ b/arch/x86/power/hibernate_32.c
@@ -158,6 +158,10 @@ asmlinkage int swsusp_arch_resume(void)
 	temp_pgt = __pa(resume_pg_dir);
+	error = relocate_restore_code();
+	if (error)
+		return error;
+
 	/* We have got enough memory and from now on we cannot recover */
 	restore_image();
 	return 0;
diff --git a/arch/x86/power/hibernate_asm_32.S b/arch/x86/power/hibernate_asm_32.S
index 6b2b94937113..e9adda6b6b02 100644
--- a/arch/x86/power/hibernate_asm_32.S
+++ b/arch/x86/power/hibernate_asm_32.S
@@ -39,6 +39,13 @@ ENTRY(restore_image)
 	movl	restore_cr3, %ebp
 	movl	mmu_cr4_features, %ecx
+
+	/* jump to relocated restore code */
+	movl	relocated_restore_code, %eax
+	jmpl	*%eax
+
+/* code below has been relocated to a safe page */
+ENTRY(core_restore_code)
 	movl	temp_pgt, %eax
 	movl	%eax, %cr3
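For context, here is a minimal sketch of what the relocation helper called above does, loosely modeled on the existing 64-bit relocate_restore_code(). It is not the exact kernel implementation: the real helper also clears the NX bit on the safe page's mapping and flushes the TLB, which is omitted here, and it relies on hibernation internals (get_safe_page(), core_restore_code, relocated_restore_code, PAGE_SIZE) that exist only inside the kernel.

```c
/* Sketch only: simplified from the 64-bit relocate_restore_code(). */

extern asmlinkage void core_restore_code(void);	/* start of the relocatable restore routine */
unsigned long relocated_restore_code;		/* address of the copy in the safe page */

static int relocate_restore_code(void)
{
	/* get_safe_page() returns a page that no image page will be restored into */
	relocated_restore_code = get_safe_page(GFP_ATOMIC);
	if (!relocated_restore_code)
		return -ENOMEM;

	/* Copy the restore routine; the assembly keeps it within one page */
	memcpy((void *)relocated_restore_code, core_restore_code, PAGE_SIZE);

	/*
	 * The real helper additionally makes the safe page executable
	 * (clears _PAGE_NX in its mapping) and flushes the TLB.
	 */
	return 0;
}
```

The `jmpl *%eax` added to restore_image then transfers control to this copy, so everything after the core_restore_code label executes from the safe page while the page that originally held it may be scribbled during image restoration.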