From patchwork Tue May 31 11:50:55 2011
From: Frank Hofmann
Reply-To: frank.hofmann@tomtom.com
Date: Tue, 31 May 2011 12:50:55 +0100 (BST)
To: Nicolas Pitre
Cc: Frank Hofmann, linux-pm@lists.linux-foundation.org,
 tuxonice-devel@tuxonice.net, linux-arm-kernel@lists.infradead.org
Subject: Re: [linux-pm] [RFC PATCH v3] ARM hibernation/suspend-to-disk support

On Fri, 27 May 2011, Nicolas Pitre wrote:

> On Fri, 27 May 2011, Frank Hofmann wrote:
>
>>  /*
>>   * r0 = control register value
>>   * r1 = v:p offset (preserved by cpu_do_resume)
>> + * if this is zero, do not reenable MMU (it's on)
>
> This is wrong. It is well possible for this to be zero when the MMU is
> active.
>
> The best way to determine if MMU is on or off is:
>
> 	mrc	p15, 0, rx, c1, c0	@ load ctrl reg
> 	tst	rx, #1			@ test M bit

Ah, thanks. I had thought only MMU-less kernels would run on identity
mappings, but you're right, of course; nothing as such stops an MMU-on
kernel from having a zero v:p offset. The patch appended below (after
the separator line) does indeed do that part of the job.

>> I wonder; is there a proper/suggested way to switch the MMU off (and
>> not end up in binary nirvana), to make the reentry / re-enable work?
>
> This is slightly complicated. You first need to turn off and disable
> the caches, and ideally set up a 1:1 mapping for the transition. There
> are cpu_proc_fin() and cpu_reset(branch_location).

Hmm, just looked through those.
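If I read it right, the sequence you're suggesting boils down to
something like this (sketch only, untested; phys_entry stands for the
physical address of the code to branch to with the MMU off):

	local_irq_disable();
	cpu_proc_fin();		/* clean and disable the caches */
	cpu_reset(phys_entry);	/* meant to turn the MMU off and
				   branch to phys_entry */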
One of the issues with this is my use case: ARM11x6 and Cortex-A8/9,
for which cpu_reset() is cpu_v[67]_reset(), a no-op (in mainline /
rmk's devel-stable). I.e. neither cpu_proc_fin() nor cpu_reset() on
v6/v7 currently switches the MMU off. The older chips do ...

Anyway, the setup for resume after hibernation at the moment is:

 - swsusp_arch_resume switches to swapper_pg_dir (which is guaranteed
   to map the kernel at its flat addresses ?!)
 - image restoration
   [ caches should probably be flushed / turned off after this ? ]
 - cpu_do_resume() restores the pre-suspend TTBR (which in effect is a
   cpu_switch_mm)
 - cpu_resume_mmu is bypassed because the MMU is already on

But that means a context switch is done anyway as part of the resume.
Which sort of leads to the question whether the 1:1 mapping is really
required for the switch-off case; wouldn't it be acceptable to simply
turn the MMU off and jump to the physical address of cpu_do_resume()
instead ? Something like:

	[ caches off ... ]
	@ assume r0 == phys addr of restore buffer (however retrieved)
	ldr	r1, =virt_addr_of_restore_buffer	@ known
	sub	r2, r1, r0		@ calc v:p offset
	ldr	r3, =cpu_do_resume	@ virt func addr
	sub	r3, r3, r2		@ to phys
	mrc	p15, 0, r1, c1, c0, 0	@ read ctrl reg
	bic	r1, r1, #CR_M		@ clear MMU enable bit
	ldr	lr, =post_resume	@ load virtual return addr
	mcr	p15, 0, r1, c1, c0, 0	@ MMU off
crit:	mov	pc, r3			@ jump to phys
post_resume:
	[ continue processing when done / returned ]

Or is it necessary to have a 1:1 mapping for 'crit:' when switching
the MMU off, to make sure one actually reaches the jump ?

> You may also investigate how kexec is handled, whose purpose is to
> let the kernel boot another kernel.

machine_kexec() you mean ? I vaguely remember having read that to get
this working on v6/v7 CPUs one needs non-mainline patches; is that
still so ? The current fin / reset codepaths for v6/v7 don't turn the
MMU off, anyway.

Thanks for the pointer. Reading that, it looks like flushing /
disabling all caches is necessary before entering / resuming the
target ?

I'm starting to wonder whether, for a first stab at hibernation
support on ARM, the ability to resume non-identical kernels, i.e. to
restore not via the kernel's hibernation codepaths but via invocation
from the bootloader, is required at all. As Rafael answered a while
back, making that work requires a temporary MMU initialization / setup
for the image restoration. The current code assumes swapper_pg_dir has
been set up and maps the entire kernel heap; how true is that
assumption, actually, at "kernel entry" ?

Thanks,
FrankH.

>
> Nicolas

==============================================================================
diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S
index 6398ead..a793644 100644
--- a/arch/arm/kernel/sleep.S
+++ b/arch/arm/kernel/sleep.S
@@ -75,6 +75,9 @@ ENDPROC(cpu_suspend)
  * r3 = L1 section flags
  */
 ENTRY(cpu_resume_mmu)
+	mrc	p15, 0, r4, c1, c0, 0
+	tst	r4, #CR_M
+	bne	0f			@ return if MMU already on
 	adr	r4, cpu_resume_turn_mmu_on
 	mov	r4, r4, lsr #20
 	orr	r3, r3, r4, lsl #20
@@ -96,6 +99,7 @@ cpu_resume_turn_mmu_on:
 ENDPROC(cpu_resume_turn_mmu_on)
 cpu_resume_after_mmu:
 	str	r5, [r2, r4, lsl #2]	@ restore old mapping
+0:	mcr	p15, 0, r0, c1, c0, 0	@ turn on D-cache
 	mov	pc, lr
 ENDPROC(cpu_resume_after_mmu)
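P.S.: for reference, the temporary 1:1 trick cpu_resume_mmu already
plays for the MMU-on direction (visible in the patch context above:
save the old first-level entry, install an identity section for the
megabyte holding the transition code, restore the entry afterwards)
comes down to something like this in C. Sketch only, untested; the
names (pgdir, phys, flags) are placeholders, not actual kernel API:

	/* Install a 1:1 1MB section mapping for 'phys'; returns the
	 * old first-level entry (r5 in the asm) so the caller can
	 * restore it after the transition. */
	static unsigned long map_section_1to1(unsigned long *pgdir,
					      unsigned long phys,
					      unsigned long flags)
	{
		unsigned long idx = phys >> 20;	/* section index (r4) */
		unsigned long old = pgdir[idx];

		pgdir[idx] = (phys & 0xfff00000) | flags; /* r3|(r4<<20) */
		return old;
	}

The same trick would presumably also cover the 'crit:' instruction in
the MMU-off question above.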