
[1/4] KVM: PPC: Book3S: PR: Don't clobber our exit handler id

Message ID 1385693701-27632-2-git-send-email-agraf@suse.de
State New, archived

Commit Message

Alexander Graf Nov. 29, 2013, 2:54 a.m. UTC
We call a C helper to save all svcpu fields into our vcpu. The C
ABI states that r12 is considered volatile. However, we keep our
exit handler id in r12 currently.

So we need to save it into a non-volatile register instead, one
that is guaranteed to be preserved across the C call.

This bug hasn't hit anyone yet because gcc happens to generate
code that doesn't touch r12, so the value stayed intact across the
call by sheer luck. But we can't rely on that.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/powerpc/kvm/book3s_interrupts.S | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

Comments

Paul Mackerras Nov. 30, 2013, 7:20 a.m. UTC | #1
On Fri, Nov 29, 2013 at 03:54:58AM +0100, Alexander Graf wrote:
> We call a C helper to save all svcpu fields into our vcpu. The C
> ABI states that r12 is considered volatile. However, we keep our
> exit handler id in r12 currently.
> 
> So we need to save it into a non-volatile register instead, one
> that is guaranteed to be preserved across the C call.
> 
> This bug hasn't hit anyone yet because gcc happens to generate
> code that doesn't touch r12, so the value stayed intact across the
> call by sheer luck. But we can't rely on that.
> 
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/powerpc/kvm/book3s_interrupts.S | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
> index f4dd041..2f7d571 100644
> --- a/arch/powerpc/kvm/book3s_interrupts.S
> +++ b/arch/powerpc/kvm/book3s_interrupts.S
> @@ -132,9 +132,18 @@ kvm_start_lightweight:
>  	 *
>  	 */
>  
> +	PPC_LL	r3, GPR4(r1)		/* vcpu pointer */
> +
> +	/*
> +	 * kvmppc_copy_from_svcpu can clobber volatile registers, save
> +	 * r14 to get a spare scratch register for the exit handler id.
> +	 */
> +	PPC_STL	r14, VCPU_GPR(R14)(r3)
> +	mr	r14, r12

In the case where kvmppc_handle_exit_pr returns RESUME_GUEST and we do
the lightweight guest re-entry, I don't see anywhere that we re-load
r14 with the guest value.  So I think this results in guest-visible
corruption of r14.

Why not just use the vcpu->arch.trap field?  Store r12 to
VCPU_TRAP(r3) here and load r5 from VCPU_TRAP(r4) just before calling
kvmppc_handle_exit_pr().

Regards,
Paul.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Alexander Graf Nov. 30, 2013, 12:38 p.m. UTC | #2
On 30.11.2013, at 08:20, Paul Mackerras <paulus@samba.org> wrote:

> On Fri, Nov 29, 2013 at 03:54:58AM +0100, Alexander Graf wrote:
>> We call a C helper to save all svcpu fields into our vcpu. The C
>> ABI states that r12 is considered volatile. However, we keep our
>> exit handler id in r12 currently.
>> 
>> So we need to save it into a non-volatile register instead, one
>> that is guaranteed to be preserved across the C call.
>> 
>> This bug hasn't hit anyone yet because gcc happens to generate
>> code that doesn't touch r12, so the value stayed intact across the
>> call by sheer luck. But we can't rely on that.
>> 
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>> ---
>> arch/powerpc/kvm/book3s_interrupts.S | 15 +++++++++++----
>> 1 file changed, 11 insertions(+), 4 deletions(-)
>> 
>> diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
>> index f4dd041..2f7d571 100644
>> --- a/arch/powerpc/kvm/book3s_interrupts.S
>> +++ b/arch/powerpc/kvm/book3s_interrupts.S
>> @@ -132,9 +132,18 @@ kvm_start_lightweight:
>> 	 *
>> 	 */
>> 
>> +	PPC_LL	r3, GPR4(r1)		/* vcpu pointer */
>> +
>> +	/*
>> +	 * kvmppc_copy_from_svcpu can clobber volatile registers, save
>> +	 * r14 to get a spare scratch register for the exit handler id.
>> +	 */
>> +	PPC_STL	r14, VCPU_GPR(R14)(r3)
>> +	mr	r14, r12
> 
> In the case where kvmppc_handle_exit_pr returns RESUME_GUEST and we do
> the lightweight guest re-entry, I don't see anywhere that we re-load
> r14 with the guest value.  So I think this results in guest-visible
> corruption of r14.

Ugh. You're right.

> Why not just use the vcpu->arch.trap field?  Store r12 to
> VCPU_TRAP(r3) here and load r5 from VCPU_TRAP(r4) just before calling
> kvmppc_handle_exit_pr().

I was trying to be smart and not do a memory access for a field that I always need. But then again I always do need r14 just as well, so it doesn't actually make a difference. I'll change it.


Thanks a lot!

Alex


Patch

diff --git a/arch/powerpc/kvm/book3s_interrupts.S b/arch/powerpc/kvm/book3s_interrupts.S
index f4dd041..2f7d571 100644
--- a/arch/powerpc/kvm/book3s_interrupts.S
+++ b/arch/powerpc/kvm/book3s_interrupts.S
@@ -132,9 +132,18 @@  kvm_start_lightweight:
 	 *
 	 */
 
+	PPC_LL	r3, GPR4(r1)		/* vcpu pointer */
+
+	/*
+	 * kvmppc_copy_from_svcpu can clobber volatile registers, save
+	 * r14 to get a spare scratch register for the exit handler id.
+	 */
+	PPC_STL	r14, VCPU_GPR(R14)(r3)
+	mr	r14, r12
+   
 	/* Transfer reg values from shadow vcpu back to vcpu struct */
 	/* On 64-bit, interrupts are still off at this point */
-	PPC_LL	r3, GPR4(r1)		/* vcpu pointer */
+
 	GET_SHADOW_VCPU(r4)
 	bl	FUNC(kvmppc_copy_from_svcpu)
 	nop
@@ -151,13 +160,11 @@  kvm_start_lightweight:
 	 */
 	ld	r3, PACA_SPRG3(r13)
 	mtspr	SPRN_SPRG3, r3
-
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 	/* R7 = vcpu */
 	PPC_LL	r7, GPR4(r1)
 
-	PPC_STL	r14, VCPU_GPR(R14)(r7)
 	PPC_STL	r15, VCPU_GPR(R15)(r7)
 	PPC_STL	r16, VCPU_GPR(R16)(r7)
 	PPC_STL	r17, VCPU_GPR(R17)(r7)
@@ -177,7 +184,7 @@  kvm_start_lightweight:
 	PPC_STL	r31, VCPU_GPR(R31)(r7)
 
 	/* Pass the exit number as 3rd argument to kvmppc_handle_exit */
-	mr	r5, r12
+	mr	r5, r14
 
 	/* Restore r3 (kvm_run) and r4 (vcpu) */
 	REST_2GPRS(3, r1)