
target-ppc: Update slb array with correct index values.

Message ID 87k3jd277j.fsf@linux.vnet.ibm.com (mailing list archive)
State New, archived

Commit Message

Aneesh Kumar K.V Aug. 22, 2013, 1:20 p.m. UTC
Alexander Graf <agraf@suse.de> writes:

> Am 21.08.2013 um 16:59 schrieb "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>:
>
>> Alexander Graf <agraf@suse.de> writes:
>> 
>> 
>> ....
>> 
>>>> 
>>>> On HV KVM yes, that would be the end of the list, but PR KVM could
>>>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>>>> entries.  Perhaps the best approach is to ignore any entries with
>>>> SLB_ESID_V clear.
>>> 
>>> That means we don't clear entries we don't receive from the kernel because they're V=0 but which were V=1 before, which with the current code is probably already broken.
>>> 
>>> So yes, clearing all cached entries first (to make sure we have no stale
>>> ones), then looping through all of them and only adding entries with V=1
>>> should fix everything for PR as well as HV.
>> 
>> This is more or less what the patch is doing. The kernel already
>> memsets all the slb entries. The only difference is that we don't
>> depend on the slb index in the return value; instead we just use the
>> array index as the slb index. Do we really need to make sure the slb
>> index remains the same?
>
> Yes, otherwise get/set change SLB numbering which the guest doesn't
> expect.

how about 

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
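
Alex's objection above (get/set must not renumber SLB entries, since the guest addresses them by slot via slbmfee/slbmfev) rests on the slot index travelling inside the rb/slbe value itself. A minimal sketch of that assumption; `slb_slot_of` and the 0xfff mask are illustrative stand-ins that mirror, but do not quote, the QEMU helper:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: the slot number rides in the low bits of the
 * ESID doubleword (rb), so restoring an entry via its rb value puts
 * it back in the slot the guest expects, regardless of where it sat
 * in the sregs array. The 0xfff mask mirrors what ppc_store_slb
 * uses; treat the exact field layout here as an assumption. */
static int slb_slot_of(uint64_t rb)
{
    return (int)(rb & 0xfff);
}
```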

Comments

Alexander Graf Aug. 22, 2013, 4:36 p.m. UTC | #1
On 22.08.2013, at 14:20, Aneesh Kumar K.V wrote:

> Alexander Graf <agraf@suse.de> writes:
> 
>> Am 21.08.2013 um 16:59 schrieb "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>:
>> 
>>> Alexander Graf <agraf@suse.de> writes:
>>> 
>>> 
>>> ....
>>> 
>>>>> 
>>>>> On HV KVM yes, that would be the end of the list, but PR KVM could
>>>>> give you entry 0 containing esid==0 and vsid==0 followed by valid
>>>>> entries.  Perhaps the best approach is to ignore any entries with
>>>>> SLB_ESID_V clear.
>>>> 
>>>> That means we don't clear entries we don't receive from the kernel because they're V=0 but which were V=1 before, which with the current code is probably already broken.
>>>> 
>>>> So yes, clearing all cached entries first (to make sure we have no stale
>>>> ones), then looping through all of them and only adding entries with V=1
>>>> should fix everything for PR as well as HV.
>>> 
>>> This is more or less what the patch is doing. The kernel already
>>> memsets all the slb entries. The only difference is that we don't
>>> depend on the slb index in the return value; instead we just use the
>>> array index as the slb index. Do we really need to make sure the slb
>>> index remains the same?
>> 
>> Yes, otherwise get/set change SLB numbering which the guest doesn't
>> expect.
> 
> how about 
> 
> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> index 30a870e..313f866 100644
> --- a/target-ppc/kvm.c
> +++ b/target-ppc/kvm.c
> @@ -1033,9 +1033,21 @@ int kvm_arch_get_registers(CPUState *cs)
> 
>         /* Sync SLB */
> #ifdef TARGET_PPC64
> +        /*
> +         * KVM_GET_SREGS doesn't retun slb entry with slot information

return

> +         * same as index. So don't depend on the slot information in
> +         * the returned value.

what returned value?

> +         * Zero out the SLB array invalidating all the entries

Better put the below into a function with a speaking name rather than write a comment. But when I read the comment it's only clear to me what you're trying to say because of the email thread. Please write what you're doing and how things look, not what they don't look like. Reading positive statements is easier :).

> +         */
> +        memset(env->slb, 0, 64 * sizeof(ppc_slb_t));
>         for (i = 0; i < 64; i++) {
> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
> -                               sregs.u.s.ppc64.slb[i].slbv);
> +            target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
> +            target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
> +            /*
> +             * Only restore valid entries
> +             */
> +            if (rb & SLB_ESID_V)
> +                ppc_store_slb(env, rb, rs);

Coding style.

But the code should work, yes :). Paul, any objections?


Alex


Patch

diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 30a870e..313f866 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -1033,9 +1033,21 @@  int kvm_arch_get_registers(CPUState *cs)
 
         /* Sync SLB */
 #ifdef TARGET_PPC64
+        /*
+         * KVM_GET_SREGS doesn't retun slb entry with slot information
+         * same as index. So don't depend on the slot information in
+         * the returned value.
+         * Zero out the SLB array invalidating all the entries
+         */
+        memset(env->slb, 0, 64 * sizeof(ppc_slb_t));
         for (i = 0; i < 64; i++) {
-            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
-                               sregs.u.s.ppc64.slb[i].slbv);
+            target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
+            target_ulong rs = sregs.u.s.ppc64.slb[i].slbv;
+            /*
+             * Only restore valid entries
+             */
+            if (rb & SLB_ESID_V)
+                ppc_store_slb(env, rb, rs);
         }
 #endif
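
The behaviour the thread converges on, and that Alex asks to be moved into a function with a speaking name, can be modelled self-containedly: zero the cached array so stale entries vanish, then restore only the entries the kernel marked valid. Everything below (`slb_entry`, `sync_slb`, `SLB_SIZE`, the `SLB_ESID_V` value) is an illustrative stand-in for the QEMU/KVM definitions, not the real code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLB_ESID_V  0x0000000008000000ULL  /* valid bit; value assumed */
#define SLB_SIZE    64

typedef struct {
    uint64_t esid;
    uint64_t vsid;
} slb_entry;

/* Sketch of the patched logic: wipe the cached array first, then
 * copy across only the entries the kernel reported with V=1. An
 * entry that was valid on a previous sync but is V=0 now is gone
 * after the memset, which is exactly the stale-entry fix discussed
 * in the thread. */
static void sync_slb(slb_entry *cached, const slb_entry *from_kernel)
{
    memset(cached, 0, SLB_SIZE * sizeof(slb_entry));
    for (int i = 0; i < SLB_SIZE; i++) {
        if (from_kernel[i].esid & SLB_ESID_V) {
            cached[i] = from_kernel[i];
        }
    }
}
```

The braces around the single-statement `if` body reflect the coding-style point Alex raises in the review above.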