diff mbox series

[2/2] KVM: arm64: nv: fixup! Support multiple nested Stage-2 mmu structures

Message ID 20211122095803.28943-3-gankulkarni@os.amperecomputing.com (mailing list archive)
State New, archived
Series KVM: arm64: nv: Fix issue with Stage 2 MMU init for Nested case.

Commit Message

Ganapatrao Kulkarni Nov. 22, 2021, 9:58 a.m. UTC
Commit 1776c91346b6 ("KVM: arm64: nv: Support multiple nested Stage-2 mmu
structures")[1] added kvm_vcpu_init_nested(), which grows the stage-2 mmu
structures array whenever a new vCPU is created. The array is grown with
krealloc(), which can move the allocation and leave a stale mmu address
pointer in each pgt->mmu. Fix this by updating the pointers with the new
addresses after a successful krealloc().

[1] https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/
branch kvm-arm64/nv-5.13

Signed-off-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
---
 arch/arm64/kvm/nested.c | 9 +++++++++
 1 file changed, 9 insertions(+)

Comments

Marc Zyngier Nov. 25, 2021, 2:23 p.m. UTC | #1
On Mon, 22 Nov 2021 09:58:03 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> 
> [...]
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 4ffbc14d0245..57ad8d8f4ee5 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -68,6 +68,8 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
>  		       num_mmus * sizeof(*kvm->arch.nested_mmus),
>  		       GFP_KERNEL | __GFP_ZERO);
>  	if (tmp) {
> +		int i;
> +
>  		if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
>  		    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
>  			kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
> @@ -80,6 +82,13 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
>  		}
>  
>  		kvm->arch.nested_mmus = tmp;
> +
> +		/* Fixup pgt->mmu after krealloc */
> +		for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
> +			struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
> +
> +			mmu->pgt->mmu = mmu;
> +		}
>  	}
>  
>  	mutex_unlock(&kvm->lock);

Another good catch. I've tweaked it a bit to avoid some unnecessary
repainting, see below.

Thanks again,

	M.

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index a4dfffa1dae0..92b225db59ac 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -66,8 +66,19 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
 	num_mmus = atomic_read(&kvm->online_vcpus) * 2;
 	tmp = krealloc(kvm->arch.nested_mmus,
 		       num_mmus * sizeof(*kvm->arch.nested_mmus),
-		       GFP_KERNEL | __GFP_ZERO);
+		       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
 	if (tmp) {
+		/*
+		 * If we went through a reallocation, adjust the MMU
+		 * back-pointers in the pg_table structures.
+		 */
+		if (kvm->arch.nested_mmus != tmp) {
+			int i;
+
+			for (i = 0; i < num_mmus - 2; i++)
+				tmp[i].pgt->mmu = &tmp[i];
+		}
+
 		if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
 		    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
 			kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
Ganapatrao Kulkarni Nov. 26, 2021, 5:59 a.m. UTC | #2
Hi Marc,

On 25-11-2021 07:53 pm, Marc Zyngier wrote:
> On Mon, 22 Nov 2021 09:58:03 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>> [...]
> 
> Another good catch. I've tweaked a bit to avoid some unnecessary
> repainting, see below.
> 
> Thanks again,
> 
> 	M.
> 
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index a4dfffa1dae0..92b225db59ac 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -66,8 +66,19 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
>   	num_mmus = atomic_read(&kvm->online_vcpus) * 2;
>   	tmp = krealloc(kvm->arch.nested_mmus,
>   		       num_mmus * sizeof(*kvm->arch.nested_mmus),
> -		       GFP_KERNEL | __GFP_ZERO);
> +		       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
>   	if (tmp) {
> +		/*
> +		 * If we went through a reallocation, adjust the MMU

Would it be more precise to say:
> +		 * back-pointers in the pg_table structures.
* back-pointers in the pg_table structures of previous inits.

> +		 */
> +		if (kvm->arch.nested_mmus != tmp) {
> +			int i;
> +
> +			for (i = 0; i < num_mmus - 2; i++)
> +				tmp[i].pgt->mmu = &tmp[i];
> +		}

Thanks for this optimization; it saves two redundant iterations.
> +
>   		if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
>   		    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
>   			kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
> 

Feel free to add,
Reviewed-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>


Thanks,
Ganapat
Marc Zyngier Nov. 26, 2021, 7:20 p.m. UTC | #3
[resending after having sorted my email config]

On Fri, 26 Nov 2021 05:59:00 +0000,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> 
> Hi Marc,
> 
> On 25-11-2021 07:53 pm, Marc Zyngier wrote:
> > On Mon, 22 Nov 2021 09:58:03 +0000,
> > Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
> >> 
> >> [...]
> > 
> > Another good catch. I've tweaked a bit to avoid some unnecessary
> > repainting, see below.
> > 
> > Thanks again,
> > 
> > 	M.
> > 
> > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > index a4dfffa1dae0..92b225db59ac 100644
> > --- a/arch/arm64/kvm/nested.c
> > +++ b/arch/arm64/kvm/nested.c
> > @@ -66,8 +66,19 @@ int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
> >   	num_mmus = atomic_read(&kvm->online_vcpus) * 2;
> >   	tmp = krealloc(kvm->arch.nested_mmus,
> >   		       num_mmus * sizeof(*kvm->arch.nested_mmus),
> > -		       GFP_KERNEL | __GFP_ZERO);
> > +		       GFP_KERNEL_ACCOUNT | __GFP_ZERO);
> >   	if (tmp) {
> > +		/*
> > +		 * If we went through a reallocation, adjust the MMU
> 
> Would it be more precise to say:
> > +		 * back-pointers in the pg_table structures.
> * back-pointers in the pg_table structures of previous inits.

Yes. I have added something along those lines.

> > +		 */
> > +		if (kvm->arch.nested_mmus != tmp) {
> > +			int i;
> > +
> > +			for (i = 0; i < num_mmus - 2; i++)
> > +				tmp[i].pgt->mmu = &tmp[i];
> > +		}
> 
> Thanks for this optimization, it saves 2 redundant iterations.
> > +
> >   		if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
> >   		    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
> >   			kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
> > 
> 
> Feel free to add,
> Reviewed-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>

Given that this was a fixup, I haven't taken this tag. I will Cc you
on the whole series, and you can give your tag on the whole patch if
you are happy with it.

BTW, I have now fixed the bug that was preventing L2 userspace from
running (bad interaction with the pgtable code which was unhappy about
my use of the SW bits when relaxing the permissions). You should now
be able to test the whole series.

Thanks,

	M.
Ganapatrao Kulkarni Nov. 29, 2021, 6 a.m. UTC | #4
On 27-11-2021 12:50 am, Marc Zyngier wrote:
> [resending after having sorted my email config]
> 
> On Fri, 26 Nov 2021 05:59:00 +0000,
> Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>>
>> Hi Marc,
>>
>> On 25-11-2021 07:53 pm, Marc Zyngier wrote:
>>> [...]
>>
>> Feel free to add,
>> Reviewed-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
> 
> Given that this was a fixup, I haven't taken this tag. I will Cc you

No problem, it makes sense to fold this into the original patch.

> on the whole series, and you can give your tag on the whole patch if
> you are happy with it.

Sure.
> 
> BTW, I have now fixed the bug that was preventing L2 userspace from
> running (bad interaction with the pgtable code which was unhappy about
> my use of the SW bits when relaxing the permissions). You should now
> be able to test the whole series.

Yes, I have rebased onto the latest kvm-arm64/nv-5.16 branch and I am
able to boot both L1 and L2.

> 
> Thanks,
> 
> 	M.
> 

Thanks,
Ganapat

Patch

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 4ffbc14d0245..57ad8d8f4ee5 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -68,6 +68,8 @@  int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
 		       num_mmus * sizeof(*kvm->arch.nested_mmus),
 		       GFP_KERNEL | __GFP_ZERO);
 	if (tmp) {
+		int i;
+
 		if (kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 1]) ||
 		    kvm_init_stage2_mmu(kvm, &tmp[num_mmus - 2])) {
 			kvm_free_stage2_pgd(&tmp[num_mmus - 1]);
@@ -80,6 +82,13 @@  int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu)
 		}
 
 		kvm->arch.nested_mmus = tmp;
+
+		/* Fixup pgt->mmu after krealloc */
+		for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
+			struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
+
+			mmu->pgt->mmu = mmu;
+		}
 	}
 
 	mutex_unlock(&kvm->lock);