
[09/15] KVM: MMU: split kvm_mmu_free_page

Message ID 4DEE21AE.9050703@cn.fujitsu.com (mailing list archive)
State New, archived

Commit Message

Xiao Guangrong June 7, 2011, 1:03 p.m. UTC
Split kvm_mmu_free_page into kvm_mmu_free_lock_parts and
kvm_mmu_free_unlock_parts.

One is used to free the parts which must be freed under the mmu lock,
and the other is used to free the parts which can be freed outside of
the mmu lock.

It will be used by a later patch.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   16 +++++++++++++---
 1 files changed, 13 insertions(+), 3 deletions(-)

Comments

Avi Kivity June 9, 2011, 7:07 a.m. UTC | #1
On 06/07/2011 04:03 PM, Xiao Guangrong wrote:
> Split kvm_mmu_free_page into kvm_mmu_free_lock_parts and
> kvm_mmu_free_unlock_parts.
>
> One is used to free the parts which must be freed under the mmu lock,
> and the other is used to free the parts which can be freed outside of
> the mmu lock.
>
> It will be used by a later patch.

Suggest kvm_mmu_isolate_page() and kvm_mmu_free_page().  Plus a comment 
explaining things, unless someone can come up with a better name.
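
For illustration, the lock-side half with the suggested name might look
like this (a sketch only; the comment and the kvm_mmu_isolate_page name
are the suggestion, not code from the patch):

/*
 * Remove the page from the mmu_lock-protected hash list and free
 * the gfns array; the remaining teardown can then happen outside
 * mmu_lock.
 */
static void kvm_mmu_isolate_page(struct kvm_mmu_page *sp)
{
	ASSERT(is_empty_shadow_page(sp->spt));
	hlist_del(&sp->hash_link);
	if (!sp->role.direct)
		free_page((unsigned long)sp->gfns);
}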

> -static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
> +static void kvm_mmu_free_lock_parts(struct kvm_mmu_page *sp)
>   {
>   	ASSERT(is_empty_shadow_page(sp->spt));
>   	hlist_del(&sp->hash_link);
> -	list_del(&sp->link);
> -	free_page((unsigned long)sp->spt);
>   	if (!sp->role.direct)
>   		free_page((unsigned long)sp->gfns);
> +}
> +
> +static void kvm_mmu_free_unlock_parts(struct kvm_mmu_page *sp)
> +{
> +	list_del(&sp->link);
> +	free_page((unsigned long)sp->spt);
>   	kmem_cache_free(mmu_page_header_cache, sp);
>   }

The list_del() must be run under a lock, no? It can access 
kvm->arch.active_mmu_pages.
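
For context, the unlink is normally done with mmu_lock held, along these
lines (a sketch of the usual locking pattern, not code from this patch):

	spin_lock(&kvm->mmu_lock);
	list_del(&sp->link);	/* kvm->arch.active_mmu_pages is mmu_lock-protected */
	spin_unlock(&kvm->mmu_lock);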
Xiao Guangrong June 10, 2011, 3:50 a.m. UTC | #2
On 06/09/2011 03:07 PM, Avi Kivity wrote:
> On 06/07/2011 04:03 PM, Xiao Guangrong wrote:
>> Split kvm_mmu_free_page into kvm_mmu_free_lock_parts and
>> kvm_mmu_free_unlock_parts.
>>
>> One is used to free the parts which must be freed under the mmu lock,
>> and the other is used to free the parts which can be freed outside of
>> the mmu lock.
>>
>> It will be used by a later patch.
> 
> Suggest kvm_mmu_isolate_page() and kvm_mmu_free_page().  Plus a comment explaining things, unless someone can come up with a better name.
> 

Good names, will fix.

>> -static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
>> +static void kvm_mmu_free_lock_parts(struct kvm_mmu_page *sp)
>>   {
>>       ASSERT(is_empty_shadow_page(sp->spt));
>>       hlist_del(&sp->hash_link);
>> -    list_del(&sp->link);
>> -    free_page((unsigned long)sp->spt);
>>       if (!sp->role.direct)
>>           free_page((unsigned long)sp->gfns);
>> +}
>> +
>> +static void kvm_mmu_free_unlock_parts(struct kvm_mmu_page *sp)
>> +{
>> +    list_del(&sp->link);
>> +    free_page((unsigned long)sp->spt);
>>       kmem_cache_free(mmu_page_header_cache, sp);
>>   }
> 
> The list_del() must be run under a lock, no? It can access kvm->arch.active_mmu_pages.
> 

In the prepare path, we have already moved the sp from active_mmu_pages to invalid_list.
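
Roughly, the prepare path already does the move under mmu_lock, along
these lines (paraphrased for context from kvm_mmu_prepare_zap_page):

	/* called with mmu_lock held */
	list_move(&sp->link, invalid_list);	/* off active_mmu_pages */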
Avi Kivity June 12, 2011, 8:33 a.m. UTC | #3
On 06/10/2011 06:50 AM, Xiao Guangrong wrote:
> >>  +static void kvm_mmu_free_unlock_parts(struct kvm_mmu_page *sp)
> >>  +{
> >>  +    list_del(&sp->link);
> >>  +    free_page((unsigned long)sp->spt);
> >>        kmem_cache_free(mmu_page_header_cache, sp);
> >>    }
> >
> >  The list_del() must be run under a lock, no? It can access kvm->arch.active_mmu_pages.
> >
>
> In the prepare path, we have already moved the sp from active_mmu_pages to invalid_list.

It still needs to be accessed under a lock, no matter which list is used.
Xiao Guangrong June 13, 2011, 3:15 a.m. UTC | #4
On 06/12/2011 04:33 PM, Avi Kivity wrote:
> On 06/10/2011 06:50 AM, Xiao Guangrong wrote:
>> >>  +static void kvm_mmu_free_unlock_parts(struct kvm_mmu_page *sp)
>> >>  +{
>> >>  +    list_del(&sp->link);
>> >>  +    free_page((unsigned long)sp->spt);
>> >>        kmem_cache_free(mmu_page_header_cache, sp);
>> >>    }
>> >
>> >  The list_del() must be run under a lock, no? It can access kvm->arch.active_mmu_pages.
>> >
>>
>> In the prepare path, we have already moved the sp from active_mmu_pages to invalid_list.
> 
> It still needs to be accessed under a lock, no matter which list is used.
> 

Actually, if we need to free pages in the RCU context, we unlink them from the invalid_list first:

	if (atomic_read(&kvm->arch.reader_counter)) {
		......
		list_del_init(invalid_list);
		trace_kvm_mmu_delay_free_pages(sp);
		call_rcu(&sp->rcu, free_invalid_pages_rcu);
		return;
	}

Then the global list is not used anymore.
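
The RCU callback then only touches the sp itself; a sketch of what
free_invalid_pages_rcu could look like (assuming struct kvm_mmu_page
gains an rcu head, as the call_rcu() above implies):

static void free_invalid_pages_rcu(struct rcu_head *head)
{
	struct kvm_mmu_page *sp;

	sp = container_of(head, struct kvm_mmu_page, rcu);
	/* only the sp itself is touched, no global list, so no mmu_lock */
	kvm_mmu_free_unlock_parts(sp);
}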

Patch

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 43e7ca1..9f3a746 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1039,17 +1039,27 @@  static inline void kvm_mod_used_mmu_pages(struct kvm *kvm, int nr)
 	percpu_counter_add(&kvm_total_used_mmu_pages, nr);
 }
 
-static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+static void kvm_mmu_free_lock_parts(struct kvm_mmu_page *sp)
 {
 	ASSERT(is_empty_shadow_page(sp->spt));
 	hlist_del(&sp->hash_link);
-	list_del(&sp->link);
-	free_page((unsigned long)sp->spt);
 	if (!sp->role.direct)
 		free_page((unsigned long)sp->gfns);
+}
+
+static void kvm_mmu_free_unlock_parts(struct kvm_mmu_page *sp)
+{
+	list_del(&sp->link);
+	free_page((unsigned long)sp->spt);
 	kmem_cache_free(mmu_page_header_cache, sp);
 }
 
+static void kvm_mmu_free_page(struct kvm_mmu_page *sp)
+{
+	kvm_mmu_free_lock_parts(sp);
+	kvm_mmu_free_unlock_parts(sp);
+}
+
 static unsigned kvm_page_table_hashfn(gfn_t gfn)
 {
 	return gfn & ((1 << KVM_MMU_HASH_SHIFT) - 1);