
[2/2] hv_balloon: do adjust_managed_page_count() when ballooning/un-ballooning

Message ID 20201202161245.2406143-3-vkuznets@redhat.com (mailing list archive)
State New, archived
Series hv_balloon: hide ballooned out memory in stats

Commit Message

Vitaly Kuznetsov Dec. 2, 2020, 4:12 p.m. UTC
Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
balloon driver does not adjust the managed page count when
ballooning/un-ballooning, and this leads to incorrect stats being
reported, e.g. unexpected 'free' output.

Note, the calculation in post_status() seems to remain correct: ballooned-out
pages are never 'available', and we manually add dm->num_pages_ballooned to
'committed'.

Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
---
 drivers/hv/hv_balloon.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
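
For reference, the post_status() accounting the note above refers to looks
roughly like this in drivers/hv/hv_balloon.c of this era (an approximate
sketch for context, not part of this patch):

	/* Approximate excerpt from post_status(): */
	status.num_avail = si_mem_available();
	status.num_committed = vm_memory_committed() +
		dm->num_pages_ballooned +
		(dm->num_pages_added > dm->num_pages_onlined ?
		 dm->num_pages_added - dm->num_pages_onlined : 0) +
		compute_balloon_floor();

Ballooned-out pages are already excluded from si_mem_available(), and
dm->num_pages_ballooned is folded into the committed figure by hand, so the
values reported to the host stay correct with or without this patch.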

Comments

David Hildenbrand Dec. 3, 2020, 4:13 p.m. UTC | #1
On 02.12.20 17:12, Vitaly Kuznetsov wrote:
> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
> balloon driver does not adjust the managed page count when
> ballooning/un-ballooning, and this leads to incorrect stats being
> reported, e.g. unexpected 'free' output.
> 
> Note, the calculation in post_status() seems to remain correct: ballooned-out
> pages are never 'available', and we manually add dm->num_pages_ballooned to
> 'committed'.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
> ---
>  drivers/hv/hv_balloon.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
> index da3b6bd2367c..8c471823a5af 100644
> --- a/drivers/hv/hv_balloon.c
> +++ b/drivers/hv/hv_balloon.c
> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>  		__ClearPageOffline(pg);
>  		__free_page(pg);
>  		dm->num_pages_ballooned--;
> +		adjust_managed_page_count(pg, 1);
>  	}
>  }
>  
> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>  			split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>  
>  		/* mark all pages offline */
> -		for (j = 0; j < alloc_unit; j++)
> +		for (j = 0; j < alloc_unit; j++) {
>  			__SetPageOffline(pg + j);
> +			adjust_managed_page_count(pg + j, -1);
> +		}
>  
>  		bl_resp->range_count++;
>  		bl_resp->range_array[i].finfo.start_page =
> 

I assume this has been properly tested such that it does not change the
system behavior regarding when/how Hyper-V decides to add/remove memory.

LGTM

Reviewed-by: David Hildenbrand <david@redhat.com>
Vitaly Kuznetsov Dec. 3, 2020, 5:49 p.m. UTC | #2
David Hildenbrand <david@redhat.com> writes:

> On 02.12.20 17:12, Vitaly Kuznetsov wrote:
>> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
>> balloon driver does not adjust the managed page count when
>> ballooning/un-ballooning, and this leads to incorrect stats being
>> reported, e.g. unexpected 'free' output.
>> 
>> Note, the calculation in post_status() seems to remain correct: ballooned-out
>> pages are never 'available', and we manually add dm->num_pages_ballooned to
>> 'committed'.
>> 
>> Suggested-by: David Hildenbrand <david@redhat.com>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>> ---
>>  drivers/hv/hv_balloon.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>> 
>> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
>> index da3b6bd2367c..8c471823a5af 100644
>> --- a/drivers/hv/hv_balloon.c
>> +++ b/drivers/hv/hv_balloon.c
>> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>>  		__ClearPageOffline(pg);
>>  		__free_page(pg);
>>  		dm->num_pages_ballooned--;
>> +		adjust_managed_page_count(pg, 1);
>>  	}
>>  }
>>  
>> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>>  			split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>>  
>>  		/* mark all pages offline */
>> -		for (j = 0; j < alloc_unit; j++)
>> +		for (j = 0; j < alloc_unit; j++) {
>>  			__SetPageOffline(pg + j);
>> +			adjust_managed_page_count(pg + j, -1);
>> +		}
>>  
>>  		bl_resp->range_count++;
>>  		bl_resp->range_array[i].finfo.start_page =
>> 
>
> I assume this has been properly tested such that it does not change the
> system behavior regarding when/how Hyper-V decides to add/remove memory.
>

I'm always reluctant to confirm 'proper testing': no matter how small
and 'obvious' the change is, regressions keep happening :-) But yes,
this was tested on a Hyper-V host with 'stress', and I watched 'free'
while the balloon was both inflated and deflated; the values looked sane.

> LGTM
>
> Reviewed-by: David Hildenbrand <david@redhat.com>

Thanks!
David Hildenbrand Dec. 3, 2020, 5:49 p.m. UTC | #3
On 03.12.20 18:49, Vitaly Kuznetsov wrote:
> David Hildenbrand <david@redhat.com> writes:
> 
>> On 02.12.20 17:12, Vitaly Kuznetsov wrote:
>>> Unlike the virtio_balloon/virtio_mem/xen balloon drivers, the Hyper-V
>>> balloon driver does not adjust the managed page count when
>>> ballooning/un-ballooning, and this leads to incorrect stats being
>>> reported, e.g. unexpected 'free' output.
>>>
>>> Note, the calculation in post_status() seems to remain correct: ballooned-out
>>> pages are never 'available', and we manually add dm->num_pages_ballooned to
>>> 'committed'.
>>>
>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>>> ---
>>>  drivers/hv/hv_balloon.c | 5 ++++-
>>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
>>> index da3b6bd2367c..8c471823a5af 100644
>>> --- a/drivers/hv/hv_balloon.c
>>> +++ b/drivers/hv/hv_balloon.c
>>> @@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
>>>  		__ClearPageOffline(pg);
>>>  		__free_page(pg);
>>>  		dm->num_pages_ballooned--;
>>> +		adjust_managed_page_count(pg, 1);
>>>  	}
>>>  }
>>>  
>>> @@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
>>>  			split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
>>>  
>>>  		/* mark all pages offline */
>>> -		for (j = 0; j < alloc_unit; j++)
>>> +		for (j = 0; j < alloc_unit; j++) {
>>>  			__SetPageOffline(pg + j);
>>> +			adjust_managed_page_count(pg + j, -1);
>>> +		}
>>>  
>>>  		bl_resp->range_count++;
>>>  		bl_resp->range_array[i].finfo.start_page =
>>>
>>
>> I assume this has been properly tested such that it does not change the
>> system behavior regarding when/how Hyper-V decides to add/remove memory.
>>
> 
> I'm always reluctant to confirm 'proper testing': no matter how small
> and 'obvious' the change is, regressions keep happening :-) But yes,
> this was tested on a Hyper-V host with 'stress', and I watched 'free'
> while the balloon was both inflated and deflated; the values looked sane.

That's what I wanted to hear ;)

Patch

diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index da3b6bd2367c..8c471823a5af 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -1198,6 +1198,7 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
 		__ClearPageOffline(pg);
 		__free_page(pg);
 		dm->num_pages_ballooned--;
+		adjust_managed_page_count(pg, 1);
 	}
 }
 
@@ -1238,8 +1239,10 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
 			split_page(pg, get_order(alloc_unit << PAGE_SHIFT));
 
 		/* mark all pages offline */
-		for (j = 0; j < alloc_unit; j++)
+		for (j = 0; j < alloc_unit; j++) {
 			__SetPageOffline(pg + j);
+			adjust_managed_page_count(pg + j, -1);
+		}
 
 		bl_resp->range_count++;
 		bl_resp->range_array[i].finfo.start_page =
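
For context, adjust_managed_page_count() is a pre-existing mm helper
(mm/page_alloc.c). In kernels of this era it roughly does the following,
which is what makes ballooned-out pages drop out of MemTotal and hence out
of 'free' (an approximate sketch for reference, not part of this patch):

/* Approximate body of the mm helper the patch calls: */
void adjust_managed_page_count(struct page *page, long count)
{
	/* Move the zone's managed-page counter up or down by 'count'... */
	atomic_long_add(count, &page_zone(page)->managed_pages);
	/* ...and mirror the change in the global totalram_pages counter. */
	totalram_pages_add(count);
#ifdef CONFIG_HIGHMEM
	if (PageHighMem(page))
		totalhigh_pages_add(count);
#endif
}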