Message ID | 20190306155048.12868-3-nitesh@redhat.com (mailing list archive)
---|---
State | New, archived
Series | KVM: Guest Free Page Hinting
On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > This patch enables the kernel to scan the per cpu array > which carries head pages from the buddy free list of order > FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by > guest_free_page_hinting(). > guest_free_page_hinting() scans the entire per cpu array by > acquiring a zone lock corresponding to the pages which are > being scanned. If the page is still free and present in the > buddy it tries to isolate the page and adds it to a > dynamically allocated array. > > Once this scanning process is complete and if there are any > isolated pages added to the dynamically allocated array > guest_free_page_report() is invoked. However, before this the > per-cpu array index is reset so that it can continue capturing > the pages from buddy free list. > > In this patch guest_free_page_report() simply releases the pages back > to the buddy by using __free_one_page() > > Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> I'm pretty sure this code is not thread safe and has a few various issues. > --- > include/linux/page_hinting.h | 5 ++ > mm/page_alloc.c | 2 +- > virt/kvm/page_hinting.c | 154 +++++++++++++++++++++++++++++++++++ > 3 files changed, 160 insertions(+), 1 deletion(-) > > diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h > index 90254c582789..d554a2581826 100644 > --- a/include/linux/page_hinting.h > +++ b/include/linux/page_hinting.h > @@ -13,3 +13,8 @@ > > void guest_free_page_enqueue(struct page *page, int order); > void guest_free_page_try_hinting(void); > +extern int __isolate_free_page(struct page *page, unsigned int order); > +extern void __free_one_page(struct page *page, unsigned long pfn, > + struct zone *zone, unsigned int order, > + int migratetype); > +void release_buddy_pages(void *obj_to_free, int entries); > diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index 684d047f33ee..d38b7eea207b 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy, > * -- nyc > */ > > -static inline void __free_one_page(struct page *page, > +inline void __free_one_page(struct page *page, > unsigned long pfn, > struct zone *zone, unsigned int order, > int migratetype) > diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c > index 48b4b5e796b0..9885b372b5a9 100644 > --- a/virt/kvm/page_hinting.c > +++ b/virt/kvm/page_hinting.c > @@ -1,5 +1,9 @@ > #include <linux/mm.h> > #include <linux/page_hinting.h> > +#include <linux/page_ref.h> > +#include <linux/kvm_host.h> > +#include <linux/kernel.h> > +#include <linux/sort.h> > > /* > * struct guest_free_pages- holds array of guest freed PFN's along with an > @@ -16,6 +20,54 @@ struct guest_free_pages { > > DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj); > > +/* > + * struct guest_isolated_pages- holds the buddy isolated pages which are > + * supposed to be freed by the host. > + * @pfn: page frame number for the isolated page. > + * @order: order of the isolated page. 
> + */ > +struct guest_isolated_pages { > + unsigned long pfn; > + unsigned int order; > +}; > + > +void release_buddy_pages(void *obj_to_free, int entries) > +{ > + int i = 0; > + int mt = 0; > + struct guest_isolated_pages *isolated_pages_obj = obj_to_free; > + > + while (i < entries) { > + struct page *page = pfn_to_page(isolated_pages_obj[i].pfn); > + > + mt = get_pageblock_migratetype(page); > + __free_one_page(page, page_to_pfn(page), page_zone(page), > + isolated_pages_obj[i].order, mt); > + i++; > + } > + kfree(isolated_pages_obj); > +} You shouldn't be accessing __free_one_page without holding the zone lock for the page. You might consider confining yourself to one zone worth of hints at a time. Then you can acquire the lock once, and then return the memory you have freed. This is one of the reasons why I am thinking maybe a bit in the page and then spinning on that bit in arch_alloc_page might be a nice way to get around this. Then you only have to take the zone lock when you are finding the pages you want to hint on and setting the bit indicating they are mid hint. Otherwise you have to take the zone lock to pull pages out, and to put them back in and the likelihood of a lock collision is much higher. > + > +void guest_free_page_report(struct guest_isolated_pages *isolated_pages_obj, > + int entries) > +{ > + release_buddy_pages(isolated_pages_obj, entries); > +} > + > +static int sort_zonenum(const void *a1, const void *b1) > +{ > + const unsigned long *a = a1; > + const unsigned long *b = b1; > + > + if (page_zonenum(pfn_to_page(a[0])) > page_zonenum(pfn_to_page(b[0]))) > + return 1; > + > + if (page_zonenum(pfn_to_page(a[0])) < page_zonenum(pfn_to_page(b[0]))) > + return -1; > + > + return 0; > +} > + > struct page *get_buddy_page(struct page *page) > { > unsigned long pfn = page_to_pfn(page); > @@ -33,9 +85,111 @@ struct page *get_buddy_page(struct page *page) > static void guest_free_page_hinting(void) > { > struct guest_free_pages *hinting_obj = &get_cpu_var(free_pages_obj); > + struct guest_isolated_pages *isolated_pages_obj; > + int idx = 0, ret = 0; > + struct zone *zone_cur, *zone_prev; > + unsigned long flags = 0; > + int hyp_idx = 0; > + int free_pages_idx = hinting_obj->free_pages_idx; > + > + isolated_pages_obj = kmalloc(MAX_FGPT_ENTRIES * > + sizeof(struct guest_isolated_pages), GFP_KERNEL); > + if (!isolated_pages_obj) { > + hinting_obj->free_pages_idx = 0; > + put_cpu_var(hinting_obj); > + return; > + /* return some logical error here*/ > + } > + > + sort(hinting_obj->free_page_arr, free_pages_idx, > + sizeof(unsigned long), sort_zonenum, NULL); > + > + while (idx < free_pages_idx) { > + unsigned long pfn = hinting_obj->free_page_arr[idx]; > + unsigned long pfn_end = hinting_obj->free_page_arr[idx] + > + (1 << FREE_PAGE_HINTING_MIN_ORDER) - 1; > + > + zone_cur = page_zone(pfn_to_page(pfn)); > + if (idx == 0) { > + zone_prev = zone_cur; > + spin_lock_irqsave(&zone_cur->lock, flags); > + } else if (zone_prev != zone_cur) { > + spin_unlock_irqrestore(&zone_prev->lock, flags); > + spin_lock_irqsave(&zone_cur->lock, flags); > + zone_prev = zone_cur; > + } > + > + while (pfn <= pfn_end) { > + struct page *page = pfn_to_page(pfn); > + struct page *buddy_page = NULL; > + > + if (PageCompound(page)) { > + struct page *head_page = compound_head(page); > + unsigned long head_pfn = page_to_pfn(head_page); > + unsigned int alloc_pages = > + 1 << compound_order(head_page); > + > + pfn = head_pfn + alloc_pages; > + continue; > + } > + I don't think the buddy allocator has 
compound pages. > + if (page_ref_count(page)) { > + pfn++; > + continue; > + } > + A ref count of 0 doesn't mean the page isn't in use. It could be in use by something such as SLUB for instance. > + if (PageBuddy(page) && page_private(page) >= > + FREE_PAGE_HINTING_MIN_ORDER) { > + int buddy_order = page_private(page); > + > + ret = __isolate_free_page(page, buddy_order); > + if (ret) { > + isolated_pages_obj[hyp_idx].pfn = pfn; > + isolated_pages_obj[hyp_idx].order = > + buddy_order; > + hyp_idx += 1; > + } > + pfn = pfn + (1 << buddy_order); > + continue; > + } > + So this is where things start to get ugly. Basically because we were acquiring the hints when they were freed we end up needing to check either this page, and the PFN for all of the higher order pages this page could be a part of. Since we are currently limiting ourselves to MAX_ORDER - 1 it shouldn't be too expensive. I don't recall if your get_buddy_page already had that limitation coded in but we should probably look at doing that there. Then we can just skip the PageBuddy check up here and have it automatically start walking all pages your original page could be a part of looking for the highest page order that might still be free. > + buddy_page = get_buddy_page(page); > + if (buddy_page && page_private(buddy_page) >= > + FREE_PAGE_HINTING_MIN_ORDER) { > + int buddy_order = page_private(buddy_page); > + > + ret = __isolate_free_page(buddy_page, > + buddy_order); > + if (ret) { > + unsigned long buddy_pfn = > + page_to_pfn(buddy_page); > + > + isolated_pages_obj[hyp_idx].pfn = > + buddy_pfn; > + isolated_pages_obj[hyp_idx].order = > + buddy_order; > + hyp_idx += 1; > + } > + pfn = page_to_pfn(buddy_page) + > + (1 << buddy_order); > + continue; > + } This is essentially just a duplicate of the code above. As I mentioned before it would probably make sense to just combine this block with that one. > + pfn++; > + } > + hinting_obj->free_page_arr[idx] = 0; > + idx++; > + if (idx == free_pages_idx) > + spin_unlock_irqrestore(&zone_cur->lock, flags); > + } > > hinting_obj->free_pages_idx = 0; > put_cpu_var(hinting_obj); > + > + if (hyp_idx > 0) > + guest_free_page_report(isolated_pages_obj, hyp_idx); > + else > + kfree(isolated_pages_obj); > + /* return some logical error here*/ > } > > int if_exist(struct page *page) > -- > 2.17.2 >
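
As a rough illustration of the per-zone release being asked for above, a minimal sketch might look like the following. It reuses struct guest_isolated_pages from the patch and assumes the array has already been sorted by zone (as sort_zonenum() already does); the function name and exact locking granularity are assumptions, not part of the posted code:

	static void release_isolated_pages(struct guest_isolated_pages *obj,
					   int entries)
	{
		struct zone *zone_cur, *zone_prev = NULL;
		unsigned long flags = 0;
		int i;

		for (i = 0; i < entries; i++) {
			struct page *page = pfn_to_page(obj[i].pfn);
			int mt;

			/* take the zone lock only when the zone changes */
			zone_cur = page_zone(page);
			if (zone_cur != zone_prev) {
				if (zone_prev)
					spin_unlock_irqrestore(&zone_prev->lock, flags);
				spin_lock_irqsave(&zone_cur->lock, flags);
				zone_prev = zone_cur;
			}

			/* return the page to the buddy while the lock is held */
			mt = get_pageblock_migratetype(page);
			__free_one_page(page, obj[i].pfn, zone_cur,
					obj[i].order, mt);
		}
		if (zone_prev)
			spin_unlock_irqrestore(&zone_prev->lock, flags);
		kfree(obj);
	}

The point is simply that the lock is taken once per run of same-zone entries rather than not at all.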
On 3/7/19 1:30 PM, Alexander Duyck wrote: > On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> This patch enables the kernel to scan the per cpu array >> which carries head pages from the buddy free list of order >> FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by >> guest_free_page_hinting(). >> guest_free_page_hinting() scans the entire per cpu array by >> acquiring a zone lock corresponding to the pages which are >> being scanned. If the page is still free and present in the >> buddy it tries to isolate the page and adds it to a >> dynamically allocated array. >> >> Once this scanning process is complete and if there are any >> isolated pages added to the dynamically allocated array >> guest_free_page_report() is invoked. However, before this the >> per-cpu array index is reset so that it can continue capturing >> the pages from buddy free list. >> >> In this patch guest_free_page_report() simply releases the pages back >> to the buddy by using __free_one_page() >> >> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> > I'm pretty sure this code is not thread safe and has a few various issues. > >> --- >> include/linux/page_hinting.h | 5 ++ >> mm/page_alloc.c | 2 +- >> virt/kvm/page_hinting.c | 154 +++++++++++++++++++++++++++++++++++ >> 3 files changed, 160 insertions(+), 1 deletion(-) >> >> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h >> index 90254c582789..d554a2581826 100644 >> --- a/include/linux/page_hinting.h >> +++ b/include/linux/page_hinting.h >> @@ -13,3 +13,8 @@ >> >> void guest_free_page_enqueue(struct page *page, int order); >> void guest_free_page_try_hinting(void); >> +extern int __isolate_free_page(struct page *page, unsigned int order); >> +extern void __free_one_page(struct page *page, unsigned long pfn, >> + struct zone *zone, unsigned int order, >> + int migratetype); >> +void release_buddy_pages(void *obj_to_free, int entries); >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c >> index 684d047f33ee..d38b7eea207b 100644 >> --- a/mm/page_alloc.c >> +++ b/mm/page_alloc.c >> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy, >> * -- nyc >> */ >> >> -static inline void __free_one_page(struct page *page, >> +inline void __free_one_page(struct page *page, >> unsigned long pfn, >> struct zone *zone, unsigned int order, >> int migratetype) >> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c >> index 48b4b5e796b0..9885b372b5a9 100644 >> --- a/virt/kvm/page_hinting.c >> +++ b/virt/kvm/page_hinting.c >> @@ -1,5 +1,9 @@ >> #include <linux/mm.h> >> #include <linux/page_hinting.h> >> +#include <linux/page_ref.h> >> +#include <linux/kvm_host.h> >> +#include <linux/kernel.h> >> +#include <linux/sort.h> >> >> /* >> * struct guest_free_pages- holds array of guest freed PFN's along with an >> @@ -16,6 +20,54 @@ struct guest_free_pages { >> >> DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj); >> >> +/* >> + * struct guest_isolated_pages- holds the buddy isolated pages which are >> + * supposed to be freed by the host. >> + * @pfn: page frame number for the isolated page. >> + * @order: order of the isolated page. 
>> + */ >> +struct guest_isolated_pages { >> + unsigned long pfn; >> + unsigned int order; >> +}; >> + >> +void release_buddy_pages(void *obj_to_free, int entries) >> +{ >> + int i = 0; >> + int mt = 0; >> + struct guest_isolated_pages *isolated_pages_obj = obj_to_free; >> + >> + while (i < entries) { >> + struct page *page = pfn_to_page(isolated_pages_obj[i].pfn); >> + >> + mt = get_pageblock_migratetype(page); >> + __free_one_page(page, page_to_pfn(page), page_zone(page), >> + isolated_pages_obj[i].order, mt); >> + i++; >> + } >> + kfree(isolated_pages_obj); >> +} > You shouldn't be accessing __free_one_page without holding the zone > lock for the page. You might consider confining yourself to one zone > worth of hints at a time. Then you can acquire the lock once, and then > return the memory you have freed. That is correct. > > This is one of the reasons why I am thinking maybe a bit in the page > and then spinning on that bit in arch_alloc_page might be a nice way > to get around this. Then you only have to take the zone lock when you > are finding the pages you want to hint on and setting the bit > indicating they are mid hint. Otherwise you have to take the zone lock > to pull pages out, and to put them back in and the likelihood of a > lock collision is much higher. Do you think adding a new flag to the page structure will be acceptable? > >> + >> +void guest_free_page_report(struct guest_isolated_pages *isolated_pages_obj, >> + int entries) >> +{ >> + release_buddy_pages(isolated_pages_obj, entries); >> +} >> + >> +static int sort_zonenum(const void *a1, const void *b1) >> +{ >> + const unsigned long *a = a1; >> + const unsigned long *b = b1; >> + >> + if (page_zonenum(pfn_to_page(a[0])) > page_zonenum(pfn_to_page(b[0]))) >> + return 1; >> + >> + if (page_zonenum(pfn_to_page(a[0])) < page_zonenum(pfn_to_page(b[0]))) >> + return -1; >> + >> + return 0; >> +} >> + >> struct page *get_buddy_page(struct page *page) >> { >> unsigned long pfn = page_to_pfn(page); >> @@ -33,9 +85,111 @@ struct page *get_buddy_page(struct page *page) >> static void guest_free_page_hinting(void) >> { >> struct guest_free_pages *hinting_obj = &get_cpu_var(free_pages_obj); >> + struct guest_isolated_pages *isolated_pages_obj; >> + int idx = 0, ret = 0; >> + struct zone *zone_cur, *zone_prev; >> + unsigned long flags = 0; >> + int hyp_idx = 0; >> + int free_pages_idx = hinting_obj->free_pages_idx; >> + >> + isolated_pages_obj = kmalloc(MAX_FGPT_ENTRIES * >> + sizeof(struct guest_isolated_pages), GFP_KERNEL); >> + if (!isolated_pages_obj) { >> + hinting_obj->free_pages_idx = 0; >> + put_cpu_var(hinting_obj); >> + return; >> + /* return some logical error here*/ >> + } >> + >> + sort(hinting_obj->free_page_arr, free_pages_idx, >> + sizeof(unsigned long), sort_zonenum, NULL); >> + >> + while (idx < free_pages_idx) { >> + unsigned long pfn = hinting_obj->free_page_arr[idx]; >> + unsigned long pfn_end = hinting_obj->free_page_arr[idx] + >> + (1 << FREE_PAGE_HINTING_MIN_ORDER) - 1; >> + >> + zone_cur = page_zone(pfn_to_page(pfn)); >> + if (idx == 0) { >> + zone_prev = zone_cur; >> + spin_lock_irqsave(&zone_cur->lock, flags); >> + } else if (zone_prev != zone_cur) { >> + spin_unlock_irqrestore(&zone_prev->lock, flags); >> + spin_lock_irqsave(&zone_cur->lock, flags); >> + zone_prev = zone_cur; >> + } >> + >> + while (pfn <= pfn_end) { >> + struct page *page = pfn_to_page(pfn); >> + struct page *buddy_page = NULL; >> + >> + if (PageCompound(page)) { >> + struct page *head_page = compound_head(page); >> + unsigned long 
head_pfn = page_to_pfn(head_page); >> + unsigned int alloc_pages = >> + 1 << compound_order(head_page); >> + >> + pfn = head_pfn + alloc_pages; >> + continue; >> + } >> + > I don't think the buddy allocator has compound pages. Yes, I don't need this. > >> + if (page_ref_count(page)) { >> + pfn++; >> + continue; >> + } >> + > A ref count of 0 doesn't mean the page isn't in use. It could be in > use by something such as SLUB for instance. Yes but it is not the criteria by which we are isolating. If PageBuddy() is returning true then only we actually try and isolate. I can possibly remove the compound and page_ref_count() checks. > >> + if (PageBuddy(page) && page_private(page) >= >> + FREE_PAGE_HINTING_MIN_ORDER) { >> + int buddy_order = page_private(page); >> + >> + ret = __isolate_free_page(page, buddy_order); >> + if (ret) { >> + isolated_pages_obj[hyp_idx].pfn = pfn; >> + isolated_pages_obj[hyp_idx].order = >> + buddy_order; >> + hyp_idx += 1; >> + } >> + pfn = pfn + (1 << buddy_order); >> + continue; >> + } >> + > So this is where things start to get ugly. Basically because we were > acquiring the hints when they were freed we end up needing to check > either this page, and the PFN for all of the higher order pages this > page could be a part of. Since we are currently limiting ourselves to > MAX_ORDER - 1 it shouldn't be too expensive. I don't recall if your > get_buddy_page already had that limitation coded in but we should > probably look at doing that there. Do you mean the check for page order? > Then we can just skip the PageBuddy > check up here and have it automatically start walking all pages your > original page could be a part of looking for the highest page order > that might still be free. > >> + buddy_page = get_buddy_page(page); >> + if (buddy_page && page_private(buddy_page) >= >> + FREE_PAGE_HINTING_MIN_ORDER) { >> + int buddy_order = page_private(buddy_page); >> + >> + ret = __isolate_free_page(buddy_page, >> + buddy_order); >> + if (ret) { >> + unsigned long buddy_pfn = >> + page_to_pfn(buddy_page); >> + >> + isolated_pages_obj[hyp_idx].pfn = >> + buddy_pfn; >> + isolated_pages_obj[hyp_idx].order = >> + buddy_order; >> + hyp_idx += 1; >> + } >> + pfn = page_to_pfn(buddy_page) + >> + (1 << buddy_order); >> + continue; >> + } > This is essentially just a duplicate of the code above. As I mentioned > before it would probably make sense to just combine this block with > that one. Yeap, I should get rid of this. Now as we are capturing post buddy merging we don't need this. Thanks. > >> + pfn++; >> + } >> + hinting_obj->free_page_arr[idx] = 0; >> + idx++; >> + if (idx == free_pages_idx) >> + spin_unlock_irqrestore(&zone_cur->lock, flags); >> + } >> >> hinting_obj->free_pages_idx = 0; >> put_cpu_var(hinting_obj); >> + >> + if (hyp_idx > 0) >> + guest_free_page_report(isolated_pages_obj, hyp_idx); >> + else >> + kfree(isolated_pages_obj); >> + /* return some logical error here*/ >> } >> >> int if_exist(struct page *page) >> -- >> 2.17.2 >>
On 07.03.19 20:23, Nitesh Narayan Lal wrote: > > On 3/7/19 1:30 PM, Alexander Duyck wrote: >> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>> This patch enables the kernel to scan the per cpu array >>> which carries head pages from the buddy free list of order >>> FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by >>> guest_free_page_hinting(). >>> guest_free_page_hinting() scans the entire per cpu array by >>> acquiring a zone lock corresponding to the pages which are >>> being scanned. If the page is still free and present in the >>> buddy it tries to isolate the page and adds it to a >>> dynamically allocated array. >>> >>> Once this scanning process is complete and if there are any >>> isolated pages added to the dynamically allocated array >>> guest_free_page_report() is invoked. However, before this the >>> per-cpu array index is reset so that it can continue capturing >>> the pages from buddy free list. >>> >>> In this patch guest_free_page_report() simply releases the pages back >>> to the buddy by using __free_one_page() >>> >>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> >> I'm pretty sure this code is not thread safe and has a few various issues. >> >>> --- >>> include/linux/page_hinting.h | 5 ++ >>> mm/page_alloc.c | 2 +- >>> virt/kvm/page_hinting.c | 154 +++++++++++++++++++++++++++++++++++ >>> 3 files changed, 160 insertions(+), 1 deletion(-) >>> >>> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h >>> index 90254c582789..d554a2581826 100644 >>> --- a/include/linux/page_hinting.h >>> +++ b/include/linux/page_hinting.h >>> @@ -13,3 +13,8 @@ >>> >>> void guest_free_page_enqueue(struct page *page, int order); >>> void guest_free_page_try_hinting(void); >>> +extern int __isolate_free_page(struct page *page, unsigned int order); >>> +extern void __free_one_page(struct page *page, unsigned long pfn, >>> + struct zone *zone, unsigned int order, >>> + int migratetype); >>> +void release_buddy_pages(void *obj_to_free, int entries); >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c >>> index 684d047f33ee..d38b7eea207b 100644 >>> --- a/mm/page_alloc.c >>> +++ b/mm/page_alloc.c >>> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy, >>> * -- nyc >>> */ >>> >>> -static inline void __free_one_page(struct page *page, >>> +inline void __free_one_page(struct page *page, >>> unsigned long pfn, >>> struct zone *zone, unsigned int order, >>> int migratetype) >>> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c >>> index 48b4b5e796b0..9885b372b5a9 100644 >>> --- a/virt/kvm/page_hinting.c >>> +++ b/virt/kvm/page_hinting.c >>> @@ -1,5 +1,9 @@ >>> #include <linux/mm.h> >>> #include <linux/page_hinting.h> >>> +#include <linux/page_ref.h> >>> +#include <linux/kvm_host.h> >>> +#include <linux/kernel.h> >>> +#include <linux/sort.h> >>> >>> /* >>> * struct guest_free_pages- holds array of guest freed PFN's along with an >>> @@ -16,6 +20,54 @@ struct guest_free_pages { >>> >>> DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj); >>> >>> +/* >>> + * struct guest_isolated_pages- holds the buddy isolated pages which are >>> + * supposed to be freed by the host. >>> + * @pfn: page frame number for the isolated page. >>> + * @order: order of the isolated page. 
>>> + */ >>> +struct guest_isolated_pages { >>> + unsigned long pfn; >>> + unsigned int order; >>> +}; >>> + >>> +void release_buddy_pages(void *obj_to_free, int entries) >>> +{ >>> + int i = 0; >>> + int mt = 0; >>> + struct guest_isolated_pages *isolated_pages_obj = obj_to_free; >>> + >>> + while (i < entries) { >>> + struct page *page = pfn_to_page(isolated_pages_obj[i].pfn); >>> + >>> + mt = get_pageblock_migratetype(page); >>> + __free_one_page(page, page_to_pfn(page), page_zone(page), >>> + isolated_pages_obj[i].order, mt); >>> + i++; >>> + } >>> + kfree(isolated_pages_obj); >>> +} >> You shouldn't be accessing __free_one_page without holding the zone >> lock for the page. You might consider confining yourself to one zone >> worth of hints at a time. Then you can acquire the lock once, and then >> return the memory you have freed. > That is correct. >> >> This is one of the reasons why I am thinking maybe a bit in the page >> and then spinning on that bit in arch_alloc_page might be a nice way >> to get around this. Then you only have to take the zone lock when you >> are finding the pages you want to hint on and setting the bit >> indicating they are mid hint. Otherwise you have to take the zone lock >> to pull pages out, and to put them back in and the likelihood of a >> lock collision is much higher. > Do you think adding a new flag to the page structure will be acceptable? My lesson learned: forget it. If (at all) reuse some other one that might be safe in that context. Hard to tell if that is even possible and will be accepted upstream. Spinning is not the solution. What you would want is the buddy to actually skip over these pages and only try to use them (-> spin) when OOM. Core mm changes (see my other reply). This all sounds like future work which can be built on top of this work.
On Thu, Mar 7, 2019 at 11:30 AM David Hildenbrand <david@redhat.com> wrote: > > On 07.03.19 20:23, Nitesh Narayan Lal wrote: > > > > On 3/7/19 1:30 PM, Alexander Duyck wrote: > >> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >>> This patch enables the kernel to scan the per cpu array > >>> which carries head pages from the buddy free list of order > >>> FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by > >>> guest_free_page_hinting(). > >>> guest_free_page_hinting() scans the entire per cpu array by > >>> acquiring a zone lock corresponding to the pages which are > >>> being scanned. If the page is still free and present in the > >>> buddy it tries to isolate the page and adds it to a > >>> dynamically allocated array. > >>> > >>> Once this scanning process is complete and if there are any > >>> isolated pages added to the dynamically allocated array > >>> guest_free_page_report() is invoked. However, before this the > >>> per-cpu array index is reset so that it can continue capturing > >>> the pages from buddy free list. > >>> > >>> In this patch guest_free_page_report() simply releases the pages back > >>> to the buddy by using __free_one_page() > >>> > >>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> > >> I'm pretty sure this code is not thread safe and has a few various issues. > >> > >>> --- > >>> include/linux/page_hinting.h | 5 ++ > >>> mm/page_alloc.c | 2 +- > >>> virt/kvm/page_hinting.c | 154 +++++++++++++++++++++++++++++++++++ > >>> 3 files changed, 160 insertions(+), 1 deletion(-) > >>> > >>> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h > >>> index 90254c582789..d554a2581826 100644 > >>> --- a/include/linux/page_hinting.h > >>> +++ b/include/linux/page_hinting.h > >>> @@ -13,3 +13,8 @@ > >>> > >>> void guest_free_page_enqueue(struct page *page, int order); > >>> void guest_free_page_try_hinting(void); > >>> +extern int __isolate_free_page(struct page *page, unsigned int order); > >>> +extern void __free_one_page(struct page *page, unsigned long pfn, > >>> + struct zone *zone, unsigned int order, > >>> + int migratetype); > >>> +void release_buddy_pages(void *obj_to_free, int entries); > >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c > >>> index 684d047f33ee..d38b7eea207b 100644 > >>> --- a/mm/page_alloc.c > >>> +++ b/mm/page_alloc.c > >>> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy, > >>> * -- nyc > >>> */ > >>> > >>> -static inline void __free_one_page(struct page *page, > >>> +inline void __free_one_page(struct page *page, > >>> unsigned long pfn, > >>> struct zone *zone, unsigned int order, > >>> int migratetype) > >>> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c > >>> index 48b4b5e796b0..9885b372b5a9 100644 > >>> --- a/virt/kvm/page_hinting.c > >>> +++ b/virt/kvm/page_hinting.c > >>> @@ -1,5 +1,9 @@ > >>> #include <linux/mm.h> > >>> #include <linux/page_hinting.h> > >>> +#include <linux/page_ref.h> > >>> +#include <linux/kvm_host.h> > >>> +#include <linux/kernel.h> > >>> +#include <linux/sort.h> > >>> > >>> /* > >>> * struct guest_free_pages- holds array of guest freed PFN's along with an > >>> @@ -16,6 +20,54 @@ struct guest_free_pages { > >>> > >>> DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj); > >>> > >>> +/* > >>> + * struct guest_isolated_pages- holds the buddy isolated pages which are > >>> + * supposed to be freed by the host. > >>> + * @pfn: page frame number for the isolated page. 
> >>> + * @order: order of the isolated page. > >>> + */ > >>> +struct guest_isolated_pages { > >>> + unsigned long pfn; > >>> + unsigned int order; > >>> +}; > >>> + > >>> +void release_buddy_pages(void *obj_to_free, int entries) > >>> +{ > >>> + int i = 0; > >>> + int mt = 0; > >>> + struct guest_isolated_pages *isolated_pages_obj = obj_to_free; > >>> + > >>> + while (i < entries) { > >>> + struct page *page = pfn_to_page(isolated_pages_obj[i].pfn); > >>> + > >>> + mt = get_pageblock_migratetype(page); > >>> + __free_one_page(page, page_to_pfn(page), page_zone(page), > >>> + isolated_pages_obj[i].order, mt); > >>> + i++; > >>> + } > >>> + kfree(isolated_pages_obj); > >>> +} > >> You shouldn't be accessing __free_one_page without holding the zone > >> lock for the page. You might consider confining yourself to one zone > >> worth of hints at a time. Then you can acquire the lock once, and then > >> return the memory you have freed. > > That is correct. > >> > >> This is one of the reasons why I am thinking maybe a bit in the page > >> and then spinning on that bit in arch_alloc_page might be a nice way > >> to get around this. Then you only have to take the zone lock when you > >> are finding the pages you want to hint on and setting the bit > >> indicating they are mid hint. Otherwise you have to take the zone lock > >> to pull pages out, and to put them back in and the likelihood of a > >> lock collision is much higher. > > Do you think adding a new flag to the page structure will be acceptable? > > My lesson learned: forget it. If (at all) reuse some other one that > might be safe in that context. Hard to tell if that is even possible and > will be accepted upstream. I was thinking we could probably just resort to reuse. Essentially what we are looking at doing is idle page tracking so my thought is to see if we can just reuse those bits in the buddy allocator. Then we would essentially have 3 stages, young, "hinting", and idle. > Spinning is not the solution. What you would want is the buddy to > actually skip over these pages and only try to use them (-> spin) when > OOM. Core mm changes (see my other reply). It is more of a workaround. Ideally we should almost never encounter this anyway as what we really want to be doing is performing hints on cold pages, so hopefully we will be on the other end of the LRU list from any active allocations. > This all sounds like future work which can be built on top of this work. Actually I was kind of thinking about this the other way. The simple spin approach is a good first step. If we have a bit or two in the page that tells us if the page is available or not we could then follow-up with optimizations to only allocate either a young or idle page and doesn't bother with pages being "hinted", at least in the first pass. As it currently stands we are only really performing hints on higher order pages anyway so if we happen to encounter a slight delay under memory pressure it probably wouldn't be that noticeable versus the memory system having to go through and try to compact things from some lower order pages. In my mind us introducing a delay in memory allocation in the case of a collision would be preferable versus us triggering allocation failures.
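
For clarity, the "spin in arch_alloc_page" idea being debated would amount to something like the sketch below. This is purely illustrative: PageHinting() is a hypothetical page bit that does not exist upstream, and a real implementation would have to coordinate with whatever code sets the bit while a hint is in flight:

	/* hypothetical: PageHinting() is set while a page is being hinted */
	static inline void arch_alloc_page(struct page *page, int order)
	{
		unsigned int i;

		for (i = 0; i < (1U << order); i++) {
			/* allocation waits (spins) until the hint completes */
			while (PageHinting(page + i))
				cpu_relax();
		}
	}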
On 07.03.19 22:32, Alexander Duyck wrote: > On Thu, Mar 7, 2019 at 11:30 AM David Hildenbrand <david@redhat.com> wrote: >> >> On 07.03.19 20:23, Nitesh Narayan Lal wrote: >>> >>> On 3/7/19 1:30 PM, Alexander Duyck wrote: >>>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>> This patch enables the kernel to scan the per cpu array >>>>> which carries head pages from the buddy free list of order >>>>> FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by >>>>> guest_free_page_hinting(). >>>>> guest_free_page_hinting() scans the entire per cpu array by >>>>> acquiring a zone lock corresponding to the pages which are >>>>> being scanned. If the page is still free and present in the >>>>> buddy it tries to isolate the page and adds it to a >>>>> dynamically allocated array. >>>>> >>>>> Once this scanning process is complete and if there are any >>>>> isolated pages added to the dynamically allocated array >>>>> guest_free_page_report() is invoked. However, before this the >>>>> per-cpu array index is reset so that it can continue capturing >>>>> the pages from buddy free list. >>>>> >>>>> In this patch guest_free_page_report() simply releases the pages back >>>>> to the buddy by using __free_one_page() >>>>> >>>>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> >>>> I'm pretty sure this code is not thread safe and has a few various issues. >>>> >>>>> --- >>>>> include/linux/page_hinting.h | 5 ++ >>>>> mm/page_alloc.c | 2 +- >>>>> virt/kvm/page_hinting.c | 154 +++++++++++++++++++++++++++++++++++ >>>>> 3 files changed, 160 insertions(+), 1 deletion(-) >>>>> >>>>> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h >>>>> index 90254c582789..d554a2581826 100644 >>>>> --- a/include/linux/page_hinting.h >>>>> +++ b/include/linux/page_hinting.h >>>>> @@ -13,3 +13,8 @@ >>>>> >>>>> void guest_free_page_enqueue(struct page *page, int order); >>>>> void guest_free_page_try_hinting(void); >>>>> +extern int __isolate_free_page(struct page *page, unsigned int order); >>>>> +extern void __free_one_page(struct page *page, unsigned long pfn, >>>>> + struct zone *zone, unsigned int order, >>>>> + int migratetype); >>>>> +void release_buddy_pages(void *obj_to_free, int entries); >>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c >>>>> index 684d047f33ee..d38b7eea207b 100644 >>>>> --- a/mm/page_alloc.c >>>>> +++ b/mm/page_alloc.c >>>>> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy, >>>>> * -- nyc >>>>> */ >>>>> >>>>> -static inline void __free_one_page(struct page *page, >>>>> +inline void __free_one_page(struct page *page, >>>>> unsigned long pfn, >>>>> struct zone *zone, unsigned int order, >>>>> int migratetype) >>>>> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c >>>>> index 48b4b5e796b0..9885b372b5a9 100644 >>>>> --- a/virt/kvm/page_hinting.c >>>>> +++ b/virt/kvm/page_hinting.c >>>>> @@ -1,5 +1,9 @@ >>>>> #include <linux/mm.h> >>>>> #include <linux/page_hinting.h> >>>>> +#include <linux/page_ref.h> >>>>> +#include <linux/kvm_host.h> >>>>> +#include <linux/kernel.h> >>>>> +#include <linux/sort.h> >>>>> >>>>> /* >>>>> * struct guest_free_pages- holds array of guest freed PFN's along with an >>>>> @@ -16,6 +20,54 @@ struct guest_free_pages { >>>>> >>>>> DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj); >>>>> >>>>> +/* >>>>> + * struct guest_isolated_pages- holds the buddy isolated pages which are >>>>> + * supposed to be freed by the host. 
>>>>> + * @pfn: page frame number for the isolated page. >>>>> + * @order: order of the isolated page. >>>>> + */ >>>>> +struct guest_isolated_pages { >>>>> + unsigned long pfn; >>>>> + unsigned int order; >>>>> +}; >>>>> + >>>>> +void release_buddy_pages(void *obj_to_free, int entries) >>>>> +{ >>>>> + int i = 0; >>>>> + int mt = 0; >>>>> + struct guest_isolated_pages *isolated_pages_obj = obj_to_free; >>>>> + >>>>> + while (i < entries) { >>>>> + struct page *page = pfn_to_page(isolated_pages_obj[i].pfn); >>>>> + >>>>> + mt = get_pageblock_migratetype(page); >>>>> + __free_one_page(page, page_to_pfn(page), page_zone(page), >>>>> + isolated_pages_obj[i].order, mt); >>>>> + i++; >>>>> + } >>>>> + kfree(isolated_pages_obj); >>>>> +} >>>> You shouldn't be accessing __free_one_page without holding the zone >>>> lock for the page. You might consider confining yourself to one zone >>>> worth of hints at a time. Then you can acquire the lock once, and then >>>> return the memory you have freed. >>> That is correct. >>>> >>>> This is one of the reasons why I am thinking maybe a bit in the page >>>> and then spinning on that bit in arch_alloc_page might be a nice way >>>> to get around this. Then you only have to take the zone lock when you >>>> are finding the pages you want to hint on and setting the bit >>>> indicating they are mid hint. Otherwise you have to take the zone lock >>>> to pull pages out, and to put them back in and the likelihood of a >>>> lock collision is much higher. >>> Do you think adding a new flag to the page structure will be acceptable? >> >> My lesson learned: forget it. If (at all) reuse some other one that >> might be safe in that context. Hard to tell if that is even possible and >> will be accepted upstream. > > I was thinking we could probably just resort to reuse. Essentially > what we are looking at doing is idle page tracking so my thought is to > see if we can just reuse those bits in the buddy allocator. Then we > would essentially have 3 stages, young, "hinting", and idle. Haven't thought this through, but I wonder if 2 stages would even be enough right now, But well, you have a point that idle *might* reduce the amount of pages hinted multiple time (although that might still happen when we want to hint with different page sizes / buddy merging). > >> Spinning is not the solution. What you would want is the buddy to >> actually skip over these pages and only try to use them (-> spin) when >> OOM. Core mm changes (see my other reply). > > It is more of a workaround. Ideally we should almost never encounter > this anyway as what we really want to be doing is performing hints on > cold pages, so hopefully we will be on the other end of the LRU list > from any active allocations. > >> This all sounds like future work which can be built on top of this work. > > Actually I was kind of thinking about this the other way. The simple > spin approach is a good first step. If we have a bit or two in the > page that tells us if the page is available or not we could then > follow-up with optimizations to only allocate either a young or idle > page and doesn't bother with pages being "hinted", at least in the > first pass. > > As it currently stands we are only really performing hints on higher > order pages anyway so if we happen to encounter a slight delay under > memory pressure it probably wouldn't be that noticeable versus the Well, the issue is that with your approach one pending hinting request might block all other VCPUs in the worst case until hitning is done. 
Something that is not possible with Nitesh's approach. It will never block allocation paths (well apart from the zone lock and the OOM thingy). And I think this is important. It is a fundamental design problem until we fix core mm. Your other synchronous approach doesn't have this problem either. > memory system having to go through and try to compact things from some > lower order pages. In my mind us introducing a delay in memory > allocation in the case of a collision would be preferable versus us > triggering allocation failures. > Valid points. I think the way to see which approach is the better starting point is to have a version that does what you propose and compare the two. Essentially to find out how severe this "blocking other VCPUs" thingy can be.
On Thu, Mar 7, 2019 at 1:40 PM David Hildenbrand <david@redhat.com> wrote: > > On 07.03.19 22:32, Alexander Duyck wrote: > > On Thu, Mar 7, 2019 at 11:30 AM David Hildenbrand <david@redhat.com> wrote: > >> > >> On 07.03.19 20:23, Nitesh Narayan Lal wrote: > >>> > >>> On 3/7/19 1:30 PM, Alexander Duyck wrote: > >>>> On Wed, Mar 6, 2019 at 7:51 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >>>>> This patch enables the kernel to scan the per cpu array > >>>>> which carries head pages from the buddy free list of order > >>>>> FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by > >>>>> guest_free_page_hinting(). > >>>>> guest_free_page_hinting() scans the entire per cpu array by > >>>>> acquiring a zone lock corresponding to the pages which are > >>>>> being scanned. If the page is still free and present in the > >>>>> buddy it tries to isolate the page and adds it to a > >>>>> dynamically allocated array. > >>>>> > >>>>> Once this scanning process is complete and if there are any > >>>>> isolated pages added to the dynamically allocated array > >>>>> guest_free_page_report() is invoked. However, before this the > >>>>> per-cpu array index is reset so that it can continue capturing > >>>>> the pages from buddy free list. > >>>>> > >>>>> In this patch guest_free_page_report() simply releases the pages back > >>>>> to the buddy by using __free_one_page() > >>>>> > >>>>> Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com> > >>>> I'm pretty sure this code is not thread safe and has a few various issues. > >>>> > >>>>> --- > >>>>> include/linux/page_hinting.h | 5 ++ > >>>>> mm/page_alloc.c | 2 +- > >>>>> virt/kvm/page_hinting.c | 154 +++++++++++++++++++++++++++++++++++ > >>>>> 3 files changed, 160 insertions(+), 1 deletion(-) > >>>>> > >>>>> diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h > >>>>> index 90254c582789..d554a2581826 100644 > >>>>> --- a/include/linux/page_hinting.h > >>>>> +++ b/include/linux/page_hinting.h > >>>>> @@ -13,3 +13,8 @@ > >>>>> > >>>>> void guest_free_page_enqueue(struct page *page, int order); > >>>>> void guest_free_page_try_hinting(void); > >>>>> +extern int __isolate_free_page(struct page *page, unsigned int order); > >>>>> +extern void __free_one_page(struct page *page, unsigned long pfn, > >>>>> + struct zone *zone, unsigned int order, > >>>>> + int migratetype); > >>>>> +void release_buddy_pages(void *obj_to_free, int entries); > >>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c > >>>>> index 684d047f33ee..d38b7eea207b 100644 > >>>>> --- a/mm/page_alloc.c > >>>>> +++ b/mm/page_alloc.c > >>>>> @@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy, > >>>>> * -- nyc > >>>>> */ > >>>>> > >>>>> -static inline void __free_one_page(struct page *page, > >>>>> +inline void __free_one_page(struct page *page, > >>>>> unsigned long pfn, > >>>>> struct zone *zone, unsigned int order, > >>>>> int migratetype) > >>>>> diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c > >>>>> index 48b4b5e796b0..9885b372b5a9 100644 > >>>>> --- a/virt/kvm/page_hinting.c > >>>>> +++ b/virt/kvm/page_hinting.c > >>>>> @@ -1,5 +1,9 @@ > >>>>> #include <linux/mm.h> > >>>>> #include <linux/page_hinting.h> > >>>>> +#include <linux/page_ref.h> > >>>>> +#include <linux/kvm_host.h> > >>>>> +#include <linux/kernel.h> > >>>>> +#include <linux/sort.h> > >>>>> > >>>>> /* > >>>>> * struct guest_free_pages- holds array of guest freed PFN's along with an > >>>>> @@ -16,6 +20,54 @@ struct guest_free_pages { > >>>>> > >>>>> 
DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj); > >>>>> > >>>>> +/* > >>>>> + * struct guest_isolated_pages- holds the buddy isolated pages which are > >>>>> + * supposed to be freed by the host. > >>>>> + * @pfn: page frame number for the isolated page. > >>>>> + * @order: order of the isolated page. > >>>>> + */ > >>>>> +struct guest_isolated_pages { > >>>>> + unsigned long pfn; > >>>>> + unsigned int order; > >>>>> +}; > >>>>> + > >>>>> +void release_buddy_pages(void *obj_to_free, int entries) > >>>>> +{ > >>>>> + int i = 0; > >>>>> + int mt = 0; > >>>>> + struct guest_isolated_pages *isolated_pages_obj = obj_to_free; > >>>>> + > >>>>> + while (i < entries) { > >>>>> + struct page *page = pfn_to_page(isolated_pages_obj[i].pfn); > >>>>> + > >>>>> + mt = get_pageblock_migratetype(page); > >>>>> + __free_one_page(page, page_to_pfn(page), page_zone(page), > >>>>> + isolated_pages_obj[i].order, mt); > >>>>> + i++; > >>>>> + } > >>>>> + kfree(isolated_pages_obj); > >>>>> +} > >>>> You shouldn't be accessing __free_one_page without holding the zone > >>>> lock for the page. You might consider confining yourself to one zone > >>>> worth of hints at a time. Then you can acquire the lock once, and then > >>>> return the memory you have freed. > >>> That is correct. > >>>> > >>>> This is one of the reasons why I am thinking maybe a bit in the page > >>>> and then spinning on that bit in arch_alloc_page might be a nice way > >>>> to get around this. Then you only have to take the zone lock when you > >>>> are finding the pages you want to hint on and setting the bit > >>>> indicating they are mid hint. Otherwise you have to take the zone lock > >>>> to pull pages out, and to put them back in and the likelihood of a > >>>> lock collision is much higher. > >>> Do you think adding a new flag to the page structure will be acceptable? > >> > >> My lesson learned: forget it. If (at all) reuse some other one that > >> might be safe in that context. Hard to tell if that is even possible and > >> will be accepted upstream. > > > > I was thinking we could probably just resort to reuse. Essentially > > what we are looking at doing is idle page tracking so my thought is to > > see if we can just reuse those bits in the buddy allocator. Then we > > would essentially have 3 stages, young, "hinting", and idle. > > Haven't thought this through, but I wonder if 2 stages would even be > enough right now, But well, you have a point that idle *might* reduce > the amount of pages hinted multiple time (although that might still > happen when we want to hint with different page sizes / buddy merging). Splitting wouldn't be so much an issue as merging. The problem is if you are merging pages you have to assume the page is no longer hinted, and need to hint for the new higher order page. The worst case scenerio would be a page that is hinted, merged, split, and then has to be hinted again because the information on hit being hinted is lost. > > > >> Spinning is not the solution. What you would want is the buddy to > >> actually skip over these pages and only try to use them (-> spin) when > >> OOM. Core mm changes (see my other reply). > > > > It is more of a workaround. Ideally we should almost never encounter > > this anyway as what we really want to be doing is performing hints on > > cold pages, so hopefully we will be on the other end of the LRU list > > from any active allocations. > > > >> This all sounds like future work which can be built on top of this work. 
> > > > Actually I was kind of thinking about this the other way. The simple > > spin approach is a good first step. If we have a bit or two in the > > page that tells us if the page is available or not we could then > > follow-up with optimizations to only allocate either a young or idle > > page and doesn't bother with pages being "hinted", at least in the > > first pass. > > > > As it currently stands we are only really performing hints on higher > > order pages anyway so if we happen to encounter a slight delay under > > memory pressure it probably wouldn't be that noticeable versus the > > Well, the issue is that with your approach one pending hinting request > might block all other VCPUs in the worst case until hitning is done. > Something that is not possible with Niteshs approach. It will never > block allocation paths (well apart from the zone lock and the OOM > thingy). And I think this is important. > > It is a fundamental design problem until we fix core mm. Your other > synchronous approach doesn't have this problem either. Even with the approach I had there are still possibilities for all VCPUs eventually becoming hung if the host is holding the write lock on the mmap semaphore. My initial thought was to try and reduce the amount of time we need to sit on the zone lock since we have to hold it to isolate the pages, and then to put them back in the buddy. However the idle bits approach will be just as difficult to deal with due to potential for splits and merges while performing the hint. > > memory system having to go through and try to compact things from some > > lower order pages. In my mind us introducing a delay in memory > > allocation in the case of a collision would be preferable versus us > > triggering allocation failures. > > > > Valid points, I think to see which approach would be the better starting > point is to have a version that does what you propose and compare it. > Essentially to find out how severe this "blocking other VCPUs" thingy > can be. I figure if nothing else the current solution probably is just in need of a few tweaks. In my mind the simplest solution is still to have a single bit somewhere for tracking what pages we have hinted on on which ones we haven't. However we could probably skip the second bit and just put the pages in isolation while we are performing the hint and that would get rid of the need for a second bit. With us hinting currently on MAX_ORDER - 1 pages only that actually takes care of the risk of a merge really wiping out any data about what has been hinted on and what hasn't. The only other thing I still want to try and see if I can do is to add a jiffies value to the page private data in the case of the buddy pages. With that we could track the age of the page so it becomes easier to only target pages that are truly going cold rather than trying to grab pages that were added to the freelist recently.
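
To illustrate the aging idea, one possible shape is sketched below. The names and threshold are invented here; where the timestamp would actually live (page private data, page_ext, or a side array) is exactly the open question:

	#define HINT_MIN_AGE	msecs_to_jiffies(1000)	/* arbitrary threshold */

	/* hypothetical per-entry record kept alongside the pfn and order */
	struct hint_entry {
		unsigned long pfn;
		unsigned int order;
		unsigned long queued;	/* jiffies when the page was captured */
	};

	/* only hint pages that have stayed free for a while */
	static bool hint_entry_cold(const struct hint_entry *e)
	{
		return time_after(jiffies, e->queued + HINT_MIN_AGE);
	}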
On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > With us hinting currently on MAX_ORDER - 1 pages only that actually > takes care of the risk of a merge really wiping out any data about > what has been hinted on and what hasn't. Oh nice. I had a feeling MAX_ORDER - 1 specifically would turn out to be a better choice than something related to THP. Now there's an actual reason why this makes things easier!
On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > The only other thing I still want to try and see if I can do is to add > a jiffies value to the page private data in the case of the buddy > pages. Actually there's one extra thing I think we should do, and that is make sure we do not leave less than X% of the free memory at a time. This way the chances of triggering an OOM are lower. > With that we could track the age of the page so it becomes > easier to only target pages that are truly going cold rather than > trying to grab pages that were added to the freelist recently. I like that but I have a vague memory of discussing this with Rik van Riel and him saying it's actually better to take away recently used ones. Can't see why that would be but maybe I remember wrong. Rik - am I just confused?
On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: > > On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > > The only other thing I still want to try and see if I can do is to add > > a jiffies value to the page private data in the case of the buddy > > pages. > > Actually there's one extra thing I think we should do, and that is make > sure we do not leave less than X% off the free memory at a time. > This way chances of triggering an OOM are lower. If nothing else we could probably look at doing a watermark of some sort so we have to have X amount of memory free but not hinted before we will start providing the hints. It would just be a matter of tracking how much memory we have hinted on versus the amount of memory that has been pulled from that pool. It is another reason why we probably want a bit in the buddy pages somewhere to indicate if a page has been hinted or not as we can then use that to determine if we have to account for it in the statistics. > > With that we could track the age of the page so it becomes > > easier to only target pages that are truly going cold rather than > > trying to grab pages that were added to the freelist recently. > > I like that but I have a vague memory of discussing this with Rik van > Riel and him saying it's actually better to take away recently used > ones. Can't see why would that be but maybe I remember wrong. Rik - am I > just confused? It is probably to cut down on the need for disk writes in the case of swap. If that is the case it ends up being a trade off. The sooner we hint the less likely it is that we will need to write a given page to disk. However the sooner we hint, the more likely it is we will need to trigger a page fault and pull back in a zero page to populate the last page we were working on. The sweet spot will be that period of time that is somewhere in between so we don't trigger unnecessary page faults and we don't need to perform additional swap reads/writes.
On Fri, Mar 08, 2019 at 10:06:14AM -0800, Alexander Duyck wrote: > On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: > > > > On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > > > The only other thing I still want to try and see if I can do is to add > > > a jiffies value to the page private data in the case of the buddy > > > pages. > > > > Actually there's one extra thing I think we should do, and that is make > > sure we do not leave less than X% off the free memory at a time. > > This way chances of triggering an OOM are lower. > > If nothing else we could probably look at doing a watermark of some > sort so we have to have X amount of memory free but not hinted before > we will start providing the hints. It would just be a matter of > tracking how much memory we have hinted on versus the amount of memory > that has been pulled from that pool. It is another reason why we > probably want a bit in the buddy pages somewhere to indicate if a page > has been hinted or not as we can then use that to determine if we have > to account for it in the statistics. > > > > With that we could track the age of the page so it becomes > > > easier to only target pages that are truly going cold rather than > > > trying to grab pages that were added to the freelist recently. > > > > I like that but I have a vague memory of discussing this with Rik van > > Riel and him saying it's actually better to take away recently used > > ones. Can't see why would that be but maybe I remember wrong. Rik - am I > > just confused? > > It is probably to cut down on the need for disk writes in the case of > swap. If that is the case it ends up being a trade off. > > The sooner we hint the less likely it is that we will need to write a > given page to disk. However the sooner we hint, the more likely it is > we will need to trigger a page fault and pull back in a zero page to > populate the last page we were working on. The sweet spot will be that > period of time that is somewhere in between so we don't trigger > unnecessary page faults and we don't need to perform additional swap > reads/writes. Right but the question is - is it better to hint on least recently used, or most recently used pages? It looks like LRU should be better, but I vaguely rememeber there were arguments for why most recently used might be better. Can't figure out why, maybe I am remembering wrong.
On 3/8/19 1:06 PM, Alexander Duyck wrote: > On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: >> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: >>> The only other thing I still want to try and see if I can do is to add >>> a jiffies value to the page private data in the case of the buddy >>> pages. >> Actually there's one extra thing I think we should do, and that is make >> sure we do not leave less than X% off the free memory at a time. >> This way chances of triggering an OOM are lower. > If nothing else we could probably look at doing a watermark of some > sort so we have to have X amount of memory free but not hinted before > we will start providing the hints. It would just be a matter of > tracking how much memory we have hinted on versus the amount of memory > that has been pulled from that pool. This is to avoid false OOM in the guest? > It is another reason why we > probably want a bit in the buddy pages somewhere to indicate if a page > has been hinted or not as we can then use that to determine if we have > to account for it in the statistics. The one benefit which I can see of having an explicit bit is that it will help us to have a single hook away from the hot path within buddy merging code (just like your arch_merge_page) and still avoid duplicate hints while releasing pages. I still have to check PG_idle and PG_young which you mentioned but I don't think we can reuse any existing bits. If we really want to have something like a watermark, then can't we use zone->free_pages before isolating to see how many free pages are there and put a threshold on it? (__isolate_free_page() does a similar thing but it does that on per request basis). > >>> With that we could track the age of the page so it becomes >>> easier to only target pages that are truly going cold rather than >>> trying to grab pages that were added to the freelist recently. >> I like that but I have a vague memory of discussing this with Rik van >> Riel and him saying it's actually better to take away recently used >> ones. Can't see why would that be but maybe I remember wrong. Rik - am I >> just confused? > It is probably to cut down on the need for disk writes in the case of > swap. If that is the case it ends up being a trade off. > > The sooner we hint the less likely it is that we will need to write a > given page to disk. However the sooner we hint, the more likely it is > we will need to trigger a page fault and pull back in a zero page to > populate the last page we were working on. The sweet spot will be that > period of time that is somewhere in between so we don't trigger > unnecessary page faults and we don't need to perform additional swap > reads/writes.
On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > > On 3/8/19 1:06 PM, Alexander Duyck wrote: > > On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: > >> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > >>> The only other thing I still want to try and see if I can do is to add > >>> a jiffies value to the page private data in the case of the buddy > >>> pages. > >> Actually there's one extra thing I think we should do, and that is make > >> sure we do not leave less than X% off the free memory at a time. > >> This way chances of triggering an OOM are lower. > > If nothing else we could probably look at doing a watermark of some > > sort so we have to have X amount of memory free but not hinted before > > we will start providing the hints. It would just be a matter of > > tracking how much memory we have hinted on versus the amount of memory > > that has been pulled from that pool. > This is to avoid false OOM in the guest? Partially, though it would still be possible. Basically it would just be a way of determining when we have hinted "enough". Basically it doesn't do us much good to be hinting on free memory if the guest is already constrained and just going to reallocate the memory shortly after we hinted on it. The idea is with a watermark we can avoid hinting until we start having pages that are actually going to stay free for a while. > > It is another reason why we > > probably want a bit in the buddy pages somewhere to indicate if a page > > has been hinted or not as we can then use that to determine if we have > > to account for it in the statistics. > > The one benefit which I can see of having an explicit bit is that it > will help us to have a single hook away from the hot path within buddy > merging code (just like your arch_merge_page) and still avoid duplicate > hints while releasing pages. > > I still have to check PG_idle and PG_young which you mentioned but I > don't think we can reuse any existing bits. Those are bits that are already there for 64b. I think those exist in the page extension for 32b systems. If I am not mistaken they are only used in VMA mapped memory. What I was getting at is that those are the bits we could think about reusing. > If we really want to have something like a watermark, then can't we use > zone->free_pages before isolating to see how many free pages are there > and put a threshold on it? (__isolate_free_page() does a similar thing > but it does that on per request basis). Right. That is only part of it though since that tells you how many free pages are there. But how many of those free pages are hinted? That is the part we would need to track separately and then then compare to free_pages to determine if we need to start hinting on more memory or not. > > > >>> With that we could track the age of the page so it becomes > >>> easier to only target pages that are truly going cold rather than > >>> trying to grab pages that were added to the freelist recently. > >> I like that but I have a vague memory of discussing this with Rik van > >> Riel and him saying it's actually better to take away recently used > >> ones. Can't see why would that be but maybe I remember wrong. Rik - am I > >> just confused? > > It is probably to cut down on the need for disk writes in the case of > > swap. If that is the case it ends up being a trade off. > > > > The sooner we hint the less likely it is that we will need to write a > > given page to disk. 
However the sooner we hint, the more likely it is > > we will need to trigger a page fault and pull back in a zero page to > > populate the last page we were working on. The sweet spot will be that > > period of time that is somewhere in between so we don't trigger > > unnecessary page faults and we don't need to perform additional swap > > reads/writes. > -- > Regards > Nitesh >
On 3/8/19 2:25 PM, Alexander Duyck wrote: > On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> >> On 3/8/19 1:06 PM, Alexander Duyck wrote: >>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: >>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: >>>>> The only other thing I still want to try and see if I can do is to add >>>>> a jiffies value to the page private data in the case of the buddy >>>>> pages. >>>> Actually there's one extra thing I think we should do, and that is make >>>> sure we do not leave less than X% off the free memory at a time. >>>> This way chances of triggering an OOM are lower. >>> If nothing else we could probably look at doing a watermark of some >>> sort so we have to have X amount of memory free but not hinted before >>> we will start providing the hints. It would just be a matter of >>> tracking how much memory we have hinted on versus the amount of memory >>> that has been pulled from that pool. >> This is to avoid false OOM in the guest? > Partially, though it would still be possible. Basically it would just > be a way of determining when we have hinted "enough". Basically it > doesn't do us much good to be hinting on free memory if the guest is > already constrained and just going to reallocate the memory shortly > after we hinted on it. The idea is with a watermark we can avoid > hinting until we start having pages that are actually going to stay > free for a while. > >>> It is another reason why we >>> probably want a bit in the buddy pages somewhere to indicate if a page >>> has been hinted or not as we can then use that to determine if we have >>> to account for it in the statistics. >> The one benefit which I can see of having an explicit bit is that it >> will help us to have a single hook away from the hot path within buddy >> merging code (just like your arch_merge_page) and still avoid duplicate >> hints while releasing pages. >> >> I still have to check PG_idle and PG_young which you mentioned but I >> don't think we can reuse any existing bits. > Those are bits that are already there for 64b. I think those exist in > the page extension for 32b systems. If I am not mistaken they are only > used in VMA mapped memory. What I was getting at is that those are the > bits we could think about reusing. > >> If we really want to have something like a watermark, then can't we use >> zone->free_pages before isolating to see how many free pages are there >> and put a threshold on it? (__isolate_free_page() does a similar thing >> but it does that on per request basis). > Right. That is only part of it though since that tells you how many > free pages are there. But how many of those free pages are hinted? > That is the part we would need to track separately and then then > compare to free_pages to determine if we need to start hinting on more > memory or not. Only pages which are isolated will be hinted, and once a page is isolated it will not be counted in the zone free pages. Feel free to correct me if I am wrong. If I am understanding it correctly you only want to hint the idle pages, is that right? > >>>>> With that we could track the age of the page so it becomes >>>>> easier to only target pages that are truly going cold rather than >>>>> trying to grab pages that were added to the freelist recently. >>>> I like that but I have a vague memory of discussing this with Rik van >>>> Riel and him saying it's actually better to take away recently used >>>> ones. 
Can't see why would that be but maybe I remember wrong. Rik - am I >>>> just confused? >>> It is probably to cut down on the need for disk writes in the case of >>> swap. If that is the case it ends up being a trade off. >>> >>> The sooner we hint the less likely it is that we will need to write a >>> given page to disk. However the sooner we hint, the more likely it is >>> we will need to trigger a page fault and pull back in a zero page to >>> populate the last page we were working on. The sweet spot will be that >>> period of time that is somewhere in between so we don't trigger >>> unnecessary page faults and we don't need to perform additional swap >>> reads/writes. >> -- >> Regards >> Nitesh >>
On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > On 3/8/19 2:25 PM, Alexander Duyck wrote: > > On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >> > >> On 3/8/19 1:06 PM, Alexander Duyck wrote: > >>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: > >>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > >>>>> The only other thing I still want to try and see if I can do is to add > >>>>> a jiffies value to the page private data in the case of the buddy > >>>>> pages. > >>>> Actually there's one extra thing I think we should do, and that is make > >>>> sure we do not leave less than X% off the free memory at a time. > >>>> This way chances of triggering an OOM are lower. > >>> If nothing else we could probably look at doing a watermark of some > >>> sort so we have to have X amount of memory free but not hinted before > >>> we will start providing the hints. It would just be a matter of > >>> tracking how much memory we have hinted on versus the amount of memory > >>> that has been pulled from that pool. > >> This is to avoid false OOM in the guest? > > Partially, though it would still be possible. Basically it would just > > be a way of determining when we have hinted "enough". Basically it > > doesn't do us much good to be hinting on free memory if the guest is > > already constrained and just going to reallocate the memory shortly > > after we hinted on it. The idea is with a watermark we can avoid > > hinting until we start having pages that are actually going to stay > > free for a while. > > > >>> It is another reason why we > >>> probably want a bit in the buddy pages somewhere to indicate if a page > >>> has been hinted or not as we can then use that to determine if we have > >>> to account for it in the statistics. > >> The one benefit which I can see of having an explicit bit is that it > >> will help us to have a single hook away from the hot path within buddy > >> merging code (just like your arch_merge_page) and still avoid duplicate > >> hints while releasing pages. > >> > >> I still have to check PG_idle and PG_young which you mentioned but I > >> don't think we can reuse any existing bits. > > Those are bits that are already there for 64b. I think those exist in > > the page extension for 32b systems. If I am not mistaken they are only > > used in VMA mapped memory. What I was getting at is that those are the > > bits we could think about reusing. > > > >> If we really want to have something like a watermark, then can't we use > >> zone->free_pages before isolating to see how many free pages are there > >> and put a threshold on it? (__isolate_free_page() does a similar thing > >> but it does that on per request basis). > > Right. That is only part of it though since that tells you how many > > free pages are there. But how many of those free pages are hinted? > > That is the part we would need to track separately and then then > > compare to free_pages to determine if we need to start hinting on more > > memory or not. > Only pages which are isolated will be hinted, and once a page is > isolated it will not be counted in the zone free pages. > Feel free to correct me if I am wrong. You are correct up to here. When we isolate the page it isn't counted against the free pages. However after we complete the hint we end up taking it out of isolation and returning it to the "free" state, so it will be counted against the free pages. 
> If I am understanding it correctly you only want to hint the idle pages, > is that right? Getting back to the ideas from our earlier discussion, we had 3 stages for things. Free but not hinted, isolated due to hinting, and free and hinted. So what we would need to do is identify the size of the first pool that is free and not hinted by knowing the total number of free pages, and then subtract the size of the pages that are hinted and still free. > > > >>>>> With that we could track the age of the page so it becomes > >>>>> easier to only target pages that are truly going cold rather than > >>>>> trying to grab pages that were added to the freelist recently. > >>>> I like that but I have a vague memory of discussing this with Rik van > >>>> Riel and him saying it's actually better to take away recently used > >>>> ones. Can't see why would that be but maybe I remember wrong. Rik - am I > >>>> just confused? > >>> It is probably to cut down on the need for disk writes in the case of > >>> swap. If that is the case it ends up being a trade off. > >>> > >>> The sooner we hint the less likely it is that we will need to write a > >>> given page to disk. However the sooner we hint, the more likely it is > >>> we will need to trigger a page fault and pull back in a zero page to > >>> populate the last page we were working on. The sweet spot will be that > >>> period of time that is somewhere in between so we don't trigger > >>> unnecessary page faults and we don't need to perform additional swap > >>> reads/writes. > >> -- > >> Regards > >> Nitesh > >> > -- > Regards > Nitesh >
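The accounting Alexander describes for the three stages could reduce to something like the sketch below, assuming a hypothetical counter of pages that are hinted and still free (no such counter exists today; the hinting code would have to maintain it).

#include <linux/mmzone.h>
#include <linux/vmstat.h>

/*
 * free-but-not-hinted = total free - (hinted and still free).
 * Only start another hinting pass while that pool exceeds the threshold.
 */
static bool zone_wants_more_hinting(struct zone *zone,
				    unsigned long hinted_free_pages,
				    unsigned long threshold)
{
	unsigned long free = zone_page_state(zone, NR_FREE_PAGES);

	if (free <= hinted_free_pages)
		return false;

	return (free - hinted_free_pages) > threshold;
}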
On 3/8/19 4:39 PM, Alexander Duyck wrote: > On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> On 3/8/19 2:25 PM, Alexander Duyck wrote: >>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> On 3/8/19 1:06 PM, Alexander Duyck wrote: >>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: >>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: >>>>>>> The only other thing I still want to try and see if I can do is to add >>>>>>> a jiffies value to the page private data in the case of the buddy >>>>>>> pages. >>>>>> Actually there's one extra thing I think we should do, and that is make >>>>>> sure we do not leave less than X% off the free memory at a time. >>>>>> This way chances of triggering an OOM are lower. >>>>> If nothing else we could probably look at doing a watermark of some >>>>> sort so we have to have X amount of memory free but not hinted before >>>>> we will start providing the hints. It would just be a matter of >>>>> tracking how much memory we have hinted on versus the amount of memory >>>>> that has been pulled from that pool. >>>> This is to avoid false OOM in the guest? >>> Partially, though it would still be possible. Basically it would just >>> be a way of determining when we have hinted "enough". Basically it >>> doesn't do us much good to be hinting on free memory if the guest is >>> already constrained and just going to reallocate the memory shortly >>> after we hinted on it. The idea is with a watermark we can avoid >>> hinting until we start having pages that are actually going to stay >>> free for a while. >>> >>>>> It is another reason why we >>>>> probably want a bit in the buddy pages somewhere to indicate if a page >>>>> has been hinted or not as we can then use that to determine if we have >>>>> to account for it in the statistics. >>>> The one benefit which I can see of having an explicit bit is that it >>>> will help us to have a single hook away from the hot path within buddy >>>> merging code (just like your arch_merge_page) and still avoid duplicate >>>> hints while releasing pages. >>>> >>>> I still have to check PG_idle and PG_young which you mentioned but I >>>> don't think we can reuse any existing bits. >>> Those are bits that are already there for 64b. I think those exist in >>> the page extension for 32b systems. If I am not mistaken they are only >>> used in VMA mapped memory. What I was getting at is that those are the >>> bits we could think about reusing. >>> >>>> If we really want to have something like a watermark, then can't we use >>>> zone->free_pages before isolating to see how many free pages are there >>>> and put a threshold on it? (__isolate_free_page() does a similar thing >>>> but it does that on per request basis). >>> Right. That is only part of it though since that tells you how many >>> free pages are there. But how many of those free pages are hinted? >>> That is the part we would need to track separately and then then >>> compare to free_pages to determine if we need to start hinting on more >>> memory or not. >> Only pages which are isolated will be hinted, and once a page is >> isolated it will not be counted in the zone free pages. >> Feel free to correct me if I am wrong. > You are correct up to here. When we isolate the page it isn't counted > against the free pages. 
However after we complete the hint we end up > taking it out of isolation and returning it to the "free" state, so it > will be counted against the free pages. > >> If I am understanding it correctly you only want to hint the idle pages, >> is that right? > Getting back to the ideas from our earlier discussion, we had 3 stages > for things. Free but not hinted, isolated due to hinting, and free and > hinted. So what we would need to do is identify the size of the first > pool that is free and not hinted by knowing the total number of free > pages, and then subtract the size of the pages that are hinted and > still free. To summarize, for now, I think it makes sense to stick with the current approach as this way we can avoid any locking in the allocation path and reduce the number of hypercalls for a bunch of MAX_ORDER - 1 page. For the next step other than the comments received in the code and what I mentioned in the cover email, I would like to do the following: 1. Explore the watermark idea suggested by Alex and bring down memhog execution time if possible. 2. Benchmark hinting v/s non-hinting more extensively. Let me know if you have any specific suggestions in terms of the tools I can run to do the same. (I am planning to run atleast netperf, hackbench and stress for this). > >>>>>>> With that we could track the age of the page so it becomes >>>>>>> easier to only target pages that are truly going cold rather than >>>>>>> trying to grab pages that were added to the freelist recently. >>>>>> I like that but I have a vague memory of discussing this with Rik van >>>>>> Riel and him saying it's actually better to take away recently used >>>>>> ones. Can't see why would that be but maybe I remember wrong. Rik - am I >>>>>> just confused? >>>>> It is probably to cut down on the need for disk writes in the case of >>>>> swap. If that is the case it ends up being a trade off. >>>>> >>>>> The sooner we hint the less likely it is that we will need to write a >>>>> given page to disk. However the sooner we hint, the more likely it is >>>>> we will need to trigger a page fault and pull back in a zero page to >>>>> populate the last page we were working on. The sweet spot will be that >>>>> period of time that is somewhere in between so we don't trigger >>>>> unnecessary page faults and we don't need to perform additional swap >>>>> reads/writes. >>>> -- >>>> Regards >>>> Nitesh >>>> >> -- >> Regards >> Nitesh >>
On Tue, Mar 12, 2019 at 12:46 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > > On 3/8/19 4:39 PM, Alexander Duyck wrote: > > On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >> On 3/8/19 2:25 PM, Alexander Duyck wrote: > >>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >>>> On 3/8/19 1:06 PM, Alexander Duyck wrote: > >>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: > >>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > >>>>>>> The only other thing I still want to try and see if I can do is to add > >>>>>>> a jiffies value to the page private data in the case of the buddy > >>>>>>> pages. > >>>>>> Actually there's one extra thing I think we should do, and that is make > >>>>>> sure we do not leave less than X% off the free memory at a time. > >>>>>> This way chances of triggering an OOM are lower. > >>>>> If nothing else we could probably look at doing a watermark of some > >>>>> sort so we have to have X amount of memory free but not hinted before > >>>>> we will start providing the hints. It would just be a matter of > >>>>> tracking how much memory we have hinted on versus the amount of memory > >>>>> that has been pulled from that pool. > >>>> This is to avoid false OOM in the guest? > >>> Partially, though it would still be possible. Basically it would just > >>> be a way of determining when we have hinted "enough". Basically it > >>> doesn't do us much good to be hinting on free memory if the guest is > >>> already constrained and just going to reallocate the memory shortly > >>> after we hinted on it. The idea is with a watermark we can avoid > >>> hinting until we start having pages that are actually going to stay > >>> free for a while. > >>> > >>>>> It is another reason why we > >>>>> probably want a bit in the buddy pages somewhere to indicate if a page > >>>>> has been hinted or not as we can then use that to determine if we have > >>>>> to account for it in the statistics. > >>>> The one benefit which I can see of having an explicit bit is that it > >>>> will help us to have a single hook away from the hot path within buddy > >>>> merging code (just like your arch_merge_page) and still avoid duplicate > >>>> hints while releasing pages. > >>>> > >>>> I still have to check PG_idle and PG_young which you mentioned but I > >>>> don't think we can reuse any existing bits. > >>> Those are bits that are already there for 64b. I think those exist in > >>> the page extension for 32b systems. If I am not mistaken they are only > >>> used in VMA mapped memory. What I was getting at is that those are the > >>> bits we could think about reusing. > >>> > >>>> If we really want to have something like a watermark, then can't we use > >>>> zone->free_pages before isolating to see how many free pages are there > >>>> and put a threshold on it? (__isolate_free_page() does a similar thing > >>>> but it does that on per request basis). > >>> Right. That is only part of it though since that tells you how many > >>> free pages are there. But how many of those free pages are hinted? > >>> That is the part we would need to track separately and then then > >>> compare to free_pages to determine if we need to start hinting on more > >>> memory or not. > >> Only pages which are isolated will be hinted, and once a page is > >> isolated it will not be counted in the zone free pages. > >> Feel free to correct me if I am wrong. > > You are correct up to here. 
When we isolate the page it isn't counted > > against the free pages. However after we complete the hint we end up > > taking it out of isolation and returning it to the "free" state, so it > > will be counted against the free pages. > > > >> If I am understanding it correctly you only want to hint the idle pages, > >> is that right? > > Getting back to the ideas from our earlier discussion, we had 3 stages > > for things. Free but not hinted, isolated due to hinting, and free and > > hinted. So what we would need to do is identify the size of the first > > pool that is free and not hinted by knowing the total number of free > > pages, and then subtract the size of the pages that are hinted and > > still free. > To summarize, for now, I think it makes sense to stick with the current > approach as this way we can avoid any locking in the allocation path and > reduce the number of hypercalls for a bunch of MAX_ORDER - 1 page. I'm not sure what you are talking about by "avoid any locking in the allocation path". Are you talking about the spin on idle bit, if so then yes. However I have been testing your patches and I was correct in the assumption that you forgot to handle the zone lock when you were freeing __free_one_page. I just did a quick copy/paste from your zone lock handling from the guest_free_page_hinting function into the release_buddy_pages function and then I was able to enable multiple CPUs without any issues. > For the next step other than the comments received in the code and what > I mentioned in the cover email, I would like to do the following: > 1. Explore the watermark idea suggested by Alex and bring down memhog > execution time if possible. So there are a few things that are hurting us on the memhog test: 1. The current QEMU patch is only madvising 4K pages at a time, this is disabling THP and hurts the test. 2. The fact that we madvise the pages away makes it so that we have to fault the page back in in order to use it for the memhog test. In order to avoid that penalty we may want to see if we can introduce some sort of "timeout" on the pages so that we are only hinting away old pages that have not been used for some period of time. 3. Currently we are still doing a large amount of processing in the page free path. Ideally we should look at getting away from trying to do so much per-cpu work and instead just have some small tasks that put the data needed in the page, and then have a separate thread walking the free_list checking that data, isolating the pages, hinting them, and then returning them back to the free_list. > 2. Benchmark hinting v/s non-hinting more extensively. > Let me know if you have any specific suggestions in terms of the tools I > can run to do the same. (I am planning to run atleast netperf, hackbench > and stress for this). So I have been running the memhog 32g test and the will-it-scale page_fault1 test as my primary two tests for this so far. What I have seen so far has been pretty promising. I had to do some build fixes, fixes to QEMU to hint on the full size page instead of 4K page, and fixes for locking so this isn't exactly your original patch set, but with all that I am seeing data comparable to the original patch set I had. For memhog 32g I am seeing performance similar to a VM that was fresh booted. I make that the comparison because you will have to take page faults on a fresh boot as you access additional memory. However after the first run of the runtime drops from 22s to 20s without the hinting enabled. 
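For reference, the locking fix described here amounts to something like the following sketch: an isolated page only goes back to the buddy via __free_one_page() while its zone's lock is held. The helper itself is illustrative (it relies on the extern declarations the series already adds); taking the lock once per zone for a whole batch, as the copied code does, is the obvious refinement over the per-page locking shown.

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

/* Return one previously isolated page to the buddy under the zone lock. */
static void return_isolated_page(struct page *page, unsigned int order)
{
	struct zone *zone = page_zone(page);
	unsigned long flags;
	int mt;

	spin_lock_irqsave(&zone->lock, flags);
	mt = get_pageblock_migratetype(page);
	__free_one_page(page, page_to_pfn(page), zone, order, mt);
	spin_unlock_irqrestore(&zone->lock, flags);
}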
The big one that probably still needs some work will be the multi-cpu scaling. With the per-cpu locking for the zone lock to pull pages out, and put them back in the free list I am seeing what looks like about a 10% drop in the page_fault1 test. Here are the results as I have seen so far on a 16 cpu 32G VM: -- baseline -- ./runtest.py page_fault1 tasks,processes,processes_idle,threads,threads_idle,linear 0,0,100,0,100,0 1,522242,93.73,514965,93.74,522242 2,929433,87.48,857280,87.50,1044484 3,1360651,81.25,1214224,81.48,1566726 4,1693709,75.01,1437156,76.33,2088968 5,2062392,68.77,1743294,70.78,2611210 6,2271363,62.54,1787238,66.75,3133452 7,2564479,56.33,1924684,61.77,3655694 8,2699897,50.09,2205783,54.28,4177936 9,2931697,43.85,2135788,50.20,4700178 10,2939384,37.63,2258725,45.04,5222420 11,3039010,31.41,2209401,41.04,5744662 12,3022976,25.19,2177655,35.68,6266904 13,3015683,18.98,2123546,31.73,6789146 14,2921798,12.77,2160489,27.30,7311388 15,2846758,6.51,1815036,17.40,7833630 16,2703146,0.36,2121018,18.21,8355872 -- modified rh patchset -- ./runtest.py page_fault1 tasks,processes,processes_idle,threads,threads_idle,linear 0,0,100,0,100,0 1,527216,93.72,517459,93.70,527216 2,911239,87.48,843278,87.51,1054432 3,1295059,81.22,1193523,81.61,1581648 4,1649332,75.02,1439403,76.17,2108864 5,1985780,68.81,1745556,70.44,2636080 6,2174751,62.56,1769433,66.84,3163296 7,2433273,56.33,2121777,58.46,3690512 8,2537356,50.17,1901743,57.23,4217728 9,2737689,43.87,1859179,54.17,4744944 10,2718474,37.65,2188891,43.69,5272160 11,2743381,31.47,2205112,38.00,5799376 12,2738717,25.26,2117281,38.09,6326592 13,2643648,19.06,1887956,35.31,6853808 14,2598001,12.92,1916544,27.87,7381024 15,2498325,6.70,1992580,26.10,7908240 16,2424587,0.45,2137742,21.37,8435456 As we discussed earlier, it would probably be good to focus on only pulling something like 4 to 8 (MAX_ORDER - 1) pages per round of hinting. You might also look at only working one zone at a time. Then what you could do is look at placing the pages you have already hinted on at the tail end of the free_list and pull a new set of pages out to hint on. You could do this all in one shot while holding the zone lock.
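A rough sketch of the single-zone, small-batch scan suggested above. PageHinted() stands in for the proposed (not yet existing) "hinted" page bit, the batch size of 8 reflects the 4 to 8 range mentioned, and __isolate_free_page() is the mm helper the series already exports; everything else is an assumption.

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/spinlock.h>

#define HINT_BATCH	8

/* Pull up to HINT_BATCH un-hinted MAX_ORDER - 1 pages out of one zone. */
static int collect_hinting_batch(struct zone *zone, struct page **batch)
{
	unsigned int order = MAX_ORDER - 1;
	struct free_area *area = &zone->free_area[order];
	struct page *page, *next;
	unsigned long flags;
	int mt, count = 0;

	spin_lock_irqsave(&zone->lock, flags);
	for (mt = 0; mt < MIGRATE_TYPES && count < HINT_BATCH; mt++) {
		list_for_each_entry_safe(page, next,
					 &area->free_list[mt], lru) {
			if (PageHinted(page))	/* hypothetical accessor */
				continue;
			if (!__isolate_free_page(page, order))
				continue;
			batch[count++] = page;
			if (count == HINT_BATCH)
				break;
		}
	}
	spin_unlock_irqrestore(&zone->lock, flags);

	/* The caller hints the batch outside the lock, then returns the
	 * pages to the tail of the free list with the hinted bit set. */
	return count;
}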
On 12.03.19 22:13, Alexander Duyck wrote: > On Tue, Mar 12, 2019 at 12:46 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> >> On 3/8/19 4:39 PM, Alexander Duyck wrote: >>> On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> On 3/8/19 2:25 PM, Alexander Duyck wrote: >>>>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>>> On 3/8/19 1:06 PM, Alexander Duyck wrote: >>>>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: >>>>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: >>>>>>>>> The only other thing I still want to try and see if I can do is to add >>>>>>>>> a jiffies value to the page private data in the case of the buddy >>>>>>>>> pages. >>>>>>>> Actually there's one extra thing I think we should do, and that is make >>>>>>>> sure we do not leave less than X% off the free memory at a time. >>>>>>>> This way chances of triggering an OOM are lower. >>>>>>> If nothing else we could probably look at doing a watermark of some >>>>>>> sort so we have to have X amount of memory free but not hinted before >>>>>>> we will start providing the hints. It would just be a matter of >>>>>>> tracking how much memory we have hinted on versus the amount of memory >>>>>>> that has been pulled from that pool. >>>>>> This is to avoid false OOM in the guest? >>>>> Partially, though it would still be possible. Basically it would just >>>>> be a way of determining when we have hinted "enough". Basically it >>>>> doesn't do us much good to be hinting on free memory if the guest is >>>>> already constrained and just going to reallocate the memory shortly >>>>> after we hinted on it. The idea is with a watermark we can avoid >>>>> hinting until we start having pages that are actually going to stay >>>>> free for a while. >>>>> >>>>>>> It is another reason why we >>>>>>> probably want a bit in the buddy pages somewhere to indicate if a page >>>>>>> has been hinted or not as we can then use that to determine if we have >>>>>>> to account for it in the statistics. >>>>>> The one benefit which I can see of having an explicit bit is that it >>>>>> will help us to have a single hook away from the hot path within buddy >>>>>> merging code (just like your arch_merge_page) and still avoid duplicate >>>>>> hints while releasing pages. >>>>>> >>>>>> I still have to check PG_idle and PG_young which you mentioned but I >>>>>> don't think we can reuse any existing bits. >>>>> Those are bits that are already there for 64b. I think those exist in >>>>> the page extension for 32b systems. If I am not mistaken they are only >>>>> used in VMA mapped memory. What I was getting at is that those are the >>>>> bits we could think about reusing. >>>>> >>>>>> If we really want to have something like a watermark, then can't we use >>>>>> zone->free_pages before isolating to see how many free pages are there >>>>>> and put a threshold on it? (__isolate_free_page() does a similar thing >>>>>> but it does that on per request basis). >>>>> Right. That is only part of it though since that tells you how many >>>>> free pages are there. But how many of those free pages are hinted? >>>>> That is the part we would need to track separately and then then >>>>> compare to free_pages to determine if we need to start hinting on more >>>>> memory or not. >>>> Only pages which are isolated will be hinted, and once a page is >>>> isolated it will not be counted in the zone free pages. >>>> Feel free to correct me if I am wrong. 
>>> You are correct up to here. When we isolate the page it isn't counted >>> against the free pages. However after we complete the hint we end up >>> taking it out of isolation and returning it to the "free" state, so it >>> will be counted against the free pages. >>> >>>> If I am understanding it correctly you only want to hint the idle pages, >>>> is that right? >>> Getting back to the ideas from our earlier discussion, we had 3 stages >>> for things. Free but not hinted, isolated due to hinting, and free and >>> hinted. So what we would need to do is identify the size of the first >>> pool that is free and not hinted by knowing the total number of free >>> pages, and then subtract the size of the pages that are hinted and >>> still free. >> To summarize, for now, I think it makes sense to stick with the current >> approach as this way we can avoid any locking in the allocation path and >> reduce the number of hypercalls for a bunch of MAX_ORDER - 1 page. > > I'm not sure what you are talking about by "avoid any locking in the > allocation path". Are you talking about the spin on idle bit, if so > then yes. However I have been testing your patches and I was correct > in the assumption that you forgot to handle the zone lock when you > were freeing __free_one_page. I just did a quick copy/paste from your > zone lock handling from the guest_free_page_hinting function into the > release_buddy_pages function and then I was able to enable multiple > CPUs without any issues. > >> For the next step other than the comments received in the code and what >> I mentioned in the cover email, I would like to do the following: >> 1. Explore the watermark idea suggested by Alex and bring down memhog >> execution time if possible. > > So there are a few things that are hurting us on the memhog test: > 1. The current QEMU patch is only madvising 4K pages at a time, this > is disabling THP and hurts the test. > > 2. The fact that we madvise the pages away makes it so that we have to > fault the page back in in order to use it for the memhog test. In > order to avoid that penalty we may want to see if we can introduce > some sort of "timeout" on the pages so that we are only hinting away > old pages that have not been used for some period of time. > > 3. Currently we are still doing a large amount of processing in the > page free path. Ideally we should look at getting away from trying to > do so much per-cpu work and instead just have some small tasks that > put the data needed in the page, and then have a separate thread > walking the free_list checking that data, isolating the pages, hinting > them, and then returning them back to the free_list. This is highly debatable. Whenever the is concurrency, there is the need for locking (well, at least synchronization - maybe using existing locks like the zone lock). The other thread has to run somewhere. One thread per VCPU might not what we want ... sorting this out might be more complicated than it would seem. I would suggest to defer the discussion of this change to a later stage. It can be easily reworked later - in theory :) 1 and 2 you mention are the lower hanging fruits that will definitely improve performance.
On Tue, Mar 12, 2019 at 2:53 PM David Hildenbrand <david@redhat.com> wrote: > > On 12.03.19 22:13, Alexander Duyck wrote: > > On Tue, Mar 12, 2019 at 12:46 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >> > >> On 3/8/19 4:39 PM, Alexander Duyck wrote: > >>> On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >>>> On 3/8/19 2:25 PM, Alexander Duyck wrote: > >>>>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: > >>>>>> On 3/8/19 1:06 PM, Alexander Duyck wrote: > >>>>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: > >>>>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: > >>>>>>>>> The only other thing I still want to try and see if I can do is to add > >>>>>>>>> a jiffies value to the page private data in the case of the buddy > >>>>>>>>> pages. > >>>>>>>> Actually there's one extra thing I think we should do, and that is make > >>>>>>>> sure we do not leave less than X% off the free memory at a time. > >>>>>>>> This way chances of triggering an OOM are lower. > >>>>>>> If nothing else we could probably look at doing a watermark of some > >>>>>>> sort so we have to have X amount of memory free but not hinted before > >>>>>>> we will start providing the hints. It would just be a matter of > >>>>>>> tracking how much memory we have hinted on versus the amount of memory > >>>>>>> that has been pulled from that pool. > >>>>>> This is to avoid false OOM in the guest? > >>>>> Partially, though it would still be possible. Basically it would just > >>>>> be a way of determining when we have hinted "enough". Basically it > >>>>> doesn't do us much good to be hinting on free memory if the guest is > >>>>> already constrained and just going to reallocate the memory shortly > >>>>> after we hinted on it. The idea is with a watermark we can avoid > >>>>> hinting until we start having pages that are actually going to stay > >>>>> free for a while. > >>>>> > >>>>>>> It is another reason why we > >>>>>>> probably want a bit in the buddy pages somewhere to indicate if a page > >>>>>>> has been hinted or not as we can then use that to determine if we have > >>>>>>> to account for it in the statistics. > >>>>>> The one benefit which I can see of having an explicit bit is that it > >>>>>> will help us to have a single hook away from the hot path within buddy > >>>>>> merging code (just like your arch_merge_page) and still avoid duplicate > >>>>>> hints while releasing pages. > >>>>>> > >>>>>> I still have to check PG_idle and PG_young which you mentioned but I > >>>>>> don't think we can reuse any existing bits. > >>>>> Those are bits that are already there for 64b. I think those exist in > >>>>> the page extension for 32b systems. If I am not mistaken they are only > >>>>> used in VMA mapped memory. What I was getting at is that those are the > >>>>> bits we could think about reusing. > >>>>> > >>>>>> If we really want to have something like a watermark, then can't we use > >>>>>> zone->free_pages before isolating to see how many free pages are there > >>>>>> and put a threshold on it? (__isolate_free_page() does a similar thing > >>>>>> but it does that on per request basis). > >>>>> Right. That is only part of it though since that tells you how many > >>>>> free pages are there. But how many of those free pages are hinted? > >>>>> That is the part we would need to track separately and then then > >>>>> compare to free_pages to determine if we need to start hinting on more > >>>>> memory or not. 
> >>>> Only pages which are isolated will be hinted, and once a page is > >>>> isolated it will not be counted in the zone free pages. > >>>> Feel free to correct me if I am wrong. > >>> You are correct up to here. When we isolate the page it isn't counted > >>> against the free pages. However after we complete the hint we end up > >>> taking it out of isolation and returning it to the "free" state, so it > >>> will be counted against the free pages. > >>> > >>>> If I am understanding it correctly you only want to hint the idle pages, > >>>> is that right? > >>> Getting back to the ideas from our earlier discussion, we had 3 stages > >>> for things. Free but not hinted, isolated due to hinting, and free and > >>> hinted. So what we would need to do is identify the size of the first > >>> pool that is free and not hinted by knowing the total number of free > >>> pages, and then subtract the size of the pages that are hinted and > >>> still free. > >> To summarize, for now, I think it makes sense to stick with the current > >> approach as this way we can avoid any locking in the allocation path and > >> reduce the number of hypercalls for a bunch of MAX_ORDER - 1 page. > > > > I'm not sure what you are talking about by "avoid any locking in the > > allocation path". Are you talking about the spin on idle bit, if so > > then yes. However I have been testing your patches and I was correct > > in the assumption that you forgot to handle the zone lock when you > > were freeing __free_one_page. I just did a quick copy/paste from your > > zone lock handling from the guest_free_page_hinting function into the > > release_buddy_pages function and then I was able to enable multiple > > CPUs without any issues. > > > >> For the next step other than the comments received in the code and what > >> I mentioned in the cover email, I would like to do the following: > >> 1. Explore the watermark idea suggested by Alex and bring down memhog > >> execution time if possible. > > > > So there are a few things that are hurting us on the memhog test: > > 1. The current QEMU patch is only madvising 4K pages at a time, this > > is disabling THP and hurts the test. > > > > 2. The fact that we madvise the pages away makes it so that we have to > > fault the page back in in order to use it for the memhog test. In > > order to avoid that penalty we may want to see if we can introduce > > some sort of "timeout" on the pages so that we are only hinting away > > old pages that have not been used for some period of time. > > > > 3. Currently we are still doing a large amount of processing in the > > page free path. Ideally we should look at getting away from trying to > > do so much per-cpu work and instead just have some small tasks that > > put the data needed in the page, and then have a separate thread > > walking the free_list checking that data, isolating the pages, hinting > > them, and then returning them back to the free_list. > > This is highly debatable. Whenever the is concurrency, there is the need > for locking (well, at least synchronization - maybe using existing locks > like the zone lock). The other thread has to run somewhere. One thread > per VCPU might not what we want ... sorting this out might be more > complicated than it would seem. I would suggest to defer the discussion > of this change to a later stage. It can be easily reworked later - in > theory :) I'm not suggesting anything too complex for now. I would be happy with just using the zone lock. 
The only other thing we would really need to make it work is some sort of bit we could set once a page has been hinted, and cleared when it is allocated. I'm leaning toward PG_owner_priv_1 at this point since it doesn't seem to be used in the buddy allocator but is heavily used/re-purposed in multiple other spots. > 1 and 2 you mention are the lower hanging fruits that will definitely > improve performance. Agreed. Although the challenge with 2 is getting to the page later instead of trying to immediately hint on the page we just freed. That is why I still think 3 is going to tie in closely with 2. > -- > > Thanks, > > David / dhildenb
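If PG_owner_priv_1 were repurposed this way, the accessors already generated by PAGEFLAG(OwnerPriv1, ...) in include/linux/page-flags.h could be wrapped roughly as below. This is purely a sketch of the idea, not code from the posted series.

#include <linux/page-flags.h>

/* Mark a buddy page as already hinted to the host. */
static inline void set_page_hinted(struct page *page)
{
	SetPageOwnerPriv1(page);
}

static inline bool page_is_hinted(struct page *page)
{
	return PageOwnerPriv1(page);
}

/* Cleared when the allocator hands the page out again. */
static inline void clear_page_hinted(struct page *page)
{
	ClearPageOwnerPriv1(page);
}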
On 3/12/19 5:13 PM, Alexander Duyck wrote: > On Tue, Mar 12, 2019 at 12:46 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >> On 3/8/19 4:39 PM, Alexander Duyck wrote: >>> On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>> On 3/8/19 2:25 PM, Alexander Duyck wrote: >>>>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>>> On 3/8/19 1:06 PM, Alexander Duyck wrote: >>>>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: >>>>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: >>>>>>>>> The only other thing I still want to try and see if I can do is to add >>>>>>>>> a jiffies value to the page private data in the case of the buddy >>>>>>>>> pages. >>>>>>>> Actually there's one extra thing I think we should do, and that is make >>>>>>>> sure we do not leave less than X% off the free memory at a time. >>>>>>>> This way chances of triggering an OOM are lower. >>>>>>> If nothing else we could probably look at doing a watermark of some >>>>>>> sort so we have to have X amount of memory free but not hinted before >>>>>>> we will start providing the hints. It would just be a matter of >>>>>>> tracking how much memory we have hinted on versus the amount of memory >>>>>>> that has been pulled from that pool. >>>>>> This is to avoid false OOM in the guest? >>>>> Partially, though it would still be possible. Basically it would just >>>>> be a way of determining when we have hinted "enough". Basically it >>>>> doesn't do us much good to be hinting on free memory if the guest is >>>>> already constrained and just going to reallocate the memory shortly >>>>> after we hinted on it. The idea is with a watermark we can avoid >>>>> hinting until we start having pages that are actually going to stay >>>>> free for a while. >>>>> >>>>>>> It is another reason why we >>>>>>> probably want a bit in the buddy pages somewhere to indicate if a page >>>>>>> has been hinted or not as we can then use that to determine if we have >>>>>>> to account for it in the statistics. >>>>>> The one benefit which I can see of having an explicit bit is that it >>>>>> will help us to have a single hook away from the hot path within buddy >>>>>> merging code (just like your arch_merge_page) and still avoid duplicate >>>>>> hints while releasing pages. >>>>>> >>>>>> I still have to check PG_idle and PG_young which you mentioned but I >>>>>> don't think we can reuse any existing bits. >>>>> Those are bits that are already there for 64b. I think those exist in >>>>> the page extension for 32b systems. If I am not mistaken they are only >>>>> used in VMA mapped memory. What I was getting at is that those are the >>>>> bits we could think about reusing. >>>>> >>>>>> If we really want to have something like a watermark, then can't we use >>>>>> zone->free_pages before isolating to see how many free pages are there >>>>>> and put a threshold on it? (__isolate_free_page() does a similar thing >>>>>> but it does that on per request basis). >>>>> Right. That is only part of it though since that tells you how many >>>>> free pages are there. But how many of those free pages are hinted? >>>>> That is the part we would need to track separately and then then >>>>> compare to free_pages to determine if we need to start hinting on more >>>>> memory or not. >>>> Only pages which are isolated will be hinted, and once a page is >>>> isolated it will not be counted in the zone free pages. >>>> Feel free to correct me if I am wrong. 
>>> You are correct up to here. When we isolate the page it isn't counted >>> against the free pages. However after we complete the hint we end up >>> taking it out of isolation and returning it to the "free" state, so it >>> will be counted against the free pages. >>> >>>> If I am understanding it correctly you only want to hint the idle pages, >>>> is that right? >>> Getting back to the ideas from our earlier discussion, we had 3 stages >>> for things. Free but not hinted, isolated due to hinting, and free and >>> hinted. So what we would need to do is identify the size of the first >>> pool that is free and not hinted by knowing the total number of free >>> pages, and then subtract the size of the pages that are hinted and >>> still free. >> To summarize, for now, I think it makes sense to stick with the current >> approach as this way we can avoid any locking in the allocation path and >> reduce the number of hypercalls for a bunch of MAX_ORDER - 1 page. > I'm not sure what you are talking about by "avoid any locking in the > allocation path". Are you talking about the spin on idle bit, if so > then yes. Yeap! > However I have been testing your patches and I was correct > in the assumption that you forgot to handle the zone lock when you > were freeing __free_one_page. Yes, these are the steps other than the comments you provided in the code. (One of them is to fix release_buddy_page()) > I just did a quick copy/paste from your > zone lock handling from the guest_free_page_hinting function into the > release_buddy_pages function and then I was able to enable multiple > CPUs without any issues. > >> For the next step other than the comments received in the code and what >> I mentioned in the cover email, I would like to do the following: >> 1. Explore the watermark idea suggested by Alex and bring down memhog >> execution time if possible. > So there are a few things that are hurting us on the memhog test: > 1. The current QEMU patch is only madvising 4K pages at a time, this > is disabling THP and hurts the test. Makes sense, thanks for pointing this out. > > 2. The fact that we madvise the pages away makes it so that we have to > fault the page back in in order to use it for the memhog test. In > order to avoid that penalty we may want to see if we can introduce > some sort of "timeout" on the pages so that we are only hinting away > old pages that have not been used for some period of time. Possibly using MADVISE_FREE should also help in this, I will try this as well. If we could come up with something bit which we could reuse then we may be able to tackle this issue easily. I will look into this. > > 3. Currently we are still doing a large amount of processing in the > page free path. Ideally we should look at getting away from trying to > do so much per-cpu work and instead just have some small tasks that > put the data needed in the page, and then have a separate thread > walking the free_list checking that data, isolating the pages, hinting > them, and then returning them back to the free_list. I will probably defer this analysis for now, once we have other things fixed. I can possibly evaluate/compare the performance impact with both the approach and chose from them. > >> 2. Benchmark hinting v/s non-hinting more extensively. >> Let me know if you have any specific suggestions in terms of the tools I >> can run to do the same. (I am planning to run atleast netperf, hackbench >> and stress for this). 
> So I have been running the memhog 32g test and the will-it-scale > page_fault1 test as my primary two tests for this so far. > > What I have seen so far has been pretty promising. I had to do some > build fixes, fixes to QEMU to hint on the full size page instead of 4K > page, and fixes for locking so this isn't exactly your original patch > set, but with all that I am seeing data comparable to the original > patch set I had. > > For memhog 32g I am seeing performance similar to a VM that was fresh > booted. I make that the comparison because you will have to take page > faults on a fresh boot as you access additional memory. However after > the first run of the runtime drops from 22s to 20s without the > hinting enabled. > > The big one that probably still needs some work will be the multi-cpu > scaling. With the per-cpu locking for the zone lock to pull pages out, > and put them back in the free list I am seeing what looks like about a > 10% drop in the page_fault1 test. Here are the results as I have seen > so far on a 16 cpu 32G VM: > > -- baseline -- > ./runtest.py page_fault1 > tasks,processes,processes_idle,threads,threads_idle,linear > 0,0,100,0,100,0 > 1,522242,93.73,514965,93.74,522242 > 2,929433,87.48,857280,87.50,1044484 > 3,1360651,81.25,1214224,81.48,1566726 > 4,1693709,75.01,1437156,76.33,2088968 > 5,2062392,68.77,1743294,70.78,2611210 > 6,2271363,62.54,1787238,66.75,3133452 > 7,2564479,56.33,1924684,61.77,3655694 > 8,2699897,50.09,2205783,54.28,4177936 > 9,2931697,43.85,2135788,50.20,4700178 > 10,2939384,37.63,2258725,45.04,5222420 > 11,3039010,31.41,2209401,41.04,5744662 > 12,3022976,25.19,2177655,35.68,6266904 > 13,3015683,18.98,2123546,31.73,6789146 > 14,2921798,12.77,2160489,27.30,7311388 > 15,2846758,6.51,1815036,17.40,7833630 > 16,2703146,0.36,2121018,18.21,8355872 > > -- modified rh patchset -- > ./runtest.py page_fault1 > tasks,processes,processes_idle,threads,threads_idle,linear > 0,0,100,0,100,0 > 1,527216,93.72,517459,93.70,527216 > 2,911239,87.48,843278,87.51,1054432 > 3,1295059,81.22,1193523,81.61,1581648 > 4,1649332,75.02,1439403,76.17,2108864 > 5,1985780,68.81,1745556,70.44,2636080 > 6,2174751,62.56,1769433,66.84,3163296 > 7,2433273,56.33,2121777,58.46,3690512 > 8,2537356,50.17,1901743,57.23,4217728 > 9,2737689,43.87,1859179,54.17,4744944 > 10,2718474,37.65,2188891,43.69,5272160 > 11,2743381,31.47,2205112,38.00,5799376 > 12,2738717,25.26,2117281,38.09,6326592 > 13,2643648,19.06,1887956,35.31,6853808 > 14,2598001,12.92,1916544,27.87,7381024 > 15,2498325,6.70,1992580,26.10,7908240 > 16,2424587,0.45,2137742,21.37,8435456 > > As we discussed earlier, it would probably be good to focus on only > pulling something like 4 to 8 (MAX_ORDER - 1) pages per round of > hinting. I agree that I should bring down the page-set on which I am working. > You might also look at only working one zone at a time. Then > what you could do is look at placing the pages you have already hinted > on at the tail end of the free_list and pull a new set of pages out to > hint on. I think for this we still need a way to check if a particular page is hinted or not. > You could do this all in one shot while holding the zone > lock.
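With such a bit in place, the "check whether a page is hinted" piece plus the tail placement Alexander suggests could reduce to something like the sketch below (caller holds zone->lock; SetPageHinted() remains a hypothetical accessor for the proposed bit).

/* After hinting, mark the page and rotate it to the tail of its free list
 * so the next scan finds un-hinted pages at the head first. */
static void requeue_hinted_page(struct zone *zone, struct page *page,
				unsigned int order, int migratetype)
{
	SetPageHinted(page);		/* hypothetical accessor */
	list_move_tail(&page->lru,
		       &zone->free_area[order].free_list[migratetype]);
}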
On 13.03.19 12:54, Nitesh Narayan Lal wrote: > > On 3/12/19 5:13 PM, Alexander Duyck wrote: >> On Tue, Mar 12, 2019 at 12:46 PM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>> On 3/8/19 4:39 PM, Alexander Duyck wrote: >>>> On Fri, Mar 8, 2019 at 11:39 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>> On 3/8/19 2:25 PM, Alexander Duyck wrote: >>>>>> On Fri, Mar 8, 2019 at 11:10 AM Nitesh Narayan Lal <nitesh@redhat.com> wrote: >>>>>>> On 3/8/19 1:06 PM, Alexander Duyck wrote: >>>>>>>> On Thu, Mar 7, 2019 at 6:32 PM Michael S. Tsirkin <mst@redhat.com> wrote: >>>>>>>>> On Thu, Mar 07, 2019 at 02:35:53PM -0800, Alexander Duyck wrote: >>>>>>>>>> The only other thing I still want to try and see if I can do is to add >>>>>>>>>> a jiffies value to the page private data in the case of the buddy >>>>>>>>>> pages. >>>>>>>>> Actually there's one extra thing I think we should do, and that is make >>>>>>>>> sure we do not leave less than X% off the free memory at a time. >>>>>>>>> This way chances of triggering an OOM are lower. >>>>>>>> If nothing else we could probably look at doing a watermark of some >>>>>>>> sort so we have to have X amount of memory free but not hinted before >>>>>>>> we will start providing the hints. It would just be a matter of >>>>>>>> tracking how much memory we have hinted on versus the amount of memory >>>>>>>> that has been pulled from that pool. >>>>>>> This is to avoid false OOM in the guest? >>>>>> Partially, though it would still be possible. Basically it would just >>>>>> be a way of determining when we have hinted "enough". Basically it >>>>>> doesn't do us much good to be hinting on free memory if the guest is >>>>>> already constrained and just going to reallocate the memory shortly >>>>>> after we hinted on it. The idea is with a watermark we can avoid >>>>>> hinting until we start having pages that are actually going to stay >>>>>> free for a while. >>>>>> >>>>>>>> It is another reason why we >>>>>>>> probably want a bit in the buddy pages somewhere to indicate if a page >>>>>>>> has been hinted or not as we can then use that to determine if we have >>>>>>>> to account for it in the statistics. >>>>>>> The one benefit which I can see of having an explicit bit is that it >>>>>>> will help us to have a single hook away from the hot path within buddy >>>>>>> merging code (just like your arch_merge_page) and still avoid duplicate >>>>>>> hints while releasing pages. >>>>>>> >>>>>>> I still have to check PG_idle and PG_young which you mentioned but I >>>>>>> don't think we can reuse any existing bits. >>>>>> Those are bits that are already there for 64b. I think those exist in >>>>>> the page extension for 32b systems. If I am not mistaken they are only >>>>>> used in VMA mapped memory. What I was getting at is that those are the >>>>>> bits we could think about reusing. >>>>>> >>>>>>> If we really want to have something like a watermark, then can't we use >>>>>>> zone->free_pages before isolating to see how many free pages are there >>>>>>> and put a threshold on it? (__isolate_free_page() does a similar thing >>>>>>> but it does that on per request basis). >>>>>> Right. That is only part of it though since that tells you how many >>>>>> free pages are there. But how many of those free pages are hinted? >>>>>> That is the part we would need to track separately and then then >>>>>> compare to free_pages to determine if we need to start hinting on more >>>>>> memory or not. 
>>>>> Only pages which are isolated will be hinted, and once a page is >>>>> isolated it will not be counted in the zone free pages. >>>>> Feel free to correct me if I am wrong. >>>> You are correct up to here. When we isolate the page it isn't counted >>>> against the free pages. However after we complete the hint we end up >>>> taking it out of isolation and returning it to the "free" state, so it >>>> will be counted against the free pages. >>>> >>>>> If I am understanding it correctly you only want to hint the idle pages, >>>>> is that right? >>>> Getting back to the ideas from our earlier discussion, we had 3 stages >>>> for things. Free but not hinted, isolated due to hinting, and free and >>>> hinted. So what we would need to do is identify the size of the first >>>> pool that is free and not hinted by knowing the total number of free >>>> pages, and then subtract the size of the pages that are hinted and >>>> still free. >>> To summarize, for now, I think it makes sense to stick with the current >>> approach as this way we can avoid any locking in the allocation path and >>> reduce the number of hypercalls for a bunch of MAX_ORDER - 1 page. >> I'm not sure what you are talking about by "avoid any locking in the >> allocation path". Are you talking about the spin on idle bit, if so >> then yes. > Yeap! >> However I have been testing your patches and I was correct >> in the assumption that you forgot to handle the zone lock when you >> were freeing __free_one_page. > Yes, these are the steps other than the comments you provided in the > code. (One of them is to fix release_buddy_page()) >> I just did a quick copy/paste from your >> zone lock handling from the guest_free_page_hinting function into the >> release_buddy_pages function and then I was able to enable multiple >> CPUs without any issues. >> >>> For the next step other than the comments received in the code and what >>> I mentioned in the cover email, I would like to do the following: >>> 1. Explore the watermark idea suggested by Alex and bring down memhog >>> execution time if possible. >> So there are a few things that are hurting us on the memhog test: >> 1. The current QEMU patch is only madvising 4K pages at a time, this >> is disabling THP and hurts the test. > Makes sense, thanks for pointing this out. >> >> 2. The fact that we madvise the pages away makes it so that we have to >> fault the page back in in order to use it for the memhog test. In >> order to avoid that penalty we may want to see if we can introduce >> some sort of "timeout" on the pages so that we are only hinting away >> old pages that have not been used for some period of time. > > Possibly using MADVISE_FREE should also help in this, I will try this as > well. I was asking myself some time ago how MADVISE_FREE will be handled in case of THP. Please let me know your findings :)
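On the QEMU/host side, the trade-off under discussion is roughly MADV_DONTNEED (immediate discard, guaranteed refault of a zero page on next guest access) versus MADV_FREE (lazy reclaim only under host memory pressure). A hedged userspace sketch, with the fallback path an assumption rather than anything from the QEMU patch:

#include <sys/mman.h>

/* Hint a guest-free range lazily; fall back to the eager discard if
 * MADV_FREE is unavailable (it needs Linux 4.5+ and anonymous memory). */
static int hint_free_range(void *hva, size_t len)
{
#ifdef MADV_FREE
	if (madvise(hva, len, MADV_FREE) == 0)
		return 0;
#endif
	return madvise(hva, len, MADV_DONTNEED);
}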
On 3/13/19 8:17 AM, David Hildenbrand wrote:
> On 13.03.19 12:54, Nitesh Narayan Lal wrote:
> [...]
>>> 2. The fact that we madvise the pages away makes it so that we have to fault the page back in in order to use it for the memhog test. In order to avoid that penalty we may want to see if we can introduce some sort of "timeout" on the pages so that we are only hinting away old pages that have not been used for some period of time.
>> Possibly using MADVISE_FREE should also help in this, I will try this as well.
> I was asking myself some time ago how MADVISE_FREE will be handled in case of THP. Please let me know your findings :)

I will do that. If we don't end up finding any appropriate page flag to track the age of a free page, I am wondering if I can somehow use a bitmap to track the free count for each PFN.
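Purely as a sketch of the bitmap idea floated here (nothing like this exists in the posted patches, and all names below are made up): one bit per MAX_ORDER - 1 aligned chunk of a zone could record "freed but not yet hinted"; tracking an actual free count per PFN would need more than a single bit.

/*
 * Hypothetical sketch only: hint_bitmap, mark_chunk_free() and
 * take_chunk_for_hinting() are invented, and the bitmap is assumed to be
 * allocated per zone at init time (e.g. with bitmap_zalloc()).
 */
static unsigned long *hint_bitmap;

static void mark_chunk_free(struct zone *zone, unsigned long pfn)
{
	/* one bit per MAX_ORDER - 1 sized, aligned chunk of the zone */
	unsigned long bit = (pfn - zone->zone_start_pfn) >>
			    FREE_PAGE_HINTING_MIN_ORDER;

	set_bit(bit, hint_bitmap);
}

static bool take_chunk_for_hinting(struct zone *zone, unsigned long pfn)
{
	unsigned long bit = (pfn - zone->zone_start_pfn) >>
			    FREE_PAGE_HINTING_MIN_ORDER;

	/* true only once per free/hint cycle of the chunk */
	return test_and_clear_bit(bit, hint_bitmap);
}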
On Wed, Mar 13, 2019 at 5:18 AM David Hildenbrand <david@redhat.com> wrote:
> On 13.03.19 12:54, Nitesh Narayan Lal wrote:
> [...]
> > Possibly using MADVISE_FREE should also help in this, I will try this as well.
>
> I was asking myself some time ago how MADVISE_FREE will be handled in case of THP. Please let me know your findings :)

The problem with MADVISE_FREE is that it will add additional complication to the QEMU portion of all this as it only applies to anonymous memory if I am not mistaken.

That also reminds me that one thing this patch set still doesn't address is what do we do about a direct assigned device or some other form of shared memory where we want to keep the virtual mapping beneath the guest pinned to a given set of physical memory.
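For reference on the MADVISE_FREE point above, and not part of the QEMU patch under discussion: the distinction comes down to which madvise() advice the host can apply to the guest mapping. The helper below is a hypothetical user-space sketch.

#include <sys/mman.h>

/* discard_guest_range() is an invented helper, not a QEMU function. */
static int discard_guest_range(void *hva, size_t len, int lazy)
{
	/*
	 * MADV_DONTNEED drops the pages immediately; they fault back in as
	 * zero pages on next access.  MADV_FREE only marks them reclaimable
	 * under memory pressure, and is defined for private anonymous
	 * mappings, which is the complication for non-anonymous guest RAM.
	 */
	return madvise(hva, len, lazy ? MADV_FREE : MADV_DONTNEED);
}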
On 13.03.19 17:37, Alexander Duyck wrote:
> On Wed, Mar 13, 2019 at 5:18 AM David Hildenbrand <david@redhat.com> wrote:
> [...]
>> I was asking myself some time ago how MADVISE_FREE will be handled in case of THP. Please let me know your findings :)
>
> The problem with MADVISE_FREE is that it will add additional complication to the QEMU portion of all this as it only applies to anonymous memory if I am not mistaken.

Just as MADV_DONTNEED. So nothing new. Future work.
On Wed, Mar 13, 2019 at 9:39 AM David Hildenbrand <david@redhat.com> wrote:
> On 13.03.19 17:37, Alexander Duyck wrote:
> [...]
> > The problem with MADVISE_FREE is that it will add additional complication to the QEMU portion of all this as it only applies to anonymous memory if I am not mistaken.
>
> Just as MADV_DONTNEED. So nothing new. Future work.

I'm pretty sure you can use MADV_DONTNEED to free up file backed memory. I don't believe this is the case for MADV_FREE, but maybe I am mistaken.

On a side note I was just reviewing some stuff related to the reserved bit and on-lining hotplug memory, and it just occurred to me that the PG_offline bit would be a good means to indicate that we hinted away a page out of the buddy allocator, especially since it is already used by the balloon drivers anyway. We would just have to add a call to make sure we clear it when we call __ClearPageBuddy. It looks like that would currently be in del_page_from_free_area, at least for linux-next.

I just wanted to get your thoughts on that as it seems like it might be a good fit.

Thanks.

- Alex
On 13.03.19 23:54, Alexander Duyck wrote:
> On Wed, Mar 13, 2019 at 9:39 AM David Hildenbrand <david@redhat.com> wrote:
> [...]
>> Just as MADV_DONTNEED. So nothing new. Future work.
>
> I'm pretty sure you can use MADV_DONTNEED to free up file backed memory. I don't believe this is the case for MADV_FREE, but maybe I am mistaken.

"MADV_DONTNEED cannot be applied to locked pages, Huge TLB pages, or VM_PFNMAP pages." For shmem, hugetlbfs and friends one has to use FALLOC_FL_PUNCH_HOLE as far as I remember (e.g. QEMU postcopy migration has to use it). So effectively, virtio-balloon can as of now only really deal with anonymous memory. And it is the same case for free page hinting.

> On a side note I was just reviewing some stuff related to the reserved bit and on-lining hotplug memory, and it just occurred to me that the PG_offline bit would be a good means to indicate that we hinted away a page out of the buddy allocator, especially since it is already used by the balloon drivers anyway. We would just have to add a call to make sure we clear it when we call __ClearPageBuddy. It looks like that would currently be in del_page_from_free_area, at least for linux-next.

Hmm, if we only knew who came up with PG_offline ... ;)

Unfortunately PG_offline is not a page flag, it is a mapcount value just like PG_buddy. Well okay, it is a bit in the mapcount value - but as of now, a page can only have one such "page type" at a time as far as I recall.

> I just wanted to get your thoughts on that as it seems like it might be a good fit.

It would be if we could have multiple page types at a time. I haven't had a look yet how realistic that would be. As you correctly noted, balloon drivers use that bit as of now to mark pages that are logically offline (here: "inflated").

> Thanks.
>
> - Alex
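For reference, the "page type" mechanism described above lives in include/linux/page-flags.h. The excerpt below is condensed and quoted from memory for roughly the v5.0 kernel, so treat the exact values as approximate; it shows why a page that is already PageBuddy() cannot also be marked PageOffline() as things stand.

/*
 * Page types share the single page->page_type word; a type is recorded by
 * clearing its bit below PAGE_TYPE_BASE, and the setter insists the page
 * has no type yet - hence "only one page type at a time".
 */
#define PAGE_TYPE_BASE	0xf0000000
#define PG_buddy	0x00000080
#define PG_offline	0x00000100

#define PageType(page, flag)						\
	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)

/* As generated by PAGE_TYPE_OPS(Offline, offline): */
static __always_inline void __SetPageOffline(struct page *page)
{
	/* fires if the page already carries a type, e.g. PG_buddy */
	VM_BUG_ON_PAGE(!PageType(page, 0), page);
	page->page_type &= ~PG_offline;
}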
diff --git a/include/linux/page_hinting.h b/include/linux/page_hinting.h
index 90254c582789..d554a2581826 100644
--- a/include/linux/page_hinting.h
+++ b/include/linux/page_hinting.h
@@ -13,3 +13,8 @@
 
 void guest_free_page_enqueue(struct page *page, int order);
 void guest_free_page_try_hinting(void);
+extern int __isolate_free_page(struct page *page, unsigned int order);
+extern void __free_one_page(struct page *page, unsigned long pfn,
+			    struct zone *zone, unsigned int order,
+			    int migratetype);
+void release_buddy_pages(void *obj_to_free, int entries);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 684d047f33ee..d38b7eea207b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -814,7 +814,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
  * -- nyc
  */
 
-static inline void __free_one_page(struct page *page,
+inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
 		int migratetype)
diff --git a/virt/kvm/page_hinting.c b/virt/kvm/page_hinting.c
index 48b4b5e796b0..9885b372b5a9 100644
--- a/virt/kvm/page_hinting.c
+++ b/virt/kvm/page_hinting.c
@@ -1,5 +1,9 @@
 #include <linux/mm.h>
 #include <linux/page_hinting.h>
+#include <linux/page_ref.h>
+#include <linux/kvm_host.h>
+#include <linux/kernel.h>
+#include <linux/sort.h>
 
 /*
  * struct guest_free_pages- holds array of guest freed PFN's along with an
@@ -16,6 +20,54 @@ struct guest_free_pages {
 
 DEFINE_PER_CPU(struct guest_free_pages, free_pages_obj);
 
+/*
+ * struct guest_isolated_pages- holds the buddy isolated pages which are
+ * supposed to be freed by the host.
+ * @pfn: page frame number for the isolated page.
+ * @order: order of the isolated page.
+ */
+struct guest_isolated_pages {
+	unsigned long pfn;
+	unsigned int order;
+};
+
+void release_buddy_pages(void *obj_to_free, int entries)
+{
+	int i = 0;
+	int mt = 0;
+	struct guest_isolated_pages *isolated_pages_obj = obj_to_free;
+
+	while (i < entries) {
+		struct page *page = pfn_to_page(isolated_pages_obj[i].pfn);
+
+		mt = get_pageblock_migratetype(page);
+		__free_one_page(page, page_to_pfn(page), page_zone(page),
+				isolated_pages_obj[i].order, mt);
+		i++;
+	}
+	kfree(isolated_pages_obj);
+}
+
+void guest_free_page_report(struct guest_isolated_pages *isolated_pages_obj,
+			    int entries)
+{
+	release_buddy_pages(isolated_pages_obj, entries);
+}
+
+static int sort_zonenum(const void *a1, const void *b1)
+{
+	const unsigned long *a = a1;
+	const unsigned long *b = b1;
+
+	if (page_zonenum(pfn_to_page(a[0])) > page_zonenum(pfn_to_page(b[0])))
+		return 1;
+
+	if (page_zonenum(pfn_to_page(a[0])) < page_zonenum(pfn_to_page(b[0])))
+		return -1;
+
+	return 0;
+}
+
 struct page *get_buddy_page(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
@@ -33,9 +85,111 @@ struct page *get_buddy_page(struct page *page)
 static void guest_free_page_hinting(void)
 {
 	struct guest_free_pages *hinting_obj = &get_cpu_var(free_pages_obj);
+	struct guest_isolated_pages *isolated_pages_obj;
+	int idx = 0, ret = 0;
+	struct zone *zone_cur, *zone_prev;
+	unsigned long flags = 0;
+	int hyp_idx = 0;
+	int free_pages_idx = hinting_obj->free_pages_idx;
+
+	isolated_pages_obj = kmalloc(MAX_FGPT_ENTRIES *
+			sizeof(struct guest_isolated_pages), GFP_KERNEL);
+	if (!isolated_pages_obj) {
+		hinting_obj->free_pages_idx = 0;
+		put_cpu_var(hinting_obj);
+		return;
+		/* return some logical error here*/
+	}
+
+	sort(hinting_obj->free_page_arr, free_pages_idx,
+	     sizeof(unsigned long), sort_zonenum, NULL);
+
+	while (idx < free_pages_idx) {
+		unsigned long pfn = hinting_obj->free_page_arr[idx];
+		unsigned long pfn_end = hinting_obj->free_page_arr[idx] +
+			(1 << FREE_PAGE_HINTING_MIN_ORDER) - 1;
+
+		zone_cur = page_zone(pfn_to_page(pfn));
+		if (idx == 0) {
+			zone_prev = zone_cur;
+			spin_lock_irqsave(&zone_cur->lock, flags);
+		} else if (zone_prev != zone_cur) {
+			spin_unlock_irqrestore(&zone_prev->lock, flags);
+			spin_lock_irqsave(&zone_cur->lock, flags);
+			zone_prev = zone_cur;
+		}
+
+		while (pfn <= pfn_end) {
+			struct page *page = pfn_to_page(pfn);
+			struct page *buddy_page = NULL;
+
+			if (PageCompound(page)) {
+				struct page *head_page = compound_head(page);
+				unsigned long head_pfn = page_to_pfn(head_page);
+				unsigned int alloc_pages =
+					1 << compound_order(head_page);
+
+				pfn = head_pfn + alloc_pages;
+				continue;
+			}
+
+			if (page_ref_count(page)) {
+				pfn++;
+				continue;
+			}
+
+			if (PageBuddy(page) && page_private(page) >=
+			    FREE_PAGE_HINTING_MIN_ORDER) {
+				int buddy_order = page_private(page);
+
+				ret = __isolate_free_page(page, buddy_order);
+				if (ret) {
+					isolated_pages_obj[hyp_idx].pfn = pfn;
+					isolated_pages_obj[hyp_idx].order =
+								buddy_order;
+					hyp_idx += 1;
+				}
+				pfn = pfn + (1 << buddy_order);
+				continue;
+			}
+
+			buddy_page = get_buddy_page(page);
+			if (buddy_page && page_private(buddy_page) >=
+			    FREE_PAGE_HINTING_MIN_ORDER) {
+				int buddy_order = page_private(buddy_page);
+
+				ret = __isolate_free_page(buddy_page,
+							  buddy_order);
+				if (ret) {
+					unsigned long buddy_pfn =
+						page_to_pfn(buddy_page);
+
+					isolated_pages_obj[hyp_idx].pfn =
+								buddy_pfn;
+					isolated_pages_obj[hyp_idx].order =
+								buddy_order;
+					hyp_idx += 1;
+				}
+				pfn = page_to_pfn(buddy_page) +
+					(1 << buddy_order);
+				continue;
+			}
+			pfn++;
+		}
+		hinting_obj->free_page_arr[idx] = 0;
+		idx++;
+		if (idx == free_pages_idx)
+			spin_unlock_irqrestore(&zone_cur->lock, flags);
+	}
 
 	hinting_obj->free_pages_idx = 0;
 	put_cpu_var(hinting_obj);
+
+	if (hyp_idx > 0)
+		guest_free_page_report(isolated_pages_obj, hyp_idx);
+	else
+		kfree(isolated_pages_obj);
+		/* return some logical error here*/
 }
 
 int if_exist(struct page *page)
This patch enables the kernel to scan the per cpu array which carries head pages from the buddy free list of order FREE_PAGE_HINTING_MIN_ORDER (MAX_ORDER - 1) by guest_free_page_hinting(). guest_free_page_hinting() scans the entire per cpu array by acquiring a zone lock corresponding to the pages which are being scanned. If the page is still free and present in the buddy it tries to isolate the page and adds it to a dynamically allocated array.

Once this scanning process is complete and if there are any isolated pages added to the dynamically allocated array guest_free_page_report() is invoked. However, before this the per-cpu array index is reset so that it can continue capturing the pages from buddy free list.

In this patch guest_free_page_report() simply releases the pages back to the buddy by using __free_one_page().

Signed-off-by: Nitesh Narayan Lal <nitesh@redhat.com>
---
 include/linux/page_hinting.h |   5 ++
 mm/page_alloc.c              |   2 +-
 virt/kvm/page_hinting.c      | 154 +++++++++++++++++++++++++++++++++++
 3 files changed, 160 insertions(+), 1 deletion(-)