Message ID | 202212300912449061763@zte.com.cn (mailing list archive)
---|---
State | New
Series | ksm: support tracking KSM-placed zero-pages
On 30.12.22 02:12, yang.yang29@zte.com.cn wrote:
> From: xu xin <xu.xin16@zte.com.cn>
> 
> A new function try_to_get_old_rmap_item is abstracted from
> get_next_rmap_item. This function will be reused by the subsequent
> patches about counting ksm_zero_pages.
> 
> The patch improves the readability and reusability of KSM code.
> 
> Signed-off-by: xu xin <xu.xin16@zte.com.cn>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Cc: Xuexin Jiang <jiang.xuexin@zte.com.cn>
> Reviewed-by: Xiaokai Ran <ran.xiaokai@zte.com.cn>
> Reviewed-by: Yang Yang <yang.yang29@zte.com.cn>
> ---
>  mm/ksm.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 83e2f74ae7da..5b0a7343ff4a 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2214,23 +2214,36 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
>  	}
>  }
>  
> -static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
> -						 struct ksm_rmap_item **rmap_list,
> -						 unsigned long addr)
> +static struct ksm_rmap_item *try_to_get_old_rmap_item(unsigned long addr,
> +						       struct ksm_rmap_item **rmap_list)
>  {
> -	struct ksm_rmap_item *rmap_item;
> -
>  	while (*rmap_list) {
> -		rmap_item = *rmap_list;
> +		struct ksm_rmap_item *rmap_item = *rmap_list;

Empty line missing.

>  		if ((rmap_item->address & PAGE_MASK) == addr)
>  			return rmap_item;
>  		if (rmap_item->address > addr)
>  			break;
>  		*rmap_list = rmap_item->rmap_list;
> +		/* Running here indicates it's vma has been UNMERGEABLE */

"If we end up here, the VMA is UNMERGEABLE."

Although I am not sure if that is true?

>  		remove_rmap_item_from_tree(rmap_item);
>  		free_rmap_item(rmap_item);
>  	}
>  
> +	return NULL;
> +}
> +
> +static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
> +						 struct ksm_rmap_item **rmap_list,
> +						 unsigned long addr)
> +{
> +	struct ksm_rmap_item *rmap_item;
> +
> +	/* lookup if we have a old rmap_item matching the addr*/

I suggest dropping that comment, "try_to_get_old_rmap_item()" is expressive
enough.

> +	rmap_item = try_to_get_old_rmap_item(addr, rmap_list);
> +	if (rmap_item)
> +		return rmap_item;
> +
> +	/* Need to allocate a new rmap_item */

I suggest dropping that comment for the same reason.

>  	rmap_item = alloc_rmap_item();
>  	if (rmap_item) {
>  		/* It has already been zeroed */
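For reference, a minimal standalone sketch of the lookup-then-allocate split
under discussion, with the suggestions above applied (the missing blank line
added and the two comments dropped). The struct, PAGE_MASK value, and
calloc()/free() calls are simplified stand-ins for the kernel's
ksm_rmap_item, alloc_rmap_item() and remove_rmap_item_from_tree() machinery;
this is not the actual mm/ksm.c code.

/* Standalone sketch, not kernel code: simplified stand-ins for the KSM types. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_MASK (~0xfffUL)	/* assume 4 KiB pages for this sketch */

struct rmap_item {
	unsigned long address;		/* page-aligned address (plus flag bits in KSM) */
	struct rmap_item *rmap_list;	/* next item, kept sorted by address */
};

/* Look up an item matching addr; stale items in front of it are unlinked and freed. */
static struct rmap_item *try_to_get_old_rmap_item(unsigned long addr,
						  struct rmap_item **rmap_list)
{
	while (*rmap_list) {
		struct rmap_item *rmap_item = *rmap_list;

		if ((rmap_item->address & PAGE_MASK) == addr)
			return rmap_item;
		if (rmap_item->address > addr)
			break;
		*rmap_list = rmap_item->rmap_list;
		free(rmap_item);	/* stand-in for remove_rmap_item_from_tree() + free_rmap_item() */
	}

	return NULL;
}

/* Reuse an old item for addr if one exists, otherwise allocate and link a new one. */
static struct rmap_item *get_next_rmap_item(unsigned long addr,
					    struct rmap_item **rmap_list)
{
	struct rmap_item *rmap_item = try_to_get_old_rmap_item(addr, rmap_list);

	if (rmap_item)
		return rmap_item;

	rmap_item = calloc(1, sizeof(*rmap_item));	/* stand-in for alloc_rmap_item() */
	if (rmap_item) {
		rmap_item->address = addr;
		rmap_item->rmap_list = *rmap_list;
		*rmap_list = rmap_item;
	}
	return rmap_item;
}

int main(void)
{
	struct rmap_item *list = NULL;
	struct rmap_item *first = get_next_rmap_item(0x1000, &list);
	struct rmap_item *again = get_next_rmap_item(0x1000, &list);

	printf("same item reused: %s\n", first == again ? "yes" : "no");
	free(list);
	return 0;
}

The point of the split is that the lookup half can later be reused on its own
by the ksm_zero_pages accounting patches, without duplicating the allocation
path.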
Thanks for these suggestions; we will incorporate them into patch v6.
On 04.02.23 07:43, yang.yang29@zte.com.cn wrote:
> Thanks for these suggestions; we will incorporate them into patch v6.
Please don't remove all context from your replies. Makes it really hard
to follow up :)
Further, there seems to be an issue with threading in your mails. For
example, your cover letter [1] does not link the other patches, like [2],
because the patches don't have "In-Reply-To:" headers. Try using
"git send-email", which should take care of threading automatically.
[1] https://lore.kernel.org/all/202302100915227721315@zte.com.cn/
[2] https://lore.kernel.org/all/202302100916423431376@zte.com.cn/
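For illustration, a typical workflow that produces those In-Reply-To headers
automatically looks roughly like this (the base revision, output directory
and recipients below are placeholders, not taken from this series):

    # export the series as v6 with a cover letter
    git format-patch -v6 --cover-letter -o outgoing/ <base>..HEAD

    # send it; by default each patch is threaded as a reply to the cover letter
    git send-email --to=<list> --cc=<reviewers> outgoing/*.patch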