Message ID | 20190809001928.4950-1-richardw.yang@linux.intel.com (mailing list archive)
---|---
State | New, archived
Series | [v2] mm/mmap.c: refine find_vma_prev with rb_last
On 8/9/19 2:19 AM, Wei Yang wrote:
> When addr is out of the range of the whole rb_tree, pprev will point to
> the right-most node. The rb_tree facility already provides a helper
> function, rb_last, to do this task. We can leverage this instead of
> re-implementing it.
>
> This patch refines find_vma_prev with rb_last to make it a little nicer
> to read.
>
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Nit below:

> ---
> v2: leverage rb_last
> ---
>  mm/mmap.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 7e8c3e8ae75f..f7ed0afb994c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2270,12 +2270,9 @@ find_vma_prev(struct mm_struct *mm, unsigned long addr,
>  	if (vma) {
>  		*pprev = vma->vm_prev;
>  	} else {
> -		struct rb_node *rb_node = mm->mm_rb.rb_node;
> -		*pprev = NULL;
> -		while (rb_node) {
> -			*pprev = rb_entry(rb_node, struct vm_area_struct, vm_rb);
> -			rb_node = rb_node->rb_right;
> -		}
> +		struct rb_node *rb_node = rb_last(&mm->mm_rb);
> +		*pprev = !rb_node ? NULL :
> +			rb_entry(rb_node, struct vm_area_struct, vm_rb);

It's perhaps more common to write it like:

*pprev = rb_node ? rb_entry(rb_node, struct vm_area_struct, vm_rb) : NULL;

>  	}
>  	return vma;
>  }
On Fri, Aug 09, 2019 at 10:03:20AM +0200, Vlastimil Babka wrote:
>On 8/9/19 2:19 AM, Wei Yang wrote:
[...]
>> +		struct rb_node *rb_node = rb_last(&mm->mm_rb);
>> +		*pprev = !rb_node ? NULL :
>> +			rb_entry(rb_node, struct vm_area_struct, vm_rb);
>
>It's perhaps more common to write it like:
>
>*pprev = rb_node ? rb_entry(rb_node, struct vm_area_struct, vm_rb) : NULL;
>

Do you prefer me to send v3 with this updated?
diff --git a/mm/mmap.c b/mm/mmap.c
index 7e8c3e8ae75f..f7ed0afb994c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2270,12 +2270,9 @@ find_vma_prev(struct mm_struct *mm, unsigned long addr,
 	if (vma) {
 		*pprev = vma->vm_prev;
 	} else {
-		struct rb_node *rb_node = mm->mm_rb.rb_node;
-		*pprev = NULL;
-		while (rb_node) {
-			*pprev = rb_entry(rb_node, struct vm_area_struct, vm_rb);
-			rb_node = rb_node->rb_right;
-		}
+		struct rb_node *rb_node = rb_last(&mm->mm_rb);
+		*pprev = !rb_node ? NULL :
+			rb_entry(rb_node, struct vm_area_struct, vm_rb);
 	}
 	return vma;
 }
When addr is out of the range of the whole rb_tree, pprev will point to
the right-most node. The rb_tree facility already provides a helper
function, rb_last, to do this task. We can leverage this instead of
re-implementing it.

This patch refines find_vma_prev with rb_last to make it a little nicer
to read.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
v2: leverage rb_last
---
 mm/mmap.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)