Message ID: 20211103200703.2265-1-urezki@gmail.com
State: New
Series: mm/vmalloc: Eliminate an extra orig_gfp_mask
[Cc Vasily]

On Wed 03-11-21 21:07:03, Uladzislau Rezki wrote:
> That extra variable has been introduced just for keeping the originally
> passed gfp_mask, because it is updated with __GFP_NOWARN on entry, thus
> error handling messages were broken.

I am not sure what you mean by the "error handling messages were broken"
part. It is true that the current Linus tree has broken allocation failure
reporting, but that is not the fault of orig_gfp_mask. In fact, the patch
which fixes that, "mm/vmalloc: repair warn_alloc()s in
__vmalloc_area_node()", currently in the akpm tree, is the one adding the
additional mask.

> Instead we can keep the original gfp_mask without modifying it and add
> an extra __GFP_NOWARN flag together with gfp_mask as a parameter to
> the vm_area_alloc_pages() function. It will make it less confusing.

I would tend to agree that this is a better approach. There is already a
nested_gfp mask and one more doesn't add to the readability.

Maybe we should just drop the above patch and go with the one which
doesn't introduce the intermediate step and an additional gfp mask.

> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
>  mm/vmalloc.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..3b549ff5c95e 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2920,7 +2920,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>                   int node)
>  {
>          const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> -        const gfp_t orig_gfp_mask = gfp_mask;
>          unsigned long addr = (unsigned long)area->addr;
>          unsigned long size = get_vm_area_size(area);
>          unsigned long array_size;
> @@ -2928,7 +2927,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>          unsigned int page_order;
>
>          array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
> -        gfp_mask |= __GFP_NOWARN;
> +
>          if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
>                  gfp_mask |= __GFP_HIGHMEM;
>
> @@ -2941,7 +2940,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>          }
>
>          if (!area->pages) {
> -                warn_alloc(orig_gfp_mask, NULL,
> +                warn_alloc(gfp_mask, NULL,
>                          "vmalloc error: size %lu, failed to allocated page array size %lu",
>                          nr_small_pages * PAGE_SIZE, array_size);
>                  free_vm_area(area);
> @@ -2951,8 +2950,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>          set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
>          page_order = vm_area_page_order(area);
>
> -        area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
> -                page_order, nr_small_pages, area->pages);
> +        area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
> +                node, page_order, nr_small_pages, area->pages);
>
>          atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
>
> @@ -2961,7 +2960,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>           * allocation request, free them via __vfree() if any.
>           */
>          if (area->nr_pages != nr_small_pages) {
> -                warn_alloc(orig_gfp_mask, NULL,
> +                warn_alloc(gfp_mask, NULL,
>                          "vmalloc error: size %lu, page order %u, failed to allocate pages",
>                          area->nr_pages * PAGE_SIZE, page_order);
>                  goto fail;
> @@ -2969,7 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>
>          if (vmap_pages_range(addr, addr + size, prot, area->pages,
>                          page_shift) < 0) {
> -                warn_alloc(orig_gfp_mask, NULL,
> +                warn_alloc(gfp_mask, NULL,
>                          "vmalloc error: size %lu, failed to map pages",
>                          area->nr_pages * PAGE_SIZE);
>                  goto fail;
> --
> 2.17.1
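For reference, the reason an unconditional __GFP_NOWARN hides these reports is
the early bail-out at the top of warn_alloc(). Below is a paraphrased sketch,
not the verbatim mm/page_alloc.c body, which additionally rate-limits and dumps
the allocation state:

<snip>
/*
 * Paraphrased sketch of the gate at the top of warn_alloc().  Any mask
 * carrying __GFP_NOWARN makes it return immediately, which is why ORing
 * __GFP_NOWARN into gfp_mask on entry to __vmalloc_area_node() suppressed
 * every "vmalloc error: ..." report issued further down that function.
 */
void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
{
        if (gfp_mask & __GFP_NOWARN)    /* caller asked for silence */
                return;

        /* ... ratelimit check, pr_warn() of fmt/args, dump of alloc state ... */
}
<snip>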
> [Cc Vasily]
>
> On Wed 03-11-21 21:07:03, Uladzislau Rezki wrote:
> > That extra variable has been introduced just for keeping the originally
> > passed gfp_mask, because it is updated with __GFP_NOWARN on entry, thus
> > error handling messages were broken.
>
> I am not sure what you mean by the "error handling messages were broken"
> part.
>
We discussed it briefly in another thread :) __GFP_NOWARN was added on entry
(unconditionally), which leads to all of our internal error messages being
ignored by warn_alloc(). I have checked linux-next and saw that Vasily sent
a patch fixing it:

<snip>
Author: Vasily Averin <vvs@virtuozzo.com>
Date:   Thu Oct 21 15:07:26 2021 +1100

    mm/vmalloc: repair warn_alloc()s in __vmalloc_area_node()

    Commit f255935b9767 ("mm: cleanup the gfp_mask handling in
    __vmalloc_area_node") added __GFP_NOWARN to gfp_mask unconditionally
    however it disabled all output inside warn_alloc() call. This patch
    saves original gfp_mask and provides it to all warn_alloc() calls.
<snip>

> It is true that the current Linus tree has broken allocation failure
> reporting, but that is not the fault of orig_gfp_mask. In fact, the patch
> which fixes that, "mm/vmalloc: repair warn_alloc()s in
> __vmalloc_area_node()", currently in the akpm tree, is the one adding the
> additional mask.
>
> > Instead we can keep the original gfp_mask without modifying it and add
> > an extra __GFP_NOWARN flag together with gfp_mask as a parameter to
> > the vm_area_alloc_pages() function. It will make it less confusing.
>
> I would tend to agree that this is a better approach. There is already a
> nested_gfp mask and one more doesn't add to the readability.
>
Agreed, that is why I decided to send a patch: I find that extra gfp variable
odd and confusing. I paid attention to it during our discussion about
__GFP_NOFAIL, but in my tree it was not fixed at all, and after checking
linux-next I saw the fix.

> Maybe we should just drop the above patch and go with the one which
> doesn't introduce the intermediate step and an additional gfp mask.
>
That we can do if everyone agrees on it. Thanks!

--
Vlad Rezki
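To make the difference between the two approaches concrete, here is a
condensed, illustrative comparison. The function names and trimmed argument
lists are made up for the sake of the comparison; the real code is the
__vmalloc_area_node() diff below and Vasily's patch quoted above:

<snip>
/* akpm-tree fix: keep a copy of the caller's mask for reporting. */
static unsigned int area_pages_with_copy(struct vm_struct *area, gfp_t gfp_mask,
                                         int node, unsigned int page_order,
                                         unsigned int nr_small_pages)
{
        const gfp_t orig_gfp_mask = gfp_mask;   /* used only by warn_alloc() */

        gfp_mask |= __GFP_NOWARN;               /* keep the page allocator quiet */
        area->nr_pages = vm_area_alloc_pages(gfp_mask, node, page_order,
                                             nr_small_pages, area->pages);
        if (area->nr_pages != nr_small_pages)
                warn_alloc(orig_gfp_mask, NULL,
                           "vmalloc error: size %lu, failed to allocate pages",
                           nr_small_pages * PAGE_SIZE);
        return area->nr_pages;
}

/* This patch: gfp_mask is never modified; __GFP_NOWARN is ORed in at the call. */
static unsigned int area_pages_no_copy(struct vm_struct *area, gfp_t gfp_mask,
                                       int node, unsigned int page_order,
                                       unsigned int nr_small_pages)
{
        area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN, node,
                                             page_order, nr_small_pages,
                                             area->pages);
        if (area->nr_pages != nr_small_pages)
                warn_alloc(gfp_mask, NULL,
                           "vmalloc error: size %lu, failed to allocate pages",
                           nr_small_pages * PAGE_SIZE);
        return area->nr_pages;
}
<snip>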
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..3b549ff5c95e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2920,7 +2920,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
                  int node)
 {
         const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
-        const gfp_t orig_gfp_mask = gfp_mask;
         unsigned long addr = (unsigned long)area->addr;
         unsigned long size = get_vm_area_size(area);
         unsigned long array_size;
@@ -2928,7 +2927,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
         unsigned int page_order;
 
         array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
-        gfp_mask |= __GFP_NOWARN;
+
         if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
                 gfp_mask |= __GFP_HIGHMEM;
 
@@ -2941,7 +2940,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
         }
 
         if (!area->pages) {
-                warn_alloc(orig_gfp_mask, NULL,
+                warn_alloc(gfp_mask, NULL,
                         "vmalloc error: size %lu, failed to allocated page array size %lu",
                         nr_small_pages * PAGE_SIZE, array_size);
                 free_vm_area(area);
@@ -2951,8 +2950,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
         set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
         page_order = vm_area_page_order(area);
 
-        area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
-                page_order, nr_small_pages, area->pages);
+        area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
+                node, page_order, nr_small_pages, area->pages);
 
         atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
@@ -2961,7 +2960,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
          * allocation request, free them via __vfree() if any.
          */
         if (area->nr_pages != nr_small_pages) {
-                warn_alloc(orig_gfp_mask, NULL,
+                warn_alloc(gfp_mask, NULL,
                         "vmalloc error: size %lu, page order %u, failed to allocate pages",
                         area->nr_pages * PAGE_SIZE, page_order);
                 goto fail;
@@ -2969,7 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 
         if (vmap_pages_range(addr, addr + size, prot, area->pages,
                         page_shift) < 0) {
-                warn_alloc(orig_gfp_mask, NULL,
+                warn_alloc(gfp_mask, NULL,
                         "vmalloc error: size %lu, failed to map pages",
                         area->nr_pages * PAGE_SIZE);
                 goto fail;
That extra variable has been introduced just for keeping the originally
passed gfp_mask, because it is updated with __GFP_NOWARN on entry, thus
error handling messages were broken.

Instead we can keep the original gfp_mask without modifying it and add
an extra __GFP_NOWARN flag together with gfp_mask as a parameter to
the vm_area_alloc_pages() function. It will make it less confusing.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)
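As a caller-level illustration (hypothetical callers, not part of the patch):
with either fix applied, the gfp_mask a vmalloc user passes is what reaches
warn_alloc(), so whether a failure gets reported is again the caller's choice:

<snip>
/* Illustrative only: shows the caller-visible effect of the change. */
static void *noisy_buf(void)
{
        /* no __GFP_NOWARN here: an allocation failure is reported */
        return __vmalloc(SZ_1M, GFP_KERNEL);
}

static void *quiet_buf(void)
{
        /* caller opts out explicitly: the failure report stays suppressed */
        return __vmalloc(SZ_1M, GFP_KERNEL | __GFP_NOWARN);
}
<snip>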