Message ID | 20210521111033.2243-1-urezki@gmail.com (mailing list archive)
---|---
State | New, archived
Series | mm/vmalloc: Fallback to a single page allocator
On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> +static inline unsigned int
> +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> +	unsigned long nr_small_pages, struct page **pages)

(at least) two tabs here, please, otherwise the argument list is at
the same indentation as the code which trips up my parser.  some people
like to match the opening bracket, but that always feels like more work
than it's worth.  fwiw, i'd format it like this:

static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
		unsigned int order, unsigned long nr_pages, struct page **pages)
{
	...

(yes, i renamed some of the variables there; overly long variable names
are painful)

The rest of the patch looks good.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
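To make the indentation point concrete: with a single tab of continuation indent, the wrapped argument list starts at the same column as the first statement of the function body, while two tabs keep it visually distinct. The snippet below is illustrative only; the plain C types and the `alloc_demo_*` names are stand-ins invented for this example, not the kernel declarations being reviewed.

```c
/* Illustrative only: generic types and made-up names stand in for the
 * kernel declarations discussed above. */

/* One tab of continuation indent: the wrapped arguments land on the
 * same column as the function body, which is the reviewer's objection. */
static unsigned int alloc_demo_one_tab(int gfp, int nid, unsigned int order,
	unsigned long nr_pages, void **pages)
{
	unsigned int nr_allocated = 0;	/* body starts at the same column */

	return nr_allocated;
}

/* Two tabs of continuation indent: the argument list stays clearly
 * separate from the code that follows. */
static unsigned int alloc_demo_two_tabs(int gfp, int nid, unsigned int order,
		unsigned long nr_pages, void **pages)
{
	unsigned int nr_allocated = 0;

	return nr_allocated;
}
```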
> On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> > +static inline unsigned int
> > +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> > +	unsigned long nr_small_pages, struct page **pages)
>
> (at least) two tabs here, please, otherwise the argument list is at
> the same indentation as the code which trips up my parser.  some people
> like to match the opening bracket, but that always feels like more work
> than it's worth.  fwiw, i'd format it like this:
>
> static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
> 		unsigned int order, unsigned long nr_pages, struct page **pages)
> {
> 	...
>
No problem. Will fix it.

> (yes, i renamed some of the variables there; overly long variable names
> are painful)
>
> The rest of the patch looks good.
>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Thank you!

I will re-spin the patch and send a v2.

--
Vlad Rezki
On Fri, May 21, 2021 at 02:55:09PM +0200, Uladzislau Rezki wrote:
> > On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> > > +static inline unsigned int
> > > +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> > > +	unsigned long nr_small_pages, struct page **pages)
> >
> > (at least) two tabs here, please, otherwise the argument list is at
> > the same indentation as the code which trips up my parser.  some people
> > like to match the opening bracket, but that always feels like more work
> > than it's worth.  fwiw, i'd format it like this:
> >
> > static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
> > 		unsigned int order, unsigned long nr_pages, struct page **pages)
> > {
> > 	...
> >
> No problem. Will fix it.
>
> > (yes, i renamed some of the variables there; overly long variable names
> > are painful)
> >
> > The rest of the patch looks good.
> >
> > Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Thank you!
>
> I will re-spin the patch and send a v2.
>

From 6537bc97b5550f17b0813caf02ce0ec1865fa94e Mon Sep 17 00:00:00 2001
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Date: Thu, 20 May 2021 14:13:23 +0200
Subject: [PATCH v2] mm/vmalloc: Fallback to a single page allocator

Currently, for order-0 pages we use the bulk-page allocator to get a set
of pages. However, it may not allocate all of the requested pages. In
that case we should fall back to the single-page allocator to get the
missing pages, because it is more permissive (direct reclaim, etc.).

Introduce a vm_area_alloc_pages() function where the described logic is
implemented.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 81 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 29 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b2a0cbfa37c1..7765af7b1e9c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2756,6 +2756,54 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
 EXPORT_SYMBOL_GPL(vmap_pfn);
 #endif /* CONFIG_VMAP_PFN */
 
+static inline unsigned int
+vm_area_alloc_pages(gfp_t gfp, int nid,
+		unsigned int order, unsigned long nr_pages, struct page **pages)
+{
+	unsigned int nr_allocated = 0;
+
+	/*
+	 * For order-0 pages we make use of bulk allocator, if
+	 * the page array is partly or not at all populated due
+	 * to fails, fallback to a single page allocator that is
+	 * more permissive.
+	 */
+	if (!order)
+		nr_allocated = alloc_pages_bulk_array_node(
+			gfp, nid, nr_pages, pages);
+	else
+		/*
+		 * Compound pages required for remap_vmalloc_page if
+		 * high-order pages.
+		 */
+		gfp |= __GFP_COMP;
+
+	/* High-order pages or fallback path if "bulk" fails. */
+	while (nr_allocated < nr_pages) {
+		struct page *page;
+		int i;
+
+		page = alloc_pages_node(nid, gfp, order);
+		if (unlikely(!page))
+			break;
+
+		/*
+		 * Careful, we allocate and map page-order pages, but
+		 * tracking is done per PAGE_SIZE page so as to keep the
+		 * vm_struct APIs independent of the physical/mapped size.
+		 */
+		for (i = 0; i < (1U << order); i++)
+			pages[nr_allocated + i] = page + i;
+
+		if (gfpflags_allow_blocking(gfp))
+			cond_resched();
+
+		nr_allocated += 1U << order;
+	}
+
+	return nr_allocated;
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -2789,37 +2837,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		return NULL;
 	}
 
-	area->nr_pages = 0;
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	if (!page_order) {
-		area->nr_pages = alloc_pages_bulk_array_node(
-			gfp_mask, node, nr_small_pages, area->pages);
-	} else {
-		/*
-		 * Careful, we allocate and map page_order pages, but tracking is done
-		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
-		 * the physical/mapped size.
-		 */
-		while (area->nr_pages < nr_small_pages) {
-			struct page *page;
-			int i;
-
-			/* Compound pages required for remap_vmalloc_page */
-			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
-			if (unlikely(!page))
-				break;
-
-			for (i = 0; i < (1U << page_order); i++)
-				area->pages[area->nr_pages + i] = page + i;
-
-			if (gfpflags_allow_blocking(gfp_mask))
-				cond_resched();
-
-			area->nr_pages += 1U << page_order;
-		}
-	}
+	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
+		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
@@ -2835,7 +2857,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
+	if (vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift) < 0) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc size %lu allocation failure: "
 			"failed to map pages",
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
> Looks good,
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
>
Thanks!
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b2a0cbfa37c1..4d9c422124d3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2756,6 +2756,54 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
 EXPORT_SYMBOL_GPL(vmap_pfn);
 #endif /* CONFIG_VMAP_PFN */
 
+static inline unsigned int
+vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
+	unsigned long nr_small_pages, struct page **pages)
+{
+	unsigned int nr_allocated = 0;
+
+	/*
+	 * For order-0 pages we make use of bulk allocator, if
+	 * the page array is partly or not at all populated due
+	 * to fails, fallback to a single page allocator that is
+	 * more permissive.
+	 */
+	if (!page_order)
+		nr_allocated = alloc_pages_bulk_array_node(
+			gfp, nid, nr_small_pages, pages);
+	else
+		/*
+		 * Compound pages required for remap_vmalloc_page if
+		 * high-order pages.
+		 */
+		gfp |= __GFP_COMP;
+
+	/* High-order pages or fallback path if "bulk" fails. */
+	while (nr_allocated < nr_small_pages) {
+		struct page *page;
+		int i;
+
+		page = alloc_pages_node(nid, gfp, page_order);
+		if (unlikely(!page))
+			break;
+
+		/*
+		 * Careful, we allocate and map page_order pages, but
+		 * tracking is done per PAGE_SIZE page so as to keep the
+		 * vm_struct APIs independent of the physical/mapped size.
+		 */
+		for (i = 0; i < (1U << page_order); i++)
+			pages[nr_allocated + i] = page + i;
+
+		if (gfpflags_allow_blocking(gfp))
+			cond_resched();
+
+		nr_allocated += 1U << page_order;
+	}
+
+	return nr_allocated;
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -2789,37 +2837,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		return NULL;
 	}
 
-	area->nr_pages = 0;
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	if (!page_order) {
-		area->nr_pages = alloc_pages_bulk_array_node(
-			gfp_mask, node, nr_small_pages, area->pages);
-	} else {
-		/*
-		 * Careful, we allocate and map page_order pages, but tracking is done
-		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
-		 * the physical/mapped size.
-		 */
-		while (area->nr_pages < nr_small_pages) {
-			struct page *page;
-			int i;
-
-			/* Compound pages required for remap_vmalloc_page */
-			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
-			if (unlikely(!page))
-				break;
-
-			for (i = 0; i < (1U << page_order); i++)
-				area->pages[area->nr_pages + i] = page + i;
-
-			if (gfpflags_allow_blocking(gfp_mask))
-				cond_resched();
-
-			area->nr_pages += 1U << page_order;
-		}
-	}
+	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
+		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
@@ -2835,7 +2857,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
+	if (vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift) < 0) {
 		warn_alloc(gfp_mask, NULL,
 			"vmalloc size %lu allocation failure: "
 			"failed to map pages",
Currently, for order-0 pages we use the bulk-page allocator to get a set
of pages. However, it may not allocate all of the requested pages. In
that case we should fall back to the single-page allocator to get the
missing pages, because it is more permissive (direct reclaim, etc.).

Introduce a vm_area_alloc_pages() function where the described logic is
implemented.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 81 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 29 deletions(-)
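As a rough sketch of the control flow the commit message describes — try the bulk allocator first, then fill in whatever it missed one page at a time — here is a small, self-contained userspace analogue. mock_bulk_alloc() and mock_area_alloc_pages() are invented for this illustration and only mimic the shape of alloc_pages_bulk_array_node()/alloc_pages_node(); they are not the kernel APIs.

```c
/*
 * Userspace mock of the bulk-then-fallback flow. malloc() stands in for
 * page allocation; the artificial "limit" makes the bulk step populate
 * the array only partly, so the fallback loop has work to do.
 */
#include <stdio.h>
#include <stdlib.h>

/* Pretend bulk allocator: fills at most `limit` slots, then gives up. */
static unsigned int mock_bulk_alloc(void **pages, unsigned long nr_pages,
		unsigned long limit)
{
	unsigned int nr_allocated = 0;

	while (nr_allocated < nr_pages && nr_allocated < limit) {
		pages[nr_allocated] = malloc(4096);
		if (!pages[nr_allocated])
			break;
		nr_allocated++;
	}
	return nr_allocated;
}

static unsigned int mock_area_alloc_pages(unsigned long nr_pages, void **pages)
{
	/* Bulk step first; here it is capped so it fills only half. */
	unsigned int nr_allocated = mock_bulk_alloc(pages, nr_pages, nr_pages / 2);

	/* Fallback: allocate the missing entries one at a time. */
	while (nr_allocated < nr_pages) {
		void *page = malloc(4096);

		if (!page)
			break;
		pages[nr_allocated++] = page;
	}
	return nr_allocated;
}

int main(void)
{
	void *pages[8];
	unsigned int i, got;

	got = mock_area_alloc_pages(8, pages);
	printf("allocated %u of 8 pages\n", got);

	for (i = 0; i < got; i++)
		free(pages[i]);
	return 0;
}
```

The property the fallback relies on is that the bulk step reports how many array slots it actually populated, so the single-page loop can resume from exactly that index.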