Message ID | 20211007181918.136982-3-mike.kravetz@oracle.com (mailing list archive)
---|---
State | New
Series | hugetlb: add demote/split page functionality
On Thu, Oct 07, 2021 at 11:19:15AM -0700, Mike Kravetz wrote:
> Add new interface cma_pages_valid() which indicates if the specified
> pages are part of a CMA region. This interface will be used in a
> subsequent patch by hugetlb code.
>
> In order to keep the same amount of DEBUG information, a pr_debug() call
> was added to cma_pages_valid(). In the case where the page passed to
> cma_release is not in cma region, the debug message will be printed from
> cma_pages_valid as opposed to cma_release.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>

Reviewed-by: OScar Salvador <osalvador@suse.de>
On Fri, Oct 08, 2021 at 09:53:54AM +0200, Oscar Salvador wrote:
> On Thu, Oct 07, 2021 at 11:19:15AM -0700, Mike Kravetz wrote:
> > Add new interface cma_pages_valid() which indicates if the specified
> > pages are part of a CMA region. This interface will be used in a
> > subsequent patch by hugetlb code.
> >
> > In order to keep the same amount of DEBUG information, a pr_debug() call
> > was added to cma_pages_valid(). In the case where the page passed to
> > cma_release is not in cma region, the debug message will be printed from
> > cma_pages_valid as opposed to cma_release.
> >
> > Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> > Acked-by: David Hildenbrand <david@redhat.com>
>
> Reviewed-by: OScar Salvador <osalvador@suse.de>

Fat fingers: s/OScar/Oscar
```diff
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 53fd8c3cdbd0..bd801023504b 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -46,6 +46,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, unsigned long count, unsigned int align,
 			      bool no_warn);
+extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned long count);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/mm/cma.c b/mm/cma.c
index 995e15480937..11152c3fb23c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -524,6 +524,25 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
 	return page;
 }
 
+bool cma_pages_valid(struct cma *cma, const struct page *pages,
+		     unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pfn = page_to_pfn(pages);
+
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count) {
+		pr_debug("%s(page %p, count %lu)\n", __func__,
+			 (void *)pages, count);
+		return false;
+	}
+
+	return true;
+}
+
 /**
  * cma_release() - release allocated pages
  * @cma:   Contiguous memory region for which the allocation is performed.
@@ -539,16 +558,13 @@ bool cma_release(struct cma *cma, const struct page *pages,
 {
 	unsigned long pfn;
 
-	if (!cma || !pages)
+	if (!cma_pages_valid(cma, pages, count))
 		return false;
 
 	pr_debug("%s(page %p, count %lu)\n", __func__, (void *)pages, count);
 
 	pfn = page_to_pfn(pages);
 
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
 	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
 
 	free_contig_range(pfn, count);
```
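For illustration only, a minimal sketch (not part of this series) of the pattern the new helper enables for a caller such as the later hugetlb code: ask whether a page range actually belongs to a CMA area before deciding how to free it. The function my_free_range and its freeing policy are hypothetical; cma_pages_valid(), cma_release(), page_to_pfn() and free_contig_range() are the real kernel interfaces.

```c
/*
 * Hypothetical caller sketch -- not from this patch series.  It shows the
 * intended use of cma_pages_valid(): check whether the range starts inside
 * the CMA area, then either hand it back to CMA or fall back to the
 * generic contiguous-range free path.
 */
#include <linux/cma.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static void my_free_range(struct cma *cma, struct page *page,
			  unsigned long nr_pages)
{
	if (cma_pages_valid(cma, page, nr_pages)) {
		/* Range was allocated from the CMA area; return it there. */
		cma_release(cma, page, nr_pages);
		return;
	}

	/* Not CMA-backed: free it as an ordinary contiguous range. */
	free_contig_range(page_to_pfn(page), nr_pages);
}
```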