| Message ID | 20240619095555.85980-1-jgowans@amazon.com |
|---|---|
| State | New |
| Series | [v2] memblock: Move late alloc warning down to phys alloc |
From: Mike Rapoport (IBM) <rppt@kernel.org>

On Wed, 19 Jun 2024 11:55:55 +0200, James Gowans wrote:
> If a driver/subsystem tries to do an allocation after the memblock
> allocations have been freed and the memory handed to the buddy
> allocator, it will not actually be legal to use that allocation: the
> buddy allocator owns the memory. Currently this mis-use is handled by
> the memblock function which does allocations and returns virtual
> addresses by printing a warning and doing a kmalloc instead. However
> the physical allocation function does not do this check - callers of
> the physical alloc function are unprotected against mis-use.
>
> [...]

Applied to for-next branch of memblock.git tree, thanks!

[1/1] memblock: Move late alloc warning down to phys alloc
      commit: 94ff46de4a738e7916b68ab5cc0b0380729f02af

tree:   https://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock
branch: for-next

--
Sincerely yours,
Mike.
On Wed, Jun 19, 2024 at 11:55:55AM +0200, James Gowans wrote:
>If a driver/subsystem tries to do an allocation after the memblock
>allocations have been freed and the memory handed to the buddy
>allocator, it will not actually be legal to use that allocation: the
>buddy allocator owns the memory. Currently this mis-use is handled by
>the memblock function which does allocations and returns virtual
>addresses by printing a warning and doing a kmalloc instead. However
>the physical allocation function does not do this check - callers of
>the physical alloc function are unprotected against mis-use.
>
>Improve the error catching here by moving the check into the physical
>allocation function which is used by the virtual addr allocation
>function.
>
>Signed-off-by: James Gowans <jgowans@amazon.com>
>Cc: Mike Rapoport <rppt@kernel.org>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Alex Graf <graf@amazon.de>

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

>---
>
>Notes:
>    Changes since v1: https://lore.kernel.org/all/20240614133016.134150-1-jgowans@amazon.com/
>    - Move this late usage check before alignment check
>    - Replace memblocks with memblock
>
> mm/memblock.c | 18 +++++++++++-------
> 1 file changed, 11 insertions(+), 7 deletions(-)
>
>diff --git a/mm/memblock.c b/mm/memblock.c
>index d09136e040d3..dbb3d700247e 100644
>--- a/mm/memblock.c
>+++ b/mm/memblock.c
>@@ -1451,6 +1451,17 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> 	if (WARN_ONCE(nid == MAX_NUMNODES, "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n"))
> 		nid = NUMA_NO_NODE;
> 
>+	/*
>+	 * Detect any accidental use of these APIs after slab is ready, as at
>+	 * this moment memblock may be deinitialized already and its
>+	 * internal data may be destroyed (after execution of memblock_free_all)
>+	 */
>+	if (WARN_ON_ONCE(slab_is_available())) {
>+		void *vaddr = kzalloc_node(size, GFP_NOWAIT, nid);
>+
>+		return vaddr ? virt_to_phys(vaddr) : 0;
>+	}
>+
> 	if (!align) {
> 		/* Can't use WARNs this early in boot on powerpc */
> 		dump_stack();
>@@ -1576,13 +1587,6 @@ static void * __init memblock_alloc_internal(
> {
> 	phys_addr_t alloc;
> 
>-	/*
>-	 * Detect any accidental use of these APIs after slab is ready, as at
>-	 * this moment memblock may be deinitialized already and its
>-	 * internal data may be destroyed (after execution of memblock_free_all)
>-	 */
>-	if (WARN_ON_ONCE(slab_is_available()))
>-		return kzalloc_node(size, GFP_NOWAIT, nid);
> 
> 	if (max_addr > memblock.current_limit)
> 		max_addr = memblock.current_limit;
>-- 
>2.34.1
>
diff --git a/mm/memblock.c b/mm/memblock.c
index d09136e040d3..dbb3d700247e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1451,6 +1451,17 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 	if (WARN_ONCE(nid == MAX_NUMNODES, "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n"))
 		nid = NUMA_NO_NODE;
 
+	/*
+	 * Detect any accidental use of these APIs after slab is ready, as at
+	 * this moment memblock may be deinitialized already and its
+	 * internal data may be destroyed (after execution of memblock_free_all)
+	 */
+	if (WARN_ON_ONCE(slab_is_available())) {
+		void *vaddr = kzalloc_node(size, GFP_NOWAIT, nid);
+
+		return vaddr ? virt_to_phys(vaddr) : 0;
+	}
+
 	if (!align) {
 		/* Can't use WARNs this early in boot on powerpc */
 		dump_stack();
@@ -1576,13 +1587,6 @@ static void * __init memblock_alloc_internal(
 {
 	phys_addr_t alloc;
 
-	/*
-	 * Detect any accidental use of these APIs after slab is ready, as at
-	 * this moment memblock may be deinitialized already and its
-	 * internal data may be destroyed (after execution of memblock_free_all)
-	 */
-	if (WARN_ON_ONCE(slab_is_available()))
-		return kzalloc_node(size, GFP_NOWAIT, nid);
 
 	if (max_addr > memblock.current_limit)
 		max_addr = memblock.current_limit;
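To illustrate the effect of the change, here is a minimal, hypothetical late caller of the physical allocator (the function name and initcall below are invented for the example; only memblock_phys_alloc() and the fallback behaviour come from the patch). Initcalls run after memblock_free_all(), so slab_is_available() is already true and, with this patch applied, the request triggers the one-time warning and is served from kzalloc_node() instead of memory the buddy allocator now owns:

```c
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/printk.h>
#include <linux/sizes.h>

/* Hypothetical example only: a driver mistakenly using memblock this late. */
static int __init late_memblock_user_init(void)
{
	/*
	 * slab is available here, so the check added to
	 * memblock_alloc_range_nid() fires WARN_ON_ONCE() and the
	 * allocation is backed by kzalloc_node(); pa is the physical
	 * address of that slab memory, or 0 on failure.
	 */
	phys_addr_t pa = memblock_phys_alloc(SZ_4K, SZ_4K);

	if (!pa)
		return -ENOMEM;

	pr_info("late memblock_phys_alloc() returned %pa\n", &pa);
	return 0;
}
late_initcall(late_memblock_user_init);
```

Before this patch, the same call would have silently carved the range out of memory already handed to the buddy allocator; now both the physical and virtual entry points get the warning and the safe fallback.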
If a driver/subsystem tries to do an allocation after the memblock
allocations have been freed and the memory handed to the buddy
allocator, it will not actually be legal to use that allocation: the
buddy allocator owns the memory. Currently this mis-use is handled by
the memblock function which does allocations and returns virtual
addresses by printing a warning and doing a kmalloc instead. However
the physical allocation function does not do this check - callers of
the physical alloc function are unprotected against mis-use.

Improve the error catching here by moving the check into the physical
allocation function which is used by the virtual addr allocation
function.

Signed-off-by: James Gowans <jgowans@amazon.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alex Graf <graf@amazon.de>
---
Notes:
    Changes since v1: https://lore.kernel.org/all/20240614133016.134150-1-jgowans@amazon.com/
    - Move this late usage check before alignment check
    - Replace memblocks with memblock

 mm/memblock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)
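For completeness, a condensed sketch (not the verbatim kernel source; the current_limit clamp and the retry without min_addr are omitted) of why moving the check still covers the virtual-address path: memblock_alloc() and friends go through memblock_alloc_internal(), which delegates to memblock_alloc_range_nid() and converts the result back with phys_to_virt(), so a late call that lands in the kzalloc_node() fallback simply comes back as the physical address of that slab memory:

```c
/*
 * Condensed sketch of the post-patch call chain in mm/memblock.c,
 * simplified for illustration.
 */
static void * __init memblock_alloc_internal(phys_addr_t size, phys_addr_t align,
					     phys_addr_t min_addr, phys_addr_t max_addr,
					     int nid, bool exact_nid)
{
	phys_addr_t alloc;

	/*
	 * The slab_is_available() check now lives in
	 * memblock_alloc_range_nid(), so both this virtual-address path
	 * and the memblock_phys_alloc*() callers share the same
	 * WARN_ON_ONCE() + kzalloc_node() fallback.
	 */
	alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr,
					 nid, exact_nid);
	if (!alloc)
		return NULL;

	return phys_to_virt(alloc);
}
```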