Message ID: 20220531150823.1004101-1-patrick.wang.shcn@gmail.com (mailing list archive)
State: New
Series: mm: kmemleak: check boundary of objects allocated with physical address when scan
On Tue, May 31, 2022 at 11:08:23PM +0800, Patrick Wang wrote:
> @@ -1132,8 +1135,13 @@ EXPORT_SYMBOL(kmemleak_no_scan);
>  void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
>  			       gfp_t gfp)
>  {
> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
> -		kmemleak_alloc(__va(phys), size, min_count, gfp);
> +	pr_debug("%s(0x%p, %zu, %d)\n", __func__, __va(phys), size, min_count);

I'd print just phys here since that's the function argument.

> +	if (kmemleak_enabled && (unsigned long)__va(phys) >= PAGE_OFFSET &&
> +	    !IS_ERR(__va(phys)))
> +		/* create object with OBJECT_PHYS flag */
> +		create_object((unsigned long)__va(phys), size, min_count,
> +			      gfp, true);

Do we still need to check for __va(phys) >= PAGE_OFFSET? Also I don't
think IS_ERR(__va(phys)) makes sense, we can't store an error in a
physical address. The kmemleak_alloc_phys() function is only called on
successful allocation, so shouldn't bother with error codes.

> @@ -1436,6 +1441,13 @@ static void kmemleak_scan(void)
>  			dump_object_info(object);
>  		}
>  #endif
> +
> +		/* outside lowmem, make it black */

Maybe a bit more verbose:

		/* ignore objects outside lowmem (paint them black) */

> +		if (object->flags & OBJECT_PHYS)
> +			if (PHYS_PFN(__pa((void *)object->pointer)) < min_low_pfn ||
> +			    PHYS_PFN(__pa((void *)object->pointer)) >= max_low_pfn)
> +				__paint_it(object, KMEMLEAK_BLACK);

I'd skip the checks if the object is OBJECT_NO_SCAN (side-effect of
__paint_it()) so that the next scan won't have to go through the __pa()
checks again. It's also probably more correct to check the upper object
boundary. Something like:

		if ((object->flags & OBJECT_PHYS) &&
		    !(object->flags & OBJECT_NO_SCAN)) {
			unsigned long phys = __pa((void *)object->pointer);
			if (PHYS_PFN(phys) < min_low_pfn ||
			    PHYS_PFN(phys + object->size) >= max_low_pfn)
				__paint_it(object, KMEMLEAK_BLACK);
		}

Thanks.
On 2022/6/1 00:29, Catalin Marinas wrote:
> On Tue, May 31, 2022 at 11:08:23PM +0800, Patrick Wang wrote:
>> @@ -1132,8 +1135,13 @@ EXPORT_SYMBOL(kmemleak_no_scan);
>>  void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
>>  			       gfp_t gfp)
>>  {
>> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
>> -		kmemleak_alloc(__va(phys), size, min_count, gfp);
>> +	pr_debug("%s(0x%p, %zu, %d)\n", __func__, __va(phys), size, min_count);
>
> I'd print just phys here since that's the function argument.

Will do.

>> +	if (kmemleak_enabled && (unsigned long)__va(phys) >= PAGE_OFFSET &&
>> +	    !IS_ERR(__va(phys)))
>> +		/* create object with OBJECT_PHYS flag */
>> +		create_object((unsigned long)__va(phys), size, min_count,
>> +			      gfp, true);
>
> Do we still need to check for __va(phys) >= PAGE_OFFSET? Also I don't
> think IS_ERR(__va(phys)) makes sense, we can't store an error in a
> physical address. The kmemleak_alloc_phys() function is only called on
> successful allocation, so shouldn't bother with error codes.

In this commit:
972fa3a7c17c ("mm: kmemleak: alloc gray object for reserved
region with direct map")

the kmemleak_alloc_phys() function is called directly by passing a
physical address from the devicetree. So I'm concerned whether the
__va() => __pa() conversion can always get the phys back. I thought the
check on __va(phys) might help, but it probably doesn't work, and using
IS_ERR is indeed inappropriate.

We might have to store phys in the object and convert it via __va() for
normal use, like:

#define object_pointer(obj) \
	(obj->flags & OBJECT_PHYS ? (unsigned long)__va((void *)obj->pointer) \
				  : obj->pointer)

>> @@ -1436,6 +1441,13 @@ static void kmemleak_scan(void)
>>  			dump_object_info(object);
>>  		}
>>  #endif
>> +
>> +		/* outside lowmem, make it black */
>
> Maybe a bit more verbose:
>
> 	/* ignore objects outside lowmem (paint them black) */

Will do.

>> +		if (object->flags & OBJECT_PHYS)
>> +			if (PHYS_PFN(__pa((void *)object->pointer)) < min_low_pfn ||
>> +			    PHYS_PFN(__pa((void *)object->pointer)) >= max_low_pfn)
>> +				__paint_it(object, KMEMLEAK_BLACK);
>
> I'd skip the checks if the object is OBJECT_NO_SCAN (side-effect of
> __paint_it()) so that the next scan won't have to go through the __pa()
> checks again. It's also probably more correct to check the upper object
> boundary. Something like:
>
> 	if ((object->flags & OBJECT_PHYS) &&
> 	    !(object->flags & OBJECT_NO_SCAN)) {
> 		unsigned long phys = __pa((void *)object->pointer);
> 		if (PHYS_PFN(phys) < min_low_pfn ||
> 		    PHYS_PFN(phys + object->size) >= max_low_pfn)
> 			__paint_it(object, KMEMLEAK_BLACK);
> 	}

Right, much more thorough. Will do.

Thanks,
Patrick
On Wed, Jun 01, 2022 at 06:24:34PM +0800, Patrick Wang wrote:
> On 2022/6/1 00:29, Catalin Marinas wrote:
> > On Tue, May 31, 2022 at 11:08:23PM +0800, Patrick Wang wrote:
> > > +	if (kmemleak_enabled && (unsigned long)__va(phys) >= PAGE_OFFSET &&
> > > +	    !IS_ERR(__va(phys)))
> > > +		/* create object with OBJECT_PHYS flag */
> > > +		create_object((unsigned long)__va(phys), size, min_count,
> > > +			      gfp, true);
> >
> > Do we still need to check for __va(phys) >= PAGE_OFFSET? Also I don't
> > think IS_ERR(__va(phys)) makes sense, we can't store an error in a
> > physical address. The kmemleak_alloc_phys() function is only called on
> > successful allocation, so shouldn't bother with error codes.
>
> In this commit:
> 972fa3a7c17c ("mm: kmemleak: alloc gray object for reserved
> region with direct map")
>
> The kmemleak_alloc_phys() function is called directly by passing
> physical address from devicetree. So I'm concerned that could
> __va() => __pa() convert always get the phys back? I thought
> check for __va(phys) might help, but it probably dosen't work
> and using IS_ERR is indeed inappropriate.
>
> We might have to store phys in object and convert it via __va()
> for normal use like:
>
> #define object_pointer(obj) \
> 	(obj->flags & OBJECT_PHYS ? (unsigned long)__va((void *)obj->pointer) \
> 				  : obj->pointer)

In the commit you mentioned, the kmemleak callback is skipped if the
memory is marked no-map.

But you have a point with the va->pa conversion. On 32-bit
architectures, the __va() is no longer valid if the pfn is above
max_low_pfn. So whatever we add to the rbtree may be entirely bogus,
and we can't guarantee that the va->pa conversion back is correct.

Storing the phys address in object->pointer only solves the conversion
but it doesn't solve the rbtree problem (VA and PA values may overlap,
we can't just store the physical address either). And we use the rbtree
for searching objects on freeing as well.

Given that all the kmemleak_alloc_phys() calls always pass min_count=0
(we should probably get rid of the extra arguments), we don't expect
them to leak, so there's no point in adding them to the rbtree. We can
instead add a new object_phys_tree_root to store these objects by the
physical address for when we need to search (kmemleak_free_part_phys()).
This would probably look simpler than recording the callbacks and
replaying them.

Wherever we use object_tree_root we should check for OBJECT_PHYS and use
object_phys_tree_root instead. There aren't many places.
On Thu, Jun 2, 2022 at 12:13 AM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Wed, Jun 01, 2022 at 06:24:34PM +0800, Patrick Wang wrote:
> > On 2022/6/1 00:29, Catalin Marinas wrote:
> > > On Tue, May 31, 2022 at 11:08:23PM +0800, Patrick Wang wrote:
> > > > +	if (kmemleak_enabled && (unsigned long)__va(phys) >= PAGE_OFFSET &&
> > > > +	    !IS_ERR(__va(phys)))
> > > > +		/* create object with OBJECT_PHYS flag */
> > > > +		create_object((unsigned long)__va(phys), size, min_count,
> > > > +			      gfp, true);
> > >
> > > Do we still need to check for __va(phys) >= PAGE_OFFSET? Also I don't
> > > think IS_ERR(__va(phys)) makes sense, we can't store an error in a
> > > physical address. The kmemleak_alloc_phys() function is only called on
> > > successful allocation, so shouldn't bother with error codes.
> >
> > In this commit:
> > 972fa3a7c17c ("mm: kmemleak: alloc gray object for reserved
> > region with direct map")
> >
> > The kmemleak_alloc_phys() function is called directly by passing
> > physical address from devicetree. So I'm concerned that could
> > __va() => __pa() convert always get the phys back? I thought
> > check for __va(phys) might help, but it probably dosen't work
> > and using IS_ERR is indeed inappropriate.
> >
> > We might have to store phys in object and convert it via __va()
> > for normal use like:
> >
> > #define object_pointer(obj) \
> > 	(obj->flags & OBJECT_PHYS ? (unsigned long)__va((void *)obj->pointer) \
> > 				  : obj->pointer)
>
> In the commit you mentioned, the kmemleak callback is skipped if the
> memory is marked no-map.
>
> But you have a point with the va->pa conversion. On 32-bit
> architectures, the __va() is no longer valid if the pfn is above
> max_low_pfn. So whatever we add to the rbtree may be entirely bogus,
> and we can't guarantee that the va->pa conversion back is correct.
>
> Storing the phys address in object->pointer only solves the conversion
> but it doesn't solve the rbtree problem (VA and PA values may overlap,
> we can't just store the physical address either). And we use the rbtree
> for searching objects on freeing as well.
>
> Given that all the kmemleak_alloc_phys() calls always pass min_count=0
> (we should probably get rid of the extra arguments), we don't expect
> them to leak, so there's no point in adding them to the rbtree. We can
> instead add a new object_phys_tree_root to store these objects by the
> physical address for when we need to search (kmemleak_free_part_phys()).
> This would probably look simpler than recording the callbacks and
> replaying them.
>
> Wherever we use object_tree_root we should check for OBJECT_PHYS and use
> object_phys_tree_root instead. There aren't many places.

Considering the usage of objects with OBJECT_PHYS, storing the phys
address and giving them their own rbtree should solve the phys problem.
I will post a v2 ASAP.

Thanks,
Patrick
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index a182f5ddaf68..1e2f90db9850 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -172,6 +172,8 @@ struct kmemleak_object {
 #define OBJECT_NO_SCAN		(1 << 2)
 /* flag set to fully scan the object when scan_area allocation failed */
 #define OBJECT_FULL_SCAN	(1 << 3)
+/* flag set for object allocated with physical address */
+#define OBJECT_PHYS		(1 << 4)
 
 #define HEX_PREFIX		"    "
 /* number of bytes to print per line; must be 16 or 32 */
@@ -575,7 +577,8 @@ static int __save_stack_trace(unsigned long *trace)
  * memory block and add it to the object_list and object_tree_root.
  */
 static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
-					     int min_count, gfp_t gfp)
+					     int min_count, gfp_t gfp,
+					     bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object, *parent;
@@ -595,7 +598,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	INIT_HLIST_HEAD(&object->area_list);
 	raw_spin_lock_init(&object->lock);
 	atomic_set(&object->use_count, 1);
-	object->flags = OBJECT_ALLOCATED;
+	object->flags = OBJECT_ALLOCATED | (is_phys ? OBJECT_PHYS : 0);
 	object->pointer = ptr;
 	object->size = kfence_ksize((void *)ptr) ?: size;
 	object->excess_ref = 0;
@@ -729,10 +732,10 @@ static void delete_object_part(unsigned long ptr, size_t size)
 	end = object->pointer + object->size;
 	if (ptr > start)
 		create_object(start, ptr - start, object->min_count,
-			      GFP_KERNEL);
+			      GFP_KERNEL, object->flags & OBJECT_PHYS);
 	if (ptr + size < end)
 		create_object(ptr + size, end - ptr - size, object->min_count,
-			      GFP_KERNEL);
+			      GFP_KERNEL, object->flags & OBJECT_PHYS);
 
 	__delete_object(object);
 }
@@ -904,7 +907,7 @@ void __ref kmemleak_alloc(const void *ptr, size_t size, int min_count,
 	pr_debug("%s(0x%p, %zu, %d)\n", __func__, ptr, size, min_count);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		create_object((unsigned long)ptr, size, min_count, gfp);
+		create_object((unsigned long)ptr, size, min_count, gfp, false);
 }
 EXPORT_SYMBOL_GPL(kmemleak_alloc);
 
@@ -931,7 +934,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
 		for_each_possible_cpu(cpu)
 			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
-				      size, 0, gfp);
+				      size, 0, gfp, false);
 }
 EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
 
@@ -953,7 +956,7 @@ void __ref kmemleak_vmalloc(const struct vm_struct *area, size_t size, gfp_t gfp
 	 * the virtual address of the vmalloc'ed block.
 	 */
 	if (kmemleak_enabled) {
-		create_object((unsigned long)area->addr, size, 2, gfp);
+		create_object((unsigned long)area->addr, size, 2, gfp, false);
 		object_set_excess_ref((unsigned long)area,
 				      (unsigned long)area->addr);
 	}
@@ -1132,8 +1135,13 @@ EXPORT_SYMBOL(kmemleak_no_scan);
 void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
 			       gfp_t gfp)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_alloc(__va(phys), size, min_count, gfp);
+	pr_debug("%s(0x%p, %zu, %d)\n", __func__, __va(phys), size, min_count);
+
+	if (kmemleak_enabled && (unsigned long)__va(phys) >= PAGE_OFFSET &&
+	    !IS_ERR(__va(phys)))
+		/* create object with OBJECT_PHYS flag */
+		create_object((unsigned long)__va(phys), size, min_count,
+			      gfp, true);
 }
 EXPORT_SYMBOL(kmemleak_alloc_phys);
 
@@ -1146,8 +1154,7 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
  */
 void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_free_part(__va(phys), size);
+	kmemleak_free_part(__va(phys), size);
 }
 EXPORT_SYMBOL(kmemleak_free_part_phys);
 
@@ -1158,8 +1165,7 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
  */
 void __ref kmemleak_not_leak_phys(phys_addr_t phys)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_not_leak(__va(phys));
+	kmemleak_not_leak(__va(phys));
 }
 EXPORT_SYMBOL(kmemleak_not_leak_phys);
 
@@ -1170,8 +1176,7 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
  */
 void __ref kmemleak_ignore_phys(phys_addr_t phys)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_ignore(__va(phys));
+	kmemleak_ignore(__va(phys));
 }
 EXPORT_SYMBOL(kmemleak_ignore_phys);
 
@@ -1436,6 +1441,13 @@ static void kmemleak_scan(void)
 			dump_object_info(object);
 		}
 #endif
+
+		/* outside lowmem, make it black */
+		if (object->flags & OBJECT_PHYS)
+			if (PHYS_PFN(__pa((void *)object->pointer)) < min_low_pfn ||
+			    PHYS_PFN(__pa((void *)object->pointer)) >= max_low_pfn)
+				__paint_it(object, KMEMLEAK_BLACK);
+
 		/* reset the reference count (whiten the object) */
 		object->count = 0;
 		if (color_gray(object) && get_object(object))
@@ -1966,14 +1978,14 @@ void __init kmemleak_init(void)
 
 	/* register the data/bss sections */
 	create_object((unsigned long)_sdata, _edata - _sdata,
-		      KMEMLEAK_GREY, GFP_ATOMIC);
+		      KMEMLEAK_GREY, GFP_ATOMIC, false);
 	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
-		      KMEMLEAK_GREY, GFP_ATOMIC);
+		      KMEMLEAK_GREY, GFP_ATOMIC, false);
 	/* only register .data..ro_after_init if not within .data */
 	if (&__start_ro_after_init < &_sdata || &__end_ro_after_init > &_edata)
 		create_object((unsigned long)__start_ro_after_init,
 			      __end_ro_after_init - __start_ro_after_init,
-			      KMEMLEAK_GREY, GFP_ATOMIC);
+			      KMEMLEAK_GREY, GFP_ATOMIC, false);
 }
 
 /*
The kmemleak_*_phys() interface uses "min_low_pfn" and "max_low_pfn" to
check the address. But on some architectures, kmemleak_*_phys() is
called before those two variables are initialized. Add the OBJECT_PHYS
flag for objects allocated with a physical address, and check the
boundary when scanning instead of in kmemleak_*_phys().

This commit will solve:
https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
---
 mm/kmemleak.c | 48 ++++++++++++++++++++++++++++++------------------
 1 file changed, 30 insertions(+), 18 deletions(-)