[v2,4/4] mm: lock slub page when listing objects

Message ID 20190912023111.219636-4-yuzhao@google.com (mailing list archive)
State New, archived
Series [v2,1/4] mm: correct mask size for slub page->objects

Commit Message

Yu Zhao Sept. 12, 2019, 2:31 a.m. UTC
Though I have no idea what the side effect of such a race would be,
apparently we want to prevent the free list from being changed
while debugging the objects.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 4 ++++
 1 file changed, 4 insertions(+)
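
For context, the region this patch wraps in slab_lock() builds a bitmap of
free objects with get_map() (added earlier in this series) by walking the
page's freelist, then walks all objects against that bitmap. A minimal
sketch of the freelist walk, with the helper name fill_free_map() invented
here for illustration rather than quoted from the series:

/*
 * Sketch only: mark every object currently on the page's freelist in a
 * bitmap so the caller can tell free objects from allocated ones.
 * get_freepointer() reads the next-free pointer that SLUB stores inside
 * each free object.
 */
static void fill_free_map(struct kmem_cache *s, struct page *page,
			  unsigned long *map)
{
	void *addr = page_address(page);
	void *p;

	bitmap_zero(map, page->objects);
	for (p = page->freelist; p; p = get_freepointer(s, p))
		set_bit(slab_index(p, s, addr), map);
}

A free that lands while this walk or the subsequent for_each_object()
pass is in flight is the interleaving the commit message worries about.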

Comments

Kirill A. Shutemov Sept. 12, 2019, 10:06 a.m. UTC | #1
On Wed, Sep 11, 2019 at 08:31:11PM -0600, Yu Zhao wrote:
> Though I have no idea what the side effect of such a race would be,
> apparently we want to prevent the free list from being changed
> while debugging the objects.

Codewise looks good to me. But commit message can be better.

Can we rephrase it to indicate that slab_lock is required to protect
page->objects?
Yu Zhao Sept. 12, 2019, 9:12 p.m. UTC | #2
On Thu, Sep 12, 2019 at 01:06:42PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 11, 2019 at 08:31:11PM -0600, Yu Zhao wrote:
> > Though I have no idea what the side effect of such a race would be,
> > apparently we want to prevent the free list from being changed
> > while debugging the objects.
> 
> Codewise looks good to me. But commit message can be better.
> 
> Can we rephrase it to indicate that slab_lock is required to protect
> page->objects?

Will do.
Christoph Lameter (Ampere) Sept. 13, 2019, 2:58 p.m. UTC | #3
On Wed, 11 Sep 2019, Yu Zhao wrote:

> Though I have no idea what the side effect of such a race would be,
> apparently we want to prevent the free list from being changed
> while debugging the objects.

process_slab() is called under the list_lock, which prevents any
allocation from the free list in the slab page. New objects can still be
added to the freelist; that occurs by updating the freelist pointer in
the slab page with a pointer to the newly freed object, which in turn
contains the old freelist pointer.

It is therefore possible to safely traverse the objects on the freelist
after the pointer has been retrieved.

NAK.
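
A minimal illustration of the freelist update Christoph describes, using
hypothetical simplified types rather than the kernel's real structures:

struct obj {
	struct obj *next;	/* free pointer stored inside the object */
};

struct slab_page {
	struct obj *freelist;	/* head of the per-page free list */
};

/*
 * Freeing pushes an object onto the head of the page freelist: the
 * object is first made to point at the old head, then the head is
 * updated. A walker that snapshotted the old head beforehand still
 * sees a consistent chain, which is the basis of the NAK above.
 */
static void freelist_push(struct slab_page *page, struct obj *object)
{
	object->next = page->freelist;
	page->freelist = object;
}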

Patch

diff --git a/mm/slub.c b/mm/slub.c
index baa60dd73942..1c9726c28f0b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4608,11 +4608,15 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 	void *p;
 	unsigned long *map;
 
+	slab_lock(page);
+
 	map = get_map(s, page);
 	for_each_object(p, s, addr, page->objects)
 		if (!test_bit(slab_index(p, s, addr), map))
 			add_location(t, s, get_track(s, p, alloc));
 	put_map(map);
+
+	slab_unlock(page);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
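
For reference, slab_lock()/slab_unlock() in mm/slub.c of this era are
thin wrappers around a bit spinlock on the page's PG_locked flag; a
simplified sketch of the pattern, not the verbatim kernel definitions:

#include <linux/page-flags.h>
#include <linux/bit_spinlock.h>

/* Sketch only; the real helpers live in mm/slub.c. */
static inline void slab_lock(struct page *page)
{
	bit_spin_lock(PG_locked, &page->flags);		/* spin until the bit is ours */
}

static inline void slab_unlock(struct page *page)
{
	bit_spin_unlock(PG_locked, &page->flags);	/* clear the bit and release waiters */
}

The lock is per page, so the critical section the patch adds stays short
and simply nests under the per-node list_lock that process_slab()
already runs under.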