Message ID | 20190912023111.219636-4-yuzhao@google.com (mailing list archive)
---|---
State | New, archived
Series | [v2,1/4] mm: correct mask size for slub page->objects
On Wed, Sep 11, 2019 at 08:31:11PM -0600, Yu Zhao wrote:
> Though I have no idea what the side effect of such race would be,
> apparently we want to prevent the free list from being changed
> while debugging the objects.

Codewise looks good to me. But the commit message can be better.

Can we rephrase it to indicate that slab_lock is required to protect
page->objects?
On Thu, Sep 12, 2019 at 01:06:42PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 11, 2019 at 08:31:11PM -0600, Yu Zhao wrote:
> > Though I have no idea what the side effect of such race would be,
> > apparently we want to prevent the free list from being changed
> > while debugging the objects.
>
> Codewise looks good to me. But the commit message can be better.
>
> Can we rephrase it to indicate that slab_lock is required to protect
> page->objects?

Will do.
On Wed, 11 Sep 2019, Yu Zhao wrote:
> Though I have no idea what the side effect of such race would be,
> apparently we want to prevent the free list from being changed
> while debugging the objects.

process_slab() is called under the list_lock, which prevents any
allocation from the free list in the slab page. New objects can still
be added to the freelist: a free updates the freelist pointer in the
slab page with a pointer to the newly freed object, which in turn
contains the old freelist pointer.

It is therefore possible to safely traverse the objects on the
freelist after the pointer has been retrieved.

NAK.
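[Editor's note: a minimal user-space sketch of that argument, with hypothetical names (struct obj, freelist_head, freelist_push) standing in for the slab page's freelist; it models only the prepend-only property being described, not the actual SLUB code. A free makes the newly freed object point at the old head and then swings the head to it, so a walker that snapshots the head once keeps seeing a consistent chain.]

#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for an object sitting on a slab freelist. */
struct obj {
	struct obj *next;	/* "free pointer" stored inside the free object */
	int id;
};

/* Hypothetical stand-in for page->freelist. */
static _Atomic(struct obj *) freelist_head;

/*
 * A free only prepends: the freed object first records the old head in
 * its own free pointer, then the head is swung to the freed object.
 * Nothing already on the chain is rewritten.
 */
static void freelist_push(struct obj *o)
{
	struct obj *old = atomic_load(&freelist_head);

	do {
		o->next = old;
	} while (!atomic_compare_exchange_weak(&freelist_head, &old, o));
}

/*
 * A reader snapshots the head once and walks the chain it got.
 * Concurrent pushes only add objects in front of that snapshot, so the
 * walk never follows a broken link.
 */
static void walk_free_objects(void)
{
	for (struct obj *p = atomic_load(&freelist_head); p; p = p->next)
		printf("free object %d\n", p->id);
}

int main(void)
{
	static struct obj a = { .id = 1 }, b = { .id = 2 };

	freelist_push(&a);
	freelist_push(&b);
	walk_free_objects();	/* prints object 2, then object 1 */
	return 0;
}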
diff --git a/mm/slub.c b/mm/slub.c
index baa60dd73942..1c9726c28f0b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4608,11 +4608,15 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 	void *p;
 	unsigned long *map;
 
+	slab_lock(page);
+
 	map = get_map(s, page);
 	for_each_object(p, s, addr, page->objects)
 		if (!test_bit(slab_index(p, s, addr), map))
 			add_location(t, s, get_track(s, p, alloc));
 	put_map(map);
+
+	slab_unlock(page);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
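[Editor's note: for illustration, a schematic user-space rendering of the pattern the patch creates, with hypothetical names (page_model, process_page, record_location, NR_OBJECTS) standing in for the kernel structures and helpers; it is a sketch of the locking shape, not the kernel implementation. The whole "build the free map, walk the objects, record locations" sequence runs with the per-page lock held, so nothing that takes the same lock can change the layout mid-walk.]

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_OBJECTS 64			/* hypothetical stand-in for page->objects */

struct page_model {
	pthread_mutex_t lock;		/* stands in for slab_lock(page) */
	bool free_map[NR_OBJECTS];	/* stands in for the get_map() bitmap */
	unsigned long track[NR_OBJECTS];/* stands in for get_track() data */
};

/* Stands in for add_location(): fold one allocation site into the stats. */
static void record_location(unsigned long track)
{
	(void)track;
}

/*
 * Schematic equivalent of the patched process_slab(): build the map,
 * walk every object slot, and record locations, all under the per-page
 * lock so the walk never observes a half-updated map.
 */
static void process_page(struct page_model *page)
{
	pthread_mutex_lock(&page->lock);

	for (size_t i = 0; i < NR_OBJECTS; i++)
		if (!page->free_map[i])		/* skip objects on the freelist */
			record_location(page->track[i]);

	pthread_mutex_unlock(&page->lock);
}

int main(void)
{
	static struct page_model page = { .lock = PTHREAD_MUTEX_INITIALIZER };

	page.free_map[3] = true;	/* pretend slot 3 sits on the freelist */
	process_page(&page);		/* records the other 63 slots */
	return 0;
}

In this sketch, any path that flips a free_map bit would take the same per-page lock, which is the serialization the patch introduces around the object walk.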
Though I have no idea what the side effect of such race would be,
apparently we want to prevent the free list from being changed
while debugging the objects.

Signed-off-by: Yu Zhao <yuzhao@google.com>
---
 mm/slub.c | 4 ++++
 1 file changed, 4 insertions(+)