Message ID | 0f980e84-b587-3d9e-3c26-ad57f947c08b@redhat.com (mailing list archive) |
---|---|
State | New, archived |
On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott@redhat.com> wrote:
> On 07/20/2016 01:27 PM, Kees Cook wrote:
>>
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix discovered by Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>  	bool "SLUB (Unqueued Allocator)"
>> +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
>>  	help
>>  	   SLUB is a slab allocator that minimizes cache line usage
>>  	   instead of managing queues of cached objects (SLAB approach).
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 825ff4505336..7dee3d9a5843 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>>  EXPORT_SYMBOL(__kmalloc_node);
>>  #endif
>>
>> +#ifdef CONFIG_HARDENED_USERCOPY
>> +/*
>> + * Rejects objects that are incorrectly sized.
>> + *
>> + * Returns NULL if check passes, otherwise const char * to name of cache
>> + * to indicate an error.
>> + */
>> +const char *__check_heap_object(const void *ptr, unsigned long n,
>> +				struct page *page)
>> +{
>> +	struct kmem_cache *s;
>> +	unsigned long offset;
>> +	size_t object_size;
>> +
>> +	/* Find object and usable object size. */
>> +	s = page->slab_cache;
>> +	object_size = slab_ksize(s);
>> +
>> +	/* Find offset within object. */
>> +	offset = (ptr - page_address(page)) % s->size;
>> +
>> +	/* Adjust for redzone and reject if within the redzone. */
>> +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
>> +		if (offset < s->red_left_pad)
>> +			return s->name;
>> +		offset -= s->red_left_pad;
>> +	}
>> +
>> +	/* Allow address range falling entirely within object size. */
>> +	if (offset <= object_size && n <= object_size - offset)
>> +		return NULL;
>> +
>> +	return s->name;
>> +}
>> +#endif /* CONFIG_HARDENED_USERCOPY */
>> +
>
> I compared this against what check_valid_pointer does for SLUB_DEBUG
> checking. I was hoping we could utilize that function to avoid
> duplication, but a) __check_heap_object needs to allow accesses anywhere
> in the object, not just the beginning, and b) accessing page->objects
> is racy without the addition of locking in SLUB_DEBUG.
>
> Still, the ptr < page_address(page) check from __check_heap_object would
> be good to add, to avoid generating garbage large offsets and trying to
> infer C math.
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 7dee3d9..5370e4f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>  	s = page->slab_cache;
>  	object_size = slab_ksize(s);
>
> +	if (ptr < page_address(page))
> +		return s->name;
> +
>  	/* Find offset within object. */
>  	offset = (ptr - page_address(page)) % s->size;
>
> With that, you can add
>
> Reviewed-by: Laura Abbott <labbott@redhat.com>

Cool, I'll add that.

Should I add your reviewed-by for this patch only or for the whole series?

Thanks!

-Kees

>
>> static size_t __ksize(const void *object)
>> {
>> 	struct page *page;
>>
>
> Thanks,
> Laura
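To make the patch's bounds arithmetic concrete, here is a small standalone sketch of the same logic. The SLOT_SIZE, OBJECT_SIZE, and RED_LEFT_PAD values are made up for illustration; in the kernel they come from s->size, slab_ksize(s), and s->red_left_pad:

```c
#include <stdio.h>

/* Illustrative values only: a slot (s->size) of 128 bytes, of which 96
 * (slab_ksize()) are usable, behind a hypothetical 16-byte left redzone
 * when slab debugging is enabled. */
#define SLOT_SIZE     128UL
#define OBJECT_SIZE    96UL
#define RED_LEFT_PAD   16UL

/* Returns 0 if a copy of n bytes starting addr_offset bytes into the
 * slab stays inside one object, nonzero otherwise (mirrors the checks
 * in __check_heap_object above). */
static int check_span(unsigned long addr_offset, unsigned long n,
		      int redzone)
{
	unsigned long offset = addr_offset % SLOT_SIZE;

	if (redzone) {
		if (offset < RED_LEFT_PAD)
			return 1;	/* starts inside the left redzone */
		offset -= RED_LEFT_PAD;
	}

	/* The range [offset, offset + n) must fall entirely within the
	 * usable object size. */
	if (offset <= OBJECT_SIZE && n <= OBJECT_SIZE - offset)
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", check_span(16, 96, 1)); /* 0: whole object, fits */
	printf("%d\n", check_span(17, 96, 1)); /* 1: one byte past the end */
	printf("%d\n", check_span(4, 8, 1));   /* 1: starts in the redzone */
	printf("%d\n", check_span(90, 20, 0)); /* 1: would span into next slot */
	return 0;
}
```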
On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
> On 07/20/2016 01:27 PM, Kees Cook wrote:
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix discovered by Michael Ellerman.
>>
>> [...]
>
> I compared this against what check_valid_pointer does for SLUB_DEBUG
> checking. I was hoping we could utilize that function to avoid
> duplication, but a) __check_heap_object needs to allow accesses anywhere
> in the object, not just the beginning, and b) accessing page->objects
> is racy without the addition of locking in SLUB_DEBUG.
>
> Still, the ptr < page_address(page) check from __check_heap_object would
> be good to add, to avoid generating garbage large offsets and trying to
> infer C math.
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 7dee3d9..5370e4f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
>  	s = page->slab_cache;
>  	object_size = slab_ksize(s);
>
> +	if (ptr < page_address(page))
> +		return s->name;
> +
>  	/* Find offset within object. */
>  	offset = (ptr - page_address(page)) % s->size;
>

I don't get it, isn't that already guaranteed because we
look for the page that ptr is in, before __check_heap_object
is called?

Specifically, in patch 3/12:

+	page = virt_to_head_page(ptr);
+
+	/* Check slab allocator for flags and size. */
+	if (PageSlab(page))
+		return __check_heap_object(ptr, n, page);

How can that generate a ptr that is not inside the page?

What am I overlooking? And, should it be in the changelog or
a comment? :)
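Rik's intuition can be sketched in userspace: if the page really is derived from ptr by rounding its address down, the page base can never sit above ptr. (This simplified model ignores the compound-head lookup inside virt_to_head_page, which is exactly where things went wrong in the ARM64 case Laura describes below.)

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	static char buf[2 * PAGE_SIZE];
	uintptr_t ptr = (uintptr_t)&buf[PAGE_SIZE + 123];

	/* Rounding an address down to its page boundary cannot produce
	 * an address greater than ptr, so a base > ptr would mean the
	 * page lookup itself went wrong. */
	uintptr_t base = ptr & ~(PAGE_SIZE - 1);

	printf("ptr=%#lx base=%#lx base<=ptr: %d\n",
	       (unsigned long)ptr, (unsigned long)base, base <= ptr);
	return 0;
}
```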
On 07/25/2016 02:42 PM, Rik van Riel wrote:
> On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
>> [...]
>>
>> Still, the ptr < page_address(page) check from __check_heap_object would
>> be good to add, to avoid generating garbage large offsets and trying to
>> infer C math.
>>
>> [...]
>
> I don't get it, isn't that already guaranteed because we
> look for the page that ptr is in, before __check_heap_object
> is called?
>
> Specifically, in patch 3/12:
>
> +	page = virt_to_head_page(ptr);
> +
> +	/* Check slab allocator for flags and size. */
> +	if (PageSlab(page))
> +		return __check_heap_object(ptr, n, page);
>
> How can that generate a ptr that is not inside the page?
>
> What am I overlooking? And, should it be in the changelog or
> a comment? :)
>

I ran into the subtraction issue when the vmalloc detection wasn't
working on ARM64: somehow virt_to_head_page returned a page that
happened to have PageSlab set. I agree that if everything is working
properly this is redundant, but given the type of feature this is, a
little bit of redundancy against a system running off into the weeds
or bad patches might be warranted. I'm not super attached to the check
if other maintainers think it is redundant. Updating the
__check_heap_object header comment with a note of what we are assuming
could work.

Thanks,
Laura
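A toy reproduction of the failure mode Laura hit, with a fake "page base" that ends up above ptr; in the kernel the same wrap happens because offset is an unsigned long:

```c
#include <stdio.h>

int main(void)
{
	static char slab[4096];
	const char *base = slab + 64;	/* pretend page_address() is wrong */
	const char *ptr  = slab;	/* ptr sits *below* the page base */
	unsigned long size = 128;	/* pretend s->size */

	/* ptr - base is -64, but assigned to an unsigned long it wraps
	 * to a huge value; the modulo then brings it back into
	 * [0, size), so the bogus pointer could pass the bounds check.
	 * The ptr < page_address(page) guard rejects this up front. */
	unsigned long offset = (unsigned long)(ptr - base) % size;

	printf("offset = %lu\n", offset);	/* 64: in range, meaningless */
	return 0;
}
```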
On Mon, 2016-07-25 at 16:29 -0700, Laura Abbott wrote:
> On 07/25/2016 02:42 PM, Rik van Riel wrote:
>> [...]
>>
>> How can that generate a ptr that is not inside the page?
>>
>> What am I overlooking? And, should it be in the changelog or
>> a comment? :)
>
> I ran into the subtraction issue when the vmalloc detection wasn't
> working on ARM64: somehow virt_to_head_page returned a page that
> happened to have PageSlab set. I agree that if everything is working
> properly this is redundant, but given the type of feature this is, a
> little bit of redundancy against a system running off into the weeds
> or bad patches might be warranted.
>

That's fair. I have no objection to the check, but would like to see
it documented, since it does look a little out of place.
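One possible shape for that note, as a sketch of the header comment rather than the wording that was eventually committed:

```c
/*
 * Rejects objects that are incorrectly sized.
 *
 * The caller is expected to have already resolved ptr to this page
 * (usercopy looks it up via virt_to_head_page(ptr)), so ptr should
 * never be below page_address(page); the explicit check for that is
 * defense in depth against a broken page lookup, not a condition
 * expected in normal operation.
 *
 * Returns NULL if check passes, otherwise const char * to name of cache
 * to indicate an error.
 */
const char *__check_heap_object(const void *ptr, unsigned long n,
				struct page *page);
```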
On 07/25/2016 01:45 PM, Kees Cook wrote:
> On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott@redhat.com> wrote:
>> [...]
>>
>> With that, you can add
>>
>> Reviewed-by: Laura Abbott <labbott@redhat.com>
>
> Cool, I'll add that.
>
> Should I add your reviewed-by for this patch only or for the whole series?
>
> Thanks!
>
> -Kees

Just this patch for now; I'm working through a couple of others.
diff --git a/mm/slub.c b/mm/slub.c
index 7dee3d9..5370e4f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
 	s = page->slab_cache;
 	object_size = slab_ksize(s);
 
+	if (ptr < page_address(page))
+		return s->name;
+
 	/* Find offset within object. */
 	offset = (ptr - page_address(page)) % s->size;
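For reference, this is how __check_heap_object would read with the guard folded in. It is assembled from the two hunks above, so treat it as a reconstruction rather than the exact committed code:

```c
#ifdef CONFIG_HARDENED_USERCOPY
const char *__check_heap_object(const void *ptr, unsigned long n,
				struct page *page)
{
	struct kmem_cache *s;
	unsigned long offset;
	size_t object_size;

	/* Find object and usable object size. */
	s = page->slab_cache;
	object_size = slab_ksize(s);

	/* Reject if ptr somehow fell below the start of the page. */
	if (ptr < page_address(page))
		return s->name;

	/* Find offset within object. */
	offset = (ptr - page_address(page)) % s->size;

	/* Adjust for redzone and reject if within the redzone. */
	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
		if (offset < s->red_left_pad)
			return s->name;
		offset -= s->red_left_pad;
	}

	/* Allow address range falling entirely within object size. */
	if (offset <= object_size && n <= object_size - offset)
		return NULL;

	return s->name;
}
#endif /* CONFIG_HARDENED_USERCOPY */
```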