Message ID | YUMfdA36fuyZ+/xt@hirez.programming.kicks-ass.net (mailing list archive)
---|---
State | New
Series | mm/vmalloc: Don't allow VM_NO_GUARD on vmap()
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
On 16.09.21 12:41, Peter Zijlstra wrote:
> The vmalloc guard pages are added on top of each allocation, thereby
> isolating any two allocations from one another. The top guard of the
> lower allocation is the bottom guard of the higher allocation etc.
>
> Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
> isolating separate allocations.
>
> There are only two in-tree users of this flag, neither of which uses it
> through the exported interface. Ensure it stays this way.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> [...]

Reviewed-by: David Hildenbrand <david@redhat.com>
On Thu, Sep 16, 2021 at 12:41:56PM +0200, Peter Zijlstra wrote:
> The vmalloc guard pages are added on top of each allocation, thereby
> isolating any two allocations from one another. The top guard of the
> lower allocation is the bottom guard of the higher allocation etc.
>
> Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
> isolating separate allocations.
>
> There are only two in-tree users of this flag, neither of which uses it
> through the exported interface. Ensure it stays this way.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> [...]

Acked-by: Will Deacon <will@kernel.org>

Thanks!

Will
On Thu, Sep 16, 2021 at 12:41:56PM +0200, Peter Zijlstra wrote:
> The vmalloc guard pages are added on top of each allocation, thereby
> isolating any two allocations from one another. The top guard of the
> lower allocation is the bottom guard of the higher allocation etc.
>
> Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
> isolating separate allocations.
>
> There are only two in-tree users of this flag, neither of which uses it
> through the exported interface. Ensure it stays this way.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Yes, please. :)

Acked-by: Kees Cook <keescook@chromium.org>
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 671d402c3778..10e9571ff0b2 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -22,7 +22,7 @@ struct notifier_block; /* in notifier.h */
 #define VM_USERMAP 0x00000008 /* suitable for remap_vmalloc_range */
 #define VM_DMA_COHERENT 0x00000010 /* dma_alloc_coherent */
 #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */
-#define VM_NO_GUARD 0x00000040 /* don't add guard page */
+#define VM_NO_GUARD 0x00000040 /* ***DANGEROUS*** don't add guard page */
 #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
 #define VM_FLUSH_RESET_PERMS 0x00000100 /* reset direct map and flush TLB on unmap, can't be freed in atomic context */
 #define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d77830ff604c..01927ebea267 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2743,6 +2743,13 @@ void *vmap(struct page **pages, unsigned int count,
 
 	might_sleep();
 
+	/*
+	 * Your top guard is someone else's bottom guard. Not having a top
+	 * guard compromises someone else's mappings too.
+	 */
+	if (WARN_ON_ONCE(flags & VM_NO_GUARD))
+		flags &= ~VM_NO_GUARD;
+
 	if (count > totalram_pages())
 		return NULL;
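To make the effect of the new check concrete, here is a minimal, hypothetical caller (the helper name below is made up for illustration and is not part of the patch). With this change, adding VM_NO_GUARD to the vmap() flags would fire the WARN_ON_ONCE() above and the flag would simply be cleared, so the mapping keeps its trailing guard page either way:

#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Hypothetical example caller, not part of the patch. */
static void *example_map_pages(struct page **pages, unsigned int count)
{
	/*
	 * Plain VM_MAP keeps the trailing guard page.  After this patch,
	 * passing VM_MAP | VM_NO_GUARD here would warn once and then
	 * behave exactly like this call.
	 */
	return vmap(pages, count, VM_MAP, PAGE_KERNEL);
}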
The vmalloc guard pages are added on top of each allocation, thereby
isolating any two allocations from one another. The top guard of the
lower allocation is the bottom guard of the higher allocation etc.

Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
isolating separate allocations.

There are only two in-tree users of this flag, neither of which uses it
through the exported interface. Ensure it stays this way.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/vmalloc.h | 2 +-
 mm/vmalloc.c            | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)
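For background on the premise above, the guard page is accounted inside each area's size: the usable size is one page less than the allocated range unless VM_NO_GUARD is set, and that trailing page is what separates the area from its higher neighbour. A simplified sketch, modelled on the get_vm_area_size() helper in include/linux/vmalloc.h (exact code may differ between kernel versions):

#include <linux/vmalloc.h>

/*
 * Simplified sketch of the guard-page accounting (modelled on
 * get_vm_area_size(); not a verbatim copy).  The last page of an
 * area is the guard unless VM_NO_GUARD was set, and it doubles as
 * the gap in front of the next area.
 */
static inline size_t usable_area_size(const struct vm_struct *area)
{
	if (!(area->flags & VM_NO_GUARD))
		return area->size - PAGE_SIZE;	/* exclude the guard page */
	return area->size;
}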