Message ID | 20240221194052.927623-4-surenb@google.com (mailing list archive)
---|---
State | New
Series | Memory allocation profiling
On Wed, Feb 21, 2024 at 2:41 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> From: Kent Overstreet <kent.overstreet@linux.dev>
>
> It seems we need to be more forceful with the compiler on this one.
> This is done for performance reasons only.
>
> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..d31b03a8d9d5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>  	return !kasan_slab_free(s, x, init);
>  }
>
> -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,

__fastpath_inline seems to me more appropriate here. It prioritizes
memory vs performance.

>  				    void **head, void **tail,
>  				    int *cnt)
>  {
> --
> 2.44.0.rc0.258.g7320e95886-goog
>
On Wed, Feb 21, 2024 at 1:16 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> On Wed, Feb 21, 2024 at 2:41 PM Suren Baghdasaryan <surenb@google.com> wrote:
> >
> > From: Kent Overstreet <kent.overstreet@linux.dev>
> >
> > It seems we need to be more forceful with the compiler on this one.
> > This is done for performance reasons only.
> >
> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Kees Cook <keescook@chromium.org>
> > ---
> >  mm/slub.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 2ef88bbf56a3..d31b03a8d9d5 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
> >  	return !kasan_slab_free(s, x, init);
> >  }
> >
> > -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> > +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
>
> __fastpath_inline seems to me more appropriate here. It prioritizes
> memory vs performance.

Hmm. AFAICT this function is used only in one place and we do not add
any additional users, so I don't think changing to __fastpath_inline
here would gain us anything.

>
> >  				    void **head, void **tail,
> >  				    int *cnt)
> >  {
> > --
> > 2.44.0.rc0.258.g7320e95886-goog
> >
On 2/24/24 03:02, Suren Baghdasaryan wrote:
> On Wed, Feb 21, 2024 at 1:16 PM Pasha Tatashin
> <pasha.tatashin@soleen.com> wrote:
>>
>> On Wed, Feb 21, 2024 at 2:41 PM Suren Baghdasaryan <surenb@google.com> wrote:
>> >
>> > From: Kent Overstreet <kent.overstreet@linux.dev>
>> >
>> > It seems we need to be more forceful with the compiler on this one.
>> > This is done for performance reasons only.
>> >
>> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
>> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>> > Reviewed-by: Kees Cook <keescook@chromium.org>
>> > ---
>> >  mm/slub.c | 2 +-
>> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >
>> > diff --git a/mm/slub.c b/mm/slub.c
>> > index 2ef88bbf56a3..d31b03a8d9d5 100644
>> > --- a/mm/slub.c
>> > +++ b/mm/slub.c
>> > @@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>> >  	return !kasan_slab_free(s, x, init);
>> >  }
>> >
>> > -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> > +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
>>
>> __fastpath_inline seems to me more appropriate here. It prioritizes
>> memory vs performance.
>
> Hmm. AFAICT this function is used only in one place and we do not add
> any additional users, so I don't think changing to __fastpath_inline
> here would gain us anything.

It would have been more future-proof and self-documenting. But I don't
insist.

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>

>>
>> >  				    void **head, void **tail,
>> >  				    int *cnt)
>> >  {
>> > --
>> > 2.44.0.rc0.258.g7320e95886-goog
>> >
On Mon, Feb 26, 2024, 9:31 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> On 2/24/24 03:02, Suren Baghdasaryan wrote:
> > On Wed, Feb 21, 2024 at 1:16 PM Pasha Tatashin
> > <pasha.tatashin@soleen.com> wrote:
> >>
> >> On Wed, Feb 21, 2024 at 2:41 PM Suren Baghdasaryan <surenb@google.com> wrote:
> >> >
> >> > From: Kent Overstreet <kent.overstreet@linux.dev>
> >> >
> >> > It seems we need to be more forceful with the compiler on this one.
> >> > This is done for performance reasons only.
> >> >
> >> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
> >> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >> > Reviewed-by: Kees Cook <keescook@chromium.org>
> >> > ---
> >> >  mm/slub.c | 2 +-
> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >> >
> >> > diff --git a/mm/slub.c b/mm/slub.c
> >> > index 2ef88bbf56a3..d31b03a8d9d5 100644
> >> > --- a/mm/slub.c
> >> > +++ b/mm/slub.c
> >> > @@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
> >> >  	return !kasan_slab_free(s, x, init);
> >> >  }
> >> >
> >> > -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
> >> > +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
> >>
> >> __fastpath_inline seems to me more appropriate here. It prioritizes
> >> memory vs performance.
> >
> > Hmm. AFAICT this function is used only in one place and we do not add
> > any additional users, so I don't think changing to __fastpath_inline
> > here would gain us anything.

For consistency __fastpath_inline makes more sense, but I am ok with
or without this change.

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>

> It would have been more future-proof and self-documenting. But I don't
> insist.
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> >>
> >> >  				    void **head, void **tail,
> >> >  				    int *cnt)
> >> >  {
> >> > --
> >> > 2.44.0.rc0.258.g7320e95886-goog
> >> >
On Mon, Feb 26, 2024 at 7:21 AM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> On Mon, Feb 26, 2024, 9:31 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 2/24/24 03:02, Suren Baghdasaryan wrote:
>> > On Wed, Feb 21, 2024 at 1:16 PM Pasha Tatashin
>> > <pasha.tatashin@soleen.com> wrote:
>> >>
>> >> On Wed, Feb 21, 2024 at 2:41 PM Suren Baghdasaryan <surenb@google.com> wrote:
>> >> >
>> >> > From: Kent Overstreet <kent.overstreet@linux.dev>
>> >> >
>> >> > It seems we need to be more forceful with the compiler on this one.
>> >> > This is done for performance reasons only.
>> >> >
>> >> > Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
>> >> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
>> >> > Reviewed-by: Kees Cook <keescook@chromium.org>
>> >> > ---
>> >> >  mm/slub.c | 2 +-
>> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
>> >> >
>> >> > diff --git a/mm/slub.c b/mm/slub.c
>> >> > index 2ef88bbf56a3..d31b03a8d9d5 100644
>> >> > --- a/mm/slub.c
>> >> > +++ b/mm/slub.c
>> >> > @@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
>> >> >  	return !kasan_slab_free(s, x, init);
>> >> >  }
>> >> >
>> >> > -static inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> >> > +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
>> >>
>> >> __fastpath_inline seems to me more appropriate here. It prioritizes
>> >> memory vs performance.
>> >
>> > Hmm. AFAICT this function is used only in one place and we do not add
>> > any additional users, so I don't think changing to __fastpath_inline
>> > here would gain us anything.
>
> For consistency __fastpath_inline makes more sense, but I am ok with
> or without this change.

Ok, I'll update in the next revision. Thanks!

> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>
>> It would have been more future-proof and self-documenting. But I don't
>> insist.
>>
>> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>>
>> >>
>> >> >  				    void **head, void **tail,
>> >> >  				    int *cnt)
>> >> >  {
>> >> > --
>> >> > 2.44.0.rc0.258.g7320e95886-goog
>> >> >
diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..d31b03a8d9d5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2121,7 +2121,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 	return !kasan_slab_free(s, x, init);
 }
 
-static inline bool slab_free_freelist_hook(struct kmem_cache *s,
+static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s,
 				    void **head, void **tail,
 				    int *cnt)
 {