Message ID: 163184741778.29351.16920832234899124642.stgit@noble.brown (mailing list archive)
State: New
Series: congestion_wait() and GFP_NOFAIL
On 9/17/21 04:56, NeilBrown wrote:
> __GFP_NOFAIL is documented both in gfp.h and memory-allocation.rst.
> The details are not entirely consistent.
>
> This patch ensures both places state that:
>  - there is a risk of deadlock with reclaim/writeback/oom-kill
>  - it should only be used when there is no real alternative
>  - it is preferable to an endless loop
>  - it is strongly discouraged for costly-order allocations.
>
> Signed-off-by: NeilBrown <neilb@suse.de>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Nit below:

> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 55b2ec1f965a..1d2a89e20b8b 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -209,7 +209,11 @@ struct vm_area_struct;
>   * used only when there is no reasonable failure policy) but it is
>   * definitely preferable to use the flag rather than opencode endless
>   * loop around allocator.
> - * Using this flag for costly allocations is _highly_ discouraged.
> + * Use of this flag may lead to deadlocks if locks are held which would
> + * be needed for memory reclaim, write-back, or the timely exit of a
> + * process killed by the OOM-killer. Dropping any locks not absolutely
> + * needed is advisable before requesting a %__GFP_NOFAIL allocate.
> + * Using this flag for costly allocations (order>1) is _highly_ discouraged.

We define costly as 3, not 1. But sure it's best to avoid even order>0 for
__GFP_NOFAIL. Advising order>1 seems arbitrary though?

>   */
>  #define __GFP_IO	((__force gfp_t)___GFP_IO)
>  #define __GFP_FS	((__force gfp_t)___GFP_FS)
On Tue 05-10-21 11:20:51, Vlastimil Babka wrote:
[...]
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -209,7 +209,11 @@ struct vm_area_struct;
> >   * used only when there is no reasonable failure policy) but it is
> >   * definitely preferable to use the flag rather than opencode endless
> >   * loop around allocator.
> > - * Using this flag for costly allocations is _highly_ discouraged.
> > + * Use of this flag may lead to deadlocks if locks are held which would
> > + * be needed for memory reclaim, write-back, or the timely exit of a
> > + * process killed by the OOM-killer. Dropping any locks not absolutely
> > + * needed is advisable before requesting a %__GFP_NOFAIL allocate.
> > + * Using this flag for costly allocations (order>1) is _highly_ discouraged.
>
> We define costly as 3, not 1. But sure it's best to avoid even order>0 for
> __GFP_NOFAIL. Advising order>1 seems arbitrary though?

This is not completely arbitrary. We have a warning for any higher order
allocation in rmqueue():

	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));

I do agree that "Using this flag for higher order allocations is
_highly_ discouraged." would be a better wording.

> >   */
> >  #define __GFP_IO	((__force gfp_t)___GFP_IO)
> >  #define __GFP_FS	((__force gfp_t)___GFP_FS)
On 10/5/21 13:09, Michal Hocko wrote:
> On Tue 05-10-21 11:20:51, Vlastimil Babka wrote:
> [...]
>> We define costly as 3, not 1. But sure it's best to avoid even order>0 for
>> __GFP_NOFAIL. Advising order>1 seems arbitrary though?
>
> This is not completely arbitrary. We have a warning for any higher order
> allocation.
> rmqueue:
> 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));

Oh, I missed that.

> I do agree that "Using this flag for higher order allocations is
> _highly_ discouraged.

Well, with the warning in place this is effectively forbidden, not just
discouraged.
On Tue, Oct 05, 2021 at 02:27:45PM +0200, Vlastimil Babka wrote:
> On 10/5/21 13:09, Michal Hocko wrote:
> > On Tue 05-10-21 11:20:51, Vlastimil Babka wrote:
> > [...]
> >> We define costly as 3, not 1. But sure it's best to avoid even order>0 for
> >> __GFP_NOFAIL. Advising order>1 seems arbitrary though?
> >
> > This is not completely arbitrary. We have a warning for any higher order
> > allocation.
> > rmqueue:
> > 	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
>
> Oh, I missed that.
>
> > I do agree that "Using this flag for higher order allocations is
> > _highly_ discouraged.
>
> Well, with the warning in place this is effectively forbidden, not just
> discouraged.

Yup, especially as it doesn't obey __GFP_NOWARN.

See commit de2860f46362 ("mm: Add kvrealloc()") as a direct result
of unwittingly tripping over this warning when adding __GFP_NOFAIL
annotations to replace open coded high-order kmalloc loops that have
been in place for a couple of decades without issues.

Personally I think that the way __GFP_NOFAIL is first of all
recommended over open coded loops, and then only later found to be
effectively forbidden and needing to be replaced with open coded
loops, is a complete mess.

Not to mention the impossibility of using __GFP_NOFAIL with
kvmalloc() calls. Just what do we expect kmalloc_node(__GFP_NORETRY
| __GFP_NOFAIL) to do, exactly?

So, effectively, we have to open-code around kvmalloc() in
situations where failure is not an option. Even if we pass
__GFP_NOFAIL to __vmalloc(), it isn't guaranteed to succeed because
of the "we won't honor gfp flags passed to __vmalloc" semantics it
has.

Even the API constraints of kvmalloc() w.r.t. only doing the vmalloc
fallback if the gfp context is GFP_KERNEL make no sense - we already do
GFP_NOFS kvmalloc via memalloc_nofs_save/restore(), so this behavioural
restriction w.r.t. gfp flags just makes no sense at all.

That leads to us having to go back to writing extremely custom open
coded loops to avoid awful high-order kmalloc direct reclaim
behaviour, still fall back to vmalloc, and still handle the NOFAIL
semantics we need:

https://lore.kernel.org/linux-xfs/20210902095927.911100-8-david@fromorbit.com/

So, really, the problems are much deeper here than just badly
documented, catch-22 rules for __GFP_NOFAIL - we can't even use
__GFP_NOFAIL consistently across the allocation APIs because it
changes allocation behaviours in unusable, self-defeating ways....

Cheers, Dave.
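The open coded retry loops under discussion (the pattern that commit de2860f46362 replaced with kvrealloc()) can be sketched in userspace. This is an illustration only: malloc() stands in for kmalloc(), and the fail_count counter is an invented way to model transient memory pressure.

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-in for kmalloc() under memory pressure: fail the
 * first `fail_count` calls, then succeed.  Not a kernel API. */
static int fail_count;

static void *fake_kmalloc(size_t size)
{
	if (fail_count > 0) {
		fail_count--;
		return NULL;
	}
	return malloc(size);
}

/* The decades-old open coded pattern: retry until the allocation
 * succeeds.  The kernel versions typically sleep, e.g. via
 * congestion_wait(), in the loop body rather than spinning. */
static void *kmalloc_nofail_loop(size_t size)
{
	void *p;

	while (!(p = fake_kmalloc(size)))
		;
	return p;
}
```

With __GFP_NOFAIL the loop disappears and the allocator itself is responsible for never returning NULL; the disagreement in this thread is about which of those two behaviours callers actually want.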
On Thu 07-10-21 10:14:52, Dave Chinner wrote:
> On Tue, Oct 05, 2021 at 02:27:45PM +0200, Vlastimil Babka wrote:
> [...]
> See commit de2860f46362 ("mm: Add kvrealloc()") as a direct result
> of unwittingly tripping over this warning when adding __GFP_NOFAIL
> annotations to replace open coded high-order kmalloc loops that have
> been in place for a couple of decades without issues.
>
> Personally I think that the way __GFP_NOFAIL is first of all
> recommended over open coded loops and then only later found to be
> effectively forbidden and needing to be replaced with open coded
> loops to be a complete mess.

Well, there are two things: opencoding something that _can_ be replaced
by __GFP_NOFAIL, and something that cannot because the respective
allocator doesn't really support that semantic. kvmalloc is explicit
about that, IIRC. If you have a better way to consolidate the
documentation then I am all for it.

> Not to mention the impossibility of using __GFP_NOFAIL with
> kvmalloc() calls. Just what do we expect kmalloc_node(__GFP_NORETRY
> | __GFP_NOFAIL) to do, exactly?

This combination doesn't make any sense. Like others. Do you want us to
list all combinations that make sense?

> So, effectively, we have to open-code around kvmalloc() in
> situations where failure is not an option. Even if we pass
> __GFP_NOFAIL to __vmalloc(), it isn't guaranteed to succeed because
> of the "we won't honor gfp flags passed to __vmalloc" semantics it
> has.

Yes, vmalloc doesn't support nofail semantics and it is not really
trivial to craft them there.

> Even the API constraints of kvmalloc() w.r.t. only doing the vmalloc
> fallback if the gfp context is GFP_KERNEL - we already do GFP_NOFS
> kvmalloc via memalloc_nofs_save/restore(), so this behavioural
> restriction w.r.t. gfp flags just makes no sense at all.

GFP_NOFS (without using the scope API) has the same problem as NOFAIL in
the vmalloc. Hence it is not supported. If you use the scope API then
you can use GFP_KERNEL for kvmalloc. This is clumsy but I am not sure
how to define these conditions in a more sensible way. Special-case NOFS
if the scope API is in use? Why do you want an explicit NOFS then?

> That leads to us having to go back to writing extremely custom open
> coded loops to avoid awful high-order kmalloc direct reclaim
> behaviour and still fall back to vmalloc and to still handle NOFAIL
> semantics we need:
>
> https://lore.kernel.org/linux-xfs/20210902095927.911100-8-david@fromorbit.com/

It would be more productive to get to MM people rather than rant on an
xfs-specific patchset. Anyway, I can see a kvmalloc mode where the
kmalloc allocation would be really a very optimistic one - like your
effectively GFP_NOWAIT. Nobody has requested such a mode until now and I
am not sure how we would sensibly describe that by a gfp mask.

Btw. your GFP_NOWAIT | __GFP_NORETRY combination doesn't make any sense
in the allocator context as the latter is a reclaim modifier which
doesn't get applied when the reclaim is disabled (in your case by flags
&= ~__GFP_DIRECT_RECLAIM).

GFP flags do not make it easy to build a coherent and usable API.
Something we carry as baggage from a long time ago.

> So, really, the problems are much deeper here than just badly
> documented, catch-22 rules for __GFP_NOFAIL - we can't even use
> __GFP_NOFAIL consistently across the allocation APIs because it
> changes allocation behaviours in unusable, self-defeating ways....

GFP_NOFAIL sucks. Not all allocators can honor it for practical
reasons. You are welcome to help document those awkward corner cases or
fix them up if you have a good idea how.

Thanks!
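Michal's point about GFP_NOWAIT | __GFP_NORETRY can be shown with a small sketch: reclaim modifiers are only meaningful when direct reclaim is enabled, so an allocator can simply ignore them otherwise. The bit values below are invented for illustration and are not the kernel's real ___GFP_* masks.

```c
/* Invented bit values, illustrative only -- not the kernel's masks. */
#define SKETCH_KSWAPD_RECLAIM	0x1u
#define SKETCH_DIRECT_RECLAIM	0x2u
#define SKETCH_NORETRY		0x4u

/* Reclaim modifiers such as NORETRY only apply when direct reclaim is
 * enabled; without it they are dead bits and can be masked away,
 * which is why GFP_NOWAIT | __GFP_NORETRY is a meaningless combination. */
static unsigned int effective_gfp(unsigned int gfp)
{
	if (!(gfp & SKETCH_DIRECT_RECLAIM))
		gfp &= ~SKETCH_NORETRY;
	return gfp;
}
```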
On Thu, 07 Oct 2021, Michal Hocko wrote:
> On Thu 07-10-21 10:14:52, Dave Chinner wrote:
> [...]
> > Personally I think that the way __GFP_NOFAIL is first of all
> > recommended over open coded loops and then only later found to be
> > effectively forbidden and needing to be replaced with open coded
> > loops to be a complete mess.
>
> Well, there are two things. Opencoding something that _can_ be replaced
> by __GFP_NOFAIL and those that cannot because the respective allocator
> doesn't really support that semantic. kvmalloc is explicit about that
> IIRC. If you have a better way to consolidate the documentation then I
> am all for it.

I think one thing that might help make the documentation better is to
explicitly state *why* __GFP_NOFAIL is better than a loop.

It occurs to me that

	while (!(p = kmalloc(sizeof(*p), GFP_KERNEL)))
		;

would behave much the same as adding __GFP_NOFAIL and dropping the
'while'. So why not? I certainly cannot see the need to add any delay
to this loop as kmalloc does a fair bit of sleeping when permitted.

I understand that __GFP_NOFAIL allows page_alloc to dip into reserves,
but Mel holds that up as a reason *not* to use __GFP_NOFAIL as it can
impact other subsystems. Why not just let the caller decide if they
deserve the boost, by ORing in __GFP_ATOMIC or __GFP_MEMALLOC as
appropriate?

I assume there is a good reason. I vaguely remember the conversation
that led to __GFP_NOFAIL being introduced. I just cannot remember or
deduce what the reason is. So it would be great to have it documented.

> > Not to mention the impossibility of using __GFP_NOFAIL with
> > kvmalloc() calls. Just what do we expect kmalloc_node(__GFP_NORETRY
> > | __GFP_NOFAIL) to do, exactly?
>
> This combination doesn't make any sense. Like others. Do you want us to
> list all combinations that make sense?

I've been wondering about that. There seem to be sets of flags that are
mutually exclusive. It is as though gfp_t is a struct of a few enums:

	0, DMA32, DMA, HIGHMEM
	0, FS, IO
	0, ATOMIC, MEMALLOC, NOMEMALLOC, HIGH
	NORETRY, RETRY_MAYFAIL, 0, NOFAIL
	0, KSWAPD_RECLAIM, DIRECT_RECLAIM
	0, THISNODE, HARDWALL

In a few cases there seem to be 3 bits where there are only 4 possible
combinations, so 2 bits would be enough. There is probably no real
value in squeezing these into 2 bits, but clearly documenting the groups
surely wouldn't hurt. Particularly highlighting the difference between
related bits would help.

The set with 'ATOMIC' is hard to wrap my mind around. They relate to
ALLOC_HIGH and ALLOC_HARDER, but also to WMARK_MIN, WMARK_LOW,
WMARK_HIGH ... I think.

I wonder if FS,IO is really in the same set as DIRECT_RECLAIM as they
all affect reclaim. Maybe FS and IO are only relevant if DIRECT_RECLAIM
is set?

I'd love to know what to expect if neither RETRY_MAYFAIL nor NOFAIL is
set. I guess it can fail, but it still tries harder than if
RETRY_MAYFAIL is set.... Ahhhh... I found some documentation which
mentions that RETRY_MAYFAIL doesn't trigger the oom killer. Is that it?
So RETRY_NOKILLOOM might be a better name?

> > So, effectively, we have to open-code around kvmalloc() in
> > situations where failure is not an option. Even if we pass
> > __GFP_NOFAIL to __vmalloc(), it isn't guaranteed to succeed because
> > of the "we won't honor gfp flags passed to __vmalloc" semantics it
> > has.
>
> yes vmalloc doesn't support nofail semantic and it is not really trivial
> to craft it there.
>
> [...]
>
> GFP_NOFS (without using the scope API) has the same problem as NOFAIL in
> the vmalloc. Hence it is not supported. If you use the scope API then
> you can use GFP_KERNEL for kvmalloc. This is clumsy but I am not sure how
> to define these conditions in a more sensible way. Special case NOFS if
> the scope api is in use? Why do you want an explicit NOFS then?

It would seem to make sense for kvmalloc to WARN_ON if it is passed
flags that do not allow it to use vmalloc. Such callers could then know
they can either change to a direct kmalloc(), or change flags. Silently
ignoring the 'v' in the function name seems like a poor choice.

Thanks,
NeilBrown

> [...]
>
> GFP_NOFAIL sucks. Not all allocators can honor it for practical
> reasons. You are welcome to help document those awkward corner cases or
> fix them up if you have a good idea how.
>
> Thanks!
> --
> Michal Hocko
> SUSE Labs
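Neil's observation that gfp_t behaves like a struct of a few enums can be sketched as a validity check: at most one bit from each mutually exclusive group may be set. The flag values and the helper below are hypothetical, covering two of the groups from his list for illustration only.

```c
#include <stddef.h>

/* Hypothetical bit assignments mirroring two of the exclusive groups. */
enum {
	F_DMA32		= 1 << 0,
	F_DMA		= 1 << 1,
	F_HIGHMEM	= 1 << 2,
	F_NORETRY	= 1 << 3,
	F_RETRY_MAYFAIL	= 1 << 4,
	F_NOFAIL	= 1 << 5,
};

/* Returns 1 iff at most one bit of each exclusive group is set. */
static int gfp_groups_valid(unsigned int gfp)
{
	static const unsigned int groups[] = {
		F_DMA32 | F_DMA | F_HIGHMEM,		/* zone selection */
		F_NORETRY | F_RETRY_MAYFAIL | F_NOFAIL,	/* retry policy  */
	};
	size_t i;

	for (i = 0; i < sizeof(groups) / sizeof(groups[0]); i++) {
		unsigned int set = gfp & groups[i];

		if (set & (set - 1))	/* more than one bit in this group */
			return 0;
	}
	return 1;
}
```

A check of this shape would flag nonsensical combinations like __GFP_NORETRY | __GFP_NOFAIL at the API boundary instead of leaving their meaning undefined.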
On Fri 08-10-21 10:15:45, Neil Brown wrote:
> On Thu, 07 Oct 2021, Michal Hocko wrote:
> [...]
> I think one thing that might help make the documentation better is to
> explicitly state *why* __GFP_NOFAIL is better than a loop.
>
> It occurs to me that
>
> 	while (!(p = kmalloc(sizeof(*p), GFP_KERNEL)))
> 		;
>
> would behave much the same as adding __GFP_NOFAIL and dropping the
> 'while'. So why not? I certainly cannot see the need to add any delay
> to this loop as kmalloc does a fair bit of sleeping when permitted.
>
> I understand that __GFP_NOFAIL allows page_alloc to dip into reserves,
> but Mel holds that up as a reason *not* to use __GFP_NOFAIL as it can
> impact other subsystems.

__GFP_NOFAIL usage is a risk on its own. It is a hard requirement that
the allocator cannot back off. So it has to do absolutely everything it
can to succeed. Whether it cheats and dips into reserves or not is a
mere implementation detail and subject to the specific implementation.

> Why not just let the caller decide if they
> deserve the boost, by ORing in __GFP_ATOMIC or __GFP_MEMALLOC as
> appropriate?

They can do that. Explicit access to memory reserves is allowed unless
it is explicitly forbidden by the NOMEMALLOC flag.

> I assume there is a good reason. I vaguely remember the conversation
> that led to __GFP_NOFAIL being introduced. I just cannot remember or
> deduce what the reason is. So it would be great to have it documented.

The basic reason is that if the allocator knows this is a must-succeed
allocation request then it can prioritize it in some way. A dumb kmalloc
loop as you pictured it is likely much less optimal in that sense, isn't
it? Compare that to the mempool allocator, which is non-failing as well
but has some involved handling, and that is certainly not a good fit for
__GFP_NOFAIL in the page allocator.

> I've been wondering about that. There seem to be sets of flags that are
> mutually exclusive. It is as though gfp_t is a struct of a few enums.
> [...]
> In a few cases there seem to be 3 bits where there are only 4 possible
> combinations, so 2 bits would be enough. There is probably no real
> value in squeezing these into 2 bits, but clearly documenting the groups
> surely wouldn't hurt. Particularly highlighting the difference between
> related bits would help.

Don't we have that already? We have them grouped by placement,
watermarks, reclaim and action modifiers. Then we have useful
combinations. I believe we can always improve on that and I am always
ready to listen here.

> The set with 'ATOMIC' is hard to wrap my mind around.
> They relate to ALLOC_HIGH and ALLOC_HARDER, but also to WMARK_MIN,
> WMARK_LOW, WMARK_HIGH ... I think.

ALLOC_* and WMARK_* are internal allocator concepts and I believe users
of gfp flags shouldn't really care or even know those exist.

> I wonder if FS,IO is really in the same set as DIRECT_RECLAIM as they
> all affect reclaim. Maybe FS and IO are only relevant if DIRECT_RECLAIM
> is set?

Yes, this is indeed the case. The page allocator doesn't reach outside
of its own scope without direct reclaim.

> I'd love to know what to expect if neither RETRY_MAYFAIL nor NOFAIL is
> set. I guess it can fail, but it still tries harder than if
> RETRY_MAYFAIL is set....

The reclaim behavior is described along with the respective modifiers. I
believe we can thank you for this structure as you were the primary
driving force to clarify the behavior.

> Ahhhh... I found some documentation which mentions that RETRY_MAYFAIL
> doesn't trigger the oom killer. Is that it? So RETRY_NOKILLOOM might be
> a better name?

Again, those are implementation details and I am not sure we really want
to bother users with all of them. This would quickly become hairy and
likely even outdated after some time. The documentation tries to
describe different levels of involvement: NOWAIT - no direct reclaim,
NORETRY - only a light attempt to reclaim, RETRY_MAYFAIL - try as hard
as feasible, NOFAIL - cannot really fail. If we can improve the wording
I am all for it.

> It would seem to make sense for kvmalloc to WARN_ON if it is passed
> flags that do not allow it to use vmalloc.

vmalloc is certainly not the hottest path in the kernel so I wouldn't be
opposed. One should be careful that WARN_ON is effectively BUG_ON in
some configurations but we are sinners from that perspective all over
the place...

Thanks!
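Neil's WARN_ON suggestion can be sketched in userspace. Everything here is a hedged illustration, not kvmalloc's actual implementation: malloc() stands in for both kmalloc() and the vmalloc fallback, the bit values are invented, and the `warned` flag stands in for WARN_ON().

```c
#include <stdlib.h>

/* Invented bit values, illustrative only. */
#define SK_DIRECT_RECLAIM	0x1u
#define SK_FS			0x2u
#define SK_GFP_KERNEL		(SK_DIRECT_RECLAIM | SK_FS)

static int warned;	/* WARN_ON() stand-in, so a test can observe it */

/* Sketch of the suggestion: warn when the gfp mask rules out the
 * vmalloc fallback, instead of silently ignoring the 'v' in the name. */
static void *kvmalloc_sketch(size_t size, unsigned int gfp)
{
	/* optimistic small "kmalloc" attempt; large sizes fail here */
	void *p = (size <= 4096) ? malloc(size) : NULL;

	if (!p) {
		if ((gfp & SK_GFP_KERNEL) != SK_GFP_KERNEL) {
			warned = 1;	/* WARN_ON(): no fallback possible */
			return NULL;
		}
		p = malloc(size);	/* "vmalloc" fallback stand-in */
	}
	return p;
}
```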
On Fri, Oct 08, 2021 at 09:48:39AM +0200, Michal Hocko wrote: > On Fri 08-10-21 10:15:45, Neil Brown wrote: > > On Thu, 07 Oct 2021, Michal Hocko wrote: > > > On Thu 07-10-21 10:14:52, Dave Chinner wrote: > > > > On Tue, Oct 05, 2021 at 02:27:45PM +0200, Vlastimil Babka wrote: > > > > > On 10/5/21 13:09, Michal Hocko wrote: > > > > > > On Tue 05-10-21 11:20:51, Vlastimil Babka wrote: > > > > > > [...] > > > > > >> > --- a/include/linux/gfp.h > > > > > >> > +++ b/include/linux/gfp.h > > > > > >> > @@ -209,7 +209,11 @@ struct vm_area_struct; > > > > > >> > * used only when there is no reasonable failure policy) but it is > > > > > >> > * definitely preferable to use the flag rather than opencode endless > > > > > >> > * loop around allocator. > > > > > >> > - * Using this flag for costly allocations is _highly_ discouraged. > > > > > >> > + * Use of this flag may lead to deadlocks if locks are held which would > > > > > >> > + * be needed for memory reclaim, write-back, or the timely exit of a > > > > > >> > + * process killed by the OOM-killer. Dropping any locks not absolutely > > > > > >> > + * needed is advisable before requesting a %__GFP_NOFAIL allocate. > > > > > >> > + * Using this flag for costly allocations (order>1) is _highly_ discouraged. > > > > > >> > > > > > >> We define costly as 3, not 1. But sure it's best to avoid even order>0 for > > > > > >> __GFP_NOFAIL. Advising order>1 seems arbitrary though? > > > > > > > > > > > > This is not completely arbitrary. We have a warning for any higher order > > > > > > allocation. > > > > > > rmqueue: > > > > > > WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1)); > > > > > > > > > > Oh, I missed that. > > > > > > > > > > > I do agree that "Using this flag for higher order allocations is > > > > > > _highly_ discouraged. > > > > > > > > > > Well, with the warning in place this is effectively forbidden, not just > > > > > discouraged. > > > > > > > > Yup, especially as it doesn't obey __GFP_NOWARN. 
> > > > > > > > See commit de2860f46362 ("mm: Add kvrealloc()") as a direct result > > > > of unwittingly tripping over this warning when adding __GFP_NOFAIL > > > > annotations to replace open coded high-order kmalloc loops that have > > > > been in place for a couple of decades without issues. > > > > > > > > Personally I think that the way __GFP_NOFAIL is first of all > > > > recommended over open coded loops and then only later found to be > > > > effectively forbidden and needing to be replaced with open coded > > > > loops to be a complete mess. > > > > > > Well, there are two things. Opencoding something that _can_ be replaced > > > by __GFP_NOFAIL and those that cannot because the respective allocator > > > doesn't really support that semantic. kvmalloc is explicit about that > > > IIRC. If you have a better way to consolidate the documentation then I > > > am all for it. > > > > I think one thing that might help make the documentation better is to > > explicitly state *why* __GFP_NOFAIL is better than a loop. > > > > It occurs to me that > > while (!(p = kmalloc(sizeof(*p), GFP_KERNEL)); > > > > would behave much the same as adding __GFP_NOFAIL and dropping the > > 'while'. So why not? I certainly cannot see the need to add any delay > > to this loop as kmalloc does a fair bit of sleeping when permitted. > > > > I understand that __GFP_NOFAIL allows page_alloc to dip into reserves, > > but Mel holds that up as a reason *not* to use __GFP_NOFAIL as it can > > impact on other subsystems. > > __GFP_NOFAIL usage is a risk on its own. It is a hard requirement that > the allocator cannot back off. No, "allocator cannot back off" isn't a hard requirement for most GFP_NOFAIL uses. *Not failing the allocation* is the hard requirement. 
How long it takes for the allocation to actually succeed is irrelevant to most callers, and given that we are replacing loops that do while (!(p = kmalloc(sizeof(*p), GFP_KERNEL))) with __GFP_NOFAIL largely indicates that allocation *latency* and/or deadlocks are not an issue here. Indeed, if we deadlock in XFS because there is no memory available, that is *not a problem kmalloc() should be trying to solve*. The problem is the caller being unable to handle allocation failure, so if allocation cannot make progress, that needs to be fixed by the caller getting rid of the unfailable allocation. The fact is that we've had these loops in production code for a couple of decades and these subsystems just aren't failing or deadlocking with such loops. IOWs, we don't need __GFP_NOFAIL to dig deep into reserves or drive the system to OOM killing - we just need it to keep retrying the same allocation until it succeeds. Put simply, we want "retry forever" semantics to match what production kernels have been doing for the past couple of decades, but all we've been given are "never fail" semantics that also do something different and potentially much more problematic. Do you see the difference here? __GFP_NOFAIL is not what we need in the vast majority of cases where it is used. We don't want the failing allocations to drive the machine hard into critical reserves, we just want the allocation to -eventually succeed- and if it doesn't, that's our problem to handle, not kmalloc().... > So it has to do absolutely everything to > succeed. Whether it cheats and dips into reserves or not is a mere > implementation detail and subject to the specific implementation. My point exactly: that's what the MM thinks __GFP_NOFAIL is supposed to provide callers with. 
What we are trying to tell you is that the semantics associated with __GFP_NOFAIL are not actually what we require, and it's the current semantics of __GFP_NOFAIL that cause all the "can't be applied consistently across the entire allocation APIs" problems.... > > > > So, effectively, we have to open-code around kvmalloc() in > > > > situations where failure is not an option. Even if we pass > > > > __GFP_NOFAIL to __vmalloc(), it isn't guaranteed to succeed because > > > > of the "we won't honor gfp flags passed to __vmalloc" semantics it > > > > has. > > > > > > yes vmalloc doesn't support nofail semantic and it is not really trivial > > > to craft it there. Yet retry-forever is trivial to implement across everything: kvmalloc(size, gfp_mask) { gfp_t flags = gfp_mask & ~__GFP_RETRY_FOREVER; do { p = __kvmalloc(size, flags); } while (!p && (gfp_mask & __GFP_RETRY_FOREVER)); return p; } That provides "allocation will eventually succeed" semantics just fine, yes? It doesn't guarantee forwards progress or success, just that *it won't fail*. It should be obvious what the difference between "retry forever" and __GFP_NOFAIL semantics is now, and why we don't actually want __GFP_NOFAIL. We just want __GFP_RETRY_FOREVER semantics that can be applied consistently across the entire allocation API regardless of whatever other flags are passed into the allocation: don't return until an allocation with the provided semantics succeeds. > > > > Even the API constraints of kvmalloc() w.r.t. only doing the vmalloc > > > > fallback if the gfp context is GFP_KERNEL - we already do GFP_NOFS > > > > kvmalloc via memalloc_nofs_save/restore(), so this behavioural > > > > restriction w.r.t. gfp flags just makes no sense at all. > > > > > > GFP_NOFS (without using the scope API) has the same problem as NOFAIL in > > > the vmalloc. Hence it is not supported. If you use the scope API then > > > you can GFP_KERNEL for kvmalloc. 
This is clumsy but I am not sure how to > > > define these conditions in a more sensible way. Special case NOFS if the > > > scope api is in use? Why do you want an explicit NOFS then? Exactly my point - this is clumsy and a total mess. I'm not asking for an explicit GFP_NOFS, just pointing out that the documented restriction that "vmalloc can only do GFP_KERNEL allocations" is completely wrong. vmalloc() { if (!(gfp_flags & __GFP_FS)) memalloc_nofs_save(); p = __vmalloc(gfp_flags | GFP_KERNEL); if (!(gfp_flags & __GFP_FS)) memalloc_nofs_restore(); } Yup, that's how simple it is to support GFP_NOFS in vmalloc(). This goes along with the argument that "it's impossible to do GFP_NOFAIL with vmalloc" as I addressed above. These things are not impossible, but we hide behind "we don't want people to use vmalloc" as an excuse for having shitty behaviour whilst ignoring that vmalloc is *heavily used* by core subsystems like filesystems because they cannot rely on high order allocations succeeding.... It also points out that the scope API is highly deficient. We can do GFP_NOFS via the scope API, but we can't do anything else because *there is no scope API for other GFP flags*. Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? That would save us a lot of bother in XFS. What about GFP_DIRECT_RECLAIM? I'd really like to turn that off for allocations in the XFS transaction commit path (as noted already in this thread) because direct reclaim that can make no progress is actively harmful (as noted already in this thread). Like I said - this is more than just bad documentation - the problem is that the whole allocation API is an inconsistent mess of control mechanisms to begin with... > > It would seem to make sense for kvmalloc to WARN_ON if it is passed > > flags that does not allow it to use vmalloc. > > vmalloc is certainly not the hottest path in the kernel so I wouldn't be > opposed. kvmalloc is most certainly becoming one of the hottest paths in XFS. 
IOWs, arguments that "vmalloc is not a hot path" are simply invalid these days because they are simply untrue. e.g. the profiles I posted in this thread... Cheers, Dave.
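The "retry forever" wrapper Dave sketches above can be illustrated with a compilable userspace analogue. The names (`mock_alloc`, `retry_alloc`) and the injected failure count are illustrative assumptions, not kernel API; the point is only that the retry policy lives in the caller, while the allocator's behaviour is unchanged:

```c
#include <stddef.h>
#include <stdlib.h>

/*
 * Userspace sketch of "retry forever" semantics: keep calling a plain,
 * failable allocator until it succeeds, without asking it to dig into
 * reserves. All names here are illustrative, not kernel API.
 */

static int fail_countdown = 3;	/* injected transient failures */

/* Stands in for kmalloc(GFP_KERNEL): may return NULL under pressure. */
static void *mock_alloc(size_t size)
{
	if (fail_countdown > 0) {
		fail_countdown--;
		return NULL;
	}
	return malloc(size);
}

/* The open-coded loop discussed in the thread, as a helper. */
static void *retry_alloc(size_t size)
{
	void *p;

	do {
		p = mock_alloc(size);
		/* a real kernel loop might back off or cond_resched() here */
	} while (!p);
	return p;
}
```

Under this model only the caller's policy differs from a failable allocation, which is the distinction drawn above between "retry forever" and `__GFP_NOFAIL`'s "never fail" semantics.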
On Sat 09-10-21 09:36:49, Dave Chinner wrote: > On Fri, Oct 08, 2021 at 09:48:39AM +0200, Michal Hocko wrote: > > __GFP_NOFAIL usage is a risk on its own. It is a hard requirement that > > the allocator cannot back off. [...] > > No, "allocator cannot back off" isn't a hard requirement for most > GFP_NOFAIL uses. *Not failing the allocation* is the hard > requirement. We are talking about the same thing here I believe. By cannot back off I really mean cannot fail. Just for clarification. > How long it takes for the allocation to actually succeed is > irrelevant to most callers, and given that we are replacing loops > that do > > while (!(p = kmalloc(sizeof(*p), GFP_KERNEL))) > > with __GFP_NOFAIL largely indicates that allocation *latency* and/or > deadlocks are not an issue here. Agreed. > Indeed, if we deadlock in XFS because there is no memory available, > that is *not a problem kmalloc() should be trying to solve*. The > problem is the caller being unable to handle allocation failure, so > if allocation cannot make progress, that needs to be fixed by the > caller getting rid of the unfailable allocation. > > The fact is that we've had these loops in production code for a > couple of decades and these subsystems just aren't failing or > deadlocking with such loops. IOWs, we don't need __GFP_NOFAIL to dig > deep into reserves or drive the system to OOM killing - we just need > it to keep retrying the same allocation until it succeeds. > > Put simply, we want "retry forever" semantics to match what > production kernels have been doing for the past couple of decades, > but all we've been given are "never fail" semantics that also do > something different and potentially much more problematic. > > Do you see the difference here? __GFP_NOFAIL is not what we > need in the vast majority of cases where it is used. 
We don't want > the failing allocations to drive the machine hard into critical > reserves, we just want the allocation to -eventually succeed- and if > it doesn't, that's our problem to handle, not kmalloc().... I can see your point. I do have a recollection that there were some instances where emergency access to memory reserves helped in OOM situations. Anyway, as I've tried to explain earlier, this all is an implementation detail users of the flag shouldn't really care about. If this heuristic is not doing any good then it should be removed. [...] > > > > > Even the API constraints of kvmalloc() w.r.t. only doing the vmalloc > > > > > fallback if the gfp context is GFP_KERNEL - we already do GFP_NOFS > > > > > kvmalloc via memalloc_nofs_save/restore(), so this behavioural > > > > > restriction w.r.t. gfp flags just makes no sense at all. > > > > > > > > GFP_NOFS (without using the scope API) has the same problem as NOFAIL in > > > > the vmalloc. Hence it is not supported. If you use the scope API then > > > > you can GFP_KERNEL for kvmalloc. This is clumsy but I am not sure how to > > > > define these conditions in a more sensible way. Special case NOFS if the > > > > scope api is in use? Why do you want an explicit NOFS then? > > Exactly my point - this is clumsy and a total mess. I'm not asking > for an explicit GFP_NOFS, just pointing out that the documented > restrictions that "vmalloc can only do GFP_KERNEL allocations" is > completely wrong. > > vmalloc() > { > if (!(gfp_flags & __GFP_FS)) > memalloc_nofs_save(); > p = __vmalloc(gfp_flags | GFP_KERNEL); > if (!(gfp_flags & __GFP_FS)) > memalloc_nofs_restore(); > } > > Yup, that's how simple it is to support GFP_NOFS support in > vmalloc(). Yes, this would work from the functionality POV but it defeats the philosophy behind the scope API. Why would you even need this if the scope was defined by the caller of the allocator? 
The initial hope was to get rid of the NOFS abuse that can be seen in many filesystems. All allocations from the scope would simply inherit the NOFS semantic so an explicit NOFS shouldn't really be necessary, right? > This goes along with the argument that "it's impossible to do > GFP_NOFAIL with vmalloc" as I addressed above. These things are not > impossible, but we hide behind "we don't want people to use vmalloc" > as an excuse for having shitty behaviour whilst ignoring that > vmalloc is *heavily used* by core subsystems like filesystems > because they cannot rely on high order allocations succeeding.... I do not think there is any reason to discourage anybody from using vmalloc these days. 32b is dying out and vmalloc space is no longer a very scarce resource. > It also points out that the scope API is highly deficient. > We can do GFP_NOFS via the scope API, but we can't > do anything else because *there is no scope API for other GFP > flags*. > > Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? NO{FS,IO} were the first flags to start this approach. And I have to admit the experiment was much less successful than I hoped for. There are still thousands of direct NOFS users so for some reason defining scopes is not an easy thing to do. I am not against NOFAIL scopes in principle but seeing the nofs "success" I am worried this will not go really well either and it is much more tricky as NOFAIL has much stronger requirements than NOFS. Just imagine how tricky this can be if you just call library code that is not under your control within a NOFAIL scope. What if that library code decides to allocate (e.g. printk that would attempt to do an optimistic NOWAIT allocation). > That > would save us a lot of bother in XFS. What about GFP_DIRECT_RECLAIM? 
> I'd really like to turn that off for allocations in the XFS > transaction commit path (as noted already in this thread) because > direct reclaim that can make no progress is actively harmful (as > noted already in this thread) As always, if you have reasonable use cases then it is best to bring them up on the MM list and we can discuss them. > Like I said - this is more than just bad documentation - the problem > is that the whole allocation API is an inconsistent mess of control > mechanisms to begin with... I am not going to disagree. There is a lot of historical baggage and it doesn't help that any change is really hard to review because this interface is used throughout the kernel. I have tried to change some of the most obvious inconsistencies and I can tell this has always been a frustrating experience with a very small "reward" in the end because there are so many other problems. That being said, I would more than love to have a consistent and well defined interface and if you want to spend a lot of time on that then be my guest. > > > It would seem to make sense for kvmalloc to WARN_ON if it is passed > > > flags that does not allow it to use vmalloc. > > > > vmalloc is certainly not the hottest path in the kernel so I wouldn't be > > opposed. > > kvmalloc is most certainly becoming one of the hottest paths in XFS. > IOWs, arguments that "vmalloc is not a hot path" are simply invalid > these days because they are simply untrue. e.g. the profiles I > posted in this thread... Is it such a hot path that a check for compatible flags would be visible in profiles though?
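The compatibility check being discussed — warning when kvmalloc is handed flags that rule out the vmalloc fallback — could be keyed off a predicate roughly like the following userspace sketch. The helper name and the flag values are assumptions for illustration; the real policy lives in `kvmalloc_node()` in mm/util.c:

```c
#include <stdbool.h>

typedef unsigned int gfp_t;	/* userspace stand-in for the kernel type */

/* Illustrative flag bits, mirroring the kernel's shape, not its values. */
#define __GFP_IO		0x40u
#define __GFP_FS		0x80u
#define __GFP_DIRECT_RECLAIM	0x400u
#define GFP_KERNEL	(__GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOFS	(__GFP_DIRECT_RECLAIM | __GFP_IO)
#define GFP_NOIO	(__GFP_DIRECT_RECLAIM)

/*
 * vmalloc's page-table pages are allocated GFP_KERNEL (as the ceph
 * comment later in the thread notes), so the vmalloc fallback is only
 * safe when the caller's flags are a superset of GFP_KERNEL. A
 * kvmalloc WARN_ON could test exactly this predicate.
 */
static bool kvmalloc_can_use_vmalloc(gfp_t flags)
{
	return (flags & GFP_KERNEL) == GFP_KERNEL;
}
```

With such a predicate, a `GFP_NOFS` or `GFP_NOIO` caller would trip the warning instead of silently losing the vmalloc fallback.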
On Mon, 11 Oct 2021, Michal Hocko wrote: > On Sat 09-10-21 09:36:49, Dave Chinner wrote: > > > > Put simply, we want "retry forever" semantics to match what > > production kernels have been doing for the past couple of decades, > > but all we've been given are "never fail" semantics that also do > > something different and potentially much more problematic. > > > > Do you see the difference here? __GFP_NOFAIL is not what we > > need in the vast majority of cases where it is used. We don't want > > the failing allocations to drive the machine hard into critical > > reserves, we just want the allocation to -eventually succeed- and if > > it doesn't, that's our problem to handle, not kmalloc().... > > I can see your point. I do have a recollection that there were some > instances where emergency access to memory reserves helped > in OOM situations. It might have been better to annotate those particular calls with __GFP_ATOMIC or similar rather than change GFP_NOFAIL for everyone. Too late to fix that now though I think. Maybe the best way forward is to discourage new uses of GFP_NOFAIL. We would need a well-documented replacement. > > Anyway, as I've tried to explain earlier, this all is an > implementation detail users of the flag shouldn't really care about. If > this heuristic is not doing any good then it should be removed. Maybe users shouldn't care about implementation details, but they do need to care about semantics and costs. We need to know when it is appropriate to use GFP_NOFAIL, and when it is not. And what alternatives there are when it is not appropriate. Just saying "try to avoid using it" and "requires careful analysis" isn't acceptable. Sometimes it is unavoidable and analysis can only be done with a clear understanding of costs. Possibly analysis can only be done with a clear understanding of the internal implementation details. > > > It also points out that the scope API is highly deficient. 
> > We can do GFP_NOFS via the scope API, but we can't > > do anything else because *there is no scope API for other GFP > > flags*. > > > > Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? > > NO{FS,IO} were the first flags to start this approach. And I have to admit > the experiment was much less successful than I hoped for. There are > still thousands of direct NOFS users so for some reason defining scopes > is not an easy thing to do. I'm not certain your conclusion is valid. It could be that defining scopes is easy enough, but no one feels motivated to do it. We need to do more than provide functionality. We need to tell people. Repeatedly. And advertise widely. And propose patches to make use of the functionality. And... and... and... I think changing to the scope API is a good change, but it is conceptually a big change. It needs to be driven. > > I am not against NOFAIL scopes in principle but seeing the nofs > "success" I am worried this will not go really well either and it is > much more tricky as NOFAIL has much stronger requirements than NOFS. > Just imagine how tricky this can be if you just call a library code > that is not under your control within a NOFAIL scope. What if that > library code decides to allocate (e.g. printk that would attempt to do > an optimistic NOWAIT allocation). __GFP_NOMEMALLOC holds a lesson worth learning here. PF_MEMALLOC effectively adds __GFP_MEMALLOC to all allocations, but some call sites need to over-ride that because there are alternate strategies available. This need-to-over-ride doesn't apply to NOFS or NOIO as that really is a thread-wide state. But MEMALLOC and NOFAIL are different. Some call sites can reasonably handle failure locally. I imagine the scope-api would say something like "NO_ENOMEM". i.e. memory allocations can fail as long as ENOMEM is never returned. Any caller that sets __GFP_RETRY_MAYFAIL or __GFP_NORETRY or maybe some others would not be affected by the NO_ENOMEM scope. 
But a plain GFP_KERNEL would. Introducing the scope api would be a good opportunity to drop the priority boost and *just* block until success. Priority boosts could then be added (possibly as a scope) only where they are measurably needed. I think we have 28 process flags in use. So we can probably afford one more for PF_MEMALLOC_NO_ENOMEM. What other scope flags might be useful? PF_MEMALLOC_BOOST which added __GFP_ATOMIC but not __GFP_MEMALLOC ?? PF_MEMALLOC_NORECLAIM ?? > > > That > > would save us a lot of bother in XFS. What about GFP_DIRECT_RECLAIM? > > I'd really like to turn that off for allocations in the XFS > > transaction commit path (as noted already in this thread) because > > direct reclaim that can make no progress is actively harmful (as > > noted already in this thread) > > As always if you have reasonable usecases then it is best to bring them > up on the MM list and we can discuss them. We are on the MM lists now... let's discuss :-) Dave: How would you feel about an effort to change xfs to stop using GFP_NOFS, and to use memalloc_nofs_save/restore instead? Having a major filesystem make the transition would be a good test-case, and could be used to motivate other filesystems to follow. We could add and use memalloc_no_enomem_save() too. Thanks, NeilBrown
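Neil's hypothetical PF_MEMALLOC_NO_ENOMEM scope could follow the shape of the existing memalloc_nofs_save()/restore() pair. The sketch below is a userspace analogue under that assumption — the flag bit, the helper names, and the thread-local stand-in for current->flags are all invented for illustration; no such kernel API exists:

```c
#define PF_MEMALLOC_NO_ENOMEM	0x1u	/* hypothetical process flag */

static __thread unsigned int current_flags;	/* stands in for current->flags */

/* Enter the scope; return the previous state so scopes can nest. */
static unsigned int memalloc_no_enomem_save(void)
{
	unsigned int old = current_flags & PF_MEMALLOC_NO_ENOMEM;

	current_flags |= PF_MEMALLOC_NO_ENOMEM;
	return old;
}

static void memalloc_no_enomem_restore(unsigned int old)
{
	current_flags = (current_flags & ~PF_MEMALLOC_NO_ENOMEM) | old;
}

/*
 * An allocator honouring the scope would keep retrying instead of
 * returning ENOMEM - unless the call site opted out with
 * __GFP_RETRY_MAYFAIL/__GFP_NORETRY, as suggested above.
 */
static int alloc_may_return_enomem(void)
{
	return !(current_flags & PF_MEMALLOC_NO_ENOMEM);
}
```

The save/restore pairing is what lets such scopes nest safely, which is also why the existing memalloc_nofs_save() returns the previous flag state rather than unconditionally clearing it on restore.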
On Mon, Oct 11, 2021 at 01:57:36PM +0200, Michal Hocko wrote: > On Sat 09-10-21 09:36:49, Dave Chinner wrote: > > On Fri, Oct 08, 2021 at 09:48:39AM +0200, Michal Hocko wrote: > > > > > > Even the API constaints of kvmalloc() w.r.t. only doing the vmalloc > > > > > > fallback if the gfp context is GFP_KERNEL - we already do GFP_NOFS > > > > > > kvmalloc via memalloc_nofs_save/restore(), so this behavioural > > > > > > restriction w.r.t. gfp flags just makes no sense at all. > > > > > > > > > > GFP_NOFS (without using the scope API) has the same problem as NOFAIL in > > > > > the vmalloc. Hence it is not supported. If you use the scope API then > > > > > you can GFP_KERNEL for kvmalloc. This is clumsy but I am not sure how to > > > > > define these conditions in a more sensible way. Special case NOFS if the > > > > > scope api is in use? Why do you want an explicit NOFS then? > > > > Exactly my point - this is clumsy and a total mess. I'm not asking > > for an explicit GFP_NOFS, just pointing out that the documented > > restrictions that "vmalloc can only do GFP_KERNEL allocations" is > > completely wrong. > > > > vmalloc() > > { > > if (!(gfp_flags & __GFP_FS)) > > memalloc_nofs_save(); > > p = __vmalloc(gfp_flags | GFP_KERNEL) > > if (!(gfp_flags & __GFP_FS)) > > memalloc_nofs_restore(); > > } > > > > Yup, that's how simple it is to support GFP_NOFS support in > > vmalloc(). > > Yes, this would work from the functionality POV but it defeats the > philosophy behind the scope API. Why would you even need this if the > scope was defined by the caller of the allocator? Who actually cares that vmalloc might be using the scoped API internally to implement GFP_NOFS or GFP_NOIO? Nobody at all. It is far more useful (and self documenting!) for one-off allocations to pass a GFP_NOFS flag than it is to use a scope API... > The initial hope was > to get rid of the NOFS abuse that can be seen in many filesystems. 
All > allocations from the scope would simply inherit the NOFS semantic so > an explicit NOFS shouldn't be really necessary, right? Yes, but I think you miss my point entirely: that the vmalloc restrictions on what gfp flags can be passed without making it entirely useless are completely arbitrary and non-sensical. > > This goes along with the argument that "it's impossible to do > > GFP_NOFAIL with vmalloc" as I addressed above. These things are not > > impossible, but we hide behind "we don't want people to use vmalloc" > > as an excuse for having shitty behaviour whilst ignoring that > > vmalloc is *heavily used* by core subsystems like filesystems > > because they cannot rely on high order allocations succeeding.... > > I do not think there is any reason to discourage anybody from using > vmalloc these days. 32b is dying out and vmalloc space is no longer a > very scarce resource. We are still discouraged from doing high order allocations and should only use pages directly. Not to mention that the API doesn't make it simple to use vmalloc as a direct replacement for high order kmalloc tends to discourage new users... > > It also points out that the scope API is highly deficient. > > We can do GFP_NOFS via the scope API, but we can't > > do anything else because *there is no scope API for other GFP > > flags*. > > > > Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? > > NO{FS,IO} where first flags to start this approach. And I have to admit > the experiment was much less successful then I hoped for. There are > still thousands of direct NOFS users so for some reason defining scopes > is not an easy thing to do. > > I am not against NOFAIL scopes in principle but seeing the nofs > "success" I am worried this will not go really well either and it is > much more tricky as NOFAIL has much stronger requirements than NOFS. > Just imagine how tricky this can be if you just call a library code > that is not under your control within a NOFAIL scope. 
What if that > library code decides to allocate (e.g. printk that would attempt to do > an optimistic NOWAIT allocation). I already asked you that _exact_ question earlier in the thread w.r.t. kvmalloc(GFP_NOFAIL) using optimistic NOWAIT kmalloc allocation. I asked you as an MM expert to define *and document* the behaviour that should result, not turn around and use the fact that it is undefined behaviour as a "this is too hard" excuse for not changing anything. The fact is that the scope APIs are only really useful for certain contexts where restrictions are set by higher level functionality. For one-off allocation constraints the API sucks and we end up with crap like this (found in btrfs): /* * We're holding a transaction handle, so use a NOFS memory * allocation context to avoid deadlock if reclaim happens. */ nofs_flag = memalloc_nofs_save(); value = kmalloc(size, GFP_KERNEL); memalloc_nofs_restore(nofs_flag); But also from btrfs, this pattern is repeated in several places: nofs_flag = memalloc_nofs_save(); ctx = kvmalloc(struct_size(ctx, chunks, num_chunks), GFP_KERNEL); memalloc_nofs_restore(nofs_flag); This needs to use the scoped API because vmalloc doesn't support GFP_NOFS. So the poor "vmalloc needs scoped API" pattern is bleeding over into other code that doesn't have the problems vmalloc does. Do you see how this leads to poorly written code now? Or perhaps I should just point at ceph? /* * kvmalloc() doesn't fall back to the vmalloc allocator unless flags are * compatible with (a superset of) GFP_KERNEL. This is because while the * actual pages are allocated with the specified flags, the page table pages * are always allocated with GFP_KERNEL. * * ceph_kvmalloc() may be called with GFP_KERNEL, GFP_NOFS or GFP_NOIO. 
*/ void *ceph_kvmalloc(size_t size, gfp_t flags) { void *p; if ((flags & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS)) { p = kvmalloc(size, flags); } else if ((flags & (__GFP_IO | __GFP_FS)) == __GFP_IO) { unsigned int nofs_flag = memalloc_nofs_save(); p = kvmalloc(size, GFP_KERNEL); memalloc_nofs_restore(nofs_flag); } else { unsigned int noio_flag = memalloc_noio_save(); p = kvmalloc(size, GFP_KERNEL); memalloc_noio_restore(noio_flag); } return p; } IOWs, a large number of the users of the scope API simply make [k]vmalloc() provide GFP_NOFS behaviour. ceph_kvmalloc() is pretty much a wrapper that indicates how all vmalloc functions should behave. Honour GFP_NOFS and GFP_NOIO by using the scope API internally. > > That > > would save us a lot of bother in XFS. What about GFP_DIRECT_RECLAIM? > > I'd really like to turn that off for allocations in the XFS > > transaction commit path (as noted already in this thread) because > > direct reclaim that can make no progress is actively harmful (as > > noted already in this thread) > > As always if you have reasonable usecases then it is best to bring them > up on the MM list and we can discuss them. They've been pointed out many times in the past, and I've pointed them out again in this thread. Telling me to "bring them up on the mm list" when that's exactly what I'm doing right now is not a helpful response. > > Like I said - this is more than just bad documentation - the problem > > is that the whole allocation API is an inconsistent mess of control > > mechanisms to begin with... > > I am not going to disagree. There is a lot of historical baggage and > it doesn't help that any change is really hard to review because this > interface is used throughout the kernel. I have tried to change some > most obvious inconsistencies and I can tell this has always been a > frustrating experience with a very small "reward" in the end because > there are so many other problems. 
Technical debt in the mm APIs is something the mm developers need to address, not the people who tell you it's a problem for them. Telling the messenger "do my job for me because I find it too frustrating to make progress myself" doesn't help anyone make progress. If you find it frustrating trying to get mm code changed, imagine what it feels like for someone on the outside asking for relatively basic things like a consistent control API.... > That being said, I would more than love to have a consistent and well > defined interface and if you want to spend a lot of time on that then be > my guest. My point exactly: saying "fix it yourself" is not a good response.... > > > > It would seem to make sense for kvmalloc to WARN_ON if it is passed > > > > flags that does not allow it to use vmalloc. > > > > > > vmalloc is certainly not the hottest path in the kernel so I wouldn't be > > > opposed. > > > > kvmalloc is most certainly becoming one of the hottest paths in XFS. > > IOWs, arguments that "vmalloc is not a hot path" are simply invalid > > these days because they are simply untrue. e.g. the profiles I > > posted in this thread... > > Is it such a hot path that a check for compatible flags would be visible > in profiles though? No, that doesn't even show up as noise - the overhead of global spinlock contention and direct reclaim are the elephants that profiles point to, not a couple of flag checks on function parameters... Cheers, Dave.
On Wed 13-10-21 13:32:31, Dave Chinner wrote: > On Mon, Oct 11, 2021 at 01:57:36PM +0200, Michal Hocko wrote: > > On Sat 09-10-21 09:36:49, Dave Chinner wrote: > > > On Fri, Oct 08, 2021 at 09:48:39AM +0200, Michal Hocko wrote: > > > > > > > Even the API constaints of kvmalloc() w.r.t. only doing the vmalloc > > > > > > > fallback if the gfp context is GFP_KERNEL - we already do GFP_NOFS > > > > > > > kvmalloc via memalloc_nofs_save/restore(), so this behavioural > > > > > > > restriction w.r.t. gfp flags just makes no sense at all. > > > > > > > > > > > > GFP_NOFS (without using the scope API) has the same problem as NOFAIL in > > > > > > the vmalloc. Hence it is not supported. If you use the scope API then > > > > > > you can GFP_KERNEL for kvmalloc. This is clumsy but I am not sure how to > > > > > > define these conditions in a more sensible way. Special case NOFS if the > > > > > > scope api is in use? Why do you want an explicit NOFS then? > > > > > > Exactly my point - this is clumsy and a total mess. I'm not asking > > > for an explicit GFP_NOFS, just pointing out that the documented > > > restrictions that "vmalloc can only do GFP_KERNEL allocations" is > > > completely wrong. > > > > > > vmalloc() > > > { > > > if (!(gfp_flags & __GFP_FS)) > > > memalloc_nofs_save(); > > > p = __vmalloc(gfp_flags | GFP_KERNEL) > > > if (!(gfp_flags & __GFP_FS)) > > > memalloc_nofs_restore(); > > > } > > > > > > Yup, that's how simple it is to support GFP_NOFS support in > > > vmalloc(). > > > > Yes, this would work from the functionality POV but it defeats the > > philosophy behind the scope API. Why would you even need this if the > > scope was defined by the caller of the allocator? > > Who actually cares that vmalloc might be using the scoped API > internally to implement GFP_NOFS or GFP_NOIO? Nobody at all. > It is far more useful (and self documenting!) for one-off allocations > to pass a GFP_NOFS flag than it is to use a scope API... 
I would agree with you if the explicit GFP_NOFS usage was consistent and actually justified in the majority of cases. My experience tells me otherwise though. Many filesystems use the flag just because that is easier. That leads to a huge overuse of the flag that leads to practical problems. I was hoping that if we offer an API that would define problematic reclaim recursion scopes then it would reduce the abuse. I didn't expect this to happen overnight, but it has been a few years and it seems it will not happen soon either. [...] > > > It also points out that the scope API is highly deficient. > > > We can do GFP_NOFS via the scope API, but we can't > > > do anything else because *there is no scope API for other GFP > > > flags*. > > > > > > Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? > > > > NO{FS,IO} were the first flags to start this approach. And I have to admit > > the experiment was much less successful than I hoped for. There are > > still thousands of direct NOFS users so for some reason defining scopes > > is not an easy thing to do. > > > > I am not against NOFAIL scopes in principle but seeing the nofs > > "success" I am worried this will not go really well either and it is > > much more tricky as NOFAIL has much stronger requirements than NOFS. > > Just imagine how tricky this can be if you just call a library code > > that is not under your control within a NOFAIL scope. What if that > > library code decides to allocate (e.g. printk that would attempt to do > > an optimistic NOWAIT allocation). > > I already asked you that _exact_ question earlier in the thread > w.r.t. kvmalloc(GFP_NOFAIL) using optimistic NOWAIT kmalloc > allocation. I asked you as an MM expert to define *and document* the > behaviour that should result, not turn around and use the fact that > it is undefined behaviour as a "this is too hard" excuse for not > changing anything. 
Dave, you have "thrown" a lot of complaints in previous emails and it is hard to tell rants from feature requests apart. I am sorry but I believe it would be much more productive to continue this discussion if you could moderate your tone. Can I ask you to break down your feature requests into separate emails so that we can discuss and track them separately rather than in this quite long thread, which has IMHO diverged from the initial topic. Thanks! > The fact is that the scope APIs are only really useful for certain > contexts where restrictions are set by higher level functionality. > For one-off allocation constraints the API sucks and we end up with Could you be more specific about these one-off allocation constraints? What would be the reason to define one-off NO{FS,IO} allocation constraints? Or did you have your NOFAIL example in mind? > crap like this (found in btrfs): > > /* > * We're holding a transaction handle, so use a NOFS memory > * allocation context to avoid deadlock if reclaim happens. > */ > nofs_flag = memalloc_nofs_save(); > value = kmalloc(size, GFP_KERNEL); > memalloc_nofs_restore(nofs_flag); Yes this looks wrong indeed! If I were to review such code I would ask why the scope cannot match the transaction handle context. IIRC jbd does that. I am aware of these patterns. I was pulled into some discussions in the past and in some it turned out that the constraint is not needed at all and in some cases that has led to a proper scope definition. As you point out in your other examples it just happens that it is easier to take the easy path and define scopes ad-hoc to work around allocation API limitations. [...] > IOWs, a large number of the users of the scope API simply make > [k]vmalloc() provide GFP_NOFS behaviour. ceph_kvmalloc() is pretty > much a wrapper that indicates how all vmalloc functions should > behave. Honour GFP_NOFS and GFP_NOIO by using the scope API > internally. 
I was discouraging this behavior at the vmalloc level to push people to use scopes properly - aka at the level where the reclaim recursion is really a problem. If that is infeasible in practice then we can re-evaluate of course. I was really hoping we could get rid of the cargo-cult GFP_NOFS usage this way, but the reality often disagrees with hopes. All that being said, let's discuss [k]vmalloc constraints and usecases that need changes in a separate email thread. Thanks!
On Wed, Oct 13, 2021 at 10:26:58AM +0200, Michal Hocko wrote: > > crap like this (found in btrfs): > > > > /* > > * We're holding a transaction handle, so use a NOFS memory > > * allocation context to avoid deadlock if reclaim happens. > > */ > > nofs_flag = memalloc_nofs_save(); > > value = kmalloc(size, GFP_KERNEL); > > memalloc_nofs_restore(nofs_flag); > > Yes this looks wrong indeed! If I were to review such code I would ask > why the scope cannot match the transaction handle context. IIRC jbd does > that. Adding the transaction start/end as the NOFS scope is a long term plan and going on for years, because it's not a change we would need in btrfs, but rather a favor to MM to switch away from "GFP_NOFS everywhere because it's easy". The first step was to convert the easy cases. Almost all safe cases switching GFP_NOFS to GFP_KERNEL have happened. Another step is to convert GFP_NOFS to memalloc_nofs_save/GFP_KERNEL/memalloc_nofs_restore in contexts where we know we'd rely on the transaction NOFS scope in the future. Once this is implemented, the memalloc_nofs_* calls are deleted and it works as expected. Now you may argue that the switch could be changing GFP_NOFS to GFP_KERNEL at that time, but that is not that easy to review or reason about in the whole transaction context in all allocations. This leads to code that was found in __btrfs_set_acl and called crap or wrong, because perhaps the background and the bigger plan are not immediately obvious. I hope the explanation above puts it into the right perspective. The other class of scoped NOFS protection is around vmalloc-based allocations, but that's for a different reason; it would be solved by the same transaction start/end conversion as well. I'm working on that from time to time but this usually gets pushed down in the todo list. 
It's changing a lot of code, and from what I've researched so far it cannot be done at once and would probably introduce bugs that are hard to hit because of the external conditions (allocator, system load, ...). I have a plan to do that incrementally, adding assertions and converting functions in small batches to be able to catch bugs early, but I'm not exactly thrilled to start such an endeavour in addition to normal development bug hunting. To get things moving again, I've refreshed the patch adding stubs and will try to find the best timing for merging to avoid patch conflicts, but no promises.
On Thu 14-10-21 13:32:01, David Sterba wrote: > On Wed, Oct 13, 2021 at 10:26:58AM +0200, Michal Hocko wrote: > > > crap like this (found in btrfs): > > > > > > /* > > > * We're holding a transaction handle, so use a NOFS memory > > > * allocation context to avoid deadlock if reclaim happens. > > > */ > > > nofs_flag = memalloc_nofs_save(); > > > value = kmalloc(size, GFP_KERNEL); > > > memalloc_nofs_restore(nofs_flag); > > > > Yes this looks wrong indeed! If I were to review such code I would ask > > why the scope cannot match the transaction handle context. IIRC jbd does > > that. > > Adding the transaction start/end as the NOFS scope is a long term plan > and going on for years, because it's not a change we would need in > btrfs, but rather a favor to MM to switch away from "GFP_NOFS everywhere > because it's easy". > > The first step was to convert the easy cases. Almost all safe cases > switching GFP_NOFS to GFP_KERNEL have happened. Another step is to > convert GFP_NOFS to memalloc_nofs_save/GFP_KERNEL/memalloc_nofs_restore > in contexts where we know we'd rely on the transaction NOFS scope in the > future. Once this is implemented, the memalloc_nofs_* calls are deleted > and it works as expected. Now you may argue that the switch could be > changing GFP_NOFS to GFP_KERNEL at that time, but that is not that easy > to review or reason about in the whole transaction context in all > allocations. > > This leads to code that was found in __btrfs_set_acl and called crap > or wrong, because perhaps the background and the bigger plan are not > immediately obvious. I hope the explanation above puts it into the > right perspective. Yes it helps. Thanks for the clarification, because this is far from obvious and the changelogs I've checked do not mention this high-level plan. I would have gone with a /* TODO: remove me once transactions use scopes... */ but this is obviously your call. 
> > The other class of scoped NOFS protection is around vmalloc-based > allocations, but that's for a different reason; it would be solved by the > same transaction start/end conversion as well. > > I'm working on that from time to time but this usually gets pushed down > in the todo list. It's changing a lot of code, and from what I've researched > so far it cannot be done at once and would probably introduce bugs that are hard to > hit because of the external conditions (allocator, system load, ...). > > I have a plan to do that incrementally, adding assertions and converting > functions in small batches to be able to catch bugs early, but I'm not > exactly thrilled to start such an endeavour in addition to normal > development bug hunting. > > To get things moving again, I've refreshed the patch adding stubs and > will try to find the best timing for merging to avoid patch conflicts, but > no promises. Thanks!
On Tue 12-10-21 08:49:46, Neil Brown wrote: > On Mon, 11 Oct 2021, Michal Hocko wrote: > > On Sat 09-10-21 09:36:49, Dave Chinner wrote: > > > > > > Put simply, we want "retry forever" semantics to match what > > > production kernels have been doing for the past couple of decades, > > > but all we've been given are "never fail" semantics that also do > > > something different and potentially much more problematic. > > > > > > Do you see the difference here? __GFP_NOFAIL is not what we > > > need in the vast majority of cases where it is used. We don't want > > > the failing allocations to drive the machine hard into critical > > > reserves, we just want the allocation to -eventually succeed- and if > > > it doesn't, that's our problem to handle, not kmalloc().... > > > > I can see your point. I do have a recollection that there were some > > instance involved where an emergency access to memory reserves helped > > in OOM situations. > > It might have been better to annotate those particular calls with > __GFP_ATOMIC or similar rather then change GFP_NOFAIL for everyone. For historical reasons __GFP_ATOMIC is reserved for non sleeping allocations. __GFP_HIGH would be an alternative. > Too late to fix that now though I think. Maybe the best way forward is > to discourage new uses of GFP_NOFAIL. We would need a well-documented > replacement. I am not sure what that should be. Really, if the memory reserves behavior of GFP_NOFAIL is problematic then let's just rip it out. I do not see that a new nofail-like flag is due. > > Anway as I've tried to explain earlier that this all is an > > implementation detail users of the flag shouldn't really care about. If > > this heuristic is not doing any good then it should be removed. > > Maybe users shouldn't care about implementation details, but they do > need to care about semantics and costs. > We need to know when it is appropriate to use GFP_NOFAIL, and when it is > not. 
And what alternatives there are when it is not appropriate. > Just saying "try to avoid using it" and "requires careful analysis" > isn't acceptable. Sometimes it is unavoidable and analysis can only be > done with a clear understanding of costs. Possibly analysis can only be > done with a clear understanding of the internal implementation details. What we document currently is this * %__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller * cannot handle allocation failures. The allocation could block * indefinitely but will never return with failure. Testing for * failure is pointless. * New users should be evaluated carefully (and the flag should be * used only when there is no reasonable failure policy) but it is * definitely preferable to use the flag rather than opencode endless * loop around allocator. * Using this flag for costly allocations is _highly_ discouraged. so we tell when to use it - aka no reasonable failure policy. We put some discouraging language there. There is some discouraging language for high order allocations. Maybe we should suggest an alternative there. It seems there are usecases for those as well so we should implement a proper NOFAIL kvmalloc and recommend it for that instead. > > > It also points out that the scope API is highly deficient. > > > We can do GFP_NOFS via the scope API, but we can't > > > do anything else because *there is no scope API for other GFP > > > flags*. > > > > > > Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? > > > > NO{FS,IO} where first flags to start this approach. And I have to admit > > the experiment was much less successful then I hoped for. There are > > still thousands of direct NOFS users so for some reason defining scopes > > is not an easy thing to do. > > I'm not certain your conclusion is valid. It could be that defining > scopes is easy enough, but no one feels motivated to do it. > We need to do more than provide functionality. We need to tell people. 
> Repeatedly. And advertise widely. And propose patches to make use of > the functionality. And... and... and... Been there, done that for the low hanging fruit. Others were much more complex for me to follow up and I had other stuff on my table. > I think changing to the scope API is a good change, but it is > conceptually a big change. It needs to be driven. Agreed. > > I am not against NOFAIL scopes in principle but seeing the nofs > > "success" I am worried this will not go really well either and it is > > much more tricky as NOFAIL has much stronger requirements than NOFS. > > Just imagine how tricky this can be if you just call a library code > > that is not under your control within a NOFAIL scope. What if that > > library code decides to allocate (e.g. printk that would attempt to do > > an optimistic NOWAIT allocation). > > __GFP_NOMEMALLOC holds a lesson worth learning here. PF_MEMALLOC > effectively adds __GFP_MEMALLOC to all allocations, but some call sites > need to over-ride that because there are alternate strategies available. > This need-to-over-ride doesn't apply to NOFS or NOIO as that really is a > thread-wide state. But MEMALLOC and NOFAIL are different. Some call > sites can reasonably handle failure locally. > > I imagine the scope-api would say something like "NO_ENOMEM". i.e. > memory allocations can fail as long as ENOMEM is never returned. > Any caller that sets __GFP_RETRY_MAYFAIL or __GFP_NORETRY or maybe some > others which not be affected by the NO_ENOMEM scope. But a plain > GFP_KERNEL would. > > Introducing the scope api would be a good opportunity to drop the > priority boost and *just* block until success. Priority boosts could > then be added (possibly as a scope) only where they are measurably needed. > > I think we have 28 process flags in use. So we can probably afford one > more for PF_MEMALLOC_NO_ENOMEM. What other scope flags might be useful? > PF_MEMALLOC_BOOST which added __GFP_ATOMIC but not __GFP_MEMALLOC ?? 
> PF_MEMALLOC_NORECLAIM ?? I dunno. PF_MEMALLOC and its GFP_$FOO counterparts are quite hard to wrap my head around. I have never liked those much TBH and building more on top sounds like a step backward. I might be wrong but this sounds like even more work than NOFS scopes. > > > That > > > would save us a lot of bother in XFS. What about GFP_DIRECT_RECLAIM? > > > I'd really like to turn that off for allocations in the XFS > > > transaction commit path (as noted already in this thread) because > > > direct reclaim that can make no progress is actively harmful (as > > > noted already in this thread) > > > > As always if you have reasonable usecases then it is best to bring them > > up on the MM list and we can discuss them. > > We are on the MM lists now... let's discuss :-) Sure we can, but this thread is a mix of so many topics that finding something useful will turn out to be hard, from my past experience. > Dave: How would you feel about an effort to change xfs to stop using > GFP_NOFS, and to use memalloc_nofs_save/restore instead? xfs is an example of a well behaved scope user. In fact the API has been largely based on xfs's previous interface. There are still NOFS usages in xfs which would be great to get rid of (e.g. the default mapping NOFS which was added due to lockdep false positives but that is unrelated). > Having a major > filesystem make the transition would be a good test-case, and could be > used to motivate other filesystems to follow. > We could add and use memalloc_no_enomem_save() too. ext has converted their transaction context to the scope API as well. There is still some explicit NOFS usage but I haven't checked details recently.
On Mon, 18 Oct 2021, Michal Hocko wrote: > On Tue 12-10-21 08:49:46, Neil Brown wrote: > > On Mon, 11 Oct 2021, Michal Hocko wrote: > > > On Sat 09-10-21 09:36:49, Dave Chinner wrote: > > > > > > > > Put simply, we want "retry forever" semantics to match what > > > > production kernels have been doing for the past couple of decades, > > > > but all we've been given are "never fail" semantics that also do > > > > something different and potentially much more problematic. > > > > > > > > Do you see the difference here? __GFP_NOFAIL is not what we > > > > need in the vast majority of cases where it is used. We don't want > > > > the failing allocations to drive the machine hard into critical > > > > reserves, we just want the allocation to -eventually succeed- and if > > > > it doesn't, that's our problem to handle, not kmalloc().... > > > > > > I can see your point. I do have a recollection that there were some > > > instance involved where an emergency access to memory reserves helped > > > in OOM situations. > > > > It might have been better to annotate those particular calls with > > __GFP_ATOMIC or similar rather then change GFP_NOFAIL for everyone. > > For historical reasons __GFP_ATOMIC is reserved for non sleeping > allocations. __GFP_HIGH would be an alternative. Historical reasons certainly shouldn't be ignored. But they can be questioned. __GFP_ATOMIC is documented as "the caller cannot reclaim or sleep and is high priority". This seems to over-lap with __GFP_DIRECT_RECLAIM (which permits reclaim and is the only place where page_alloc sleeps ... I think). The effect of setting __GFP_ATOMIC is: - triggers WARN_ON if __GFP_DIRECT_RECLAIM is also set. - bypass memcg limits - ignore the watermark_boost_factor effect - clears ALLOC_CPUSET - sets ALLOC_HARDER which provides: - access to nr_reserved_highatomic reserves - access to 1/4 the low-watermark reserves (ALLOC_HIGH gives 1/2) Combine them and you get access to 5/8 of the reserves. 
It is also used by driver/iommu/tegra-smmu.c to decide if a spinlock should remain held, or should be dropped over the alloc_page(). That's .... not my favourite code. So apart from the tegra thing and the WARN_ON, there is nothing about __GFP_ATOMIC which suggests it should only be used for non-sleeping allocations. It *should* only be used for allocations with a high failure cost and relatively short time before the memory will be returned and that likely includes many non sleeping allocations. It isn't clear to me why an allocation that is willing to sleep (if absolutely necessary) shouldn't be able to benefit from the priority boost of __GFP_ATOMIC. Or at least of ALLOC_HARDER... Maybe __GFP_HIGH should get the memcg and watermark_boost benefits too? Given that we have ALLOC_HARDER and ALLOC_HIGH, it would seem to be sensible to export those two settings in GFP_foo, and not forbid one of them to be used with __GFP_DIRECT_RECLAIM. > > > Too late to fix that now though I think. Maybe the best way forward is > > to discourage new uses of GFP_NOFAIL. We would need a well-documented > > replacement. > > I am not sure what that should be. Really if the memory reserves > behavior of GFP_NOFAIL is really problematic then let's just reap it > out. I do not see a new nofail like flag is due. Presumably there is a real risk of deadlock if we just remove the memory-reserves boosts of __GFP_NOFAIL. Maybe it would be safe to replace all current users of __GFP_NOFAIL with __GFP_NOFAIL|__GFP_HIGH, and then remove the __GFP_HIGH where analysis suggests there is no risk of deadlocks. Or maybe rename the __GFP_NOFAIL flag and #define __GFP_NOFAIL to include __GFP_HIGH? This would certainly be a better result than adding a new flag. > > > > Anway as I've tried to explain earlier that this all is an > > > implementation detail users of the flag shouldn't really care about. If > > > this heuristic is not doing any good then it should be removed. 
> > > > Maybe users shouldn't care about implementation details, but they do > > need to care about semantics and costs. > > We need to know when it is appropriate to use GFP_NOFAIL, and when it is > > not. And what alternatives there are when it is not appropriate. > > Just saying "try to avoid using it" and "requires careful analysis" > > isn't acceptable. Sometimes it is unavoidable and analysis can only be > > done with a clear understanding of costs. Possibly analysis can only be > > done with a clear understanding of the internal implementation details. > > What we document currently is this > * %__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller > * cannot handle allocation failures. The allocation could block > * indefinitely but will never return with failure. Testing for > * failure is pointless. This implies it is incompatible with __GFP_NORETRY and (probably) requires __GFP_RECLAIM. That is worth documenting, and possibly also a WARN_ON. > * New users should be evaluated carefully (and the flag should be > * used only when there is no reasonable failure policy) but it is > * definitely preferable to use the flag rather than opencode endless > * loop around allocator. How do we perform this evaluation? And why is it preferable to a loop? There are times when a loop makes sense, if there might be some other event that could provide the needed memory ... or if a SIGKILL might make it irrelevant. slab allocators presumably shouldn't pass __GFP_NOFAIL to alloc_page(), but should instead loop around 1/ check if any existing slabs have space 2/ if not, try to allocate a new page Providing the latter blocks for a while but not indefinitely that should be optimal. Why is __GFP_NOFAIL better? > * Using this flag for costly allocations is _highly_ discouraged. This is unhelpful. Saying something is "discouraged" carries an implied threat. This is open source and threats need to be open. Why is it discouraged? 
If it is not forbidden, then it is clearly permitted. Maybe there are costs - so a clear statement of those costs would be appropriate. Also, what is a suitable alternative? Current code will trigger a WARN_ON, so it is effectively forbidden. Maybe we should document that __GFP_NOFAIL is forbidden for orders above 1, and that vmalloc() should be used instead (thanks for proposing that patch!). But that would mean __GFP_NOFAIL cannot be used for slabs which happen to use large orders. Hmmm, it appears that slub.c disables __GFP_NOFAIL when it tries for a large order allocation, and slob.c never tries large order allocations. So this only affects slab.c. xfs makes heavy use of kmem_cache_zalloc with __GFP_NOFAIL. I wonder if any of these slabs have large order with slab.c. > > so we tell when to use it - aka no reasonable failure policy. We put > some discouraging language there. There is some discouraging language > for high order allocations. Maybe we should suggest an alternative > there. It seems there are usecases for those as well so we should > implement a proper NOFAIL kvmalloc and recommend it for that instead. yes - suggest an alternative and also say what the tradeoffs are. > > > > > It also points out that the scope API is highly deficient. > > > > We can do GFP_NOFS via the scope API, but we can't > > > > do anything else because *there is no scope API for other GFP > > > > flags*. > > > > > > > > Why don't we have a GFP_NOFAIL/__GFP_RETRY_FOREVER scope API? > > > > > > NO{FS,IO} where first flags to start this approach. And I have to admit > > > the experiment was much less successful then I hoped for. There are > > > still thousands of direct NOFS users so for some reason defining scopes > > > is not an easy thing to do. > > > > I'm not certain your conclusion is valid. It could be that defining > > scopes is easy enough, but no one feels motivated to do it. > > We need to do more than provide functionality. We need to tell people. > > Repeatedly. 
And advertise widely. And propose patches to make use of > > the functionality. And... and... and... > > Been there, done that for the low hanging fruit. Others were much more > complex for me to follow up and I had other stuff on my table. I have no doubt that is a slow and rather thankless task, with no real payoff until it is complete. It reminds me a bit of BKL removal and 64-bit time. I think it is worth doing though. Finding the balance between letting it consume you and just giving up would be a challenge. > > > I think changing to the scope API is a good change, but it is > > conceptually a big change. It needs to be driven. > > Agreed. > > > > I am not against NOFAIL scopes in principle but seeing the nofs > > > "success" I am worried this will not go really well either and it is > > > much more tricky as NOFAIL has much stronger requirements than NOFS. > > > Just imagine how tricky this can be if you just call a library code > > > that is not under your control within a NOFAIL scope. What if that > > > library code decides to allocate (e.g. printk that would attempt to do > > > an optimistic NOWAIT allocation). > > > > __GFP_NOMEMALLOC holds a lesson worth learning here. PF_MEMALLOC > > effectively adds __GFP_MEMALLOC to all allocations, but some call sites > > need to over-ride that because there are alternate strategies available. > > This need-to-over-ride doesn't apply to NOFS or NOIO as that really is a > > thread-wide state. But MEMALLOC and NOFAIL are different. Some call > > sites can reasonably handle failure locally. > > > > I imagine the scope-api would say something like "NO_ENOMEM". i.e. > > memory allocations can fail as long as ENOMEM is never returned. > > Any caller that sets __GFP_RETRY_MAYFAIL or __GFP_NORETRY or maybe some > > others which not be affected by the NO_ENOMEM scope. But a plain > > GFP_KERNEL would. > > > > Introducing the scope api would be a good opportunity to drop the > > priority boost and *just* block until success. 
Priority boosts could > > then be added (possibly as a scope) only where they are measurably needed. > > > > I think we have 28 process flags in use. So we can probably afford one > > more for PF_MEMALLOC_NO_ENOMEM. What other scope flags might be useful? > > PF_MEMALLOC_BOOST which added __GFP_ATOMIC but not __GFP_MEMALLOC ?? > > PF_MEMALLOC_NORECLAIM ?? > > I dunno. PF_MEMALLOC and its GFP_$FOO counterparts are quite hard to > wrap my head around. I have never liked those much TBH and building more > on top sounds like a step backward. I might be wrong but this sounds like > even more work than NOFS scopes. > > > > That > > > > would save us a lot of bother in XFS. What about GFP_DIRECT_RECLAIM? > > > > I'd really like to turn that off for allocations in the XFS > > > > transaction commit path (as noted already in this thread) because > > > > direct reclaim that can make no progress is actively harmful (as > > > > noted already in this thread) > > > > > > As always if you have reasonable usecases then it is best to bring them > > > up on the MM list and we can discuss them. > > > > We are on the MM lists now... let's discuss :-) > > Sure we can, but this thread is a mix of so many topics that finding > something useful will turn out to be hard, from my past experience. Unfortunately life is messy. I just wanted to remove all congestion_wait() calls. But that led to __GFP_NOFAIL and to the scoped allocation API, and there are still more twisty passages waiting. Sometimes you don't know what topic will usefully start a constructive thread until you've already figured out the answer :-( > > > Dave: How would you feel about an effort to change xfs to stop using > > GFP_NOFS, and to use memalloc_nofs_save/restore instead? > > xfs is an example of a well behaved scope user. In fact the API has been > largely based on xfs's previous interface. There are still NOFS usages > in xfs which would be great to get rid of (e.g. 
the default mapping NOFS > which was added due to lockdep false positives but that is unrelated). > > > Having a major > > filesystem make the transition would be a good test-case, and could be > > used to motivate other filesystems to follow. > > We could add and use memalloc_no_enomem_save() too. > > ext has converted their transaction context to the scope API as well. > There is still some explicit NOFS usage but I haven't checked details > recently. Of the directories in fs/, 42 contain no mention of GFP_NOFS 17 contain fewer than 10 The 10 with most frequent usage (including comments) are: 47 fs/afs/ 48 fs/f2fs/ 49 fs/nfs/ 54 fs/dlm/ 59 fs/ceph/ 66 fs/ext4/ 73 fs/ntfs3/ 73 fs/ocfs2/ 83 fs/ubifs/ 231 fs/btrfs/ xfs is 28 - it came in at number 12, though there are 25 KM_NOFS allocations, which would push it up to 7th place. A few use GFP_NOIO - nfs(11) and f2fs(9) being the biggest users. So clearly there is work to do. Maybe we could add something to checkpatch.pl to discourage the addition of new GFP_NOFS usage. There is a lot of stuff there.... the bits that are important to me are: - why is __GFP_NOFAIL preferred? It is a valuable convenience, but I don't see that it is necessary - is it reasonable to use __GFP_HIGH when looping if there is a risk of deadlock? - Will __GFP_DIRECT_RECLAIM always result in a delay before failure? In that case it should be safe to loop around allocations using __GFP_DIRECT_RECLAIM without needing congestion_wait() (so it can just be removed). Thanks, NeilBrown
On Tue 19-10-21 15:32:27, Neil Brown wrote: > On Mon, 18 Oct 2021, Michal Hocko wrote: > > On Tue 12-10-21 08:49:46, Neil Brown wrote: > > > On Mon, 11 Oct 2021, Michal Hocko wrote: > > > > On Sat 09-10-21 09:36:49, Dave Chinner wrote: > > > > > > > > > > Put simply, we want "retry forever" semantics to match what > > > > > production kernels have been doing for the past couple of decades, > > > > > but all we've been given are "never fail" semantics that also do > > > > > something different and potentially much more problematic. > > > > > > > > > > Do you see the difference here? __GFP_NOFAIL is not what we > > > > > need in the vast majority of cases where it is used. We don't want > > > > > the failing allocations to drive the machine hard into critical > > > > > reserves, we just want the allocation to -eventually succeed- and if > > > > > it doesn't, that's our problem to handle, not kmalloc().... > > > > > > > > I can see your point. I do have a recollection that there were some > > > > instance involved where an emergency access to memory reserves helped > > > > in OOM situations. > > > > > > It might have been better to annotate those particular calls with > > > __GFP_ATOMIC or similar rather then change GFP_NOFAIL for everyone. > > > > For historical reasons __GFP_ATOMIC is reserved for non sleeping > > allocations. __GFP_HIGH would be an alternative. > > Historical reasons certainly shouldn't be ignored. But they can be > questioned. Agreed. Changing them is a more challenging task though. For example I really dislike how access to memory reserves is bound to "no reclaim" requirement. Ideally those should be completely orthogonal. I also do not think we need as many ways to ask for memory reserves as we have. Can a "regular" kernel developer tell a difference between __GFP_ATOMIC and __GFP_HIGH? I do not think so, unless one is willing to do ... > __GFP_ATOMIC is documented as "the caller cannot reclaim or sleep and is > high priority". 
> This seems to over-lap with __GFP_DIRECT_RECLAIM (which permits reclaim > and is the only place where page_alloc sleeps ... I think). > > The effect of setting __GFP_ATOMIC is: > - triggers WARN_ON if __GFP_DIRECT_RECLAIM is also set. > - bypass memcg limits > - ignore the watermark_boost_factor effect > - clears ALLOC_CPUSET > - sets ALLOC_HARDER which provides: > - access to nr_reserved_highatomic reserves > - access to 1/4 the low-watermark reserves (ALLOC_HIGH gives 1/2) > Combine them and you get access to 5/8 of the reserves. ... exactly this. And these are a bunch of hacks developed over time, and baggage which is hard to change, as I've said. Somebody with a sufficient time budget should start questioning all those and eventually make __GFP_ATOMIC a story of the past. > It is also used by driver/iommu/tegra-smmu.c to decide if a spinlock > should remain held, or should be dropped over the alloc_page(). That's > .... not my favourite code. Exactly! > So apart from the tegra thing and the WARN_ON, there is nothing about > __GFP_ATOMIC which suggests it should only be used for non-sleeping > allocations. The warning was added when the original GFP_ATOMIC was untangled from the reclaim implications to keep the "backward compatibility" IIRC. Mostly for IRQ handlers where GFP_ATOMIC was used the most. My memory might fail me though. > It *should* only be used for allocations with a high failure cost and > relatively short time before the memory will be returned and that likely > includes many non sleeping allocations. It isn't clear to me why an > allocation that is willing to sleep (if absolutely necessary) shouldn't > be able to benefit from the priority boost of __GFP_ATOMIC. Or at least > of ALLOC_HARDER... I completely agree! As mentioned above, memory reserves should be completely orthogonal. I am not sure we want an API for many different levels of reserves access. Do we need more than __GFP_HIGH? Maybe with a more descriptive name. 
> Maybe __GFP_HIGH should get the memcg and watermark_boost benefits too? > > Given that we have ALLOC_HARDER and ALLOC_HIGH, it would seem to be > sensible to export those two settings in GFP_foo, and not forbid one of > them to be used with __GFP_DIRECT_RECLAIM. I think ALLOC_HARDER should be kept an internal implementation detail, used when the allocator needs to give somebody a boost as part of its internal balancing between requests. ALLOC_HIGH already matches __GFP_HIGH and that should be the way to ask for a boost explicitly IMO. We also have ALLOC_OOM as another level of internal memory reserves for OOM victims. Again something to be in the hands of the allocator. > > > Too late to fix that now though I think. Maybe the best way forward is > > > to discourage new uses of GFP_NOFAIL. We would need a well-documented > > > replacement. > > > > I am not sure what that should be. Really, if the memory reserves > > behavior of GFP_NOFAIL is problematic then let's just rip it > > out. I do not see that a new nofail-like flag is due. > > Presumably there is a real risk of deadlock if we just remove the > memory-reserves boosts of __GFP_NOFAIL. Maybe it would be safe to > replace all current users of __GFP_NOFAIL with __GFP_NOFAIL|__GFP_HIGH, > and then remove the __GFP_HIGH where analysis suggests there is no risk > of deadlocks. I would much rather not bind those together and go the other way around. If somebody can actually hit deadlocks (those are quite easy to spot as they do not go away) then we can talk about how to deal with them. Memory reserves can help only > < this much. > Or maybe rename the __GFP_NOFAIL flag and #define __GFP_NOFAIL to > include __GFP_HIGH? Wouldn't that lead to the __GFP_ATOMIC story again? > This would certainly be a better result than adding a new flag. > > > > > > > Anway as I've tried to explain earlier that this all is an > > > > implementation detail users of the flag shouldn't really care about. 
> > > > If this heuristic is not doing any good then it should be removed.
> > >
> > > Maybe users shouldn't care about implementation details, but they do
> > > need to care about semantics and costs.
> > > We need to know when it is appropriate to use GFP_NOFAIL, and when it is
> > > not. And what alternatives there are when it is not appropriate.
> > > Just saying "try to avoid using it" and "requires careful analysis"
> > > isn't acceptable. Sometimes it is unavoidable and analysis can only be
> > > done with a clear understanding of costs. Possibly analysis can only be
> > > done with a clear understanding of the internal implementation details.
> >
> > What we document currently is this
> >  * %__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
> >  * cannot handle allocation failures. The allocation could block
> >  * indefinitely but will never return with failure. Testing for
> >  * failure is pointless.
>
> This implies it is incompatible with __GFP_NORETRY and (probably)
> requires __GFP_RECLAIM. That is worth documenting, and possibly also a
> WARN_ON.

Yes, I thought this would be obvious, as those are reclaim modifiers so they require reclaim. But I do see the point that being explicit here cannot hurt. Same with combining them together: it just doesn't make much sense to retry forever and at the same time request noretry, or retry-and-fail. Again, a clarification cannot hurt.

> >  * New users should be evaluated carefully (and the flag should be
> >  * used only when there is no reasonable failure policy) but it is
> >  * definitely preferable to use the flag rather than opencode endless
> >  * loop around allocator.
>
> How do we perform this evaluation? And why is it preferable to a loop?
> There are times when a loop makes sense, if there might be some other
> event that could provide the needed memory ... or if a SIGKILL might
> make it irrelevant.
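The WARN_ON suggested above would flag the self-contradictory flag combinations discussed here. A sketch of such a check (plain C; the flag bit values below are invented for illustration and are NOT the kernel's actual values):

```c
#include <assert.h>

/* Illustrative flag bits -- NOT the kernel's actual values. */
#define __GFP_DIRECT_RECLAIM	0x01u
#define __GFP_KSWAPD_RECLAIM	0x02u
#define __GFP_RECLAIM		(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)
#define __GFP_NORETRY		0x04u
#define __GFP_RETRY_MAYFAIL	0x08u
#define __GFP_NOFAIL		0x10u

/*
 * Return non-zero for the nonsensical combinations discussed above:
 * __GFP_NOFAIL promises to retry forever, so pairing it with
 * __GFP_NORETRY or __GFP_RETRY_MAYFAIL is contradictory, and without
 * __GFP_DIRECT_RECLAIM there is nothing the allocator can usefully do
 * between attempts.
 */
static int gfp_nofail_conflict(unsigned int gfp)
{
	if (!(gfp & __GFP_NOFAIL))
		return 0;
	if (gfp & (__GFP_NORETRY | __GFP_RETRY_MAYFAIL))
		return 1;
	if (!(gfp & __GFP_DIRECT_RECLAIM))
		return 1;
	return 0;
}
```

In the allocator this predicate would simply feed a `WARN_ON_ONCE()` on the slow path, documenting the constraint in code as well as in comments.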
> slab allocators presumably shouldn't pass __GFP_NOFAIL to alloc_page(),
> but should instead loop around
>  1/ check if any existing slabs have space
>  2/ if not, try to allocate a new page
> Provided the latter blocks for a while, but not indefinitely, that should
> be optimal.
> Why is __GFP_NOFAIL better?

Because the allocator can do something if it knows that the allocation cannot fail. E.g. give such an allocation a higher priority over those that are allowed to fail. This is not limited to memory reserves, although that is the only measure that is implemented currently IIRC.

On the other hand, if there is something interesting the caller can do directly - e.g. do internal object management like mempool does - then it is better to retry at that level.

> >  * Using this flag for costly allocations is _highly_ discouraged.
>
> This is unhelpful. Saying something is "discouraged" carries an implied
> threat. This is open source and threats need to be open.
> Why is it discouraged? If it is not forbidden, then it is clearly
> permitted. Maybe there are costs - so a clear statement of those costs
> would be appropriate.
> Also, what is a suitable alternative?
>
> Current code will trigger a WARN_ON, so it is effectively forbidden.
> Maybe we should document that __GFP_NOFAIL is forbidden for orders above
> 1, and that vmalloc() should be used instead (thanks for proposing that
> patch!).

I think we want to recommend kvmalloc as an alternative once vmalloc is NOFAIL-aware.

I will skip over some of the specifics regarding SLAB and NOFS usage if you do not mind, and focus on points that have direct documentation consequences. Also, I do not feel qualified to comment on either SLAB or FS internals.

[...]

> There is a lot of stuff there.... the bits that are important to me are:
>
>  - why is __GFP_NOFAIL preferred? It is a valuable convenience, but I
>    don't see that it is necessary

I think it is preferred for one and a half reasons.
It tells the allocator that this allocation cannot really fail and that the caller doesn't have a very good/clever retry policy (e.g. like the mempools mentioned above). The half reason would be for tracking purposes: (git grep __GFP_NOFAIL) is easier than trying to catch all sorts of while loops over allocation which do not do anything really interesting.

> - is it reasonable to use __GFP_HIGH when looping if there is a risk of
>   deadlock?

As I've said above, memory reserves are a finite resource and as such they cannot fundamentally solve deadlocks. They can help prioritize though.

> - Will __GFP_DIRECT_RECLAIM always result in a delay before failure? In
>   that case it should be safe to loop around allocations using
>   __GFP_DIRECT_RECLAIM without needing congestion_wait() (so it can
>   just be removed).

This is a good question and I do not think we have that documented anywhere. We do cond_resched() for sure. I do not think we guarantee a sleeping point in general. Maybe we should, I am not really sure.

Thanks for good comments and tough questions.
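The two-step loop Neil sketches earlier in this exchange (check whether any existing slab has space, otherwise try to grow the cache, and retry) can be illustrated with a toy free-list standing in for the real slab structures. All names here are invented for illustration; this is not SLAB/SLUB code:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins for slab internals -- invented for illustration. */
struct toy_slab {
	struct toy_slab *next;
	int free_objects;
};

static struct toy_slab *partial_list;	/* slabs with spare objects */

/* Step 1: does any existing slab have space? */
static struct toy_slab *find_partial_slab(void)
{
	struct toy_slab *s;

	for (s = partial_list; s; s = s->next)
		if (s->free_objects > 0)
			return s;
	return NULL;
}

/* Step 2: try to grow the cache; this may fail, and the caller loops. */
static struct toy_slab *grow_cache(void)
{
	struct toy_slab *s = malloc(sizeof(*s));

	if (!s)
		return NULL;	/* the "page allocation" failed this time */
	s->free_objects = 4;
	s->next = partial_list;
	partial_list = s;
	return s;
}

/*
 * The loop itself: retry steps 1 and 2 instead of passing __GFP_NOFAIL
 * down to the page allocator.  A real version would wait between
 * attempts rather than spin.
 */
static struct toy_slab *alloc_object_nofail(void)
{
	struct toy_slab *s;

	for (;;) {
		s = find_partial_slab();
		if (!s)
			s = grow_cache();
		if (s) {
			s->free_objects--;
			return s;
		}
		/* wait for memory to become available, then retry */
	}
}
```

The point of the sketch is where the retry lives: at the slab layer, which can satisfy later attempts from objects freed into existing slabs, rather than forcing the page allocator to promise an order-N page.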
On Wed, 20 Oct 2021, Michal Hocko wrote:
> On Tue 19-10-21 15:32:27, Neil Brown wrote:

[clip lots of discussion where we are largely in agreement - happy to see that!]

> > Presumably there is a real risk of deadlock if we just remove the
> > memory-reserves boosts of __GFP_NOFAIL. Maybe it would be safe to
> > replace all current users of __GFP_NOFAIL with __GFP_NOFAIL|__GFP_HIGH,
> > and then remove the __GFP_HIGH where analysis suggests there is no risk
> > of deadlocks.
>
> I would much rather not bind those together and go the other way around.
> If somebody can actually hit deadlocks (those are quite easy to spot as
> they do not go away) then we can talk about how to deal with them.
> Memory reserves can help only > < this much.

I recall maybe 10 years ago Linus saying that he preferred simplicity to mathematical provability for handling memory deadlocks (or something like that). I lean towards provability myself, but I do see the other perspective.

We have mempools and they can provide strong guarantees (though they are often over-allocated, I think). But they can be a bit clumsy. I believe that DaveM is strongly against anything like that in the network layer, so we strongly depend on GFP_MEMALLOC functionality for swap-over-NFS. I'm sure it is important elsewhere too.

Of course __GFP_HIGH and __GFP_ATOMIC provide an intermediate priority level - more likely to fail than __GFP_MEMALLOC. I suspect they should not be seen as avoiding deadlock, only as improving service. So using them when we cannot wait might make sense, but there are probably other circumstances.

> > Why is __GFP_NOFAIL better?
>
> Because the allocator can do something if it knows that the allocation
> cannot fail. E.g. give such an allocation a higher priority over those
> that are allowed to fail. This is not limited to memory reserves,
> although this is the only measure that is implemented currently IIRC.
> On the other hand, if there is something interesting the caller can do
> directly - e.g. do internal object management like mempool does - then
> it is better to retry at that level.

It *can* do something, but I don't think it *should* do something - not if that could have a negative impact on other threads. Just because I cannot fail, that doesn't mean someone else should fail to help me. Maybe I should just wait longer.

> > >  * Using this flag for costly allocations is _highly_ discouraged.
> >
> > This is unhelpful. Saying something is "discouraged" carries an implied
> > threat. This is open source and threats need to be open.
> > Why is it discouraged? If it is not forbidden, then it is clearly
> > permitted. Maybe there are costs - so a clear statement of those costs
> > would be appropriate.
> > Also, what is a suitable alternative?
> >
> > Current code will trigger a WARN_ON, so it is effectively forbidden.
> > Maybe we should document that __GFP_NOFAIL is forbidden for orders above
> > 1, and that vmalloc() should be used instead (thanks for proposing that
> > patch!).
>
> I think we want to recommend kvmalloc as an alternative once vmalloc is
> NOFAIL-aware.
>
> I will skip over some of the specifics regarding SLAB and NOFS usage if
> you do not mind, and focus on points that have direct documentation
> consequences. Also, I do not feel qualified to comment on either SLAB
> or FS internals.
>
> [...]
> > There is a lot of stuff there.... the bits that are important to me are:
> >
> >  - why is __GFP_NOFAIL preferred? It is a valuable convenience, but I
> >    don't see that it is necessary
>
> I think it is preferred for one and a half reasons. It tells the allocator
> that this allocation cannot really fail and the caller doesn't have a
> very good/clever retry policy (e.g. like the mempools mentioned above).
> The half reason would be for tracking purposes: (git grep __GFP_NOFAIL) is
> easier than trying to catch all sorts of while loops over allocation
> which do not do anything really interesting.

I think the one reason is misguided, as described above.

I think the half reason is good, and that we should introduce memalloc_retry_wait() and encourage developers to use that for any memalloc retry loop. __GFP_NOFAIL would then be a convenience flag which causes the allocator (slab or alloc_page or whatever) to call memalloc_retry_wait() and do the loop internally. What exactly memalloc_retry_wait() does (if anything) can be decided separately and changed as needed.

> > - is it reasonable to use __GFP_HIGH when looping if there is a risk of
> >   deadlock?
>
> As I've said above, memory reserves are a finite resource and as such
> they cannot fundamentally solve deadlocks. They can help prioritize
> though.

To be fair, they can solve one level of deadlock. i.e. if you only need to make one allocation to guarantee progress, then allocating from reserves can help. If you might need to make a second allocation without freeing the first, then a single reserve pool cannot provide guarantees (which is why we use a mempool at each level in layered block devices - md over dm over loop over scsi).

> > - Will __GFP_DIRECT_RECLAIM always result in a delay before failure? In
> >   that case it should be safe to loop around allocations using
> >   __GFP_DIRECT_RECLAIM without needing congestion_wait() (so it can
> >   just be removed).
>
> This is a good question and I do not think we have that documented
> anywhere. We do cond_resched() for sure. I do not think we guarantee a
> sleeping point in general. Maybe we should, I am not really sure.

If we add memalloc_retry_wait(), it wouldn't matter. We would only need to ensure that memalloc_retry_wait() waited if page_alloc didn't.
I think we should:
 - introduce memalloc_retry_wait() and use it for all malloc retry loops,
   including __GFP_NOFAIL
 - drop all the priority boosts added for __GFP_NOFAIL
 - drop __GFP_ATOMIC and change all the code that tests for __GFP_ATOMIC
   to instead test for __GFP_HIGH. __GFP_ATOMIC is NEVER used without
   __GFP_HIGH. This gives a slight boost to several sites that use
   __GFP_HIGH explicitly.
 - choose a consistent order threshold for disallowing __GFP_NOFAIL
   (rmqueue uses "order > 1", __alloc_pages_slowpath uses
   "order > PAGE_ALLOC_COSTLY_ORDER"), test it once - early - and
   document kvmalloc as an alternative.

Code can also loop if there is an alternative strategy for freeing up memory.

Thanks,
NeilBrown
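The memalloc_retry_wait() idea proposed above - a single choke point for "allocation failed, pause before retrying" - can be sketched in userspace pseudo-kernel code. The helper and its backoff policy here are illustrative only, not an actual kernel API as it existed at the time of this thread:

```c
#include <assert.h>
#include <stddef.h>

static unsigned long retry_waits;	/* visible for the demo below */

/*
 * Sketch of memalloc_retry_wait(): the policy (sleep, throttle on
 * reclaim progress, or do nothing if page_alloc already slept) lives
 * in ONE place, so every retry loop inherits changes to it.  Here it
 * just counts invocations so the behaviour can be observed.
 */
static void memalloc_retry_wait(void)
{
	retry_waits++;
}

/* A failing-then-succeeding "allocator" to drive the loop. */
static int attempts_before_success;

static void *try_alloc(void)
{
	static char object;

	if (attempts_before_success > 0) {
		attempts_before_success--;
		return NULL;
	}
	return &object;
}

/* The __GFP_NOFAIL-style loop the allocator would run internally. */
static void *alloc_nofail(void)
{
	void *p;

	while (!(p = try_alloc()))
		memalloc_retry_wait();
	return p;
}
```

Open-coded retry loops in drivers and filesystems would call the same helper between attempts, which is also what makes them greppable, per the "half reason" above.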
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 5954ddf6ee13..8ea077465446 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -126,7 +126,30 @@ or another request.
 
   * ``GFP_KERNEL | __GFP_NOFAIL`` - overrides the default allocator behavior
     and all allocation requests will loop endlessly until they succeed.
-    This might be really dangerous especially for larger orders.
+    Any attempt to use ``__GFP_NOFAIL`` for allocations larger than
+    order-1 (2 pages) will trigger a warning.
+
+    Use of ``__GFP_NOFAIL`` can cause deadlocks so it should only be used
+    when there is no alternative, and then should be used with caution.
+    Deadlocks can happen if the calling process holds any resources
+    (e.g. locks) which might be needed for memory reclaim or write-back,
+    or which might prevent a process killed by the OOM killer from
+    successfully exiting. Where possible, locks should be released
+    before using ``__GFP_NOFAIL``.
+
+    While this flag is best avoided, it is still preferable to endless
+    loops around the allocator. Endless loops may still be used when
+    there is a need to test for the process being killed
+    (fatal_signal_pending(current)).
+
+  * ``GFP_NOFS | __GFP_NOFAIL`` - Loop endlessly instead of failing
+    when performing allocations in file system code. The same guidance
+    as for ``GFP_KERNEL | __GFP_NOFAIL`` applies with extra emphasis on
+    the possibility of deadlocks. ``GFP_NOFS`` often implies that
+    filesystem locks are held which might lead to blocking reclaim.
+    Preemptively flushing or reclaiming memory associated with such
+    locks might be appropriate before requesting a ``__GFP_NOFAIL``
+    allocation.
 
 Selecting memory allocator
 ==========================
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 55b2ec1f965a..1d2a89e20b8b 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -209,7 +209,11 @@ struct vm_area_struct;
  * used only when there is no reasonable failure policy) but it is
  * definitely preferable to use the flag rather than opencode endless
  * loop around allocator.
- * Using this flag for costly allocations is _highly_ discouraged.
+ * Use of this flag may lead to deadlocks if locks are held which would
+ * be needed for memory reclaim, write-back, or the timely exit of a
+ * process killed by the OOM-killer. Dropping any locks not absolutely
+ * needed is advisable before requesting a %__GFP_NOFAIL allocate.
+ * Using this flag for costly allocations (order>1) is _highly_ discouraged.
  */
 #define __GFP_IO ((__force gfp_t)___GFP_IO)
 #define __GFP_FS ((__force gfp_t)___GFP_FS)
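The documentation change above permits an open-coded loop when the caller wants to notice a fatal signal - something `__GFP_NOFAIL` itself can never do. The pattern it has in mind looks roughly like this (userspace sketch; `fatal_signal_pending()` and the allocator are simulated stand-ins, not the kernel functions):

```c
#include <assert.h>
#include <stddef.h>

/* Simulated stand-ins for kernel facilities. */
static int pending_fatal_signal;
static int allocations_until_success = -1;	/* -1: never succeed */

static int fatal_signal_pending(void)
{
	return pending_fatal_signal;
}

static void *kmalloc_sim(void)
{
	static char obj;

	if (allocations_until_success == 0)
		return &obj;
	if (allocations_until_success > 0)
		allocations_until_success--;
	return NULL;
}

/*
 * The open-coded alternative to __GFP_NOFAIL: loop until the
 * allocation succeeds, but give up if the process has been killed.
 */
static void *alloc_or_die(void)
{
	void *p;

	while (!(p = kmalloc_sim())) {
		if (fatal_signal_pending())
			return NULL;	/* caller unwinds; exit is imminent */
		/* in real code: wait/backoff before retrying */
	}
	return p;
}
```

The caller must then handle the NULL return by erroring out, which is exactly the failure policy that callers of `__GFP_NOFAIL` lack.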
__GFP_NOFAIL is documented both in gfp.h and memory-allocation.rst. The details are not entirely consistent.

This patch ensures both places state that:
 - there is a risk of deadlock with reclaim/writeback/oom-kill
 - it should only be used when there is no real alternative
 - it is preferable to an endless loop
 - it is strongly discouraged for costly-order allocations.

Signed-off-by: NeilBrown <neilb@suse.de>
---
 Documentation/core-api/memory-allocation.rst | 25 ++++++++++++++++++++++++-
 include/linux/gfp.h                          |  6 +++++-
 2 files changed, 29 insertions(+), 2 deletions(-)