Message ID | 1635318110-1905-1-git-send-email-huangzhaoyang@gmail.com (mailing list archive)
---|---
State | New
Series | [RFC] mm: have kswapd only reclaiming use min protection on memcg
On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> For the kswapd only reclaiming, there is no chance to try again on
> this group while direct reclaim has. fix it by judging gfp flag.

There is no problem description (same as in your last submissions. Have
you looked at the patch submission documentation as recommended
previously?).

Also this patch doesn't make any sense. Both direct reclaim and kswapd
use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
for the kswapd part).

> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Nacked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/vmscan.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 74296c2..41f5776 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2704,7 +2704,8 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>  		unsigned long protection;
>
>  		/* memory.low scaling, make sure we retry before OOM */
> -		if (!sc->memcg_low_reclaim && low > min) {
> +		if (!sc->memcg_low_reclaim && low > min
> +			&& sc->gfp_mask & __GFP_DIRECT_RECLAIM) {
>  			protection = low;
>  			sc->memcg_low_skipped = 1;
>  		} else {
> --
> 1.9.1
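[The balance_pgdat() detail cited above is easy to check. Below is a
condensed sketch of the relevant initialization in mm/vmscan.c from the
kernels under discussion (~v5.15) -- a sketch only, not the full
function.]

/*
 * Condensed from balance_pgdat() in mm/vmscan.c (~v5.15); sketch only.
 * kswapd reclaims with GFP_KERNEL, which already contains
 * __GFP_DIRECT_RECLAIM, so testing sc->gfp_mask for that flag cannot
 * distinguish kswapd from direct reclaim.
 */
static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,	/* includes __GFP_DIRECT_RECLAIM */
		.order = order,
		.may_unmap = 1,
	};
	/* ... reclaim loop elided ... */
	return sc.order;
}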
On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> >
> > For the kswapd only reclaiming, there is no chance to try again on
> > this group while direct reclaim has. fix it by judging gfp flag.
>
> There is no problem description (same as in your last submissions. Have
> you looked at the patch submission documentation as recommended
> previously?).
>
> Also this patch doesn't make any sense. Both direct reclaim and kswapd
> use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> for the kswapd part).
OK, but how does the reclaim retry with the memcg's min protection for
an alloc without __GFP_DIRECT_RECLAIM?
>
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Nacked-by: Michal Hocko <mhocko@suse.com>
>
> > ---
> > mm/vmscan.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 74296c2..41f5776 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2704,7 +2704,8 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> > 		unsigned long protection;
> >
> > 		/* memory.low scaling, make sure we retry before OOM */
> > -		if (!sc->memcg_low_reclaim && low > min) {
> > +		if (!sc->memcg_low_reclaim && low > min
> > +			&& sc->gfp_mask & __GFP_DIRECT_RECLAIM) {
> > 			protection = low;
> > 			sc->memcg_low_skipped = 1;
> > 		} else {
> > --
> > 1.9.1
>
> --
> Michal Hocko
> SUSE Labs
On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > >
> > > For the kswapd only reclaiming, there is no chance to try again on
> > > this group while direct reclaim has. fix it by judging gfp flag.
> >
> > There is no problem description (same as in your last submissions. Have
> > you looked at the patch submission documentation as recommended
> > previously?).
> >
> > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > for the kswapd part).
> OK, but how does the reclaim retry with the memcg's min protection for
> an alloc without __GFP_DIRECT_RECLAIM?

I do not follow. There is no need to protect a memcg if the allocation
request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
fail the charge if a hard limit is reached; see try_charge_memcg and
the gfpflags_allow_blocking check.

Background reclaim, on the other hand, never breaches reclaim
protection.

What is the actual problem you want to solve?
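[For context, the two checks named above are short. The following is a
condensed sketch based on include/linux/gfp.h and try_charge_memcg() in
mm/memcontrol.c of that era -- not the full code paths.]

/* include/linux/gfp.h: "may block" is exactly "may direct-reclaim" */
static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
{
	return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
}

/* try_charge_memcg(), heavily condensed: the hard-limit failure path */
if (!page_counter_try_charge(&memcg->memory, nr_pages, &counter)) {
	/* Over the hard limit, and the request may not block ... */
	if (!gfpflags_allow_blocking(gfp_mask))
		goto nomem;	/* ... so the charge simply fails here. */
	/* Blocking requests direct-reclaim within the memcg instead. */
}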
On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > > >
> > > > For the kswapd only reclaiming, there is no chance to try again on
> > > > this group while direct reclaim has. fix it by judging gfp flag.
> > >
> > > There is no problem description (same as in your last submissions. Have
> > > you looked at the patch submission documentation as recommended
> > > previously?).
> > >
> > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > > for the kswapd part).
> > OK, but how does the reclaim retry with the memcg's min protection for
> > an alloc without __GFP_DIRECT_RECLAIM?
>
> I do not follow. There is no need to protect a memcg if the allocation
> request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
> fail the charge if a hard limit is reached; see try_charge_memcg and
> the gfpflags_allow_blocking check.
>
> Background reclaim, on the other hand, never breaches reclaim
> protection.
>
> What is the actual problem you want to solve?
Imagine there is an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM
and all processes are under cgroups. Kswapd is the only hope here,
which however has low efficiency in get_scan_count. I would like to
have kswapd work as direct reclaim does in a 2nd round, which would
use protection = memory.min.
>
> --
> Michal Hocko
> SUSE Labs
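[The "2nd round" referred to above already exists for direct reclaim.
Below is a condensed sketch of the retry in do_try_to_free_pages() in
mm/vmscan.c (~v5.15), which balance_pgdat() has no equivalent of --
again a sketch, not the full function.]

retry:
	/* ... shrink_zones() honours memory.low on the first pass ... */

	if (sc->memcg_low_skipped) {
		sc->priority = initial_priority;
		sc->force_deactivate = 0;
		sc->memcg_low_reclaim = 1;	/* only memory.min protects now */
		sc->memcg_low_skipped = 0;
		goto retry;
	}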
On Wed 27-10-21 17:19:56, Zhaoyang Huang wrote:
> On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> > > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > > > >
> > > > > For the kswapd only reclaiming, there is no chance to try again on
> > > > > this group while direct reclaim has. fix it by judging gfp flag.
> > > >
> > > > There is no problem description (same as in your last submissions. Have
> > > > you looked at the patch submission documentation as recommended
> > > > previously?).
> > > >
> > > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > > > for the kswapd part).
> > > OK, but how does the reclaim retry with the memcg's min protection for
> > > an alloc without __GFP_DIRECT_RECLAIM?
> >
> > I do not follow. There is no need to protect a memcg if the allocation
> > request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
> > fail the charge if a hard limit is reached; see try_charge_memcg and
> > the gfpflags_allow_blocking check.
> >
> > Background reclaim, on the other hand, never breaches reclaim
> > protection.
> >
> > What is the actual problem you want to solve?
> Imagine there is an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM
> and all processes are under cgroups. Kswapd is the only hope here,
> which however has low efficiency in get_scan_count. I would like to
> have kswapd work as direct reclaim does in a 2nd round, which would
> use protection = memory.min.

Do you have an example where this would be a practical problem? Atomic
allocations should be rather rare.
On Wed, Oct 27, 2021 at 7:52 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 27-10-21 17:19:56, Zhaoyang Huang wrote:
> > On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> > > > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > > > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > > > > >
> > > > > > For the kswapd only reclaiming, there is no chance to try again on
> > > > > > this group while direct reclaim has. fix it by judging gfp flag.
> > > > >
> > > > > There is no problem description (same as in your last submissions. Have
> > > > > you looked at the patch submission documentation as recommended
> > > > > previously?).
> > > > >
> > > > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > > > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > > > > for the kswapd part).
> > > > OK, but how does the reclaim retry with the memcg's min protection for
> > > > an alloc without __GFP_DIRECT_RECLAIM?
> > >
> > > I do not follow. There is no need to protect a memcg if the allocation
> > > request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
> > > fail the charge if a hard limit is reached; see try_charge_memcg and
> > > the gfpflags_allow_blocking check.
> > >
> > > Background reclaim, on the other hand, never breaches reclaim
> > > protection.
> > >
> > > What is the actual problem you want to solve?
> > Imagine there is an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM
> > and all processes are under cgroups. Kswapd is the only hope here,
> > which however has low efficiency in get_scan_count. I would like to
> > have kswapd work as direct reclaim does in a 2nd round, which would
> > use protection = memory.min.
>
> Do you have an example where this would be a practical problem? Atomic
> allocations should be rather rare.
Please find below the search results for '~__GFP_DIRECT_RECLAIM', which
show that some drivers and net prefer to behave like that. Furthermore,
the allocations always come together with high order.

block/bio.c:464:        gfp_mask &= ~__GFP_DIRECT_RECLAIM;
drivers/vhost/net.c:668:        pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
drivers/net/ethernet/mellanox/mlx4/icm.c:184:   mask &= ~__GFP_DIRECT_RECLAIM;
fs/erofs/zdata.c:243:   gfp_t gfp = (mapping_gfp_mask(mc) & ~__GFP_DIRECT_RECLAIM) |
fs/fscache/page.c:138:  gfp &= ~__GFP_DIRECT_RECLAIM;
fs/fscache/cookie.c:187:        INIT_RADIX_TREE(&cookie->stores, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
fs/btrfs/disk-io.c:2928:        INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
fs/btrfs/volumes.c:6868:        INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
fs/btrfs/volumes.c:6869:        INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_DIRECT_RECLAIM);
kernel/cgroup/cgroup.c:325:     ret = idr_alloc(idr, ptr, start, end, gfp_mask & ~__GFP_DIRECT_RECLAIM);
mm/mempool.c:389:       gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM|__GFP_IO);
mm/hugetlb.c:2165:      gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
mm/mempolicy.c:2061:    preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
mm/memcontrol.c:5452:   ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_DIRECT_RECLAIM, count);
net/core/sock.c:2623:   pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
net/core/skbuff.c:6084: page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) |
net/netlink/af_netlink.c:1302:  (allocation & ~__GFP_DIRECT_RECLAIM) |
net/netlink/af_netlink.c:2259:  (GFP_KERNEL & ~__GFP_DIRECT_RECLAIM) |

>
> --
> Michal Hocko
> SUSE Labs
On Wed 27-10-21 20:05:30, Zhaoyang Huang wrote:
> On Wed, Oct 27, 2021 at 7:52 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Wed 27-10-21 17:19:56, Zhaoyang Huang wrote:
> > > On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> > > > > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > > >
> > > > > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > > > > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > > > > > >
> > > > > > > For the kswapd only reclaiming, there is no chance to try again on
> > > > > > > this group while direct reclaim has. fix it by judging gfp flag.
> > > > > >
> > > > > > There is no problem description (same as in your last submissions. Have
> > > > > > you looked at the patch submission documentation as recommended
> > > > > > previously?).
> > > > > >
> > > > > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > > > > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > > > > > for the kswapd part).
> > > > > OK, but how does the reclaim retry with the memcg's min protection for
> > > > > an alloc without __GFP_DIRECT_RECLAIM?
> > > >
> > > > I do not follow. There is no need to protect a memcg if the allocation
> > > > request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
> > > > fail the charge if a hard limit is reached; see try_charge_memcg and
> > > > the gfpflags_allow_blocking check.
> > > >
> > > > Background reclaim, on the other hand, never breaches reclaim
> > > > protection.
> > > >
> > > > What is the actual problem you want to solve?
> > > Imagine there is an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM
> > > and all processes are under cgroups. Kswapd is the only hope here,
> > > which however has low efficiency in get_scan_count. I would like to
> > > have kswapd work as direct reclaim does in a 2nd round, which would
> > > use protection = memory.min.
> >
> > Do you have an example where this would be a practical problem? Atomic
> > allocations should be rather rare.
> Please find below the search results for '~__GFP_DIRECT_RECLAIM', which
> show that some drivers and net prefer to behave like that. Furthermore,
> the allocations always come together with high order.

And what is the _practical_ problem you are seeing or trying to solve?
On Wed, Oct 27, 2021 at 8:31 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Wed 27-10-21 20:05:30, Zhaoyang Huang wrote:
> > On Wed, Oct 27, 2021 at 7:52 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Wed 27-10-21 17:19:56, Zhaoyang Huang wrote:
> > > > On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
> > > > > > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > > > >
> > > > > > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
> > > > > > > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > > > > > > >
> > > > > > > > For the kswapd only reclaiming, there is no chance to try again on
> > > > > > > > this group while direct reclaim has. fix it by judging gfp flag.
> > > > > > >
> > > > > > > There is no problem description (same as in your last submissions. Have
> > > > > > > you looked at the patch submission documentation as recommended
> > > > > > > previously?).
> > > > > > >
> > > > > > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
> > > > > > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
> > > > > > > for the kswapd part).
> > > > > > OK, but how does the reclaim retry with the memcg's min protection for
> > > > > > an alloc without __GFP_DIRECT_RECLAIM?
> > > > >
> > > > > I do not follow. There is no need to protect a memcg if the allocation
> > > > > request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
> > > > > fail the charge if a hard limit is reached; see try_charge_memcg and
> > > > > the gfpflags_allow_blocking check.
> > > > >
> > > > > Background reclaim, on the other hand, never breaches reclaim
> > > > > protection.
> > > > >
> > > > > What is the actual problem you want to solve?
> > > > Imagine there is an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM
> > > > and all processes are under cgroups. Kswapd is the only hope here,
> > > > which however has low efficiency in get_scan_count. I would like to
> > > > have kswapd work as direct reclaim does in a 2nd round, which would
> > > > use protection = memory.min.
> > >
> > > Do you have an example where this would be a practical problem? Atomic
> > > allocations should be rather rare.
> > Please find below the search results for '~__GFP_DIRECT_RECLAIM', which
> > show that some drivers and net prefer to behave like that. Furthermore,
> > the allocations always come together with high order.
>
> And what is the _practical_ problem you are seeing or trying to solve?
We do have out-of-tree code that behaves like this and want to make the
mechanics more robust.
>
> --
> Michal Hocko
> SUSE Labs
Zhaoyang Huang writes:
>On Wed, Oct 27, 2021 at 8:31 PM Michal Hocko <mhocko@suse.com> wrote:
>>
>> On Wed 27-10-21 20:05:30, Zhaoyang Huang wrote:
>> > On Wed, Oct 27, 2021 at 7:52 PM Michal Hocko <mhocko@suse.com> wrote:
>> > >
>> > > On Wed 27-10-21 17:19:56, Zhaoyang Huang wrote:
>> > > > On Wed, Oct 27, 2021 at 4:26 PM Michal Hocko <mhocko@suse.com> wrote:
>> > > > >
>> > > > > On Wed 27-10-21 15:46:19, Zhaoyang Huang wrote:
>> > > > > > On Wed, Oct 27, 2021 at 3:20 PM Michal Hocko <mhocko@suse.com> wrote:
>> > > > > > >
>> > > > > > > On Wed 27-10-21 15:01:50, Huangzhaoyang wrote:
>> > > > > > > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>> > > > > > > >
>> > > > > > > > For the kswapd only reclaiming, there is no chance to try again on
>> > > > > > > > this group while direct reclaim has. fix it by judging gfp flag.
>> > > > > > >
>> > > > > > > There is no problem description (same as in your last submissions. Have
>> > > > > > > you looked at the patch submission documentation as recommended
>> > > > > > > previously?).
>> > > > > > >
>> > > > > > > Also this patch doesn't make any sense. Both direct reclaim and kswapd
>> > > > > > > use a gfp mask which contains __GFP_DIRECT_RECLAIM (see balance_pgdat
>> > > > > > > for the kswapd part).
>> > > > > > OK, but how does the reclaim retry with the memcg's min protection for
>> > > > > > an alloc without __GFP_DIRECT_RECLAIM?
>> > > > >
>> > > > > I do not follow. There is no need to protect a memcg if the allocation
>> > > > > request doesn't have __GFP_DIRECT_RECLAIM, because such a request would
>> > > > > fail the charge if a hard limit is reached; see try_charge_memcg and
>> > > > > the gfpflags_allow_blocking check.
>> > > > >
>> > > > > Background reclaim, on the other hand, never breaches reclaim
>> > > > > protection.
>> > > > >
>> > > > > What is the actual problem you want to solve?
>> > > > Imagine there is an allocation with gfp_mask & ~__GFP_DIRECT_RECLAIM
>> > > > and all processes are under cgroups. Kswapd is the only hope here,
>> > > > which however has low efficiency in get_scan_count. I would like to
>> > > > have kswapd work as direct reclaim does in a 2nd round, which would
>> > > > use protection = memory.min.
>> > >
>> > > Do you have an example where this would be a practical problem? Atomic
>> > > allocations should be rather rare.
>> > Please find below the search results for '~__GFP_DIRECT_RECLAIM', which
>> > show that some drivers and net prefer to behave like that. Furthermore,
>> > the allocations always come together with high order.
>>
>> And what is the _practical_ problem you are seeing or trying to solve?
>We do have out-of-tree code that behaves like this and want to make the
>mechanics more robust.

It does one no good to use concepts like "robustness" in an
unsubstantiated, unmeasured, and unquantified way. Either provide the
measurements and tell us why we should care about those measurements,
or there's very little to discuss.

As it is, this is a ten-deep thread where Michal has asked several
perfectly reasonable questions and has received only a flimsy pastiche
of discussion as a reply. Please provide tangible, hard data with
reasons we should care about it.

With that said, there's no way we're going to change core mm behaviour
based on the whims of poorly behaved out-of-tree drivers.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 74296c2..41f5776 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2704,7 +2704,8 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 		unsigned long protection;
 
 		/* memory.low scaling, make sure we retry before OOM */
-		if (!sc->memcg_low_reclaim && low > min) {
+		if (!sc->memcg_low_reclaim && low > min
+			&& sc->gfp_mask & __GFP_DIRECT_RECLAIM) {
 			protection = low;
 			sc->memcg_low_skipped = 1;
 		} else {
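[On why the added test cannot fire differently for kswapd: kswapd's
scan_control is built with GFP_KERNEL, and GFP_KERNEL already contains
__GFP_DIRECT_RECLAIM. A condensed sketch of the flag composition from
include/linux/gfp.h of that era (~v5.15), values shown for reference:]

#define ___GFP_DIRECT_RECLAIM	0x400u
#define ___GFP_KSWAPD_RECLAIM	0x800u

/* caller may enter direct reclaim */
#define __GFP_DIRECT_RECLAIM	((__force gfp_t)___GFP_DIRECT_RECLAIM)
/* kswapd may be woken */
#define __GFP_KSWAPD_RECLAIM	((__force gfp_t)___GFP_KSWAPD_RECLAIM)
#define __GFP_RECLAIM	((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM))

/* GFP_KERNEL therefore always passes a __GFP_DIRECT_RECLAIM test */
#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)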