Message ID: 20220530074919.46352-1-songmuchun@bytedance.com (mailing list archive)
Series: Use obj_cgroup APIs to charge the LRU pages
On Mon, 30 May 2022 15:49:08 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> This version is rebased on v5.18.

Not a great choice of base, really. mm-stable or mm-unstable or
linux-next or even linus-of-the-day are all much more up to date.

Although the memcg reviewer tags are pretty thin, I was going to give
it a run. But after fixing a bunch of conflicts I got about halfway
through, then gave up on a big snarl in get_obj_cgroup_from_current().

> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/

Surprising, that was over a year ago. Why has it taken so long?
On Mon, May 30, 2022 at 02:17:11PM -0700, Andrew Morton wrote:
> On Mon, 30 May 2022 15:49:08 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
>
> > This version is rebased on v5.18.
>
> Not a great choice of base, really. mm-stable or mm-unstable or
> linux-next or even linus-of-the-day are all much more up to date.

I'll rebase it to linux-next in v6.

> Although the memcg reviewer tags are pretty thin, I was going to give
> it a run. But after fixing a bunch of conflicts I got about halfway
> through then gave up on a big snarl in get_obj_cgroup_from_current().

Got it. Will fix.

> > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
>
> Surprising, that was over a year ago. Why has it taken so long?

Yeah, a little long. This issue has been going on for years. I proposed an
objcg-based approach to solve it last year, but we were not sure it was the
best choice, so this patchset stalled for months. Recently, Roman raised the
issue at the LSFMM 2022 conference, and the consensus was that objcg-based
reparenting is fine as well, so this patchset has been resumed.

Thanks.
On 5/30/22 03:49, Muchun Song wrote:
> This version is rebased on v5.18.
>
> Since the following patchsets were applied, all kernel memory is charged
> with the new obj_cgroup APIs:
>
>   [v17,00/19] The new cgroup slab memory controller [1]
>   [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]
>
> But user memory allocations (LRU pages) still pin memcgs for a long time.
> This exists at a larger scale and is causing recurring problems in the
> real world: page cache doesn't get reclaimed for a long time, or is used
> by the second, third, fourth, ... instance of the same job that was
> restarted into a new cgroup every time. Unreclaimable dying cgroups pile
> up, waste memory, and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg
> direction to fix this problem, and then the LRU pages will no longer pin
> the memcgs.
>
> This patchset makes the LRU pages drop their reference to the memory
> cgroup by using the obj_cgroup APIs. In the end, we can see that the
> number of dying cgroups does not increase if we run the following test
> script.
>
> ```bash
> #!/bin/bash
>
> dd if=/dev/zero of=temp bs=4096 count=1
> cat /proc/cgroups | grep memory
>
> for i in {0..2000}
> do
>     mkdir /sys/fs/cgroup/memory/test$i
>     echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
>     cat temp >> log
>     echo $$ > /sys/fs/cgroup/memory/cgroup.procs
>     rmdir /sys/fs/cgroup/memory/test$i
> done
>
> cat /proc/cgroups | grep memory
>
> rm -f temp log
> ```
>
> [1] https://lore.kernel.org/linux-mm/20200623015846.1141975-1-guro@fb.com/
> [2] https://lore.kernel.org/linux-mm/20210319163821.20704-1-songmuchun@bytedance.com/
>
> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
>
> v5:
> - Lots of improvements from Johannes, Roman and Waiman.
> - Fix lockdep warning reported by kernel test robot.
> - Add two new patches to do code cleanup.
> - Collect Acked-by and Reviewed-by from Johannes and Roman.
> - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq()
>   since local_lock/unlock_irq() takes a parameter; it needs more thought
>   to transform it to a local_lock. It could be an improvement in the
>   future.

My comment about local_lock/unlock is just a note that
local_irq_disable/enable() have to be eventually replaced. However, we need
to think carefully about where to put the newly added local_lock. It is
perfectly fine to keep it as is and leave the conversion as a future
follow-up.

Thank you very much for your work on this patchset.

Cheers,
Longman
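The before/after `cat /proc/cgroups` comparison in the test script works
because the third field of a /proc/cgroups line is num_cgroups. If that count
is still elevated after every test cgroup has been rmdir'ed, the leftovers are
dying (offline but still pinned) memory cgroups. A minimal sketch of that
comparison, using made-up snapshot values rather than a live system:

```shell
# Hypothetical "memory" lines captured from /proc/cgroups before and
# after the create/use/delete loop. Fields are:
#   subsys_name  hierarchy  num_cgroups  enabled
# The numbers here are invented for illustration only.
before='memory 3 2150 1'
after='memory 3 2300 1'

# num_cgroups is the third whitespace-separated field.
num_before=$(echo "$before" | awk '{print $3}')
num_after=$(echo "$after" | awk '{print $3}')

# Every test cgroup was rmdir'ed, so any growth that remains is
# dying cgroups still pinned by LRU pages.
dying=$((num_after - num_before))
echo "dying memory cgroups accumulated: $dying"
```

With the objcg-based reparenting applied, the cover letter's claim is that
this delta stays at (or near) zero after the loop instead of growing.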
On Mon, May 30, 2022 at 02:17:11PM -0700, Andrew Morton wrote:
> On Mon, 30 May 2022 15:49:08 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
>
> > This version is rebased on v5.18.
>
> Not a great choice of base, really. mm-stable or mm-unstable or
> linux-next or even linus-of-the-day are all much more up to date.
>
> Although the memcg reviewer tags are pretty thin, I was going to give
> it a run. But after fixing a bunch of conflicts I got about halfway
> through then gave up on a big snarl in get_obj_cgroup_from_current().
>
> > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
>
> Surprising, that was over a year ago. Why has it taken so long?

It's partially my fault: I was thinking (and to some extent still am) that
using objcg is not the best long-term choice, and was pushing the idea of
using per-memcg lru vectors as intermediate objects instead. But it looks
like I underestimated the complexity and potential overhead of that
solution. The objcg-based approach can solve the problem right now and
shouldn't bring any long-term issues, so I asked Muchun to revive the
patchset.

Thanks!
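The indirection both candidate designs rely on can be pictured with a toy
model (plain shell, emphatically not kernel code; every name below is
invented for illustration): pages reference an objcg, and only the objcg
references a memcg, so reparenting on cgroup death is a single retarget of
the objcg rather than a walk over every charged page.

```shell
# Toy model of the objcg indirection: the objcg holds the only
# page-side reference to a memcg.
objcg_memcg="child-memcg"     # the objcg's current owner
pages="page1 page2 page3"     # LRU pages point at the objcg, not the memcg

# child-memcg goes offline: reparent its objcg to the parent.
# One update, regardless of how many LRU pages are charged.
objcg_memcg="parent-memcg"

# Every page now resolves to the parent without being touched.
for p in $pages; do
    echo "$p -> objcg -> $objcg_memcg"
done
```

Without the indirection, each page would hold a raw memcg pointer and keep
the dead child pinned until the page itself was reclaimed, which is exactly
the dying-cgroup pile-up the cover letter describes.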
On Mon, May 30, 2022 at 10:41:30PM -0400, Waiman Long wrote:
> On 5/30/22 03:49, Muchun Song wrote:
> > This version is rebased on v5.18.
> >
> > [...]
> >
> > v5:
> > - Lots of improvements from Johannes, Roman and Waiman.
> > - Fix lockdep warning reported by kernel test robot.
> > - Add two new patches to do code cleanup.
> > - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq()
> >   since local_lock/unlock_irq() takes a parameter; it needs more thought
> >   to transform it to a local_lock. It could be an improvement in the
> >   future.
>
> My comment about local_lock/unlock is just a note that
> local_irq_disable/enable() have to be eventually replaced. However, we need
> to think carefully where to put the newly added local_lock.

Totally agree.

> It is perfectly
> fine to keep it as is and leave the conversion as a future follow-up.
>
> Thank you very much for your work on this patchset.

Thanks.
Hi,

Friendly ping. Any comments or objections?

Thanks.

On Mon, May 30, 2022 at 3:50 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> This version is rebased on v5.18.
>
> [...]
>
> v4:
> - Resend and rebased on v5.18.
>
> v3:
> - Removed the Acked-by tags from Roman since this version is rebased on
>   the folio-relevant changes.
>
> v2:
> - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
>   dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, thanks).
> - Rebase to linux 5.15-rc1.
> - Add a new patch to clean up mem_cgroup_kmem_disabled().
>
> v1:
> - Drop RFC tag.
> - Rebase to linux next-20210811.
>
> RFC v4:
> - Collect Acked-by from Roman.
> - Rebase to linux next-20210525.
> - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> - Convert reparent_ops_head to an array in patch 8.
>
> Thanks for Roman's review and suggestions.
>
> RFC v3:
> - Drop the code cleanup and simplification patches. Gather those patches
>   into a separate series[1].
> - Rework patch #1 suggested by Johannes.
>
> RFC v2:
> - Collect Acked-by tags from Johannes. Thanks.
> - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> - Fix move_pages_to_lru().
>
> Muchun Song (11):
>   mm: memcontrol: remove dead code and comments
>   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
>     lruvec_unlock{_irq, _irqrestore}
>   mm: memcontrol: prepare objcg API for non-kmem usage
>   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
>   mm: vmscan: rework move_pages_to_lru()
>   mm: thp: make split queue lock safe when LRU pages are reparented
>   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
>   mm: memcontrol: introduce memcg_reparent_ops
>   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
>   mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
>   mm: lru: use lruvec lock to serialize memcg changes
>
>  fs/buffer.c                      |   4 +-
>  fs/fs-writeback.c                |  23 +-
>  include/linux/memcontrol.h       | 213 +++++++++------
>  include/linux/mm_inline.h        |   6 +
>  include/trace/events/writeback.h |   5 +
>  mm/compaction.c                  |  39 ++-
>  mm/huge_memory.c                 | 153 +++++++++--
>  mm/memcontrol.c                  | 560 +++++++++++++++++++++++++++------------
>  mm/migrate.c                     |   4 +
>  mm/mlock.c                       |   2 +-
>  mm/page_io.c                     |   5 +-
>  mm/swap.c                        |  62 ++---
>  mm/vmscan.c                      |  67 +++--
>  13 files changed, 767 insertions(+), 376 deletions(-)
>
> base-commit: 4b0986a3613c92f4ec1bdc7f60ec66fea135991f
> --
> 2.11.0
On Thu, Jun 09, 2022 at 10:43:24AM +0800, Muchun Song wrote:
> Hi,
>
> Friendly ping. Any comments or objections?

I'm sorry, I was recently busy with some other stuff, but it's on my todo
list. I'll try to find some time by the end of the week.

Thanks!

> On Mon, May 30, 2022 at 3:50 PM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > This version is rebased on v5.18.
> >
> > Since the following patchsets were applied, all kernel memory is charged
> > with the new obj_cgroup APIs:
> >
> >   [v17,00/19] The new cgroup slab memory controller [1]
> >   [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]

Btw, both these patchsets were merged a long time ago, so you can refer to
upstream commits instead.
On Thu, Jun 9, 2022 at 10:53 AM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Thu, Jun 09, 2022 at 10:43:24AM +0800, Muchun Song wrote:
> > Hi,
> >
> > Friendly ping. Any comments or objections?
>
> I'm sorry, I was recently busy with some other stuff, but it's on my todo list.
> I'll try to find some time by the end of the week.

Got it. Thanks, Roman. Looking forward to your review.

> [...]
>
> Btw, both these patchsets were merged a long time ago, so you can refer
> to upstream commits instead.

Will do. Thanks.