[v2,00/13] Use obj_cgroup APIs to charge the LRU pages

Message ID: 20210916134748.67712-1-songmuchun@bytedance.com

Message

Muchun Song Sept. 16, 2021, 1:47 p.m. UTC
This version is rebased onto Linux 5.15-rc1, because Shakeel asked me
if I could do that. I have also reworked some code suggested by Roman in
this version. I have not removed the Acked-by tags from Roman, because
this version is not based on the folio-related work. If Roman wants me to
drop them, please let me know, thanks.

Since the following patchsets were applied, all kernel memory is charged
with the new obj_cgroup APIs:

	[v17,00/19] The new cgroup slab memory controller[1]
	[v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]

But user memory allocations (LRU pages) can pin memcgs for a long time.
This happens at a larger scale and causes recurring problems in the real
world: page cache doesn't get reclaimed for a long time, or is used by the
second, third, fourth, ... instance of the same job that was restarted into
a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
and make page reclaim very inefficient.

We can fix this problem by converting LRU pages, and most other raw memcg
pins, to objcg references, so that LRU pages no longer pin their memcg.
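
As a rough illustration of that direction (a minimal sketch with
hypothetical, simplified types, not the exact code from the patches):
a page keeps a pointer to an obj_cgroup, and only the obj_cgroup holds
the pointer to the memcg, which is re-pointed to the parent when the
cgroup is removed.

```c
/*
 * Minimal sketch of the objcg indirection (simplified: the real
 * page->memcg_data also encodes flag bits, and the real lookup
 * requires rcu_read_lock() to keep the returned memcg alive).
 */
struct obj_cgroup {
	struct mem_cgroup *memcg;	/* re-pointed to the parent on offline */
	/* refcount, charged bytes, ... */
};

static inline struct mem_cgroup *page_memcg_sketch(struct page *page)
{
	struct obj_cgroup *objcg = (struct obj_cgroup *)page->memcg_data;

	/* the page pins only the objcg, never the memcg itself */
	return objcg ? READ_ONCE(objcg->memcg) : NULL;
}
```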

This patchset aims to make LRU pages drop their reference to the memory
cgroup by using the obj_cgroup APIs. With it applied, the number of dying
cgroups no longer grows when running the following test script.

```bash
#!/bin/bash

cat /proc/cgroups | grep memory

cd /sys/fs/cgroup/memory

for i in {1..500}
do
	mkdir test
	echo $$ > test/cgroup.procs
	sleep 60 &
	echo $$ > cgroup.procs
	echo `cat test/cgroup.procs` > cgroup.procs
	rmdir test
done

cat /proc/cgroups | grep memory
```

Thanks.

[1] https://lore.kernel.org/linux-mm/20200623015846.1141975-1-guro@fb.com/
[2] https://lore.kernel.org/linux-mm/20210319163821.20704-1-songmuchun@bytedance.com/

Changelogs in v2:
  1. Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and drop
     the dependencies on CONFIG_MEMCG_KMEM (suggested by Roman, thanks).
  2. Rebase to Linux 5.15-rc1.
  3. Add a new patch to clean up mem_cgroup_kmem_disabled().

Changelogs in v1:
  1. Drop RFC tag.
  2. Rebase to linux next-20210811.

Changelogs in RFC v4:
  1. Collect Acked-by from Roman.
  2. Rebase to linux next-20210525.
  3. Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
  4. Change the patch 1 title to "prepare objcg API for non-kmem usage".
  5. Convert reparent_ops_head to an array in patch 8.

  Thanks for Roman's review and suggestions.

Changelogs in RFC v3:
  1. Drop the code cleanup and simplification patches. Gather those patches
     into a separate series[1].
  2. Rework patch #1 suggested by Johannes.

Changelogs in RFC v2:
  1. Collect Acked-by tags from Johannes. Thanks.
  2. Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
  3. Fix move_pages_to_lru().

Muchun Song (13):
  mm: move mem_cgroup_kmem_disabled() to memcontrol.h
  mm: memcontrol: prepare objcg API for non-kmem usage
  mm: memcontrol: introduce compact_lock_page_irqsave
  mm: memcontrol: make lruvec lock safe when the LRU pages reparented
  mm: vmscan: rework move_pages_to_lru()
  mm: thp: introduce split_queue_lock/unlock{_irqsave}()
  mm: thp: make split queue lock safe when LRU pages reparented
  mm: memcontrol: make all the callers of page_memcg() safe
  mm: memcontrol: introduce memcg_reparent_ops
  mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
  mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
  mm: lru: add VM_BUG_ON_PAGE to lru maintenance function
  mm: lru: use lruvec lock to serialize memcg changes

 Documentation/admin-guide/cgroup-v1/memory.rst |   2 +-
 fs/buffer.c                                    |  11 +-
 fs/fs-writeback.c                              |  23 +-
 include/linux/memcontrol.h                     | 184 ++++----
 include/linux/mm_inline.h                      |   6 +
 mm/compaction.c                                |  36 +-
 mm/filemap.c                                   |   2 +-
 mm/huge_memory.c                               | 159 +++++--
 mm/internal.h                                  |   5 -
 mm/memcontrol.c                                | 563 ++++++++++++++++++-------
 mm/migrate.c                                   |   4 +
 mm/page-writeback.c                            |  26 +-
 mm/page_io.c                                   |   5 +-
 mm/rmap.c                                      |  14 +-
 mm/slab_common.c                               |   2 +-
 mm/swap.c                                      |  46 +-
 mm/vmscan.c                                    |  56 ++-
 17 files changed, 775 insertions(+), 369 deletions(-)

Comments

Roman Gushchin Sept. 17, 2021, 1:28 a.m. UTC | #1
Hi Muchun!

On Thu, Sep 16, 2021 at 09:47:35PM +0800, Muchun Song wrote:
> This version is rebased onto Linux 5.15-rc1, because Shakeel asked me
> if I could do that. I have also reworked some code suggested by Roman in
> this version. I have not removed the Acked-by tags from Roman, because
> this version is not based on the folio-related work. If Roman wants me to
> drop them, please let me know, thanks.

I'm fine with this, thanks for clarifying.

> 
> Since the following patchsets were applied, all kernel memory is charged
> with the new obj_cgroup APIs:
> 
> 	[v17,00/19] The new cgroup slab memory controller[1]
> 	[v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]
> 
> But user memory allocations (LRU pages) can pin memcgs for a long time.
> This happens at a larger scale and causes recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.

I've an idea: what if we use struct list_lru_memcg as an intermediate object
between an individual page and struct mem_cgroup?

It could contain a pointer to a memory cgroup structure (not even sure if a
reference is needed), and a lru page can contain a pointer to the lruvec instead
of memcg/objcg.

This approach can probably simplify the locking scheme. But what's more
important, it can dramatically reduce the number of css_get()/put() calls.
The latter are not particularly cheap after the deletion of a cgroup:
they are atomic_dec()'s. As a result, the reclaim efficiency could be much
better. The downside: we will need to update page->lruvec_memcg pointers on
reparenting pages during the cgroup removal.
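
Very roughly, something like this (illustrative only, the field names
are not from any posted patch):

```c
/*
 * Illustrative sketch: the page points at its per-memcg lruvec, and
 * only that object holds the memcg association, so css_get()/css_put()
 * would happen once per lruvec instead of once per page.
 */
struct list_lru_memcg {
	spinlock_t lru_lock;
	struct list_head lru;
	struct mem_cgroup *memcg;	/* updated once on reparenting */
};

/* hypothetical: page->lruvec_memcg replaces the memcg/objcg pointer */
```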

This is a rough idea, maybe there are significant reasons why it's not possible
or will be way worse. But I think it's worth discussing. What do you think?

Thanks!
Muchun Song Sept. 17, 2021, 10:49 a.m. UTC | #2
On Fri, Sep 17, 2021 at 9:29 AM Roman Gushchin <guro@fb.com> wrote:
>
> Hi Muchun!
>
> On Thu, Sep 16, 2021 at 09:47:35PM +0800, Muchun Song wrote:
> > This version is rebased onto Linux 5.15-rc1, because Shakeel asked me
> > if I could do that. I have also reworked some code suggested by Roman in
> > this version. I have not removed the Acked-by tags from Roman, because
> > this version is not based on the folio-related work. If Roman wants me to
> > drop them, please let me know, thanks.
>
> I'm fine with this, thanks for clarifying.
>
> >
> > Since the following patchsets were applied, all kernel memory is charged
> > with the new obj_cgroup APIs:
> >
> >       [v17,00/19] The new cgroup slab memory controller[1]
> >       [v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]
> >
> > But user memory allocations (LRU pages) can pin memcgs for a long time.
> > This happens at a larger scale and causes recurring problems in the real
> > world: page cache doesn't get reclaimed for a long time, or is used by the
> > second, third, fourth, ... instance of the same job that was restarted into
> > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > and make page reclaim very inefficient.
>
> I've an idea: what if we use struct list_lru_memcg as an intermediate object
> between an individual page and struct mem_cgroup?
>
> It could contain a pointer to a memory cgroup structure (not even sure if a
> reference is needed), and a lru page can contain a pointer to the lruvec instead
> of memcg/objcg.

Hi Roman,

If I understand properly, here you mean that the struct page has a pointer
to the struct lruvec, not to the struct list_lru_memcg. What is the
functionality of the struct list_lru_memcg? Would you mind sharing more
details?

>
> This approach can probably simplify the locking scheme. But what's more
> important, it can dramatically reduce the number of css_get()/put() calls.
> The latter are not particularly cheap after the deletion of a cgroup:
> they are atomic_dec()'s. As a result, the reclaim efficiency could be much
> better. The downside: we will need to update page->lruvec_memcg pointers on
> reparenting pages during the cgroup removal.

Here we would need to update the page->lruvec_memcg pointers one by one,
right? Because the lru lock is per lruvec, the locking scheme would still
need to be the one proposed by this series when page->lruvec_memcg is
changed, if I understand properly. It's likely that I don't get your point.
Looking forward to your further details.

Thanks.

>
> This is a rough idea, maybe there are significant reasons why it's not possible
> or will be way worse. But I think it's worth discussing. What do you think?
>
> Thanks!
Roman Gushchin Sept. 18, 2021, 12:13 a.m. UTC | #3
On Fri, Sep 17, 2021 at 06:49:21PM +0800, Muchun Song wrote:
> On Fri, Sep 17, 2021 at 9:29 AM Roman Gushchin <guro@fb.com> wrote:
> >
> > Hi Muchun!
> >
> > On Thu, Sep 16, 2021 at 09:47:35PM +0800, Muchun Song wrote:
> > > This version is rebased onto Linux 5.15-rc1, because Shakeel asked me
> > > if I could do that. I have also reworked some code suggested by Roman in
> > > this version. I have not removed the Acked-by tags from Roman, because
> > > this version is not based on the folio-related work. If Roman wants me to
> > > drop them, please let me know, thanks.
> >
> > I'm fine with this, thanks for clarifying.
> >
> > >
> > > Since the following patchsets were applied, all kernel memory is charged
> > > with the new obj_cgroup APIs:
> > >
> > >       [v17,00/19] The new cgroup slab memory controller[1]
> > >       [v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]
> > >
> > > But user memory allocations (LRU pages) can pin memcgs for a long time.
> > > This happens at a larger scale and causes recurring problems in the real
> > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > second, third, fourth, ... instance of the same job that was restarted into
> > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > and make page reclaim very inefficient.
> >
> > I've an idea: what if we use struct list_lru_memcg as an intermediate object
> > between an individual page and struct mem_cgroup?
> >
> > It could contain a pointer to a memory cgroup structure (not even sure if a
> > reference is needed), and a lru page can contain a pointer to the lruvec instead
> > of memcg/objcg.

lruvec_memcg I mean.

> 
> Hi Roman,
> 
> If I understand properly, here you mean that the struct page has a pointer
> to the struct lruvec, not to the struct list_lru_memcg. What is the
> functionality of the struct list_lru_memcg? Would you mind sharing more
> details?

So the basic idea is simple: a lru page charged to a memcg is associated with
a per-memcg lruvec (list_lru_memcg), which is associated with a memory cgroup.
And after your patches there is a second link of associations: page to objcg
to memcg:

1) page->objcg->memcg
2) page->list_lru_memcg->memcg

(those are not necessarily direct pointers, but generally speaking, relations).

My gut feeling is that if we can merge them into just 2) and use list_lru_memcg
as an intermediate object between pages and memory cgroups, the whole thing can
be more efficient and beautiful.

Yes, on reparenting we'd need to scan over all pages in the lru list, but
hopefully we can do it from a worker context. And it's not such a big deal as
with slab objects, where we simply had no list of all objects.

Again, I'm not 100% sure if it's possible and worth it, so it shouldn't block
your patchset if everybody else likes it.

Thanks
Muchun Song Sept. 18, 2021, 7:55 a.m. UTC | #4
On Sat, Sep 18, 2021 at 8:13 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Fri, Sep 17, 2021 at 06:49:21PM +0800, Muchun Song wrote:
> > On Fri, Sep 17, 2021 at 9:29 AM Roman Gushchin <guro@fb.com> wrote:
> > >
> > > Hi Muchun!
> > >
> > > On Thu, Sep 16, 2021 at 09:47:35PM +0800, Muchun Song wrote:
> > > > This version is rebased onto Linux 5.15-rc1, because Shakeel asked me
> > > > if I could do that. I have also reworked some code suggested by Roman in
> > > > this version. I have not removed the Acked-by tags from Roman, because
> > > > this version is not based on the folio-related work. If Roman wants me to
> > > > drop them, please let me know, thanks.
> > >
> > > I'm fine with this, thanks for clarifying.
> > >
> > > >
> > > > Since the following patchsets were applied, all kernel memory is charged
> > > > with the new obj_cgroup APIs:
> > > >
> > > >       [v17,00/19] The new cgroup slab memory controller[1]
> > > >       [v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]
> > > >
> > > > But user memory allocations (LRU pages) can pin memcgs for a long time.
> > > > This happens at a larger scale and causes recurring problems in the real
> > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > and make page reclaim very inefficient.
> > >
> > > I've an idea: what if we use struct list_lru_memcg as an intermediate object
> > > between an individual page and struct mem_cgroup?
> > >
> > > It could contain a pointer to a memory cgroup structure (not even sure if a
> > > reference is needed), and a lru page can contain a pointer to the lruvec instead
> > > of memcg/objcg.
>
> lruvec_memcg I mean.

Thanks for your clarification.

>
> >
> > Hi Roman,
> >
> > If I understand properly, here you mean that the struct page has a pointer
> > to the struct lruvec, not to the struct list_lru_memcg. What is the
> > functionality of the struct list_lru_memcg? Would you mind sharing more
> > details?
>
> So the basic idea is simple: a lru page charged to a memcg is associated with
> a per-memcg lruvec (list_lru_memcg), which is associated with a memory cgroup.
> And after your patches there is a second link of associations: page to objcg
> to memcg:
>
> 1) page->objcg->memcg
> 2) page->list_lru_memcg->memcg
>
> (those are not necessarily direct pointers, but generally speaking, relations).
>
> My gut feeling is that if we can merge them into just 2) and use list_lru_memcg
> as an intermediate object between pages and memory cgroups, the whole thing can
> be more efficient and beautiful.
>
> Yes, on reparenting we'd need to scan over all pages in the lru list, but
> hopefully we can do it from a worker context. And it's not such a big deal as
> with slab objects, where we simply had no list of all objects.

struct list_lru_memcg seems to be redundant; it just contains a pointer
to struct mem_cgroup. If we need to update each page->lruvec_memcg
anyway, why not update page->memcg_data directly to the parent memcg?

The update of page->lruvec_memcg should be done under both the child's
and the parent's lruvec locks, right? I suppose scanning over all pages
may be a problem if there are many pages.
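
Something like the following is what I have in mind (hypothetical
names, just to illustrate the locking):

```c
/*
 * Hypothetical sketch, not from a posted patch: reparenting would
 * update every page under both lruvec locks. A fixed lock order
 * (e.g. child before parent) would be needed to avoid deadlocks,
 * and the walk can be long if there are many pages.
 */
static void reparent_lru_pages(struct list_lru_memcg *child,
			       struct list_lru_memcg *parent)
{
	struct page *page;

	spin_lock_irq(&child->lru_lock);
	spin_lock(&parent->lru_lock);

	list_for_each_entry(page, &child->lru, lru)
		page->lruvec_memcg = parent;	/* one-by-one update */

	list_splice_init(&child->lru, &parent->lru);

	spin_unlock(&parent->lru_lock);
	spin_unlock_irq(&child->lru_lock);
}
```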

Thanks.

>
> Again, I'm not 100% sure if it's possible and worth it, so it shouldn't block
> your patchset if everybody else likes it.
>
> Thanks