[RFC,00/24] mm/hugetlb: Free some vmemmap pages of hugetlb page

Message ID 20200915125947.26204-1-songmuchun@bytedance.com (mailing list archive)

Message

Muchun Song Sept. 15, 2020, 12:59 p.m. UTC
Hi all,

This patch series frees some of the vmemmap pages (struct page structures)
associated with each hugetlb page when it is preallocated, in order to save
memory.

Nowadays we track the status of physical page frames using `struct page`
structures arranged in one or more arrays, with a one-to-one mapping between
each physical page frame and its corresponding `struct page`.

The hugetlb page support is built on top of the multiple page size support
provided by most modern architectures. For example, x86 CPUs normally
support 4K and 2M (and 1G, if architecturally supported) page sizes. Every
hugetlb page is described by more than one `struct page`: a 2M hugetlb page
is described by 512 `struct page` structures and a 1G hugetlb page by 4096.
However, the hugetlb core only uses the first 4 `struct page` structures to
store metadata for the whole hugetlb page; the remaining tail `struct page`
structures are mostly only read for their compound_head field, which holds
the same value in all of them. If we can free some of this `struct page`
memory back to the buddy system, we can save a lot of memory.

When the system boots up, every 2M hugetlb page has 512 `struct page`
structures, which occupy 8 pages of memory (sizeof(struct page) * 512 /
PAGE_SIZE).

   hugetlbpage                   struct page(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> |     2     |
  |           |                     |     3     | -------------> |     3     |
  |           |                     |     4     | -------------> |     4     |
  |     2M    |                     |     5     | -------------> |     5     |
  |           |                     |     6     | -------------> |     6     |
  |           |                     |     7     | -------------> |     7     |
  |           |                     +-----------+                +-----------+
  |           |
  |           |
  +-----------+
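
For reference, the 8-page figure falls directly out of the arithmetic above.
A small sketch, assuming the typical x86-64 values of a 64-byte struct page
and 4K base pages; the macro names here are only illustrative and are not
taken from the series:

/*
 * Sketch only: how many base pages of vmemmap back one 2M hugetlb page,
 * assuming sizeof(struct page) == 64 and PAGE_SIZE == 4096.
 */
#define BASE_PAGE_SIZE          4096UL
#define HUGE_PAGE_SIZE          (2UL * 1024 * 1024)
#define STRUCT_PAGE_SIZE        64UL

/* 512 struct pages per 2M hugetlb page ... */
#define PAGES_PER_HUGE_PAGE     (HUGE_PAGE_SIZE / BASE_PAGE_SIZE)
/* ... occupying 8 base pages of vmemmap. */
#define VMEMMAP_PAGES_PER_HPAGE (PAGES_PER_HUGE_PAGE * STRUCT_PAGE_SIZE / BASE_PAGE_SIZE)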


When a hugetlb page is preallocated, we can change the mapping above to the
one below.

   hugetlbpage                   struct page(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> +-----------+
  |           |                     |     3     | -----------------^ ^ ^ ^ ^
  |           |                     |     4     | -------------------+ | | |
  |     2M    |                     |     5     | ---------------------+ | |
  |           |                     |     6     | -----------------------+ |
  |           |                     |     7     | -------------------------+
  |           |                     +-----------+
  |           |
  |           |
  +-----------+

The mappings of the first page (index 0) and the second page (index 1) are
unchanged, while the remaining 6 pages are all mapped to the same page frame
(index 1). So we only need 2 page frames for the vmemmap area and can free
6 pages back to the buddy system to save memory. Why can we do this? Because
the contents of the remaining 7 vmemmap pages (all but the first) are
usually identical.
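
Conceptually, the remapping walks the vmemmap PTEs that cover the tail
struct pages, points each of them at the page frame being reused, and
returns the now-unused frames to the buddy allocator. A minimal sketch of
that idea follows; the function name and calling convention are assumptions
for illustration, not the helpers this series actually introduces:

/*
 * Illustrative sketch only: remap [start, end) of the vmemmap so that
 * every page in that range is backed by the single 'reuse' frame, and
 * free the frames that previously backed it.  'pte' points at the
 * first PTE covering 'start' after the vmemmap PMD has been split.
 */
static void vmemmap_remap_reuse(unsigned long start, unsigned long end,
                                pte_t *pte, struct page *reuse)
{
        unsigned long addr;

        for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
                struct page *old = pte_page(*pte);

                /* Point this vmemmap page at the reused frame. */
                set_pte_at(&init_mm, addr, pte,
                           pfn_pte(page_to_pfn(reuse), PAGE_KERNEL));

                /* The old frame no longer backs any struct pages. */
                __free_page(old);
        }

        flush_tlb_kernel_range(start, end);
}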

When a hugetlb page is freed back to the buddy system, we have to allocate
6 pages for the vmemmap again and restore the previous mapping.

For 1G hugetlb pages we can save 4095 pages per hugetlb page, which is a
very substantial gain. On our servers we run some SPDK applications which
use 300GB of hugetlb pages; with this feature enabled, we can save about
4797MB of memory.
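
As a rough cross-check of those numbers, a back-of-the-envelope calculation,
assuming 1G hugetlb pages with 4K base pages and 64-byte struct pages, and
assuming 2 vmemmap pages stay mapped per hugetlb page (these assumptions are
mine, not taken from the series):

#include <stdio.h>

int main(void)
{
        unsigned long hugepages     = 300;  /* 300GB of 1G hugetlb pages    */
        unsigned long vmemmap_pages = 4096; /* 16MB of struct page per page */
        unsigned long kept          = 2;    /* vmemmap pages left mapped    */
        unsigned long saved_kb = hugepages * (vmemmap_pages - kept) * 4;

        printf("saved: ~%lu MB\n", saved_kb / 1024); /* prints ~4797 MB */
        return 0;
}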

Muchun Song (24):
  mm/memory_hotplug: Move bootmem info registration API to
    bootmem_info.c
  mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Register bootmem info when
    CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/hugetlb: Introduce pgtable allocation/freeing helpers
  mm/hugetlb: Add freeing unused vmemmap pages support for x86
  mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  x86/mm: Introduce VMEMMAP_SIZE/VMEMMAP_END macro
  mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  mm/hugetlb: Add vmemmap_pmd_huge macro for x86
  mm/hugetlb: Defer freeing of hugetlb pages
  mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
    page
  mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
  mm/hugetlb: Use PG_slab to indicate split pmd
  mm/hugetlb: Support freeing vmemmap pages of gigantic page
  mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power
    of two
  mm/hugetlb: Clear PageHWPoison on the non-error memory page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Merge pte to huge pmd only for gigantic page
  mm/hugetlb: Implement vmemmap_pmd_mkhuge macro
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
    page

 .../admin-guide/kernel-parameters.txt         |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst  |   3 +
 arch/x86/include/asm/hugetlb.h                |  20 +
 arch/x86/include/asm/pgtable_64_types.h       |   8 +
 arch/x86/mm/init_64.c                         |   5 +-
 fs/Kconfig                                    |  15 +
 include/linux/bootmem_info.h                  |  65 ++
 include/linux/hugetlb.h                       |  64 ++
 include/linux/hugetlb_cgroup.h                |  15 +-
 include/linux/memory_hotplug.h                |  27 -
 mm/Makefile                                   |   1 +
 mm/bootmem_info.c                             | 125 +++
 mm/hugetlb.c                                  | 834 +++++++++++++++++-
 mm/memory_hotplug.c                           | 116 ---
 mm/sparse.c                                   |   1 +
 15 files changed, 1143 insertions(+), 165 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

Comments

Matthew Wilcox (Oracle) Sept. 15, 2020, 2:32 p.m. UTC | #1
On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> This patch series will free some vmemmap pages(struct page structures)
> associated with each hugetlbpage when preallocated to save memory.

It would be lovely to be able to do this.  Unfortunately, it's completely
impossible right now.  Consider, for example, get_user_pages() called
on the fifth page of a hugetlb page.

I've spent a lot of time thinking about this, and there's a lot of work
that needs to happen before we can do this, mostly in device drivers.
Do you want to help?  It's a multi-year project.
Dave Hansen Sept. 15, 2020, 2:53 p.m. UTC | #2
On 9/15/20 7:32 AM, Matthew Wilcox wrote:
> On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
>> This patch series will free some vmemmap pages(struct page structures)
>> associated with each hugetlbpage when preallocated to save memory.
> It would be lovely to be able to do this.  Unfortunately, it's completely
> impossible right now.  Consider, for example, get_user_pages() called
> on the fifth page of a hugetlb page.

Yeah, exactly.

Does this series survive the in-kernel selftests/?  If so, sounds like
we need to add a new selftest.
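
If a selftest does get added, it would presumably need to exercise
get_user_pages() on a tail page of a hugetlb mapping. A rough userspace
sketch of that idea (the file name, sizes and the O_DIRECT trick are
assumptions here, not an existing test in tools/testing/selftests):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE      (2UL << 20)
#define BASE_PAGE       4096UL

int main(void)
{
        /* One anonymous 2MB hugetlb page (needs hugepages reserved). */
        char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(p, 0xab, HPAGE_SIZE);

        /* O_DIRECT I/O into the 6th 4K sub-page forces the block layer
         * to pin and translate a tail struct page obtained via GUP. */
        int fd = open("gup_tail_test", O_RDWR | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (pwrite(fd, p, BASE_PAGE, 0) != (ssize_t)BASE_PAGE ||
            pread(fd, p + 5 * BASE_PAGE, BASE_PAGE, 0) != (ssize_t)BASE_PAGE) {
                perror("direct I/O");
                return 1;
        }

        close(fd);
        munmap(p, HPAGE_SIZE);
        printf("ok\n");
        return 0;
}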
Muchun Song Sept. 15, 2020, 3:28 p.m. UTC | #3
On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > This patch series will free some vmemmap pages(struct page structures)
> > associated with each hugetlbpage when preallocated to save memory.
>
> It would be lovely to be able to do this.  Unfortunately, it's completely
> impossible right now.  Consider, for example, get_user_pages() called
> on the fifth page of a hugetlb page.

Can you elaborate on the problem? Thanks so much.

>
> I've spent a lot of time thinking about this, and there's a lot of work
> that needs to happen before we can do this, mostly in device drivers.
> Do you want to help?  It's a multi-year project.

Welcome, we can work together.
Matthew Wilcox (Oracle) Sept. 15, 2020, 3:42 p.m. UTC | #4
On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > This patch series will free some vmemmap pages(struct page structures)
> > > associated with each hugetlbpage when preallocated to save memory.
> >
> > It would be lovely to be able to do this.  Unfortunately, it's completely
> > impossible right now.  Consider, for example, get_user_pages() called
> > on the fifth page of a hugetlb page.
> 
> Can you elaborate on the problem? Thanks so much.

OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
on a 4kB base page system.  Today, that results in a bio_vec containing
{head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
page to get the physical address of the I/O, and we turn it into a struct
scatterlist, which similarly has a reference to the page (head+5).

If you're returning page (head+1) from get_user_pages(), all the
calculations will be wrong and you'll do DMA to the wrong address.

What needs to happen is a conversion to get_user_bvecs().  That way we
can return a bvec which is {head, 0x5000, 0x800} and the calculations
will work.  So that's going to involve touching a lot of device drivers.
Christoph has a start on it here:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/gup-bvec

Overlapping with that is a desire to change the biovec so that it
only stores {phys_addr, length} rather than {page, offset, length}.
We're also going to have to rework the scatterlist so that it doesn't
carry a struct page either.
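
To make the shape of the two interfaces concrete: struct bio_vec today is a
{page, len, offset} triple, so the same 2kB at offset 0x5000 can be described
either through the tail struct page or relative to the head page. A sketch
(get_user_bvecs() is the interface proposed above, not something that exists
yet):

#include <linux/bvec.h>

/*
 * Sketch only: the same 2kB of I/O described two ways.
 */
static void bvec_example(struct page *head)
{
        /* What get_user_pages()-based callers build today: the tail
         * struct page itself is referenced, so it must exist. */
        struct bio_vec today = {
                .bv_page   = head + 5,
                .bv_offset = 0,
                .bv_len    = 0x800,
        };

        /* What a bvec from a get_user_bvecs()-style interface could
         * carry: only the head struct page is referenced, the tail is
         * expressed purely as an offset. */
        struct bio_vec with_bvecs = {
                .bv_page   = head,
                .bv_offset = 0x5000,
                .bv_len    = 0x800,
        };

        (void)today;
        (void)with_bvecs;
}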
Muchun Song Sept. 15, 2020, 5:32 p.m. UTC | #5
On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > This patch series will free some vmemmap pages(struct page structures)
> > > > associated with each hugetlbpage when preallocated to save memory.
> > >
> > > It would be lovely to be able to do this.  Unfortunately, it's completely
> > > impossible right now.  Consider, for example, get_user_pages() called
> > > on the fifth page of a hugetlb page.
> >
> > Can you elaborate on the problem? Thanks so much.
>
> OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> on a 4kB base page system.  Today, that results in a bio_vec containing
> {head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
> page to get the physical address of the I/O, and we turn it into a struct
> scatterlist, which similarly has a reference to the page (head+5).

As far as I know, in this case get_user_pages() takes a reference
on the head page (head+0) before returning, so the hugetlb page
cannot be freed. Although get_user_pages() returns the page
(head+5) and the scatterlist holds a reference to that page
(head+5), this patch series can handle that situation. I cannot
figure out where the problem is. What did I miss? Thanks.

>
> If you're returning page (head+1) from get_user_pages(), all the
> calculations will be wrong and you'll do DMA to the wrong address.
>
> What needs to happen is a conversion to get_user_bvecs().  That way we
> can return a bvec which is {head, 0x5000, 0x800} and the calculations
> will work.  So that's going to involve touching a lot of device drivers.
> Christoph has a start on it here:
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/gup-bvec
>
> Overlapping with that is a desire to change the biovec so that it
> only stores {phys_addr, length} rather than {page, offset, length}.
> We're also going to have to rework the scatterlist so that it doesn't
> carry a struct page either.
Matthew Wilcox (Oracle) Sept. 15, 2020, 5:39 p.m. UTC | #6
On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > >
> > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > This patch series will free some vmemmap pages(struct page structures)
> > > > > associated with each hugetlbpage when preallocated to save memory.
> > > >
> > > > It would be lovely to be able to do this.  Unfortunately, it's completely
> > > > impossible right now.  Consider, for example, get_user_pages() called
> > > > on the fifth page of a hugetlb page.
> > >
> > > Can you elaborate on the problem? Thanks so much.
> >
> > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > on a 4kB base page system.  Today, that results in a bio_vec containing
> > {head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
> > page to get the physical address of the I/O, and we turn it into a struct
> > scatterlist, which similarly has a reference to the page (head+5).
> 
> As I know, in this case, the get_user_pages() will get a reference
> to the head page (head+0) before returning such that the hugetlb
> page can not be freed. Although get_user_pages() returns the
> page (head+5) and the scatterlist has a reference to the page
> (head+5), this patch series can handle this situation. I can not
> figure out where the problem is. What I missed? Thanks.

You freed pages 4-511 from the vmemmap so they could be used for
something else.  Page 5 isn't there any more.  So if you return head+5,
then when we complete the I/O, we'll look for the compound_head() of
head+5 and we won't find head.
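
For reference, the lookup being described is essentially the following (a
simplified sketch of compound_head(); the point is that it reads the tail
page's own struct page, which therefore has to remain backed by valid
memory):

/*
 * Simplified sketch of compound_head(): the compound_head field of the
 * (tail) struct page itself is read, so the struct page for head+5 must
 * still be mapped and must still point back at the head.
 */
static inline struct page *compound_head_sketch(struct page *page)
{
        unsigned long head = READ_ONCE(page->compound_head);

        if (head & 1)                   /* tail page: bit 0 tags the pointer */
                return (struct page *)(head - 1);
        return page;                    /* head page or non-compound page */
}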
Muchun Song Sept. 15, 2020, 6:03 p.m. UTC | #7
On Wed, Sep 16, 2020 at 1:39 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> > On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > > >
> > > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > > This patch series will free some vmemmap pages(struct page structures)
> > > > > > associated with each hugetlbpage when preallocated to save memory.
> > > > >
> > > > > It would be lovely to be able to do this.  Unfortunately, it's completely
> > > > > impossible right now.  Consider, for example, get_user_pages() called
> > > > > on the fifth page of a hugetlb page.
> > > >
> > > > Can you elaborate on the problem? Thanks so much.
> > >
> > > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > > on a 4kB base page system.  Today, that results in a bio_vec containing
> > > {head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
> > > page to get the physical address of the I/O, and we turn it into a struct
> > > scatterlist, which similarly has a reference to the page (head+5).
> >
> > As I know, in this case, the get_user_pages() will get a reference
> > to the head page (head+0) before returning such that the hugetlb
> > page can not be freed. Although get_user_pages() returns the
> > page (head+5) and the scatterlist has a reference to the page
> > (head+5), this patch series can handle this situation. I can not
> > figure out where the problem is. What I missed? Thanks.
>
> You freed pages 4-511 from the vmemmap so they could be used for
> something else.  Page 5 isn't there any more.  So if you return head+5,
> then when we complete the I/O, we'll look for the compound_head() of
> head+5 and we won't find head.
>

We do not free struct pages 4-511 of the vmemmap. Actually, we only
free the vmemmap memory backing struct pages 128-511.

The 512 struct pages occupy 8 pages of physical memory. We only
free 6 physical page frames back to the buddy system, and we create a new
mapping as shown below: the virtual addresses of the freed pages are
remapped to the second page frame, so the second page frame is
reused.

When a hugetlb page is preallocated, we can change the mapping to the
one below.

   hugetlbpage                   struct page(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> +-----------+
  |           |                     |     3     | -----------------^ ^ ^ ^ ^
  |           |                     |     4     | -------------------+ | | |
  |     2M    |                     |     5     | ---------------------+ | |
  |           |                     |     6     | -----------------------+ |
  |           |                     |     7     | -------------------------+
  |           |                     +-----------+
  |           |
  |           |
  +-----------+

As you can see, we reuse the first tail page.
Matthew Wilcox (Oracle) Sept. 15, 2020, 6:15 p.m. UTC | #8
On Wed, Sep 16, 2020 at 02:03:15AM +0800, Muchun Song wrote:
> On Wed, Sep 16, 2020 at 1:39 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> > > On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > >
> > > > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > > > >
> > > > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > > > This patch series will free some vmemmap pages(struct page structures)
> > > > > > > associated with each hugetlbpage when preallocated to save memory.
> > > > > >
> > > > > > It would be lovely to be able to do this.  Unfortunately, it's completely
> > > > > > impossible right now.  Consider, for example, get_user_pages() called
> > > > > > on the fifth page of a hugetlb page.
> > > > >
> > > > > Can you elaborate on the problem? Thanks so much.
> > > >
> > > > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > > > on a 4kB base page system.  Today, that results in a bio_vec containing
> > > > {head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
> > > > page to get the physical address of the I/O, and we turn it into a struct
> > > > scatterlist, which similarly has a reference to the page (head+5).
> > >
> > > As I know, in this case, the get_user_pages() will get a reference
> > > to the head page (head+0) before returning such that the hugetlb
> > > page can not be freed. Although get_user_pages() returns the
> > > page (head+5) and the scatterlist has a reference to the page
> > > (head+5), this patch series can handle this situation. I can not
> > > figure out where the problem is. What I missed? Thanks.
> >
> > You freed pages 4-511 from the vmemmap so they could be used for
> > something else.  Page 5 isn't there any more.  So if you return head+5,
> > then when we complete the I/O, we'll look for the compound_head() of
> > head+5 and we won't find head.
> 
> We do not free pages 4-511 from the vmemmap. Actually, we only
> free pages 128-511 from the vmemmap.
> 
> The 512 struct pages occupy 8 pages of physical memory. We only
> free 6 physical page frames to the buddy. But we will create a new
> mapping just like below. The virtual address of the freed pages will
> remap to the second page frame. So the second page frame is
> reused.

Oh!  I get what you're doing now.

For the vmemmap case, you free the last N-2 physical pages but map the
second physical page multiple times.  So for the 512 pages case, we
see pages:

abcdefgh | ijklmnop | ijklmnop | ijklmnop | ijklmnop | ijklmnop | ijklmnop ...

Huh.  I think that might work, except for PageHWPoison.  I'll go back
to your patch series and look at that some more.
Muchun Song Sept. 16, 2020, 2:45 a.m. UTC | #9
On Wed, Sep 16, 2020 at 2:15 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Sep 16, 2020 at 02:03:15AM +0800, Muchun Song wrote:
> > On Wed, Sep 16, 2020 at 1:39 AM Matthew Wilcox <willy@infradead.org> wrote:
> > >
> > > On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> > > > On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > > >
> > > > > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > > > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > > > > >
> > > > > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > > > > This patch series will free some vmemmap pages(struct page structures)
> > > > > > > > associated with each hugetlbpage when preallocated to save memory.
> > > > > > >
> > > > > > > It would be lovely to be able to do this.  Unfortunately, it's completely
> > > > > > > impossible right now.  Consider, for example, get_user_pages() called
> > > > > > > on the fifth page of a hugetlb page.
> > > > > >
> > > > > > Can you elaborate on the problem? Thanks so much.
> > > > >
> > > > > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > > > > on a 4kB base page system.  Today, that results in a bio_vec containing
> > > > > {head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
> > > > > page to get the physical address of the I/O, and we turn it into a struct
> > > > > scatterlist, which similarly has a reference to the page (head+5).
> > > >
> > > > As I know, in this case, the get_user_pages() will get a reference
> > > > to the head page (head+0) before returning such that the hugetlb
> > > > page can not be freed. Although get_user_pages() returns the
> > > > page (head+5) and the scatterlist has a reference to the page
> > > > (head+5), this patch series can handle this situation. I can not
> > > > figure out where the problem is. What I missed? Thanks.
> > >
> > > You freed pages 4-511 from the vmemmap so they could be used for
> > > something else.  Page 5 isn't there any more.  So if you return head+5,
> > > then when we complete the I/O, we'll look for the compound_head() of
> > > head+5 and we won't find head.
> >
> > We do not free pages 4-511 from the vmemmap. Actually, we only
> > free pages 128-511 from the vmemmap.
> >
> > The 512 struct pages occupy 8 pages of physical memory. We only
> > free 6 physical page frames to the buddy. But we will create a new
> > mapping just like below. The virtual address of the freed pages will
> > remap to the second page frame. So the second page frame is
> > reused.
>
> Oh!  I get what you're doing now.
>
> For the vmemmap case, you free the last N-2 physical pages but map the
> second physical page multiple times.  So for the 512 pages case, we
> see pages:
>
> abcdefgh | ijklmnop | ijklmnop | ijklmnop | ijklmnop | ijklmnop | ijklmnop ...

Yeah, great. You are right.

>
> Huh.  I think that might work, except for PageHWPoison.  I'll go back
> to your patch series and look at that some more.
>

PageHWPoison is also handled in this patch series. Looking
forward to your suggestions. Thanks.
Mike Kravetz Sept. 29, 2020, 9:58 p.m. UTC | #10
On 9/15/20 5:59 AM, Muchun Song wrote:
> Hi all,
> 
> This patch series will free some vmemmap pages(struct page structures)
> associated with each hugetlbpage when preallocated to save memory.
...
> The mapping of the first page(index 0) and the second page(index 1) is
> unchanged. The remaining 6 pages are all mapped to the same page(index
> 1). So we only need 2 pages for vmemmap area and free 6 pages to the
> buddy system to save memory. Why we can do this? Because the content
> of the remaining 7 pages are usually same except the first page.
> 
> When a hugetlbpage is freed to the buddy system, we should allocate 6
> pages for vmemmap pages and restore the previous mapping relationship.
> 
> If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> substantial gain. On our server, run some SPDK applications which will
> use 300GB hugetlbpage. With this feature enabled, we can save 4797MB
> memory.

At a high level this seems like a reasonable optimization for hugetlb
pages.  It is possible because hugetlb pages are 'special' and mostly
handled differently than pages in normal mm paths.

The majority of the new code is hugetlb specific, so it should not be
of too much concern for the general mm code paths.  I'll start looking
closer at the series.  However, if someone has high level concerns please
let us know.  The only 'potential' conflict I am aware of is discussion
about support of double mapping hugetlb pages.

https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg02407.html

More care/coordination would be needed to support double mapping with
this new option.  However, this series provides a boot option to disable
freeing of unneeded page structs.
Matthew Wilcox (Oracle) Sept. 30, 2020, 3:13 a.m. UTC | #11
On Tue, Sep 29, 2020 at 02:58:18PM -0700, Mike Kravetz wrote:
> On 9/15/20 5:59 AM, Muchun Song wrote:
> > Hi all,
> > 
> > This patch series will free some vmemmap pages(struct page structures)
> > associated with each hugetlbpage when preallocated to save memory.
> ...
> > The mapping of the first page(index 0) and the second page(index 1) is
> > unchanged. The remaining 6 pages are all mapped to the same page(index
> > 1). So we only need 2 pages for vmemmap area and free 6 pages to the
> > buddy system to save memory. Why we can do this? Because the content
> > of the remaining 7 pages are usually same except the first page.
> > 
> > When a hugetlbpage is freed to the buddy system, we should allocate 6
> > pages for vmemmap pages and restore the previous mapping relationship.
> > 
> > If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> > substantial gain. On our server, run some SPDK applications which will
> > use 300GB hugetlbpage. With this feature enabled, we can save 4797MB
> > memory.
> 
> At a high level this seems like a reasonable optimization for hugetlb
> pages.  It is possible because hugetlb pages are 'special' and mostly
> handled differently than pages in normal mm paths.
> 
> The majority of the new code is hugetlb specific, so it should not be
> of too much concern for the general mm code paths.  I'll start looking
> closer at the series.  However, if someone has high level concerns please
> let us know.  The only 'potential' conflict I am aware of is discussion
> about support of double mapping hugetlb pages.

Not on x86, but architectures which have dcache coherency issues sometimes
use PG_arch_1 on the subpages.  I think it would be wise to map pages
1-7 read-only to catch this, as well as any future change which causes
subpage bits to get set.
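
A sketch of what that could look like when the vmemmap PTEs for the reused
range are rewritten (PAGE_KERNEL_RO is the suggestion above; the helper name
is made up here, and the series as posted maps the reused range read-write):

/*
 * Illustrative only: map a vmemmap page of tail struct pages to the
 * reused frame read-only, so that an unexpected write to a tail page
 * (PG_arch_1, or any future subpage flag) faults loudly instead of
 * silently scribbling on data shared by all the tail pages.
 */
static void vmemmap_remap_tail_ro(unsigned long addr, pte_t *ptep,
                                  struct page *reuse)
{
        set_pte_at(&init_mm, addr, ptep,
                   pfn_pte(page_to_pfn(reuse), PAGE_KERNEL_RO));
}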
Mike Kravetz Oct. 7, 2020, 9:12 p.m. UTC | #12
On 9/29/20 2:58 PM, Mike Kravetz wrote:
> On 9/15/20 5:59 AM, Muchun Song wrote:
>> Hi all,
>>
>> This patch series will free some vmemmap pages(struct page structures)
>> associated with each hugetlbpage when preallocated to save memory.
> ...
>> The mapping of the first page(index 0) and the second page(index 1) is
>> unchanged. The remaining 6 pages are all mapped to the same page(index
>> 1). So we only need 2 pages for vmemmap area and free 6 pages to the
>> buddy system to save memory. Why we can do this? Because the content
>> of the remaining 7 pages are usually same except the first page.
>>
>> When a hugetlbpage is freed to the buddy system, we should allocate 6
>> pages for vmemmap pages and restore the previous mapping relationship.
>>
>> If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
>> substantial gain. On our server, run some SPDK applications which will
>> use 300GB hugetlbpage. With this feature enabled, we can save 4797MB
>> memory.

I had a hard time going through the patch series as it is currently
structured, and instead examined all the code together.  Muchun put in
much effort and the code does reduce memory usage.
- For 2MB hugetlb pages, we save 5 pages of struct pages
- For 1GB hugetlb pages, we save 4086 pages of struct pages

Code is even in place to handle poisoned pages, although I have not looked
at this closely.  The code survives the libhugetlbfs and ltp huge page tests.

To date, nobody has asked the important question "Is the added complexity
worth the memory savings?".  I suppose it all depends on one's use case.
Obviously, the savings are more significant when one uses 1G huge pages but
that may not be the common case today.

> At a high level this seems like a reasonable optimization for hugetlb
> pages.  It is possible because hugetlb pages are 'special' and mostly
> handled differently than pages in normal mm paths.

Such an optimization only makes sense for something like hugetlb pages.  One
reason is the 'special' nature of hugetlbfs as stated above.  The other is
that this optimization mostly makes sense for huge pages that are created
once and stick around for a long time.  hugetlb pool pages are a perfect
example.  This is because manipulation of struct page mappings is done when
a huge page is created or destroyed.

> The majority of the new code is hugetlb specific, so it should not be
> of too much concern for the general mm code paths.

It is true that much of the code in this series was put in hugetlb.c.  However,
I would argue that there is a bunch of code that only deals with remapping
the memmap, which should be more generic and added to sparse-vmemmap.c.  This
would at least allow for easier reuse.

Before Muchun and myself put more effort into this series, I would really
like to get feedback on the whether or not this should move forward.
Specifically, is the memory savings worth the added complexity?  Is the removal
of struct pages going to come back and cause issues for future features?
Muchun Song Oct. 9, 2020, 4:13 a.m. UTC | #13
On Thu, Oct 8, 2020 at 5:15 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 9/29/20 2:58 PM, Mike Kravetz wrote:
> > On 9/15/20 5:59 AM, Muchun Song wrote:
> >> Hi all,
> >>
> >> This patch series will free some vmemmap pages(struct page structures)
> >> associated with each hugetlbpage when preallocated to save memory.
> > ...
> >> The mapping of the first page(index 0) and the second page(index 1) is
> >> unchanged. The remaining 6 pages are all mapped to the same page(index
> >> 1). So we only need 2 pages for vmemmap area and free 6 pages to the
> >> buddy system to save memory. Why we can do this? Because the content
> >> of the remaining 7 pages are usually same except the first page.
> >>
> >> When a hugetlbpage is freed to the buddy system, we should allocate 6
> >> pages for vmemmap pages and restore the previous mapping relationship.
> >>
> >> If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> >> substantial gain. On our server, run some SPDK applications which will
> >> use 300GB hugetlbpage. With this feature enabled, we can save 4797MB
> >> memory.
>
> I had a hard time going through the patch series as it is currently
> structured, and instead examined all the code together.  Muchun put in
> much effort and the code does reduce memory usage.
> - For 2MB hugetlb pages, we save 5 pages of struct pages
> - For 1GB hugetlb pages, we save 4086 pages of struct pages
>
> Code is even in pace to handle poisoned pages, although I have not looked
> at this closely.  The code survives the libhugetlbfs and ltp huge page tests.
>
> To date, nobody has asked the important question "Is the added complexity
> worth the memory savings?".  I suppose it all depends on one's use case.
> Obviously, the savings are more significant when one uses 1G huge pages but
> that may not be the common case today.
>
> > At a high level this seems like a reasonable optimization for hugetlb
> > pages.  It is possible because hugetlb pages are 'special' and mostly
> > handled differently than pages in normal mm paths.
>
> Such an optimization only makes sense for something like hugetlb pages.  One
> reason is the 'special' nature of hugetlbfs as stated above.  The other is
> that this optimization mostly makes sense for huge pages that are created
> once and stick around for a long time.  hugetlb pool pages are a perfect
> example.  This is because manipulation of struct page mappings is done when
> a huge page is created or destroyed.

Yeah, on our cloud servers we have several application scenarios (e.g. SPDK,
DPDK, QEMU and jemalloc), and these applications may use a lot of hugetlb
pages.

>
> > The majority of the new code is hugetlb specific, so it should not be
> > of too much concern for the general mm code paths.
>
> It is true that much of the code in this series was put in hugetlb.c.  However,
> I would argue that there is a bunch of code that only deals with remapping
> the memmap which should more generic and added to sparse-vmemmap.c.  This
> would at least allow for easier reuse.

I agree with you.

>
> Before Muchun and myself put more effort into this series, I would really
> like to get feedback on the whether or not this should move forward.
> Specifically, is the memory savings worth added complexity?  Is the removing
> of struct pages going to come back and cause issues for future features?

Some users do need this optimization to save memory, and users who do not
need it can disable it with the kernel boot parameter
'hugetlb_free_vmemmap=off' or by not enabling
CONFIG_HUGETLB_PAGE_FREE_VMEMMAP.
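
For what it is worth, a minimal sketch of how such a boot parameter is
usually wired up (the variable name and parsing details are assumptions
here, not necessarily what the series does):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/string.h>

static bool hugetlb_free_vmemmap_enabled __ro_after_init;

/* Parse hugetlb_free_vmemmap=on/off from the kernel command line. */
static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
        if (!buf)
                return -EINVAL;

        if (!strcmp(buf, "on"))
                hugetlb_free_vmemmap_enabled = true;
        else if (!strcmp(buf, "off"))
                hugetlb_free_vmemmap_enabled = false;
        else
                return -EINVAL;

        return 0;
}
early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);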

I am not sure what issues this could cause for future features. Is there any
such feature ongoing or planned?