Message ID | 20201217121303.13386-4-songmuchun@bytedance.com (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | Free some vmemmap pages of HugeTLB page | expand |
On Thu, Dec 17, 2020 at 08:12:55PM +0800, Muchun Song wrote: > +static inline void free_bootmem_page(struct page *page) > +{ > + unsigned long magic = (unsigned long)page->freelist; > + > + /* > + * The reserve_bootmem_region sets the reserved flag on bootmem > + * pages. > + */ > + VM_WARN_ON(page_ref_count(page) != 2); > + > + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) > + put_page_bootmem(page); > + else > + VM_WARN_ON(1); Ideally, I think we want to see what how the page looks since its state is not what we expected, so maybe join both conditions and use dump_page(). > + * By removing redundant page structs for HugeTLB pages, memory can returned to ^^ be > + * the buddy allocator for other uses. [...] > +void free_huge_page_vmemmap(struct hstate *h, struct page *head) > +{ > + unsigned long vmemmap_addr = (unsigned long)head; > + > + if (!free_vmemmap_pages_per_hpage(h)) > + return; > + > + vmemmap_remap_free(vmemmap_addr + RESERVE_VMEMMAP_SIZE, > + free_vmemmap_pages_size_per_hpage(h)); I am not sure what others think, but I would like to see vmemmap_remap_free taking three arguments: start, end, and reuse addr, e.g: void free_huge_page_vmemmap(struct hstate *h, struct page *head) { unsigned long vmemmap_addr = (unsigned long)head; unsigned long vmemmap_end, vmemmap_reuse; if (!free_vmemmap_pages_per_hpage(h)) return; vmemmap_addr += RESERVE_MEMMAP_SIZE; vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h); vmemmap_reuse = vmemmap_addr - PAGE_SIZE; vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse); } The reason for me to do this is to let the callers of vmemmap_remap_free decide __what__ they want to remap. More on this below. > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr, > + unsigned long end, > + struct vmemmap_remap_walk *walk) > +{ > + pte_t *pte; > + > + pte = pte_offset_kernel(pmd, addr); > + > + if (walk->reuse_addr == addr) { > + BUG_ON(pte_none(*pte)); > + walk->reuse_page = pte_page(*pte++); > + addr += PAGE_SIZE; > + } Although it is quite obvious, a brief comment here pointing out what are we doing and that this is meant to be set only once would be nice. > +static void vmemmap_remap_range(unsigned long start, unsigned long end, > + struct vmemmap_remap_walk *walk) > +{ > + unsigned long addr = start - PAGE_SIZE; > + unsigned long next; > + pgd_t *pgd; > + > + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE)); > + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE)); > + > + walk->reuse_page = NULL; > + walk->reuse_addr = addr; With the change I suggested above, struct vmemmap_remap_walk should be initialitzed at once in vmemmap_remap_free, so this should not longer be needed. (And btw, you do not need to set reuse_page to NULL, the way you init the struct in vmemmap_remap_free makes sure to null any field you do not explicitly set). > +static void vmemmap_remap_pte(pte_t *pte, unsigned long addr, > + struct vmemmap_remap_walk *walk) > +{ > + /* > + * Make the tail pages are mapped with read-only to catch > + * illegal write operation to the tail pages. "Remap the tail pages as read-only to ..." > + */ > + pgprot_t pgprot = PAGE_KERNEL_RO; > + pte_t entry = mk_pte(walk->reuse_page, pgprot); > + struct page *page; > + > + page = pte_page(*pte); struct page *page = pte_page(*pte); since you did the same for the other two. > + list_add(&page->lru, walk->vmemmap_pages); > + > + set_pte_at(&init_mm, addr, pte, entry); > +} > + > +/** > + * vmemmap_remap_free - remap the vmemmap virtual address range > + * [start, start + size) to the page which > + * [start - PAGE_SIZE, start) is mapped, > + * then free vmemmap pages. > + * @start: start address of the vmemmap virtual address range > + * @size: size of the vmemmap virtual address range > + */ > +void vmemmap_remap_free(unsigned long start, unsigned long size) > +{ > + unsigned long end = start + size; > + LIST_HEAD(vmemmap_pages); > + > + struct vmemmap_remap_walk walk = { > + .remap_pte = vmemmap_remap_pte, > + .vmemmap_pages = &vmemmap_pages, > + }; As stated above, this would become: void vmemmap_remap_free(unsigned long start, unsigned long end, usigned long reuse) { LIST_HEAD(vmemmap_pages); struct vmemmap_remap_walk walk = { .reuse_addr = reuse, .remap_pte = vmemmap_remap_pte, .vmemmap_pages = &vmemmap_pages, }; You might have had your reasons to do this way, but this looks more natural to me, with the plus that callers of vmemmap_remap_free can specify what they want to remap.
On Mon, Dec 21, 2020 at 5:11 PM Oscar Salvador <osalvador@suse.de> wrote: > > On Thu, Dec 17, 2020 at 08:12:55PM +0800, Muchun Song wrote: > > +static inline void free_bootmem_page(struct page *page) > > +{ > > + unsigned long magic = (unsigned long)page->freelist; > > + > > + /* > > + * The reserve_bootmem_region sets the reserved flag on bootmem > > + * pages. > > + */ > > + VM_WARN_ON(page_ref_count(page) != 2); > > + > > + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) > > + put_page_bootmem(page); > > + else > > + VM_WARN_ON(1); > > Ideally, I think we want to see what how the page looks since its state > is not what we expected, so maybe join both conditions and use dump_page(). Agree. Will do. Thanks. > > > + * By removing redundant page structs for HugeTLB pages, memory can returned to > ^^ be Thanks. > > + * the buddy allocator for other uses. > > [...] > > > +void free_huge_page_vmemmap(struct hstate *h, struct page *head) > > +{ > > + unsigned long vmemmap_addr = (unsigned long)head; > > + > > + if (!free_vmemmap_pages_per_hpage(h)) > > + return; > > + > > + vmemmap_remap_free(vmemmap_addr + RESERVE_VMEMMAP_SIZE, > > + free_vmemmap_pages_size_per_hpage(h)); > > I am not sure what others think, but I would like to see vmemmap_remap_free taking > three arguments: start, end, and reuse addr, e.g: > > void free_huge_page_vmemmap(struct hstate *h, struct page *head) > { > unsigned long vmemmap_addr = (unsigned long)head; > unsigned long vmemmap_end, vmemmap_reuse; > > if (!free_vmemmap_pages_per_hpage(h)) > return; > > vmemmap_addr += RESERVE_MEMMAP_SIZE; > vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h); > vmemmap_reuse = vmemmap_addr - PAGE_SIZE; > > vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse); > } > > The reason for me to do this is to let the callers of vmemmap_remap_free decide > __what__ they want to remap. > > More on this below. > > > > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr, > > + unsigned long end, > > + struct vmemmap_remap_walk *walk) > > +{ > > + pte_t *pte; > > + > > + pte = pte_offset_kernel(pmd, addr); > > + > > + if (walk->reuse_addr == addr) { > > + BUG_ON(pte_none(*pte)); > > + walk->reuse_page = pte_page(*pte++); > > + addr += PAGE_SIZE; > > + } > > Although it is quite obvious, a brief comment here pointing out what are we > doing and that this is meant to be set only once would be nice. OK. Will do. > > > > +static void vmemmap_remap_range(unsigned long start, unsigned long end, > > + struct vmemmap_remap_walk *walk) > > +{ > > + unsigned long addr = start - PAGE_SIZE; > > + unsigned long next; > > + pgd_t *pgd; > > + > > + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE)); > > + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE)); > > + > > + walk->reuse_page = NULL; > > + walk->reuse_addr = addr; > > With the change I suggested above, struct vmemmap_remap_walk should be > initialitzed at once in vmemmap_remap_free, so this should not longer be needed. You are right. > (And btw, you do not need to set reuse_page to NULL, the way you init the struct > in vmemmap_remap_free makes sure to null any field you do not explicitly set). > > > > +static void vmemmap_remap_pte(pte_t *pte, unsigned long addr, > > + struct vmemmap_remap_walk *walk) > > +{ > > + /* > > + * Make the tail pages are mapped with read-only to catch > > + * illegal write operation to the tail pages. > "Remap the tail pages as read-only to ..." Thanks. > > > + */ > > + pgprot_t pgprot = PAGE_KERNEL_RO; > > + pte_t entry = mk_pte(walk->reuse_page, pgprot); > > + struct page *page; > > + > > + page = pte_page(*pte); > > struct page *page = pte_page(*pte); > > since you did the same for the other two. Yeah. Will change to this. > > > + list_add(&page->lru, walk->vmemmap_pages); > > + > > + set_pte_at(&init_mm, addr, pte, entry); > > +} > > + > > +/** > > + * vmemmap_remap_free - remap the vmemmap virtual address range > > + * [start, start + size) to the page which > > + * [start - PAGE_SIZE, start) is mapped, > > + * then free vmemmap pages. > > + * @start: start address of the vmemmap virtual address range > > + * @size: size of the vmemmap virtual address range > > + */ > > +void vmemmap_remap_free(unsigned long start, unsigned long size) > > +{ > > + unsigned long end = start + size; > > + LIST_HEAD(vmemmap_pages); > > + > > + struct vmemmap_remap_walk walk = { > > + .remap_pte = vmemmap_remap_pte, > > + .vmemmap_pages = &vmemmap_pages, > > + }; > > As stated above, this would become: > > void vmemmap_remap_free(unsigned long start, unsigned long end, > usigned long reuse) > { > LIST_HEAD(vmemmap_pages); > struct vmemmap_remap_walk walk = { > .reuse_addr = reuse, > .remap_pte = vmemmap_remap_pte, > .vmemmap_pages = &vmemmap_pages, > }; > > You might have had your reasons to do this way, but this looks more natural > to me, with the plus that callers of vmemmap_remap_free can specify > what they want to remap. Should we add a BUG_ON in vmemmap_remap_free() for now? BUG_ON(reuse != start + PAGE_SIZE); > > > -- > Oscar Salvador > SUSE L3
On Mon, Dec 21, 2020 at 07:25:15PM +0800, Muchun Song wrote: > Should we add a BUG_ON in vmemmap_remap_free() for now? > > BUG_ON(reuse != start + PAGE_SIZE); I do not think we have to, plus we would be BUG_ing for some specific use case in "generic" function. Maybe others think different though.
On Mon, Dec 21, 2020 at 9:44 PM Oscar Salvador <osalvador@suse.de> wrote: > > On Mon, Dec 21, 2020 at 07:25:15PM +0800, Muchun Song wrote: > > > Should we add a BUG_ON in vmemmap_remap_free() for now? > > > > BUG_ON(reuse != start + PAGE_SIZE); > > I do not think we have to, plus we would be BUG_ing for some specific use > case in "generic" function. The vmemmap_remap_range() walks page table range [start, end), if reuse is equal to (start + PAGE_SIZE), the range can adjust to [start - PAGE_SIZE, end). But if not, we need some work to implement the "generic" function. - adjust range to [min(start, reuse), end) and call vmemmap_remap_rangeand which skip the hole which is [reuse + PAGE_SIZE, start) or [end, reuse). - call vmemmap_remap_range(reuse, reuse + PAGE_SIZE) to get the reuse page.Then, call vmemmap_remap_range(start, end) again to remap. Which one do you prefer? > Maybe others think different though. > > -- > Oscar Salvador > SUSE L3 -- Yours, Muchun
On Mon, Dec 21, 2020 at 11:52:30PM +0800, Muchun Song wrote: > On Mon, Dec 21, 2020 at 9:44 PM Oscar Salvador <osalvador@suse.de> wrote: > > > > On Mon, Dec 21, 2020 at 07:25:15PM +0800, Muchun Song wrote: > > > > > Should we add a BUG_ON in vmemmap_remap_free() for now? > > > > > > BUG_ON(reuse != start + PAGE_SIZE); > > > > I do not think we have to, plus we would be BUG_ing for some specific use > > case in "generic" function. > > The vmemmap_remap_range() walks page table range [start, end), > if reuse is equal to (start + PAGE_SIZE), the range can adjust to > [start - PAGE_SIZE, end). But if not, we need some work to > implement the "generic" function. > > - adjust range to [min(start, reuse), end) and call > vmemmap_remap_rangeand which skip the hole > which is [reuse + PAGE_SIZE, start) or [end, reuse). > - call vmemmap_remap_range(reuse, reuse + PAGE_SIZE) > to get the reuse page.Then, call vmemmap_remap_range(start, end) > again to remap. > > Which one do you prefer? I would not overcomplicate things at this stage. Just follow my sugestion and add a BUG_ON as you said, that might be the easier way now. We can overthink this in the future when some other usecases come around, right? Thanks
On 12/21/20 10:00 AM, Oscar Salvador wrote: > On Mon, Dec 21, 2020 at 11:52:30PM +0800, Muchun Song wrote: >> On Mon, Dec 21, 2020 at 9:44 PM Oscar Salvador <osalvador@suse.de> wrote: >>> >>> On Mon, Dec 21, 2020 at 07:25:15PM +0800, Muchun Song wrote: >>> >>>> Should we add a BUG_ON in vmemmap_remap_free() for now? >>>> >>>> BUG_ON(reuse != start + PAGE_SIZE); >>> >>> I do not think we have to, plus we would be BUG_ing for some specific use >>> case in "generic" function. >> >> The vmemmap_remap_range() walks page table range [start, end), >> if reuse is equal to (start + PAGE_SIZE), the range can adjust to >> [start - PAGE_SIZE, end). But if not, we need some work to >> implement the "generic" function. >> >> - adjust range to [min(start, reuse), end) and call >> vmemmap_remap_rangeand which skip the hole >> which is [reuse + PAGE_SIZE, start) or [end, reuse). >> - call vmemmap_remap_range(reuse, reuse + PAGE_SIZE) >> to get the reuse page.Then, call vmemmap_remap_range(start, end) >> again to remap. >> >> Which one do you prefer? > > I would not overcomplicate things at this stage. > Just follow my sugestion and add a BUG_ON as you said, that might be the > easier way now. > We can overthink this in the future when some other usecases come > around, right? I too like the suggestion of specifying the reuse address. It is better than than relying on 'start + PAGE_SIZE' or even 'start - PAGE_SIZE' as in the previous version. However, if we do allow this then we can not allow just any reuse address without complicating the code. Why? Because the code would also need to do a page table walk to validate reuse addr. In the current code, that is handled as long as reuse address is part of the range we are walking. I see two assumptions in the current code: 1) reuse address is part of the range 2) remap_pte is found 'first' in table walk before we start remapping. In the current use case, the 'reuse addr' is always going to be the start of the page table range we walk. Correct? If so, perhaps it would just be simpler for now to have range be [reuse addr, last page mapped to reuse addr]. IOW, always have the range start with reuse addr and all subsequent pages in the range are mapped to reuse addr. I know it is not very generic or flexible. But, it might be easier to understand than the adjustments (+- PAGE_SIZE) currently being made in the code. Just a thought.
On Tue, Dec 22, 2020 at 2:00 AM Oscar Salvador <osalvador@suse.de> wrote: > > On Mon, Dec 21, 2020 at 11:52:30PM +0800, Muchun Song wrote: > > On Mon, Dec 21, 2020 at 9:44 PM Oscar Salvador <osalvador@suse.de> wrote: > > > > > > On Mon, Dec 21, 2020 at 07:25:15PM +0800, Muchun Song wrote: > > > > > > > Should we add a BUG_ON in vmemmap_remap_free() for now? > > > > > > > > BUG_ON(reuse != start + PAGE_SIZE); > > > > > > I do not think we have to, plus we would be BUG_ing for some specific use > > > case in "generic" function. > > > > The vmemmap_remap_range() walks page table range [start, end), > > if reuse is equal to (start + PAGE_SIZE), the range can adjust to > > [start - PAGE_SIZE, end). But if not, we need some work to > > implement the "generic" function. > > > > - adjust range to [min(start, reuse), end) and call > > vmemmap_remap_rangeand which skip the hole > > which is [reuse + PAGE_SIZE, start) or [end, reuse). > > - call vmemmap_remap_range(reuse, reuse + PAGE_SIZE) > > to get the reuse page.Then, call vmemmap_remap_range(start, end) > > again to remap. > > > > Which one do you prefer? > > I would not overcomplicate things at this stage. > Just follow my sugestion and add a BUG_ON as you said, that might be the > easier way now. > We can overthink this in the future when some other usecases come > around, right? You are right. Will do this. Thanks for your suggestions. > > Thanks > > > -- > Oscar Salvador > SUSE L3
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h index 4ed6dee1adc9..4c80b7be1771 100644 --- a/include/linux/bootmem_info.h +++ b/include/linux/bootmem_info.h @@ -2,7 +2,7 @@ #ifndef __LINUX_BOOTMEM_INFO_H #define __LINUX_BOOTMEM_INFO_H -#include <linux/mmzone.h> +#include <linux/mm.h> /* * Types for free bootmem stored in page->lru.next. These have to be in @@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat); void get_page_bootmem(unsigned long info, struct page *page, unsigned long type); void put_page_bootmem(struct page *page); + +/* + * Any memory allocated via the memblock allocator and not via the + * buddy will be marked reserved already in the memmap. For those + * pages, we can call this function to free it to buddy allocator. + */ +static inline void free_bootmem_page(struct page *page) +{ + unsigned long magic = (unsigned long)page->freelist; + + /* + * The reserve_bootmem_region sets the reserved flag on bootmem + * pages. + */ + VM_WARN_ON(page_ref_count(page) != 2); + + if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) + put_page_bootmem(page); + else + VM_WARN_ON(1); +} #else static inline void register_page_bootmem_info_node(struct pglist_data *pgdat) { @@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page, unsigned long type) { } + +static inline void free_bootmem_page(struct page *page) +{ +} #endif #endif /* __LINUX_BOOTMEM_INFO_H */ diff --git a/include/linux/mm.h b/include/linux/mm.h index eabe7d9f80d8..0ecad1a41190 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3005,6 +3005,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip) } #endif +void vmemmap_remap_free(unsigned long start, unsigned long size); + void *sparse_buffer_alloc(unsigned long size); struct page * __populate_section_memmap(unsigned long pfn, unsigned long nr_pages, int nid, struct vmem_altmap *altmap); diff --git a/mm/Makefile b/mm/Makefile index ed4b88fa0f5e..056801d8daae 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP) += frontswap.o obj-$(CONFIG_ZSWAP) += zswap.o obj-$(CONFIG_HAS_DMA) += dmapool.o obj-$(CONFIG_HUGETLBFS) += hugetlb.o +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) += hugetlb_vmemmap.o obj-$(CONFIG_NUMA) += mempolicy.o obj-$(CONFIG_SPARSEMEM) += sparse.o obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 1f3bf1710b66..140135fc8113 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -42,6 +42,7 @@ #include <linux/userfaultfd_k.h> #include <linux/page_owner.h> #include "internal.h" +#include "hugetlb_vmemmap.h" int hugetlb_max_hstate __read_mostly; unsigned int default_hstate_idx; @@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page) static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) { + free_huge_page_vmemmap(h, page); + INIT_LIST_HEAD(&page->lru); set_compound_page_dtor(page, HUGETLB_PAGE_DTOR); set_hugetlb_cgroup(page, NULL); diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c new file mode 100644 index 000000000000..5cf7b6122c86 --- /dev/null +++ b/mm/hugetlb_vmemmap.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song <songmuchun@bytedance.com> + * + * The struct page structures (page structs) are used to describe a physical + * page frame. By default, there is a one-to-one mapping from a page frame to + * it's corresponding page struct. + * + * The HugeTLB pages consist of multiple base page size pages and is supported + * by many architectures. See hugetlbpage.rst in the Documentation directory + * for more details. On the x86-64 architecture, HugeTLB pages of size 2MB and + * 1GB are currently supported. Since the base page size on x86 is 4KB, a 2MB + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of + * 4096 base pages. For each base page, there is a corresponding page struct. + * + * Within the HugeTLB subsystem, only the first 4 page structs are used to + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER + * provides this upper limit. The only 'useful' information in the remaining + * page structs is the compound_head field, and this field is the same for all + * tail pages. + * + * By removing redundant page structs for HugeTLB pages, memory can returned to + * the buddy allocator for other uses. + * + * Different architectures support different HugeTLB pages. For example, the + * following table is the HugeTLB page size supported by x86 and arm64 + * architectures. Becasue arm64 supports 4k, 16k, and 64k base pages and + * supports contiguous entries, so it supports many kinds of sizes of HugeTLB + * page. + * + * +--------------+-----------+-----------------------------------------------+ + * | Architecture | Page Size | HugeTLB Page Size | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * | x86-64 | 4KB | 2MB | 1GB | | | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * | | 4KB | 64KB | 2MB | 32MB | 1GB | + * | +-----------+-----------+-----------+-----------+-----------+ + * | arm64 | 16KB | 2MB | 32MB | 1GB | | + * | +-----------+-----------+-----------+-----------+-----------+ + * | | 64KB | 2MB | 512MB | 16GB | | + * +--------------+-----------+-----------+-----------+-----------+-----------+ + * + * When the system boot up, every HugeTLB page has more than one struct page + * structs whose size is (unit: pages): + * + * struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + * + * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size + * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following + * relationship. + * + * HugeTLB_Size = n * PAGE_SIZE + * + * Then, + * + * struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + * = n * sizeof(struct page) / PAGE_SIZE + * + * We can use huge mapping at the pud/pmd level for the HugeTLB page. + * + * For the HugeTLB page of the pmd level mapping, then + * + * struct_size = n * sizeof(struct page) / PAGE_SIZE + * = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE + * = sizeof(struct page) / sizeof(pte_t) + * = 64 / 8 + * = 8 (pages) + * + * Where n is how many pte entries which one page can contains. So the value of + * n is (PAGE_SIZE / sizeof(pte_t)). + * + * This optimization only supports 64-bit system, so the value of sizeof(pte_t) + * is 8. And this optimization also applicable only when the size of struct page + * is a power of two. In most cases, the size of struct page is 64 (e.g. x86-64 + * and arm64). So if we use pmd level mapping for a HugeTLB page, the size of + * struct page structs of it is 8 pages whose size depends on the size of the + * base page. + * + * For the HugeTLB page of the pud level mapping, then + * + * struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) + * = PAGE_SIZE / 8 * 8 (pages) + * = PAGE_SIZE (pages) + * + * Where the struct_size(pmd) is the size of the struct page structs of a + * HugeTLB page of the pmd level mapping. + * + * Next, we take the pmd level mapping of the HugeTLB page as an example to + * show the internal implementation of this optimization. There are 8 pages + * struct page structs associated with a HugeTLB page which is pmd mapped. + * + * Here is how things look before optimization. + * + * HugeTLB struct pages(8 pages) page frame(8 pages) + * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + * | | | 0 | -------------> | 0 | + * | | +-----------+ +-----------+ + * | | | 1 | -------------> | 1 | + * | | +-----------+ +-----------+ + * | | | 2 | -------------> | 2 | + * | | +-----------+ +-----------+ + * | | | 3 | -------------> | 3 | + * | | +-----------+ +-----------+ + * | | | 4 | -------------> | 4 | + * | PMD | +-----------+ +-----------+ + * | level | | 5 | -------------> | 5 | + * | mapping | +-----------+ +-----------+ + * | | | 6 | -------------> | 6 | + * | | +-----------+ +-----------+ + * | | | 7 | -------------> | 7 | + * | | +-----------+ +-----------+ + * | | + * | | + * | | + * +-----------+ + * + * The value of page->compound_head is the same for all tail pages. The first + * page of page structs (page 0) associated with the HugeTLB page contains the 4 + * page structs necessary to describe the HugeTLB. The only use of the remaining + * pages of page structs (page 1 to page 7) is to point to page->compound_head. + * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs + * will be used for each HugeTLB page. This will allow us to free the remaining + * 6 pages to the buddy allocator. + * + * Here is how things look after remapping. + * + * HugeTLB struct pages(8 pages) page frame(8 pages) + * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + * | | | 0 | -------------> | 0 | + * | | +-----------+ +-----------+ + * | | | 1 | -------------> | 1 | + * | | +-----------+ +-----------+ + * | | | 2 | ----------------^ ^ ^ ^ ^ ^ + * | | +-----------+ | | | | | + * | | | 3 | ------------------+ | | | | + * | | +-----------+ | | | | + * | | | 4 | --------------------+ | | | + * | PMD | +-----------+ | | | + * | level | | 5 | ----------------------+ | | + * | mapping | +-----------+ | | + * | | | 6 | ------------------------+ | + * | | +-----------+ | + * | | | 7 | --------------------------+ + * | | +-----------+ + * | | + * | | + * | | + * +-----------+ + * + * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for + * vmemmap pages and restore the previous mapping relationship. + * + * For the HugeTLB page of the pud level mapping. It is similar to the former. + * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages. + * + * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures + * (e.g. aarch64) provides a contiguous bit in the translation table entries + * that hints to the MMU to indicate that it is one of a contiguous set of + * entries that can be cached in a single TLB entry. + * + * The contiguous bit is used to increase the mapping size at the pmd and pte + * (last) level. So this type of HugeTLB page can be optimized only when its + * size of the struct page structs is greater than 2 pages. + */ +#include "hugetlb_vmemmap.h" + +/* + * There are a lot of struct page structures associated with each HugeTLB page. + * For tail pages, the value of compound_head is the same. So we can reuse first + * page of tail page structures. We map the virtual addresses of the remaining + * pages of tail page structures to the first tail page struct, and then free + * these page frames. Therefore, we need to reserve two pages as vmemmap areas. + */ +#define RESERVE_VMEMMAP_NR 2U +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT) + +/* + * How many vmemmap pages associated with a HugeTLB page that can be freed + * to the buddy allocator. + * + * Todo: Returns zero for now, which means the feature is disabled. We will + * enable it once all the infrastructure is there. + */ +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h) +{ + return 0; +} + +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h) +{ + return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT; +} + +void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ + unsigned long vmemmap_addr = (unsigned long)head; + + if (!free_vmemmap_pages_per_hpage(h)) + return; + + vmemmap_remap_free(vmemmap_addr + RESERVE_VMEMMAP_SIZE, + free_vmemmap_pages_size_per_hpage(h)); +} diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h new file mode 100644 index 000000000000..6923f03534d5 --- /dev/null +++ b/mm/hugetlb_vmemmap.h @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Free some vmemmap pages of HugeTLB + * + * Copyright (c) 2020, Bytedance. All rights reserved. + * + * Author: Muchun Song <songmuchun@bytedance.com> + */ +#ifndef _LINUX_HUGETLB_VMEMMAP_H +#define _LINUX_HUGETLB_VMEMMAP_H +#include <linux/hugetlb.h> + +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP +void free_huge_page_vmemmap(struct hstate *h, struct page *head); +#else +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head) +{ +} +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */ +#endif /* _LINUX_HUGETLB_VMEMMAP_H */ diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 16183d85a7d5..6cf2fdfb81e9 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -27,8 +27,185 @@ #include <linux/spinlock.h> #include <linux/vmalloc.h> #include <linux/sched.h> +#include <linux/pgtable.h> +#include <linux/bootmem_info.h> + #include <asm/dma.h> #include <asm/pgalloc.h> +#include <asm/tlbflush.h> + +/* + * vmemmap_remap_walk - walk vmemmap page table + * + * @remap_pte: called for each non-empty PTE (lowest-level) entry. + * @reuse_page: the page which is reused for the tail vmemmap pages. + * @reuse_addr: the virtual address of the @reuse_page page. + * @vmemmap_pages: the list head of the vmemmap pages that can be freed. + */ +struct vmemmap_remap_walk { + void (*remap_pte)(pte_t *pte, unsigned long addr, + struct vmemmap_remap_walk *walk); + struct page *reuse_page; + unsigned long reuse_addr; + struct list_head *vmemmap_pages; +}; + +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pte_t *pte; + + pte = pte_offset_kernel(pmd, addr); + + if (walk->reuse_addr == addr) { + BUG_ON(pte_none(*pte)); + walk->reuse_page = pte_page(*pte++); + addr += PAGE_SIZE; + } + + for (; addr != end; addr += PAGE_SIZE, pte++) { + BUG_ON(pte_none(*pte)); + + walk->remap_pte(pte, addr, walk); + } +} + +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pmd_t *pmd; + unsigned long next; + + pmd = pmd_offset(pud, addr); + do { + BUG_ON(pmd_none(*pmd)); + + next = pmd_addr_end(addr, end); + vmemmap_pte_range(pmd, addr, next, walk); + } while (pmd++, addr = next, addr != end); +} + +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + pud_t *pud; + unsigned long next; + + pud = pud_offset(p4d, addr); + do { + BUG_ON(pud_none(*pud)); + + next = pud_addr_end(addr, end); + vmemmap_pmd_range(pud, addr, next, walk); + } while (pud++, addr = next, addr != end); +} + +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr, + unsigned long end, + struct vmemmap_remap_walk *walk) +{ + p4d_t *p4d; + unsigned long next; + + p4d = p4d_offset(pgd, addr); + do { + BUG_ON(p4d_none(*p4d)); + + next = p4d_addr_end(addr, end); + vmemmap_pud_range(p4d, addr, next, walk); + } while (p4d++, addr = next, addr != end); +} + +static void vmemmap_remap_range(unsigned long start, unsigned long end, + struct vmemmap_remap_walk *walk) +{ + unsigned long addr = start - PAGE_SIZE; + unsigned long next; + pgd_t *pgd; + + VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE)); + VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE)); + + walk->reuse_page = NULL; + walk->reuse_addr = addr; + + pgd = pgd_offset_k(addr); + do { + BUG_ON(pgd_none(*pgd)); + + next = pgd_addr_end(addr, end); + vmemmap_p4d_range(pgd, addr, next, walk); + } while (pgd++, addr = next, addr != end); + + flush_tlb_kernel_range(start, end); +} + +/* + * Free a vmemmap page. A vmemmap page can be allocated from the memblock + * allocator or buddy allocator. If the PG_reserved flag is set, it means + * that it allocated from the memblock allocator, just free it via the + * free_bootmem_page(). Otherwise, use __free_page(). + */ +static inline void free_vmemmap_page(struct page *page) +{ + if (PageReserved(page)) + free_bootmem_page(page); + else + __free_page(page); +} + +/* Free a list of the vmemmap pages */ +static void free_vmemmap_page_list(struct list_head *list) +{ + struct page *page, *next; + + list_for_each_entry_safe(page, next, list, lru) { + list_del(&page->lru); + free_vmemmap_page(page); + } +} + +static void vmemmap_remap_pte(pte_t *pte, unsigned long addr, + struct vmemmap_remap_walk *walk) +{ + /* + * Make the tail pages are mapped with read-only to catch + * illegal write operation to the tail pages. + */ + pgprot_t pgprot = PAGE_KERNEL_RO; + pte_t entry = mk_pte(walk->reuse_page, pgprot); + struct page *page; + + page = pte_page(*pte); + list_add(&page->lru, walk->vmemmap_pages); + + set_pte_at(&init_mm, addr, pte, entry); +} + +/** + * vmemmap_remap_free - remap the vmemmap virtual address range + * [start, start + size) to the page which + * [start - PAGE_SIZE, start) is mapped, + * then free vmemmap pages. + * @start: start address of the vmemmap virtual address range + * @size: size of the vmemmap virtual address range + */ +void vmemmap_remap_free(unsigned long start, unsigned long size) +{ + unsigned long end = start + size; + LIST_HEAD(vmemmap_pages); + + struct vmemmap_remap_walk walk = { + .remap_pte = vmemmap_remap_pte, + .vmemmap_pages = &vmemmap_pages, + }; + + vmemmap_remap_range(start, end, &walk); + free_vmemmap_page_list(&vmemmap_pages); +} /* * Allocate a block of memory to be used to back the virtual memory map
Every HugeTLB has more than one struct page structure. We __know__ that we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures to store metadata associated with each HugeTLB. There are a lot of struct page structures associated with each HugeTLB page. For tail pages, the value of compound_head is the same. So we can reuse first page of tail page structures. We map the virtual addresses of the remaining pages of tail page structures to the first tail page struct, and then free these page frames. Therefore, we need to reserve two pages as vmemmap areas. When we allocate a HugeTLB page from the buddy, we can free some vmemmap pages associated with each HugeTLB page. It is more appropriate to do it in the prep_new_huge_page(). The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap pages associated with a HugeTLB page can be freed, returns zero for now, which means the feature is disabled. We will enable it once all the infrastructure is there. Signed-off-by: Muchun Song <songmuchun@bytedance.com> --- include/linux/bootmem_info.h | 27 +++++- include/linux/mm.h | 2 + mm/Makefile | 1 + mm/hugetlb.c | 3 + mm/hugetlb_vmemmap.c | 207 +++++++++++++++++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.h | 20 +++++ mm/sparse-vmemmap.c | 177 ++++++++++++++++++++++++++++++++++++ 7 files changed, 436 insertions(+), 1 deletion(-) create mode 100644 mm/hugetlb_vmemmap.c create mode 100644 mm/hugetlb_vmemmap.h