| Message ID | 20220624173656.2033256-7-jthoughton@google.com (mailing list archive) |
|---|---|
| State | New |
| Series | hugetlb: Introduce HugeTLB high-granularity mapping |
On 24/06/22 11:06 pm, James Houghton wrote:
> This makes them usable for HugeTLB page table freeing operations.
> After HugeTLB high-granularity mapping, the page table for a HugeTLB VMA
> can get more complex, and these functions handle freeing page tables
> generally.
>
> Signed-off-by: James Houghton <jthoughton@google.com>

reviewed-by: manish.mishra@nutanix.com

> ---
>  include/linux/mm.h | 7 +++++++
>  mm/memory.c        | 8 ++++----
>  2 files changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index bc8f326be0ce..07f5da512147 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1847,6 +1847,13 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
>
>  struct mmu_notifier_range;
>
> +void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr);
> +void free_pmd_range(struct mmu_gather *tlb, pud_t *pud, unsigned long addr,
> +		    unsigned long end, unsigned long floor, unsigned long ceiling);
> +void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d, unsigned long addr,
> +		    unsigned long end, unsigned long floor, unsigned long ceiling);
> +void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd, unsigned long addr,
> +		    unsigned long end, unsigned long floor, unsigned long ceiling);
>  void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
>  		unsigned long end, unsigned long floor, unsigned long ceiling);
>  int
> diff --git a/mm/memory.c b/mm/memory.c
> index 7a089145cad4..bb3b9b5b94fb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -227,7 +227,7 @@ static void check_sync_rss_stat(struct task_struct *task)
>   * Note: this doesn't free the actual pages themselves. That
>   * has been handled earlier when unmapping all the memory regions.
>   */
> -static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
> +void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
>  			   unsigned long addr)
>  {
>  	pgtable_t token = pmd_pgtable(*pmd);
> @@ -236,7 +236,7 @@ static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
>  	mm_dec_nr_ptes(tlb->mm);
>  }
>
> -static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
> +inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
>  				unsigned long addr, unsigned long end,
>  				unsigned long floor, unsigned long ceiling)
>  {
> @@ -270,7 +270,7 @@ static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
>  	mm_dec_nr_pmds(tlb->mm);
>  }
>
> -static inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
> +inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
>  				unsigned long addr, unsigned long end,
>  				unsigned long floor, unsigned long ceiling)
>  {
> @@ -304,7 +304,7 @@ static inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
>  	mm_dec_nr_puds(tlb->mm);
>  }
>
> -static inline void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd,
> +inline void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd,
>  				unsigned long addr, unsigned long end,
>  				unsigned long floor, unsigned long ceiling)
>  {
On 06/24/22 17:36, James Houghton wrote:
> This makes them usable for HugeTLB page table freeing operations.
> After HugeTLB high-granularity mapping, the page table for a HugeTLB VMA
> can get more complex, and these functions handle freeing page tables
> generally.
>

Hmmmm?

free_pgd_range is not generally called directly for hugetlb mappings.
There is a wrapper hugetlb_free_pgd_range which can have architecture
specific implementations. It makes me wonder if these lower level
routines can be directly used on hugetlb mappings. My 'guess' is that any
such details will be hidden in the callers. Suspect this will become clear
in later patches.
On Tue, Jun 28, 2022 at 1:35 PM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 06/24/22 17:36, James Houghton wrote:
> > This makes them usable for HugeTLB page table freeing operations.
> > After HugeTLB high-granularity mapping, the page table for a HugeTLB VMA
> > can get more complex, and these functions handle freeing page tables
> > generally.
> >
>
> Hmmmm?
>
> free_pgd_range is not generally called directly for hugetlb mappings.
> There is a wrapper hugetlb_free_pgd_range which can have architecture
> specific implementations. It makes me wonder if these lower level
> routines can be directly used on hugetlb mappings. My 'guess' is that any
> such details will be hidden in the callers. Suspect this will become clear
> in later patches.

Thanks for pointing out hugetlb_free_pgd_range. I think I'll need to
change how freeing HugeTLB HGM PTEs is written, because, as written, we
don't do any architecture-specific things. I think I have a good idea
of what I need to do, probably something like this: make
`hugetlb_free_range` overridable, and then provide an implementation for
it for all the architectures that HAVE_ARCH_HUGETLB_FREE_PGD_RANGE.

Making the regular `free_p?d_range` functions public *does* help with
implementing the regular/general `hugetlb_free_range` function, though,
so I think this commit is still useful.
- James

> --
> Mike Kravetz
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..07f5da512147 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1847,6 +1847,13 @@ void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 
 struct mmu_notifier_range;
 
+void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr);
+void free_pmd_range(struct mmu_gather *tlb, pud_t *pud, unsigned long addr,
+		    unsigned long end, unsigned long floor, unsigned long ceiling);
+void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d, unsigned long addr,
+		    unsigned long end, unsigned long floor, unsigned long ceiling);
+void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd, unsigned long addr,
+		    unsigned long end, unsigned long floor, unsigned long ceiling);
 void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
diff --git a/mm/memory.c b/mm/memory.c
index 7a089145cad4..bb3b9b5b94fb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -227,7 +227,7 @@ static void check_sync_rss_stat(struct task_struct *task)
  * Note: this doesn't free the actual pages themselves. That
  * has been handled earlier when unmapping all the memory regions.
  */
-static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
+void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
 			   unsigned long addr)
 {
 	pgtable_t token = pmd_pgtable(*pmd);
@@ -236,7 +236,7 @@ static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
 	mm_dec_nr_ptes(tlb->mm);
 }
 
-static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 				unsigned long addr, unsigned long end,
 				unsigned long floor, unsigned long ceiling)
 {
@@ -270,7 +270,7 @@ static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 	mm_dec_nr_pmds(tlb->mm);
 }
 
-static inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
+inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
 				unsigned long addr, unsigned long end,
 				unsigned long floor, unsigned long ceiling)
 {
@@ -304,7 +304,7 @@ static inline void free_pud_range(struct mmu_gather *tlb, p4d_t *p4d,
 	mm_dec_nr_puds(tlb->mm);
 }
 
-static inline void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd,
+inline void free_p4d_range(struct mmu_gather *tlb, pgd_t *pgd,
 				unsigned long addr, unsigned long end,
 				unsigned long floor, unsigned long ceiling)
 {
This makes them usable for HugeTLB page table freeing operations.
After HugeTLB high-granularity mapping, the page table for a HugeTLB VMA
can get more complex, and these functions handle freeing page tables
generally.

Signed-off-by: James Houghton <jthoughton@google.com>
---
 include/linux/mm.h | 7 +++++++
 mm/memory.c        | 8 ++++----
 2 files changed, 11 insertions(+), 4 deletions(-)