| Message ID | 20191203132239.5910-2-thomas_os@shipmail.org (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Huge page-table entries for TTM |
On Tue, 3 Dec 2019 14:22:32 +0100 Thomas Hellström (VMware) <thomas_os@shipmail.org> wrote:

> From: Thomas Hellstrom <thellstrom@vmware.com>
>
> For VM_PFNMAP and VM_MIXEDMAP vmas that want to support transhuge pages
> and -page table entries, introduce vma_is_special_huge() that takes the
> same codepaths as vma_is_dax().
>
> The use of "special" follows the definition in memory.c, vm_normal_page():
> "Special" mappings do not wish to be associated with a "struct page"
> (either it doesn't exist, or it exists but they don't want to touch it)
>
> For PAGE_SIZE pages, "special" is determined per page table entry to be
> able to deal with COW pages. But since we don't have huge COW pages,
> we can classify a vma as either "special huge" or "normal huge".
>
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2822,6 +2822,12 @@ extern long copy_huge_page_from_user(struct page *dst_page,
>                                  const void __user *usr_src,
>                                  unsigned int pages_per_huge_page,
>                                  bool allow_pagefault);
> +static inline bool vma_is_special_huge(struct vm_area_struct *vma)
> +{
> +        return vma_is_dax(vma) || (vma->vm_file &&
> +                                   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
> +}

Some documentation would be nice. Not only what it does, but why it does
it. ie, what is the *meaning* of vma_is_special_huge(vma)==true?
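For reference, a kernel-doc comment along the lines being asked for might look like this; a sketch of possible wording, not necessarily what was merged:

```c
/**
 * vma_is_special_huge - Are transhuge page-table entries considered special?
 * @vma: Pointer to the struct vm_area_struct to consider
 *
 * Whether transhuge page-table entries are considered "special" following
 * the definition in vm_normal_page(): the entries map memory that should
 * not be touched through its struct page (if one exists at all), so the
 * huge zap and split paths must skip struct page dirty- and refcount
 * handling, just as they already do for DAX vmas.
 *
 * Return: true if transhuge page-table entries should be considered
 * special, false otherwise.
 */
static inline bool vma_is_special_huge(struct vm_area_struct *vma)
{
        return vma_is_dax(vma) || (vma->vm_file &&
                                   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
}
```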
```diff
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0133542d69c9..886a1f899887 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2822,6 +2822,12 @@ extern long copy_huge_page_from_user(struct page *dst_page,
                                 const void __user *usr_src,
                                 unsigned int pages_per_huge_page,
                                 bool allow_pagefault);
+static inline bool vma_is_special_huge(struct vm_area_struct *vma)
+{
+        return vma_is_dax(vma) || (vma->vm_file &&
+                                   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
+}
+
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */

 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 41a0fbddc96b..f8d24fc3f4df 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1789,7 +1789,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
         orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
                                                 tlb->fullmm);
         tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
-        if (vma_is_dax(vma)) {
+        if (vma_is_special_huge(vma)) {
                 if (arch_needs_pgtable_deposit())
                         zap_deposited_table(tlb->mm, pmd);
                 spin_unlock(ptl);
@@ -2053,7 +2053,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
          */
         pudp_huge_get_and_clear_full(tlb->mm, addr, pud, tlb->fullmm);
         tlb_remove_pud_tlb_entry(tlb, pud, addr);
-        if (vma_is_dax(vma)) {
+        if (vma_is_special_huge(vma)) {
                 spin_unlock(ptl);
                 /* No zero page support yet */
         } else {
@@ -2162,7 +2162,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
          */
         if (arch_needs_pgtable_deposit())
                 zap_deposited_table(mm, pmd);
-        if (vma_is_dax(vma))
+        if (vma_is_special_huge(vma))
                 return;
         page = pmd_page(_pmd);
         if (!PageDirty(page) && pmd_dirty(_pmd))
```
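To make the classification concrete: a driver that maps memory without normal struct-page semantics typically sets VM_PFNMAP (or VM_MIXEDMAP) in its mmap handler, and because the vma was created by mmap() on a file, vma->vm_file is non-NULL, so vma_is_special_huge() returns true for it. Below is a minimal sketch under those assumptions; my_drv_mmap and my_drv_vm_ops are hypothetical names, not from the patch:

```c
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical vm_ops; a real driver would supply fault/huge_fault here. */
static const struct vm_operations_struct my_drv_vm_ops;

static int my_drv_mmap(struct file *filp, struct vm_area_struct *vma)
{
        /*
         * VM_PFNMAP: the pages behind this mapping must not be touched
         * through a struct page.  vma->vm_file was already set by the
         * mmap() path, so vma_is_special_huge(vma) is now true, and
         * zap_huge_pmd()/zap_huge_pud() will take the same shortcut
         * they already take for DAX vmas, skipping the struct page
         * dirty- and refcount handling.
         */
        vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
        vma->vm_ops = &my_drv_vm_ops;
        return 0;
}
```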