Message ID | 20220704013312.2415700-4-naoya.horiguchi@linux.dev
---|---
State | New
Series | mm, hwpoison: enable 1GB hugepage support (v4)
On 2022/7/4 9:33, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
>
> follow_pud_mask() does not support non-present pud entries now. As far as
> I tested on an x86_64 server, follow_pud_mask() still simply returns
> no_page_table() for a non-present pud entry due to pud_bad(), so no severe
> user-visible effect should happen. But generally we should call
> follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
>
> Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
> The changes are similar to previous works for pmd entries, commit e66f17ff7177
> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
>
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> ---
> v2 -> v3:
> - fixed typos in subject and description,
> - added comment on pud_huge(),
> - added comment about fallback for hwpoisoned entry,
> - updated initial check about FOLL_{PIN,GET} flags.
> ---
>  arch/x86/mm/hugetlbpage.c |  8 +++++++-
>  mm/hugetlb.c              | 32 ++++++++++++++++++++++++++++++--
>  2 files changed, 37 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> index 509408da0da1..6b3033845c6d 100644
> --- a/arch/x86/mm/hugetlbpage.c
> +++ b/arch/x86/mm/hugetlbpage.c
> @@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
>  		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>  }
>
> +/*
> + * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
> + * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
> + * Otherwise, returns 0.
> + */
>  int pud_huge(pud_t pud)
>  {
> -	return !!(pud_val(pud) & _PAGE_PSE);
> +	return !pud_none(pud) &&
> +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>  }

Question: Is aarch64 supported too? It seems the aarch64 version of pud_huge
matches the requirement naturally to me.

Anyway, this patch looks good to me.

Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thanks.

>
>  #ifdef CONFIG_HUGETLB_PAGE
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ad621688370b..66bb39e0fce8 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6994,10 +6994,38 @@ struct page * __weak
>  follow_huge_pud(struct mm_struct *mm, unsigned long address,
>  		pud_t *pud, int flags)
>  {
> -	if (flags & (FOLL_GET | FOLL_PIN))
> +	struct page *page = NULL;
> +	spinlock_t *ptl;
> +	pte_t pte;
> +
> +	if (WARN_ON_ONCE(flags & FOLL_PIN))
>  		return NULL;
>
> -	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> +retry:
> +	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
> +	if (!pud_huge(*pud))
> +		goto out;
> +	pte = huge_ptep_get((pte_t *)pud);
> +	if (pte_present(pte)) {
> +		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> +		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
> +			page = NULL;
> +			goto out;
> +		}
> +	} else {
> +		if (is_hugetlb_entry_migration(pte)) {
> +			spin_unlock(ptl);
> +			__migration_entry_wait(mm, (pte_t *)pud, ptl);
> +			goto retry;
> +		}
> +		/*
> +		 * hwpoisoned entry is treated as no_page_table in
> +		 * follow_page_mask().
> +		 */
> +	}
> +out:
> +	spin_unlock(ptl);
> +	return page;
>  }
>
>  struct page * __weak
>
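For readers tracing the non-present branch of the new follow_huge_pud() above: it leans on the hugetlb swap-entry classifiers in mm/hugetlb.c. The sketch below is a paraphrase of what those helpers look like in kernels of this era, with comments added; it is illustrative, not quoted from this series:

```c
bool is_hugetlb_entry_migration(pte_t pte)
{
	swp_entry_t swp;

	/* Present or empty entries carry no swap-format payload. */
	if (huge_pte_none(pte) || pte_present(pte))
		return false;
	swp = pte_to_swp_entry(pte);
	/* True only for entries installed by hugepage migration. */
	return is_migration_entry(swp);
}

static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
{
	swp_entry_t swp;

	if (huge_pte_none(pte) || pte_present(pte))
		return false;
	swp = pte_to_swp_entry(pte);
	/* True only for entries installed by memory_failure(). */
	return is_hwpoison_entry(swp);
}
```

Only the migration case retries under the lock; a hwpoisoned entry deliberately falls through, so follow_huge_pud() returns NULL and follow_page_mask() handles it like no_page_table(), as the in-code comment says.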
On Tue, Jul 05, 2022 at 10:46:09AM +0800, Miaohe Lin wrote:
> On 2022/7/4 9:33, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@nec.com>
> >
> > follow_pud_mask() does not support non-present pud entries now. As far as
> > I tested on an x86_64 server, follow_pud_mask() still simply returns
> > no_page_table() for a non-present pud entry due to pud_bad(), so no severe
> > user-visible effect should happen. But generally we should call
> > follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
> >
> > Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
> > The changes are similar to previous works for pmd entries, commit e66f17ff7177
> > ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> > cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
> >
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> > ---
> > v2 -> v3:
> > - fixed typos in subject and description,
> > - added comment on pud_huge(),
> > - added comment about fallback for hwpoisoned entry,
> > - updated initial check about FOLL_{PIN,GET} flags.
> > ---
> >  arch/x86/mm/hugetlbpage.c |  8 +++++++-
> >  mm/hugetlb.c              | 32 ++++++++++++++++++++++++++++++--
> >  2 files changed, 37 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> > index 509408da0da1..6b3033845c6d 100644
> > --- a/arch/x86/mm/hugetlbpage.c
> > +++ b/arch/x86/mm/hugetlbpage.c
> > @@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
> >  		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> >  }
> >
> > +/*
> > + * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
> > + * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
> > + * Otherwise, returns 0.
> > + */
> >  int pud_huge(pud_t pud)
> >  {
> > -	return !!(pud_val(pud) & _PAGE_PSE);
> > +	return !pud_none(pud) &&
> > +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> >  }
>
> Question: Is aarch64 supported too? It seems the aarch64 version of pud_huge
> matches the requirement naturally to me.

I think that if pmd_huge() and pud_huge() return true for non-present
pmd/pud entries, that's OK. Otherwise we need an update to support the
new feature.

In aarch64, the bits in pte/pmd/pud related to {pmd,pud}_present() and
{pmd,pud}_huge() seem not to overlap with the bit range for swap type
and swap offset, so maybe that's fine. But I recommend testing on
arm64 if you have access to aarch64 servers.

>
> Anyway, this patch looks good to me.
>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>

Thank you for reviewing.

- Naoya Horiguchi
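The behavioral difference in the x86 pud_huge() check is easy to model outside the kernel. Here is a minimal userspace sketch (my own construction; _PAGE_PRESENT and _PAGE_PSE carry the real x86 bit values, while the swap-backed raw value is a made-up placeholder with both bits clear):

```c
#include <stdio.h>
#include <stdint.h>

/* x86 page table bits, as in arch/x86/include/asm/pgtable_types.h */
#define _PAGE_PRESENT 0x001ULL
#define _PAGE_PSE     0x080ULL

/* Toy model: a pud is just its raw 64-bit value here. */
typedef uint64_t pud_t;

static int pud_none(pud_t pud) { return pud == 0; }

/* Old predicate: fires only when the leaf (PSE) bit is set. */
static int pud_huge_old(pud_t pud)
{
	return !!(pud & _PAGE_PSE);
}

/* New predicate: any non-empty entry that is not a present table entry. */
static int pud_huge_new(pud_t pud)
{
	return !pud_none(pud) &&
	       (pud & (_PAGE_PRESENT | _PAGE_PSE)) != _PAGE_PRESENT;
}

int main(void)
{
	struct { const char *what; pud_t val; } cases[] = {
		{ "empty (pud_none)",              0x0 },
		{ "present page table",            _PAGE_PRESENT },
		{ "present 1GB leaf",              _PAGE_PRESENT | _PAGE_PSE },
		/* hypothetical swap-format payload; PRESENT and PSE clear */
		{ "non-present swap-backed entry", 0x1234500ULL },
	};

	for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-32s old=%d new=%d\n", cases[i].what,
		       pud_huge_old(cases[i].val), pud_huge_new(cases[i].val));
	return 0;
}
```

The last row is the interesting one: the old check returns 0 for a non-present (migration or hwpoisoned) entry whose encoding leaves _PAGE_PSE clear, so follow_pud_mask() would fall through to pud_bad() instead of dispatching to follow_huge_pud().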
On 2022/7/5 17:04, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Jul 05, 2022 at 10:46:09AM +0800, Miaohe Lin wrote:
>> On 2022/7/4 9:33, Naoya Horiguchi wrote:
>>> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
>>>
>>> follow_pud_mask() does not support non-present pud entries now. As far as
>>> I tested on an x86_64 server, follow_pud_mask() still simply returns
>>> no_page_table() for a non-present pud entry due to pud_bad(), so no severe
>>> user-visible effect should happen. But generally we should call
>>> follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
>>>
>>> Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
>>> The changes are similar to previous works for pmd entries, commit e66f17ff7177
>>> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
>>> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
>>>
>>> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
>>> ---
>>> v2 -> v3:
>>> - fixed typos in subject and description,
>>> - added comment on pud_huge(),
>>> - added comment about fallback for hwpoisoned entry,
>>> - updated initial check about FOLL_{PIN,GET} flags.
>>> ---
>>>  arch/x86/mm/hugetlbpage.c |  8 +++++++-
>>>  mm/hugetlb.c              | 32 ++++++++++++++++++++++++++++++--
>>>  2 files changed, 37 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
>>> index 509408da0da1..6b3033845c6d 100644
>>> --- a/arch/x86/mm/hugetlbpage.c
>>> +++ b/arch/x86/mm/hugetlbpage.c
>>> @@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
>>>  		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>>>  }
>>>
>>> +/*
>>> + * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
>>> + * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
>>> + * Otherwise, returns 0.
>>> + */
>>>  int pud_huge(pud_t pud)
>>>  {
>>> -	return !!(pud_val(pud) & _PAGE_PSE);
>>> +	return !pud_none(pud) &&
>>> +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
>>>  }
>>
>> Question: Is aarch64 supported too? It seems the aarch64 version of pud_huge
>> matches the requirement naturally to me.
>
> I think that if pmd_huge() and pud_huge() return true for non-present
> pmd/pud entries, that's OK. Otherwise we need an update to support the
> new feature.
>
> In aarch64, the bits in pte/pmd/pud related to {pmd,pud}_present() and
> {pmd,pud}_huge() seem not to overlap with the bit range for swap type
> and swap offset, so maybe that's fine. But I recommend testing on
> arm64 if you have access to aarch64 servers.

I see. This series is intended to enable 1GB hugepage support on x86. And
if someone wants to use it on other arches, it's better to have a test
first. ;)

Thanks.

>
>> Anyway, this patch looks good to me.
>>
>> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
>
> Thank you for reviewing.
>
> - Naoya Horiguchi
>
On 07/04/22 10:33, Naoya Horiguchi wrote:
> From: Naoya Horiguchi <naoya.horiguchi@nec.com>
>
> follow_pud_mask() does not support non-present pud entries now. As far as
> I tested on an x86_64 server, follow_pud_mask() still simply returns
> no_page_table() for a non-present pud entry due to pud_bad(), so no severe
> user-visible effect should happen. But generally we should call
> follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
>
> Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
> The changes are similar to previous works for pmd entries, commit e66f17ff7177
> ("mm/hugetlb: take page table lock in follow_huge_pmd()") and commit
> cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present hugepage").
>
> Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
> ---
> v2 -> v3:
> - fixed typos in subject and description,
> - added comment on pud_huge(),
> - added comment about fallback for hwpoisoned entry,
> - updated initial check about FOLL_{PIN,GET} flags.
> ---
>  arch/x86/mm/hugetlbpage.c |  8 +++++++-
>  mm/hugetlb.c              | 32 ++++++++++++++++++++++++++++++--
>  2 files changed, 37 insertions(+), 3 deletions(-)

Thanks!

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 509408da0da1..6b3033845c6d 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -30,9 +30,15 @@ int pmd_huge(pmd_t pmd)
 		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
+/*
+ * pud_huge() returns 1 if @pud is hugetlb related entry, that is normal
+ * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
+ * Otherwise, returns 0.
+ */
 int pud_huge(pud_t pud)
 {
-	return !!(pud_val(pud) & _PAGE_PSE);
+	return !pud_none(pud) &&
+		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ad621688370b..66bb39e0fce8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6994,10 +6994,38 @@ struct page * __weak
 follow_huge_pud(struct mm_struct *mm, unsigned long address,
 		pud_t *pud, int flags)
 {
-	if (flags & (FOLL_GET | FOLL_PIN))
+	struct page *page = NULL;
+	spinlock_t *ptl;
+	pte_t pte;
+
+	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;
 
-	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
+retry:
+	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
+	if (!pud_huge(*pud))
+		goto out;
+	pte = huge_ptep_get((pte_t *)pud);
+	if (pte_present(pte)) {
+		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
+		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
+			page = NULL;
+			goto out;
+		}
+	} else {
+		if (is_hugetlb_entry_migration(pte)) {
+			spin_unlock(ptl);
+			__migration_entry_wait(mm, (pte_t *)pud, ptl);
+			goto retry;
+		}
+		/*
+		 * hwpoisoned entry is treated as no_page_table in
+		 * follow_page_mask().
+		 */
+	}
+out:
+	spin_unlock(ptl);
+	return page;
 }
 
 struct page * __weak
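For reference, the caller-side gate that the commit message describes sits in follow_pud_mask() in mm/gup.c. The following is a heavily condensed paraphrase (not verbatim; the hugepd and devmap branches are elided) of how the dispatch worked around v5.19:

```c
/* Condensed paraphrase of mm/gup.c:follow_pud_mask(), circa v5.19 */
static struct page *follow_pud_mask(struct vm_area_struct *vma,
				    unsigned long address, p4d_t *p4dp,
				    unsigned int flags,
				    struct follow_page_context *ctx)
{
	pud_t *pud;
	struct page *page;
	struct mm_struct *mm = vma->vm_mm;

	pud = pud_offset(p4dp, address);
	if (pud_none(*pud))
		return no_page_table(vma, flags);
	/*
	 * The gate this patch cares about: with the new pud_huge() it
	 * also admits non-present (migration/hwpoisoned) hugetlb entries.
	 */
	if (pud_huge(*pud) && is_vm_hugetlb_page(vma)) {
		page = follow_huge_pud(mm, address, pud, flags);
		if (page)
			return page;
		return no_page_table(vma, flags);
	}
	/* ... hugepd and devmap handling elided ... */
	if (unlikely(pud_bad(*pud)))
		return no_page_table(vma, flags);

	return follow_pmd_mask(vma, address, pud, flags, ctx);
}
```

With the old pud_huge(), a non-present hugetlb pud failed the pud_huge() test and reached pud_bad(), yielding no_page_table(); with the new predicate it is routed into follow_huge_pud(), which can wait out migration or report a hwpoisoned page.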