Message ID | 20210316124007.20474-2-linmiaohe@huawei.com (mailing list archive)
---|---
State | New, archived
Series | Some cleanups for huge_memory
On Tue, Mar 16, 2021 at 08:40:02AM -0400, Miaohe Lin wrote:
> +static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
> +{
> +	/*
> +	 * If the new address isn't hpage aligned and it could previously
> +	 * contain an hugepage: check if we need to split an huge pmd.
> +	 */
> +	if (address & ~HPAGE_PMD_MASK &&
> +	    range_in_vma(vma, address & HPAGE_PMD_MASK,
> +			 (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE))

Since you're at it, maybe use ALIGN/ALIGN_DOWN too against HPAGE_PMD_SIZE?

> +		split_huge_pmd_address(vma, address, false, NULL);
> +}
Hi:
On 2021/3/17 4:40, Peter Xu wrote:
> On Tue, Mar 16, 2021 at 08:40:02AM -0400, Miaohe Lin wrote:
>> +static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
>> +{
>> +	/*
>> +	 * If the new address isn't hpage aligned and it could previously
>> +	 * contain an hugepage: check if we need to split an huge pmd.
>> +	 */
>> +	if (address & ~HPAGE_PMD_MASK &&
>> +	    range_in_vma(vma, address & HPAGE_PMD_MASK,
>> +			 (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE))
>
> Since you're at it, maybe use ALIGN/ALIGN_DOWN too against HPAGE_PMD_SIZE?
>

Many thanks for reply. Sounds good. :) Do you mean this?

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bff92dea5ab3..ae16a82da823 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2301,44 +2301,38 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
 	__split_huge_pmd(vma, pmd, address, freeze, page);
 }

+static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
+{
+	/*
+	 * If the new address isn't hpage aligned and it could previously
+	 * contain an hugepage: check if we need to split an huge pmd.
+	 */
+	if (!IS_ALIGNED(address, HPAGE_PMD_SIZE) &&
+	    range_in_vma(vma, ALIGN_DOWN(address, HPAGE_PMD_SIZE),
+			 ALIGN(address, HPAGE_PMD_SIZE)))
+		split_huge_pmd_address(vma, address, false, NULL);
+}
+

>> +		split_huge_pmd_address(vma, address, false, NULL);
>> +}
>
On Wed, Mar 17, 2021 at 10:18:40AM +0800, Miaohe Lin wrote:
> Hi:
> On 2021/3/17 4:40, Peter Xu wrote:
> > On Tue, Mar 16, 2021 at 08:40:02AM -0400, Miaohe Lin wrote:
> >> +static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
> >> +{
> >> +	/*
> >> +	 * If the new address isn't hpage aligned and it could previously
> >> +	 * contain an hugepage: check if we need to split an huge pmd.
> >> +	 */
> >> +	if (address & ~HPAGE_PMD_MASK &&
> >> +	    range_in_vma(vma, address & HPAGE_PMD_MASK,
> >> +			 (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE))
> >
> > Since you're at it, maybe use ALIGN/ALIGN_DOWN too against HPAGE_PMD_SIZE?
> >
>
> Many thanks for reply. Sounds good. :) Do you mean this?
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index bff92dea5ab3..ae16a82da823 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2301,44 +2301,38 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
>  	__split_huge_pmd(vma, pmd, address, freeze, page);
>  }
>
> +static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
> +{
> +	/*
> +	 * If the new address isn't hpage aligned and it could previously
> +	 * contain an hugepage: check if we need to split an huge pmd.
> +	 */
> +	if (!IS_ALIGNED(address, HPAGE_PMD_SIZE) &&
> +	    range_in_vma(vma, ALIGN_DOWN(address, HPAGE_PMD_SIZE),
> +			 ALIGN(address, HPAGE_PMD_SIZE)))
> +		split_huge_pmd_address(vma, address, false, NULL);
> +}
> +

Yes.  Thanks,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bff92dea5ab3..e943ccbdc9dd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2301,44 +2301,38 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
 	__split_huge_pmd(vma, pmd, address, freeze, page);
 }

+static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
+{
+	/*
+	 * If the new address isn't hpage aligned and it could previously
+	 * contain an hugepage: check if we need to split an huge pmd.
+	 */
+	if (address & ~HPAGE_PMD_MASK &&
+	    range_in_vma(vma, address & HPAGE_PMD_MASK,
+			 (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE))
+		split_huge_pmd_address(vma, address, false, NULL);
+}
+
 void vma_adjust_trans_huge(struct vm_area_struct *vma,
 			   unsigned long start,
 			   unsigned long end,
 			   long adjust_next)
 {
-	/*
-	 * If the new start address isn't hpage aligned and it could
-	 * previously contain an hugepage: check if we need to split
-	 * an huge pmd.
-	 */
-	if (start & ~HPAGE_PMD_MASK &&
-	    (start & HPAGE_PMD_MASK) >= vma->vm_start &&
-	    (start & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= vma->vm_end)
-		split_huge_pmd_address(vma, start, false, NULL);
+	/* Check if we need to split start first. */
+	split_huge_pmd_if_needed(vma, start);

-	/*
-	 * If the new end address isn't hpage aligned and it could
-	 * previously contain an hugepage: check if we need to split
-	 * an huge pmd.
-	 */
-	if (end & ~HPAGE_PMD_MASK &&
-	    (end & HPAGE_PMD_MASK) >= vma->vm_start &&
-	    (end & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= vma->vm_end)
-		split_huge_pmd_address(vma, end, false, NULL);
+	/* Check if we need to split end next. */
+	split_huge_pmd_if_needed(vma, end);

 	/*
-	 * If we're also updating the vma->vm_next->vm_start, if the new
-	 * vm_next->vm_start isn't hpage aligned and it could previously
-	 * contain an hugepage: check if we need to split an huge pmd.
+	 * If we're also updating the vma->vm_next->vm_start,
+	 * check if we need to split it.
 	 */
 	if (adjust_next > 0) {
 		struct vm_area_struct *next = vma->vm_next;
 		unsigned long nstart = next->vm_start;
 		nstart += adjust_next;
-		if (nstart & ~HPAGE_PMD_MASK &&
-		    (nstart & HPAGE_PMD_MASK) >= next->vm_start &&
-		    (nstart & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE <= next->vm_end)
-			split_huge_pmd_address(next, nstart, false, NULL);
+		split_huge_pmd_if_needed(next, nstart);
 	}
 }
The current implementation of vma_adjust_trans_huge() contains some duplicated code. Add a helper function to get rid of the duplication and make it more succinct.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/huge_memory.c | 44 +++++++++++++++++++-------------------------
 1 file changed, 19 insertions(+), 25 deletions(-)