Message ID: 117121665254442c3c7f585248296495e5e2b45c.1722404078.git.baolin.wang@linux.alibaba.com
State: New
Series: [1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
>
> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
> 64KB, which is larger than the maximum supported page cache size
> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Reviewed-by: Barry Song <baohua@kernel.org>

> ---
>  mm/shmem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>         unsigned long vm_flags = vma->vm_flags;
>         /*
> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>          * are enabled for this vma.

Nit:
THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
I feel we don't need this comment?

>          */
> -       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +       unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>         loff_t i_size;
>         int order;
>
> --
> 2.39.3
>

Thanks
Barry
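[To make the order arithmetic behind this fix concrete, here is a minimal
user-space sketch. The constant values assume ARM64 with 64KB base pages
and the usual xarray chunk shift of 6; they are illustrative stand-ins for
the kernel's macros, not the kernel code itself.]

#include <stdio.h>

/* Illustrative stand-ins for the kernel macros (assumed values). */
#define PAGE_SHIFT		16	/* 64KB base pages */
#define PMD_SHIFT		29	/* 512MB PMD mappings */
#define PMD_ORDER		(PMD_SHIFT - PAGE_SHIFT)	/* = 13 */
#define XA_CHUNK_SHIFT		6
/* The page cache cannot split arbitrary orders: cap at 2*6 - 1 = 11. */
#define MAX_PAGECACHE_ORDER	(2 * XA_CHUNK_SHIFT - 1)

int main(void)
{
	/* Old shmem mask: every order up to and including PMD_ORDER. */
	unsigned long old_mask = (1UL << (PMD_ORDER + 1)) - 1;
	/* THP_ORDERS_ALL_FILE_DEFAULT shape: orders 1..MAX_PAGECACHE_ORDER. */
	unsigned long new_mask = ((1UL << (MAX_PAGECACHE_ORDER + 1)) - 1) & ~1UL;

	printf("old mask %#lx allows up to order %d (512MB)\n",
	       old_mask, PMD_ORDER);
	printf("new mask %#lx allows up to order %d (128MB)\n",
	       new_mask, MAX_PAGECACHE_ORDER);
	/*
	 * Orders 12 and 13 are set in old_mask but are larger than
	 * anything the page cache can actually store -- the bug this
	 * patch fixes.
	 */
	return 0;
}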
On 2024/7/31 14:18, Barry Song wrote:
> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
>> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
>> 64KB, which is larger than the maximum supported page cache size
>> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
>> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>>
>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>
> Reviewed-by: Barry Song <baohua@kernel.org>

Thanks for reviewing.

>
>> ---
>>  mm/shmem.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 2faa9daaf54b..a4332a97558c 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>         unsigned long vm_flags = vma->vm_flags;
>>         /*
>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>          * are enabled for this vma.
>
> Nit:
> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
> I feel we don't need this comment?

Sure.

Andrew, please help to squash the following changes into this patch.
Thanks.

diff --git a/mm/shmem.c b/mm/shmem.c
index 6e9836b1bd1d..432faec21547 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1629,10 +1629,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma->vm_flags;
-	/*
-	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
-	 * are enabled for this vma.
-	 */
 	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
 	loff_t i_size;
 	int order;
On 31.07.24 07:46, Baolin Wang wrote:
> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
> 64KB, which is larger than the maximum supported page cache size
> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/shmem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>         unsigned long vm_flags = vma->vm_flags;
>         /*
> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>          * are enabled for this vma.
>          */
> -       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +       unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>         loff_t i_size;
>         int order;
>

Acked-by: David Hildenbrand <david@redhat.com>
On 2024/7/31 16:56, Baolin Wang wrote:
>
> On 2024/7/31 14:18, Barry Song wrote:
>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
>> <baolin.wang@linux.alibaba.com> wrote:
>>>
>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
>>> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
>>> 64KB, which is larger than the maximum supported page cache size
>>> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
>>> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>>>
>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>
>> Reviewed-by: Barry Song <baohua@kernel.org>
>
> Thanks for reviewing.
>
>>
>>> ---
>>>  mm/shmem.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index 2faa9daaf54b..a4332a97558c 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>>         unsigned long vm_flags = vma->vm_flags;
>>>         /*
>>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>>          * are enabled for this vma.
>>
>> Nit:
>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>> I feel we don't need this comment?
>
> Sure.
>
> Andrew, please help to squash the following changes into this patch.
> Thanks.

Maybe drop unsigned long orders too?

diff --git a/mm/shmem.c b/mm/shmem.c
index 6af95f595d6f..8485eb6f2ec4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma ? vma->vm_flags : 0;
-	/*
-	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
-	 * are enabled for this vma.
-	 */
-	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
 	bool global_huge;
 	loff_t i_size;
 	int order;
@@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	if (global_huge)
 		mask |= READ_ONCE(huge_shmem_orders_inherit);

-	return orders & mask;
+	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
 }

>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6e9836b1bd1d..432faec21547 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1629,10 +1629,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>  	unsigned long vm_flags = vma->vm_flags;
> -	/*
> -	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
> -	 * are enabled for this vma.
> -	 */
>  	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>  	loff_t i_size;
>  	int order;
>
On 2024/7/31 17:59, Kefeng Wang wrote:
>
> On 2024/7/31 16:56, Baolin Wang wrote:
>>
>> On 2024/7/31 14:18, Barry Song wrote:
>>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang
>>> <baolin.wang@linux.alibaba.com> wrote:
>>>>
>>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
>>>> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
>>>> 64KB, which is larger than the maximum supported page cache size
>>>> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
>>>> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>>>>
>>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>
>>> Reviewed-by: Barry Song <baohua@kernel.org>
>>
>> Thanks for reviewing.
>>
>>>
>>>> ---
>>>>  mm/shmem.c | 4 ++--
>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>>> index 2faa9daaf54b..a4332a97558c 100644
>>>> --- a/mm/shmem.c
>>>> +++ b/mm/shmem.c
>>>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>>>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>>>         unsigned long vm_flags = vma->vm_flags;
>>>>         /*
>>>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>>>          * are enabled for this vma.
>>>
>>> Nit:
>>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>>> I feel we don't need this comment?
>>
>> Sure.
>>
>> Andrew, please help to squash the following changes into this patch.
>> Thanks.
>
> Maybe drop unsigned long orders too?
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6af95f595d6f..8485eb6f2ec4 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>  	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>  	unsigned long vm_flags = vma ? vma->vm_flags : 0;
> -	/*
> -	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> -	 * are enabled for this vma.
> -	 */
> -	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
>  	bool global_huge;
>  	loff_t i_size;
>  	int order;
> @@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	if (global_huge)
>  		mask |= READ_ONCE(huge_shmem_orders_inherit);
>
> -	return orders & mask;
> +	return THP_ORDERS_ALL_FILE_DEFAULT & mask;
>  }

Yes. Good point. Thanks.

(Hope Andrew can help to squash these changes :))
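[As background for why returning a plain order bitmask, as Kefeng's
simplification does, is sufficient: callers try the set orders from highest
to lowest until an allocation succeeds. A simplified user-space sketch of
that pattern follows; the helper names echo the kernel's highest_order()
and next_order(), but this is an illustration, not the kernel
implementation.]

#include <stdio.h>

/* Index of the highest set bit, or -1 if none (mirrors the kernel helper). */
static int highest_order(unsigned long orders)
{
	return orders ? 63 - __builtin_clzl(orders) : -1;
}

/* Clear the order just tried and return the next-highest remaining one. */
static int next_order(unsigned long *orders, int prev)
{
	*orders &= ~(1UL << prev);
	return highest_order(*orders);
}

int main(void)
{
	/* e.g. the result of THP_ORDERS_ALL_FILE_DEFAULT & mask (assumed). */
	unsigned long orders = (1UL << 11) | (1UL << 4) | (1UL << 2);

	for (int order = highest_order(orders); orders;
	     order = next_order(&orders, order))
		printf("try order %d, fall back to a smaller one on failure\n",
		       order);
	return 0;
}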
On 31 Jul 2024, at 1:46, Baolin Wang wrote:

> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
> 64KB, which is larger than the maximum supported page cache size
> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>
> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/shmem.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 2faa9daaf54b..a4332a97558c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>         unsigned long vm_flags = vma->vm_flags;
>         /*
> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>          * are enabled for this vma.
>          */
> -       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> +       unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>         loff_t i_size;
>         int order;

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi
On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> (Hope Andrew can help to squash these changes :))
I'm seeing some rejects against, amongst other things, your own "Some
cleanups for shmem" series.
So... v2, please?
On 2024/8/1 04:48, Andrew Morton wrote:
> On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
>
>> (Hope Andrew can help to squash these changes :))
>
> I'm seeing some rejects against, amongst other things, your own "Some
> cleanups for shmem" series.
>
> So... v2, please?

These two bugfix patches are based on the mm-hotfixes-unstable branch
and need to be merged into 6.11-rcX, so they should be queued first.

For the 'Some cleanups for shmem' series, I can send a new V4 version
to you after resolving conflicts with the shmem bugfix patches. Sorry
for the inconvenience.
On Thu, 1 Aug 2024 08:06:59 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:

>
>
> On 2024/8/1 04:48, Andrew Morton wrote:
> > On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> >
> >> (Hope Andrew can help to squash these changes :))
> >
> > I'm seeing some rejects against, amongst other things, your own "Some
> > cleanups for shmem" series.
> >
> > So... v2, please?
>
> These two bugfix patches are based on the mm-hotfixes-unstable branch
> and need to be merged into 6.11-rcX, so they should be queued first.

OK.

> For the 'Some cleanups for shmem' series, I can send a new V4 version to
> you after resolving conflicts with the shmem bugfix patches. Sorry for
> the inconvenience.

I fixed things up.
On 2024/8/2 03:55, Andrew Morton wrote:
> On Thu, 1 Aug 2024 08:06:59 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
>
>>
>>
>> On 2024/8/1 04:48, Andrew Morton wrote:
>>> On Wed, 31 Jul 2024 18:22:17 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
>>>
>>>> (Hope Andrew can help to squash these changes :))
>>>
>>> I'm seeing some rejects against, amongst other things, your own "Some
>>> cleanups for shmem" series.
>>>
>>> So... v2, please?
>>
>> These two bugfix patches are based on the mm-hotfixes-unstable branch
>> and need to be merged into 6.11-rcX, so they should be queued first.
>
> OK.
>
>> For the 'Some cleanups for shmem' series, I can send a new V4 version to
>> you after resolving conflicts with the shmem bugfix patches. Sorry for
>> the inconvenience.
>
> I fixed things up.

Thank you for your help, Andrew. I have checked the patches after
resolving the conflicts, and all look good.
diff --git a/mm/shmem.c b/mm/shmem.c
index 2faa9daaf54b..a4332a97558c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma->vm_flags;
 	/*
-	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
+	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
 	 * are enabled for this vma.
 	 */
-	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
 	loff_t i_size;
 	int order;
Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
if needed"), ARM64 can support 512MB PMD-sized THP when the base page size
is 64KB, which is larger than the maximum supported page cache size
MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.

Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
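[For reference when reading the replacement initializer above: the new mask
is built directly from the page cache limit. A sketch of its construction
follows, modeled on the definition introduced alongside commit
d659b715e94ac; treat the exact form as an assumption when checking against
a particular kernel tree.]

/*
 * Sketch of the mask's shape: every order from 1 up to
 * MAX_PAGECACHE_ORDER inclusive, with order-0 (a plain page) excluded.
 */
#define THP_ORDERS_ALL_FILE_DEFAULT \
	((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))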