| Message ID | 358924679107339e6b17a5d8b1b2e10ae6306227.1717673614.git.baolin.wang@linux.alibaba.com (mailing list archive) |
|---|---|
| State | New |
| Series | support large folio swap-out and swap-in for shmem |
Hi Baolin,

On Thu, Jun 06, 2024 at 07:58:54PM +0800, Baolin Wang wrote:
> To support shmem large folio swapout in the following patches, using
> xa_get_order() to get the order of the swap entry to calculate the swap
> usage of shmem.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/shmem.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index eefdf5c61c04..0ac71580decb 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -865,13 +865,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>  	struct page *page;
>  	unsigned long swapped = 0;
>  	unsigned long max = end - 1;
> +	int order;
>
>  	rcu_read_lock();
>  	xas_for_each(&xas, page, max) {
>  		if (xas_retry(&xas, page))
>  			continue;
> -		if (xa_is_value(page))
> -			swapped++;
> +		if (xa_is_value(page)) {
> +			order = xa_get_order(xas.xa, xas.xa_index);
> +			swapped += 1 << order;

I'd get rid of order and simply do:

	swapped += 1UL << xa_get_order()

> +		}
>  		if (xas.xa_index == max)
>  			break;
>  		if (need_resched()) {
> --
> 2.39.3
>
On 2024/6/10 22:53, Daniel Gomez wrote:
> Hi Baolin,
> On Thu, Jun 06, 2024 at 07:58:54PM +0800, Baolin Wang wrote:
>> To support shmem large folio swapout in the following patches, using
>> xa_get_order() to get the order of the swap entry to calculate the swap
>> usage of shmem.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>  mm/shmem.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index eefdf5c61c04..0ac71580decb 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -865,13 +865,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>>  	struct page *page;
>>  	unsigned long swapped = 0;
>>  	unsigned long max = end - 1;
>> +	int order;
>>
>>  	rcu_read_lock();
>>  	xas_for_each(&xas, page, max) {
>>  		if (xas_retry(&xas, page))
>>  			continue;
>> -		if (xa_is_value(page))
>> -			swapped++;
>> +		if (xa_is_value(page)) {
>> +			order = xa_get_order(xas.xa, xas.xa_index);
>> +			swapped += 1 << order;
>
> I'd get rid of order and simply do:
>
> 	swapped += 1UL << xa_get_order()

OK. Will do.
diff --git a/mm/shmem.c b/mm/shmem.c
index eefdf5c61c04..0ac71580decb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -865,13 +865,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 	struct page *page;
 	unsigned long swapped = 0;
 	unsigned long max = end - 1;
+	int order;
 
 	rcu_read_lock();
 	xas_for_each(&xas, page, max) {
 		if (xas_retry(&xas, page))
 			continue;
-		if (xa_is_value(page))
-			swapped++;
+		if (xa_is_value(page)) {
+			order = xa_get_order(xas.xa, xas.xa_index);
+			swapped += 1 << order;
+		}
 		if (xas.xa_index == max)
 			break;
 		if (need_resched()) {
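For reference, folding Daniel's suggestion into the hunk would presumably leave the loop looking something like this (a sketch of the expected v2, not the actual posted revision):

```c
	rcu_read_lock();
	xas_for_each(&xas, page, max) {
		if (xas_retry(&xas, page))
			continue;
		if (xa_is_value(page))
			swapped += 1UL << xa_get_order(xas.xa, xas.xa_index);
		if (xas.xa_index == max)
			break;
		if (need_resched()) {
```

This drops the local `order` variable entirely and performs the shift in `unsigned long` to match `swapped`.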
To support shmem large folio swapout in the following patches, use
xa_get_order() to get the order of the swap entry when calculating the
swap usage of shmem.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)