[v10,00/18] reimplement per-vma lock as a refcount

Message ID: 20250213224655.1680278-1-surenb@google.com

Message

Suren Baghdasaryan Feb. 13, 2025, 10:46 p.m. UTC
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of a performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefit, there is no reason
for this split. Merging the vm_lock back into vm_area_struct also allows
vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
This patchset:
1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned to minimize
cacheline sharing;
2. changes vm_area_struct initialization to mark a new vma as detached until
it is inserted into the vma tree;
3. replaces vm_lock and the vma->detached flag with a reference counter (a
rough sketch of this scheme follows the list);
4. regroups vm_area_struct members to fit them into 3 cachelines;
5. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for vma
reuse and to minimize call_rcu() calls.
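
To make item 3 above concrete, here is a rough sketch of how a single
reference count can stand in for both vm_lock and the detached flag. This is
an illustration based on the patch titles and this changelog, not the code in
the series itself; the helper names, the exact state encoding and the
refcount_set_release()/refcount_add_not_zero_acquire() signatures shown are
assumptions:

#include <linux/refcount.h>
#include <linux/mm_types.h>

/*
 * Assumed encoding (sketch only):
 *   vm_refcnt == 0 : vma is detached, lockless readers must not use it
 *   vm_refcnt == 1 : vma is attached and not read-locked
 *   vm_refcnt  > 1 : vma is attached with (vm_refcnt - 1) readers
 */
static inline void vma_mark_detached_sketch(struct vm_area_struct *vma)
{
	refcount_set(&vma->vm_refcnt, 0);
}

static inline void vma_mark_attached_sketch(struct vm_area_struct *vma)
{
	/*
	 * Release ordering publishes the fully initialized vma before the
	 * count can be observed as non-zero by lockless readers.
	 */
	refcount_set_release(&vma->vm_refcnt, 1);
}

static inline bool vma_read_trylock_sketch(struct vm_area_struct *vma)
{
	/*
	 * Fails when the vma is detached (count == 0); acquire ordering
	 * pairs with the release above.  The series additionally uses a
	 * "limited" variant to reject write-locked vmas, omitted here.
	 */
	return refcount_add_not_zero_acquire(1, &vma->vm_refcnt);
}

static inline void vma_read_unlock_sketch(struct vm_area_struct *vma)
{
	/* Never drops the count to zero here; detach is a separate step. */
	refcount_dec(&vma->vm_refcnt);
}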

Pagefault microbenchmarks show performance improvement:
Hmean     faults/cpu-1    507926.5547 (   0.00%)   506519.3692 *  -0.28%*
Hmean     faults/cpu-4    479119.7051 (   0.00%)   481333.6802 *   0.46%*
Hmean     faults/cpu-7    452880.2961 (   0.00%)   455845.6211 *   0.65%*
Hmean     faults/cpu-12   347639.1021 (   0.00%)   352004.2254 *   1.26%*
Hmean     faults/cpu-21   200061.2238 (   0.00%)   229597.0317 *  14.76%*
Hmean     faults/cpu-30   145251.2001 (   0.00%)   164202.5067 *  13.05%*
Hmean     faults/cpu-48   106848.4434 (   0.00%)   120641.5504 *  12.91%*
Hmean     faults/cpu-56    92472.3835 (   0.00%)   103464.7916 *  11.89%*
Hmean     faults/sec-1    507566.1468 (   0.00%)   506139.0811 *  -0.28%*
Hmean     faults/sec-4   1880478.2402 (   0.00%)  1886795.6329 *   0.34%*
Hmean     faults/sec-7   3106394.3438 (   0.00%)  3140550.7485 *   1.10%*
Hmean     faults/sec-12  4061358.4795 (   0.00%)  4112477.0206 *   1.26%*
Hmean     faults/sec-21  3988619.1169 (   0.00%)  4577747.1436 *  14.77%*
Hmean     faults/sec-30  3909839.5449 (   0.00%)  4311052.2787 *  10.26%*
Hmean     faults/sec-48  4761108.4691 (   0.00%)  5283790.5026 *  10.98%*
Hmean     faults/sec-56  4885561.4590 (   0.00%)  5415839.4045 *  10.85%*

Changes since v9 [4]:
PATCH [4/18]
- Change VM_BUG_ON_VMA() to WARN_ON_ONCE() in vma_assert_{attached|detached},
per Lorenzo Stoakes
- Rename vma_iter_store() to vma_iter_store_new(), per Lorenzo Stoakes
- Expand changelog, per Lorenzo Stoakes
- Update vma tests to check for vma detached state correctness,
per Lorenzo Stoakes

PATCH [5/18]
- Add Reviewed-by, per Lorenzo Stoakes

PATCH [6/18]
- Add Acked-by, per Lorenzo Stoakes

PATCH [7/18]
- Refactor the code, per Lorenzo Stoakes
- Remove Vlastimil's Acked-by since the code changed

PATCH [8/18]
- Drop inline for mmap_init_lock(), per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes

PATCH [9/18]
- Add Reviewed-by, per Lorenzo Stoakes

PATCH [10/18]
- New patch to add refcount_add_not_zero_acquire/refcount_set_release
- Add Acked-by #slab, per Vlastimil Babka

PATCH [11/18]
- Change refcount limit to be used with xxx_acquire functions

PATCH [12/18]
- Use __refcount_inc_not_zero_limited_acquire() in vma_start_read(),
per Hillf Danton
- Refactor vma_assert_locked() to avoid vm_refcnt read when CONFIG_DEBUG_VM=n,
per Mateusz Guzik
- Update changelog, per Wei Yang
- Change vma_start_read() to return EAGAIN if the vma got isolated, and change
lock_vma_under_rcu() back to detect this condition, per Wei Yang (a rough
sketch of the caller-side handling follows this changelog entry)
- Change VM_BUG_ON_VMA() to WARN_ON_ONCE() when checking vma detached state,
per Lorenzo Stoakes
- Remove Vlastimil's Reviewed-by since the code changed
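
For context on the EAGAIN change above, the caller-side handling might look
roughly like the following. The maple-tree walk and, in particular, the
assumed return convention of vma_start_read() (the vma on success, NULL when
it cannot be read-locked, ERR_PTR(-EAGAIN) when the vma got isolated) are
inferred from this changelog, not quoted from the patch:

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/maple_tree.h>

static struct vm_area_struct *lock_vma_under_rcu_sketch(struct mm_struct *mm,
							 unsigned long address)
{
	MA_STATE(mas, &mm->mm_mt, address, address);
	struct vm_area_struct *vma;

	rcu_read_lock();
retry:
	vma = mas_walk(&mas);
	if (!vma)
		goto inval;

	vma = vma_start_read(mm, vma);	/* assumed interface, see above */
	if (IS_ERR_OR_NULL(vma)) {
		if (PTR_ERR(vma) == -EAGAIN) {
			/* The vma was isolated or replaced: look it up again. */
			mas_set(&mas, address);
			goto retry;
		}
		goto inval;
	}

	/*
	 * The real code also re-validates that 'address' still falls within
	 * the vma after the reference is taken; omitted in this sketch.
	 */
	rcu_read_unlock();
	return vma;
inval:
	rcu_read_unlock();
	return NULL;
}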

PATCH [13/18]
- Update vm_area_struct for tests, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes

PATCH [14/18]
- Minimize duplicate code, per Lorenzo Stoakes

PATCH [15/18]
- Add Reviewed-by, per Lorenzo Stoakes

PATCH [17/18]
- Use refcount_set_release() in vma_mark_attached(), per Will Deacon
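
Since this is the patch that makes the vma cache SLAB_TYPESAFE_BY_RCU, a
short, generic sketch of what that flag implies may help; the cache name and
flags below are illustrative assumptions, not the patch code. With this flag,
freed vmas can be reused for new vmas without an RCU grace period, which is
why attaching must publish the object with release semantics (pairing with
the acquire on the reader's increment) and why lockless readers must
re-validate the vma after taking a reference:

#include <linux/slab.h>
#include <linux/mm_types.h>

static struct kmem_cache *vm_area_cachep_sketch __ro_after_init;

void __init vma_cache_init_sketch(void)
{
	/*
	 * SLAB_TYPESAFE_BY_RCU guarantees only that the memory keeps its
	 * type for the duration of an rcu_read_lock() section that found
	 * the object; the object itself may be freed and reused for a
	 * different vma in the meantime.
	 */
	vm_area_cachep_sketch = kmem_cache_create("vm_area_struct",
			sizeof(struct vm_area_struct),
			__alignof__(struct vm_area_struct),
			SLAB_TYPESAFE_BY_RCU | SLAB_HWCACHE_ALIGN, NULL);
}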

PATCH [18/18]
- Update documentation, per Lorenzo Stoakes
- Add Reviewed-by, per Lorenzo Stoakes

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
[4] https://lore.kernel.org/all/20250111042604.3230628-1-surenb@google.com/

Patchset applies over mm-unstable

Suren Baghdasaryan (18):
  mm: introduce vma_start_read_locked{_nested} helpers
  mm: move per-vma lock into vm_area_struct
  mm: mark vma as detached until it's added into vma tree
  mm: introduce vma_iter_store_attached() to use with attached vmas
  mm: mark vmas detached upon exit
  types: move struct rcuwait into types.h
  mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
  mm: move mmap_init_lock() out of the header file
  mm: uninline the main body of vma_start_write()
  refcount: provide ops for cases when object's memory can be reused
  refcount: introduce __refcount_{add|inc}_not_zero_limited_acquire
  mm: replace vm_lock and detached flag with a reference count
  mm: move lesser used vma_area_struct members into the last cacheline
  mm/debug: print vm_refcnt state when dumping the vma
  mm: remove extra vma_numab_state_init() call
  mm: prepare lock_vma_under_rcu() for vma reuse possibility
  mm: make vma cache SLAB_TYPESAFE_BY_RCU
  docs/mm: document latest changes to vm_lock

 Documentation/RCU/whatisRCU.rst               |  10 +
 Documentation/core-api/refcount-vs-atomic.rst |  37 +++-
 Documentation/mm/process_addrs.rst            |  44 +++--
 include/linux/mm.h                            | 176 ++++++++++++++----
 include/linux/mm_types.h                      |  75 ++++----
 include/linux/mmap_lock.h                     |   6 -
 include/linux/rcuwait.h                       |  13 +-
 include/linux/refcount.h                      | 125 +++++++++++++
 include/linux/slab.h                          |  15 +-
 include/linux/types.h                         |  12 ++
 kernel/fork.c                                 | 129 ++++++-------
 mm/debug.c                                    |   6 +
 mm/init-mm.c                                  |   1 +
 mm/memory.c                                   | 106 ++++++++++-
 mm/mmap.c                                     |   3 +-
 mm/nommu.c                                    |   4 +-
 mm/userfaultfd.c                              |  38 ++--
 mm/vma.c                                      |  27 ++-
 mm/vma.h                                      |  15 +-
 tools/include/linux/refcount.h                |   5 +
 tools/testing/vma/linux/atomic.h              |   6 +
 tools/testing/vma/vma.c                       |  42 ++++-
 tools/testing/vma/vma_internal.h              | 127 ++++++-------
 23 files changed, 702 insertions(+), 320 deletions(-)


base-commit: 47aa60e930fe7fc2a945e4406e3ad1dfa73bb47c

Comments

Shivank Garg Feb. 14, 2025, 7:34 p.m. UTC | #1
On 2/14/2025 4:16 AM, Suren Baghdasaryan wrote:
> [...]
> Pagefault microbenchmarks show performance improvement:
> Hmean     faults/cpu-1    507926.5547 (   0.00%)   506519.3692 *  -0.28%*
> Hmean     faults/cpu-4    479119.7051 (   0.00%)   481333.6802 *   0.46%*
> Hmean     faults/cpu-7    452880.2961 (   0.00%)   455845.6211 *   0.65%*
> Hmean     faults/cpu-12   347639.1021 (   0.00%)   352004.2254 *   1.26%*
> Hmean     faults/cpu-21   200061.2238 (   0.00%)   229597.0317 *  14.76%*
> Hmean     faults/cpu-30   145251.2001 (   0.00%)   164202.5067 *  13.05%*
> Hmean     faults/cpu-48   106848.4434 (   0.00%)   120641.5504 *  12.91%*
> Hmean     faults/cpu-56    92472.3835 (   0.00%)   103464.7916 *  11.89%*
> Hmean     faults/sec-1    507566.1468 (   0.00%)   506139.0811 *  -0.28%*
> Hmean     faults/sec-4   1880478.2402 (   0.00%)  1886795.6329 *   0.34%*
> Hmean     faults/sec-7   3106394.3438 (   0.00%)  3140550.7485 *   1.10%*
> Hmean     faults/sec-12  4061358.4795 (   0.00%)  4112477.0206 *   1.26%*
> Hmean     faults/sec-21  3988619.1169 (   0.00%)  4577747.1436 *  14.77%*
> Hmean     faults/sec-30  3909839.5449 (   0.00%)  4311052.2787 *  10.26%*
> Hmean     faults/sec-48  4761108.4691 (   0.00%)  5283790.5026 *  10.98%*
> Hmean     faults/sec-56  4885561.4590 (   0.00%)  5415839.4045 *  10.85%*

I tested this patch series on an AMD EPYC Zen 5 system
(2 sockets, 64 cores per socket with SMT enabled, 4 NUMA nodes)
using the mmtests PFT benchmark (config-workload-pft-threads) on mm-unstable.

I see significant performance improvements at higher thread counts:

			   mm-unstable		   mm-unstable
			   -6.14-rc2-vanilla1	   -6.14-rc2-v10-per-vma-lock
Hmean     faults/cpu-1    1933589.0920 (   0.00%)  1950506.1985 (   0.87%)
Hmean     faults/cpu-4     722834.4269 (   0.00%)   657946.3257 (  -8.98%)
Hmean     faults/cpu-7     373210.8410 (   0.00%)   358995.9493 (  -3.81%)
Hmean     faults/cpu-12    216267.7580 (   0.00%)   211032.8119 (  -2.42%)
Hmean     faults/cpu-21    153080.2758 (   0.00%)   150207.3115 (  -1.88%)
Hmean     faults/cpu-30    143142.8874 (   0.00%)   142904.3981 (  -0.17%)
Hmean     faults/cpu-48    135825.2524 (   0.00%)   158502.4303 *  16.70%*
Hmean     faults/cpu-79    111892.4921 (   0.00%)   141725.6864 *  26.66%*
Hmean     faults/cpu-110    96905.8995 (   0.00%)   114238.6961 *  17.89%*
Hmean     faults/cpu-128    89136.8524 (   0.00%)   107620.7035 *  20.74%*
Hmean     faults/sec-1    1933283.3273 (   0.00%)  1950224.2371 (   0.88%)
Hmean     faults/sec-4    2859235.5825 (   0.00%)  2611293.1103 (  -8.67%)
Hmean     faults/sec-7    2580415.8792 (   0.00%)  2497936.1104 (  -3.20%)
Hmean     faults/sec-12   2560172.2303 (   0.00%)  2516697.9056 (  -1.70%)
Hmean     faults/sec-21   3080686.9599 (   0.00%)  3038393.3328 (  -1.37%)
Hmean     faults/sec-30   4174290.0462 (   0.00%)  4168318.6202 (  -0.14%)
Hmean     faults/sec-48   6318251.1880 (   0.00%)  7323087.0849 *  15.90%*
Hmean     faults/sec-79   8502378.1341 (   0.00%) 10761979.4193 *  26.58%*
Hmean     faults/sec-110 10131823.3341 (   0.00%) 12318722.2392 *  21.58%*
Hmean     faults/sec-128 10584693.5966 (   0.00%) 13354652.5141 *  26.17%*

The slight degradation at 4 and 7 threads can be ignored due to the high variance:

HCoeffVar faults/cpu-4          8.7568 (   0.00%)       11.4420 ( -30.66%)
HCoeffVar faults/cpu-7          3.3204 (   0.00%)        3.4852 (  -4.96%)

Please consider my:

Tested-by: Shivank Garg <shivankg@amd.com>

Best Regards,
Shivank Garg
