[v6,00/16] move per-vma lock into vm_area_struct

Message ID: 20241216192419.2970941-1-surenb@google.com

Message

Suren Baghdasaryan Dec. 16, 2024, 7:24 p.m. UTC
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of a performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to a rather old Broadwell microarchitecture and
even there it can be mitigated by disabling adjacent cacheline
prefetching, see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When that split-away part is a lock, it complicates
things even further. With no performance benefit, there is no reason
for this split. Merging vm_lock back into vm_area_struct also allows
vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
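For readers unfamiliar with SLAB_TYPESAFE_BY_RCU: it lets the slab
reuse a freed object while RCU readers may still hold pointers to it,
so a lockless lookup must take a reference that can fail and then
revalidate the object's identity. Below is a simplified sketch of that
pattern, not the actual lock_vma_under_rcu() code: find_vma_in_tree()
is a made-up stand-in for the maple tree walk, and vma_start_read() is
assumed to return true on success.

	struct vm_area_struct *lookup_vma_rcu(struct mm_struct *mm,
					      unsigned long addr)
	{
		struct vm_area_struct *vma;

		rcu_read_lock();
		vma = find_vma_in_tree(mm, addr);
		/*
		 * The vma may be freed and recycled at any moment; a
		 * failed read-lock acquisition means it was detached
		 * or is being torn down.
		 */
		if (vma && !vma_start_read(vma))
			vma = NULL;
		/*
		 * Revalidate: the slab may have reused this object for
		 * a different mm or address range in the meantime.
		 */
		if (vma && (vma->vm_mm != mm || addr < vma->vm_start ||
			    addr >= vma->vm_end)) {
			vma_end_read(vma);
			vma = NULL;
		}
		rcu_read_unlock();
		return vma;
	}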
This patchset:
1. moves vm_lock back into vm_area_struct, aligning it at the cacheline
boundary and changing the cache to be cacheline-aligned to minimize
cacheline sharing;
2. changes vm_area_struct initialization to mark a new vma as detached
until it is inserted into the vma tree;
3. replaces vm_lock and the vma->detached flag with a reference counter;
4. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow for
vma reuse and to minimize call_rcu() calls.
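
As a rough illustration of items 1 and 3 (field names and layout are
simplified here, not the actual patch), the end state places a single
vm_refcnt on its own cacheline inside vm_area_struct, with zero
meaning "detached":

	#include <linux/cache.h>	/* ____cacheline_aligned_in_smp */
	#include <linux/refcount.h>	/* refcount_t */

	struct vm_area_struct {
		unsigned long vm_start;
		unsigned long vm_end;
		/* ... other fields ... */

		/*
		 * 0 means the vma is detached (not in the vma tree);
		 * a positive count means attached plus any temporary
		 * readers. Replaces both vm_lock and vma->detached.
		 */
		refcount_t vm_refcnt ____cacheline_aligned_in_smp;
	};

And for item 4, the cache creation gains the SLAB_TYPESAFE_BY_RCU
flag; the exact flag combination below is an assumption for
illustration, not the patch itself:

	vm_area_cachep = kmem_cache_create("vm_area_struct",
			sizeof(struct vm_area_struct),
			__alignof__(struct vm_area_struct),
			SLAB_TYPESAFE_BY_RCU | SLAB_HWCACHE_ALIGN |
			SLAB_PANIC | SLAB_ACCOUNT,
			NULL);	/* no ctor needed as of v6 */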

Pagefault microbenchmarks show a performance improvement, most
pronounced at higher core counts (baseline vs. patched; Hmean is the
harmonic mean, higher is better):
Hmean     faults/cpu-1    507926.5547 (   0.00%)   506519.3692 *  -0.28%*
Hmean     faults/cpu-4    479119.7051 (   0.00%)   481333.6802 *   0.46%*
Hmean     faults/cpu-7    452880.2961 (   0.00%)   455845.6211 *   0.65%*
Hmean     faults/cpu-12   347639.1021 (   0.00%)   352004.2254 *   1.26%*
Hmean     faults/cpu-21   200061.2238 (   0.00%)   229597.0317 *  14.76%*
Hmean     faults/cpu-30   145251.2001 (   0.00%)   164202.5067 *  13.05%*
Hmean     faults/cpu-48   106848.4434 (   0.00%)   120641.5504 *  12.91%*
Hmean     faults/cpu-56    92472.3835 (   0.00%)   103464.7916 *  11.89%*
Hmean     faults/sec-1    507566.1468 (   0.00%)   506139.0811 *  -0.28%*
Hmean     faults/sec-4   1880478.2402 (   0.00%)  1886795.6329 *   0.34%*
Hmean     faults/sec-7   3106394.3438 (   0.00%)  3140550.7485 *   1.10%*
Hmean     faults/sec-12  4061358.4795 (   0.00%)  4112477.0206 *   1.26%*
Hmean     faults/sec-21  3988619.1169 (   0.00%)  4577747.1436 *  14.77%*
Hmean     faults/sec-30  3909839.5449 (   0.00%)  4311052.2787 *  10.26%*
Hmean     faults/sec-48  4761108.4691 (   0.00%)  5283790.5026 *  10.98%*
Hmean     faults/sec-56  4885561.4590 (   0.00%)  5415839.4045 *  10.85%*

Changes since v5 [4]:
- Added Reviewed-by, per Vlastimil Babka;
- Added replacement of vm_lock and the vma->detached flag with vm_refcnt,
per Peter Zijlstra and Matthew Wilcox;
- Marked vmas detached during exit_mmap;
- Ensured vmas are in a detached state before they are freed;
- Changed the SLAB_TYPESAFE_BY_RCU patch to not require a ctor, leading
to much simpler code;
- Removed an unnecessary patch [5];
- Updated documentation to reflect changes to vm_lock.

Patchset applies over mm-unstable after reverting v5 of this patchset [4]
(currently 687e99a5faa5-905ab222508a)

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
[4] https://lore.kernel.org/all/20241206225204.4008261-1-surenb@google.com/
[5] https://lore.kernel.org/all/20241206225204.4008261-6-surenb@google.com/

Suren Baghdasaryan (16):
  mm: introduce vma_start_read_locked{_nested} helpers
  mm: move per-vma lock into vm_area_struct
  mm: mark vma as detached until it's added into vma tree
  mm/nommu: fix the last places where vma is not locked before being
    attached
  types: move struct rcuwait into types.h
  mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail
  mm: move mmap_init_lock() out of the header file
  mm: uninline the main body of vma_start_write()
  refcount: introduce __refcount_{add|inc}_not_zero_limited
  mm: replace vm_lock and detached flag with a reference count
  mm: enforce vma to be in detached state before freeing
  mm: remove extra vma_numab_state_init() call
  mm: introduce vma_ensure_detached()
  mm: prepare lock_vma_under_rcu() for vma reuse possibility
  mm: make vma cache SLAB_TYPESAFE_BY_RCU
  docs/mm: document latest changes to vm_lock

 Documentation/mm/process_addrs.rst |  44 ++++----
 include/linux/mm.h                 | 162 +++++++++++++++++++++++------
 include/linux/mm_types.h           |  37 ++++---
 include/linux/mmap_lock.h          |   6 --
 include/linux/rcuwait.h            |  13 +--
 include/linux/refcount.h           |  20 +++-
 include/linux/slab.h               |   6 --
 include/linux/types.h              |  12 +++
 kernel/fork.c                      |  88 ++++------------
 mm/init-mm.c                       |   1 +
 mm/memory.c                        |  75 +++++++++++--
 mm/mmap.c                          |   8 +-
 mm/nommu.c                         |   2 +
 mm/userfaultfd.c                   |  31 +++---
 mm/vma.c                           |  15 ++-
 mm/vma.h                           |   4 +-
 tools/testing/vma/linux/atomic.h   |   5 +
 tools/testing/vma/vma_internal.h   |  96 +++++++++--------
 18 files changed, 378 insertions(+), 247 deletions(-)

Comments

Suren Baghdasaryan Dec. 16, 2024, 7:39 p.m. UTC | #1
On Mon, Dec 16, 2024 at 11:24 AM Suren Baghdasaryan <surenb@google.com> wrote:
> Patchset applies over mm-unstable after reverting v5 of this patchset [4]
> (currently 687e99a5faa5-905ab222508a)

^^^
Please be aware of this if trying to apply to a branch. mm-unstable
contains an older version of this patchset which needs to be reverted
before this one can be applied.

Andrew Morton Dec. 17, 2024, 6:42 p.m. UTC | #2
On Mon, 16 Dec 2024 11:39:16 -0800 Suren Baghdasaryan <surenb@google.com> wrote:

> > Patchset applies over mm-unstable after reverting v5 of this patchset [4]
> > (currently 687e99a5faa5-905ab222508a)
> 
> ^^^
> Please be aware of this if trying to apply to a branch. mm-unstable
> contains an older version of this patchset which needs to be reverted
> before this one can be applied.

I quietly updated mm-unstable to v6.  I understand that a v7 is expected.
Suren Baghdasaryan Dec. 17, 2024, 6:49 p.m. UTC | #3
On Tue, Dec 17, 2024 at 10:42 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Mon, 16 Dec 2024 11:39:16 -0800 Suren Baghdasaryan <surenb@google.com> wrote:
>
> > > Patchset applies over mm-unstable after reverting v5 of this patchset [4]
> > > (currently 687e99a5faa5-905ab222508a)
> >
> > ^^^
> > Please be aware of this if trying to apply to a branch. mm-unstable
> > contains an older version of this patchset which needs to be reverted
> > before this one can be applied.
>
> I quietly updated mm-unstable to v6.  I understand that a v7 is expected.

Thanks! Yes, I'll post v7 once our discussion with Peter on
refcounting is concluded.

Could you please fix up the issue that Lokesh found in
https://lore.kernel.org/all/20241216192419.2970941-7-surenb@google.com/ ?
Instead of

+                if (!vma_start_read_locked(*dst_vmap)) {

it should be:

+                if (vma_start_read_locked(*dst_vmap)) {
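
For context: vma_start_read_locked() was changed earlier in this
series to report whether the lock was taken, returning true on
success, so the guarded block must run when the lock was actually
acquired. A minimal sketch of the corrected pattern follows; the
surrounding code and error value are illustrative, not the actual
mm/userfaultfd.c hunk:

	if (vma_start_read_locked(*dst_vmap)) {
		/* Read lock held: safe to use the dst vma. */
		err = 0;
	} else {
		/* Lock was not acquired; have the caller retry. */
		err = -EAGAIN;
	}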

That's the only critical issue found in v6 so far.
Thanks!