[v5,0/6] move per-vma lock into vm_area_struct

Message ID: 20241206225204.4008261-1-surenb@google.com

Message

Suren Baghdasaryan Dec. 6, 2024, 10:51 p.m. UTC
Back when per-vma locks were introduced, vm_lock was moved out of
vm_area_struct in [1] because of a performance regression caused by
false cacheline sharing. Recent investigation [2] revealed that the
regression is limited to the rather old Broadwell microarchitecture,
and even there it can be mitigated by disabling adjacent-cacheline
prefetching; see [3].
Splitting a single logical structure into multiple ones leads to more
complicated management, extra pointer dereferences and overall less
maintainable code. When the split-away part is a lock, it complicates
things even further. With no performance benefit, there is no reason to
keep this split. Merging vm_lock back into vm_area_struct also allows
vm_area_struct to use SLAB_TYPESAFE_BY_RCU later in this patchset.
This patchset:
1. moves vm_lock back into vm_area_struct, aligning it at a cacheline
boundary and making the slab cache cacheline-aligned to minimize
cacheline sharing (a rough sketch of the resulting layout is below);
2. changes vm_area_struct initialization to mark a new vma as detached
until it is inserted into the vma tree;
3. changes the vm_area_struct cache to SLAB_TYPESAFE_BY_RCU to allow
object reuse and to minimize call_rcu() calls. To avoid bloating
vm_area_struct, a unioned freeptr_t field is introduced, and a separate
patch allows freeptr_offset to be used together with a ctor.
Pagefault microbenchmarks show no noticeable performance change.
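
A minimal sketch of the shape this gives vm_area_struct. The field
names, their placement and the vma_lock type are illustrative
assumptions drawn from the description above, not the literal diff:

struct vm_area_struct {
	/* ... hot fields: vm_start, vm_end, vm_mm, ... */

	union {
		/*
		 * Fields that are dead once the vma is freed can share
		 * space with the slab freelist pointer, so the
		 * SLAB_TYPESAFE_BY_RCU conversion does not grow the
		 * structure.
		 */
		freeptr_t vm_freeptr;		/* assumed field name */
	};

	/* Set at allocation, cleared once inserted into the vma tree. */
	bool detached;

	/*
	 * Keeping the lock on its own cacheline (and allocating from a
	 * cacheline-aligned cache) avoids the false sharing that
	 * originally motivated the split in [1].
	 */
	struct vma_lock vm_lock ____cacheline_aligned_in_smp;
};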

Changes since v4 [4]:
- Added SOBs, per Lorenzo Stoakes and Shakeel Butt;
- Changed vma_clear() and vma_copy() to set the required vma members
individually, per Matthew Wilcox;
- Added comments in vma_start_read() about the new false-locked result
possibilities, per Vlastimil Babka;
- Added freeptr_t back into vm_area_struct, as it is harmless and can
be used later to shrink the structure, per Vlastimil Babka;
- Fixed the race in vm_area_alloc() where vma->detached was temporarily
reset by memcpy() before being set back, per Vlastimil Babka;
- Added a patch allowing freeptr_offset to be used with a ctor (see the
sketch after this list), per Vlastimil Babka.
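
To make that last item concrete, here is a hedged sketch of what the
cache setup could look like once freeptr_offset and a ctor may coexist,
using the kmem_cache_args interface. vm_area_ctor(), vm_freeptr,
vma_cache_init() and the exact flag set are assumptions for
illustration, not the literal patch:

static void vm_area_ctor(void *data)
{
	struct vm_area_struct *vma = data;

	/*
	 * Constructors run when a slab page is populated, not on every
	 * allocation, so only state that must stay valid across
	 * SLAB_TYPESAFE_BY_RCU reuse belongs here.
	 */
	init_rwsem(&vma->vm_lock.lock);	/* assumes rw_semaphore-based vma_lock */
}

void __init vma_cache_init(void)	/* hypothetical init hook */
{
	struct kmem_cache_args args = {
		.ctor = vm_area_ctor,
		/* Reuse the unioned field instead of growing the object. */
		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
		.use_freeptr_offset = true,
	};

	vm_area_cachep = kmem_cache_create("vm_area_struct",
				sizeof(struct vm_area_struct), &args,
				SLAB_HWCACHE_ALIGN | SLAB_PANIC |
				SLAB_TYPESAFE_BY_RCU | SLAB_ACCOUNT);
}

Because SLAB_TYPESAFE_BY_RCU lets an object be freed and reused while
an RCU reader still holds a pointer to it, vma_start_read() must
revalidate the vma after taking the lock and can now see a false
"locked" result for a reused vma, which is what the added comments
describe; such a reader simply falls back to the mmap_lock path.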

The patchset applies over linux-next (the vm_lock change [5] is not yet
in the mm tree).

[1] https://lore.kernel.org/all/20230227173632.3292573-34-surenb@google.com/
[2] https://lore.kernel.org/all/ZsQyI%2F087V34JoIt@xsang-OptiPlex-9020/
[3] https://lore.kernel.org/all/CAJuCfpEisU8Lfe96AYJDZ+OM4NoPmnw9bP53cT_kbfP_pR+-2g@mail.gmail.com/
[4] https://lore.kernel.org/all/20241120000826.335387-1-surenb@google.com/
[5] https://lore.kernel.org/all/20241122174416.1367052-2-surenb@google.com/

Suren Baghdasaryan (6):
  mm: introduce vma_start_read_locked{_nested} helpers
  mm: move per-vma lock into vm_area_struct
  mm: mark vma as detached until it's added into vma tree
  mm: make vma cache SLAB_TYPESAFE_BY_RCU
  mm/slab: allow freeptr_offset to be used with ctor
  docs/mm: document latest changes to vm_lock

 Documentation/mm/process_addrs.rst |  10 +-
 include/linux/mm.h                 | 107 ++++++++++++++----
 include/linux/mm_types.h           |  16 ++-
 include/linux/slab.h               |  11 +-
 kernel/fork.c                      | 168 ++++++++++++++++++++---------
 mm/memory.c                        |  17 ++-
 mm/slub.c                          |   2 +-
 mm/userfaultfd.c                   |  22 +---
 mm/vma.c                           |   8 +-
 mm/vma.h                           |   2 +
 tools/testing/vma/vma_internal.h   |  55 ++++------
 11 files changed, 267 insertions(+), 151 deletions(-)


base-commit: ebe1b11614e079c5e366ce9bd3c8f44ca0fbcc1b

Comments

Andrew Morton Dec. 7, 2024, 4:29 a.m. UTC | #1
On Fri,  6 Dec 2024 14:51:57 -0800 Suren Baghdasaryan <surenb@google.com> wrote:

> Back when per-vma locks were introduced, vm_lock was moved out of
> vm_area_struct in [1] because of a performance regression caused by
> false cacheline sharing. Recent investigation [2] revealed that the
> regression is limited to the rather old Broadwell microarchitecture,
> and even there it can be mitigated by disabling adjacent-cacheline
> prefetching; see [3].
> 
> ...
>
> The patchset applies over linux-next (the vm_lock change [5] is not yet
> in the mm tree).
> 
> ...
>
> [5] https://lore.kernel.org/all/20241122174416.1367052-2-surenb@google.com/

Well that's awkward.  I added the "seqlock: add raw_seqcount_try_begin"
series to mm.git.  Peter, please drop your copy from linux-next?