Message ID | 20241120000826.335387-5-surenb@google.com (mailing list archive)
---|---
State | New
Series | move per-vma lock into vm_area_struct
On Tue, Nov 19, 2024 at 04:08:25PM -0800, Suren Baghdasaryan wrote:
> +static inline void vma_clear(struct vm_area_struct *vma)
> +{
> +	/* Preserve vma->vm_lock */
> +	memset(vma, 0, VMA_BEFORE_LOCK);
> +	memset(VMA_LOCK_END(vma), 0, VMA_AFTER_LOCK);
> +}

This isn't how you're supposed to handle constructors. You've fixed
the immediate problem rather than writing the code in the intended style.

> +static void vm_area_ctor(void *data)
> +{
> +	vma_lock_init(data);
> +}

After the ctor has run, the object should be in the same state as
it is after it's freed. If you want to memset the entire thing
then you can do it in the ctor. But there should be no need to
do it in vma_init().

And there's lots of things you can move from vma_init() to the ctor.
For example, at free time, anon_vma_chain should be an empty list.
So if you init it in the ctor, you can avoid doing it in vma_init().

I'd suggest that vma_numab_state_free() should be the place which
sets vma->numab_state to NULL and we can delete vma_numab_state_init()
entirely.
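[For readers skimming the archive: the constructor discipline described above is the long-standing slab rule that the state established by the ctor must also hold at free time, so objects coming back out of the cache never need that state re-established. A minimal sketch of the pattern, using a made-up struct foo rather than vm_area_struct:]

	#include <linux/slab.h>
	#include <linux/spinlock.h>
	#include <linux/list.h>
	#include <linux/errno.h>
	#include <linux/init.h>

	/* Illustrative object only: shows the shape of the rule, not vma code. */
	struct foo {
		spinlock_t lock;	/* set up once per slab object, by the ctor */
		struct list_head list;	/* empty after the ctor and again after free */
		void *payload;		/* NULL after the ctor and again after free */
	};

	static struct kmem_cache *foo_cachep;

	static void foo_ctor(void *data)
	{
		struct foo *f = data;

		spin_lock_init(&f->lock);
		INIT_LIST_HEAD(&f->list);
		f->payload = NULL;
	}

	static void foo_free(struct foo *f)
	{
		/* Restore ctor-time state so the next allocation can skip it. */
		WARN_ON_ONCE(!list_empty(&f->list));
		f->payload = NULL;
		kmem_cache_free(foo_cachep, f);
	}

	static int __init foo_cache_init(void)
	{
		foo_cachep = kmem_cache_create("foo", sizeof(struct foo), 0,
					       SLAB_HWCACHE_ALIGN, foo_ctor);
		return foo_cachep ? 0 : -ENOMEM;
	}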
On Tue, Nov 19, 2024 at 8:36 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Nov 19, 2024 at 04:08:25PM -0800, Suren Baghdasaryan wrote:
> > +static inline void vma_clear(struct vm_area_struct *vma)
> > +{
> > +	/* Preserve vma->vm_lock */
> > +	memset(vma, 0, VMA_BEFORE_LOCK);
> > +	memset(VMA_LOCK_END(vma), 0, VMA_AFTER_LOCK);
> > +}
>
> This isn't how you're supposed to handle constructors. You've fixed
> the immediate problem rather than writing the code in the intended style.

Yeah, I don't like this myself but the only alternative I can think of
is to set the struct members individually.

> > +static void vm_area_ctor(void *data)
> > +{
> > +	vma_lock_init(data);
> > +}
>
> After the ctor has run, the object should be in the same state as
> it is after it's freed. If you want to memset the entire thing
> then you can do it in the ctor. But there should be no need to
> do it in vma_init().

IIUC, your suggestion is to memset() the vma and initialize vm_lock
inside the ctor. Then when it's time to free the vma, we reset all
members except vm_lock before freeing the vma. As you mention later,
members like anon_vma_chain, which are already clear, also won't need
to be reset at this point. Am I understanding your proposal correctly?
BTW, if so, then vma_copy() will have to also copy vma members
individually.

> And there's lots of things you can move from vma_init() to the ctor.
> For example, at free time, anon_vma_chain should be an empty list.
> So if you init it in the ctor, you can avoid doing it in vma_init().

True.

> I'd suggest that vma_numab_state_free() should be the place which
> sets vma->numab_state to NULL and we can delete vma_numab_state_init()
> entirely.

Sounds good to me.

Please confirm if I correctly got your idea and I'll update this patch.
Thanks for the feedback!
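[To visualize the restated proposal: the ctor memsets the object and initializes vm_lock once, and the free path puts everything else back into that state so vma_init() no longer has to. A rough, hypothetical sketch of the free side as it might appear in kernel/fork.c — vm_area_free_final() is an invented name, and the exact set of members to reset was still under discussion in the thread:]

	/* Hypothetical free path: leave the object in ctor state before it
	 * goes back to the cache, so allocation does not re-initialize it. */
	static void vm_area_free_final(struct vm_area_struct *vma)
	{
		/*
		 * Per the suggestion above, vma_numab_state_free() itself would
		 * NULL vma->numab_state, making a separate vma_numab_state_init()
		 * on the allocation side unnecessary.
		 */
		vma_numab_state_free(vma);
		free_anon_vma_name(vma);

		/*
		 * anon_vma_chain is expected to be empty here, matching what the
		 * ctor set up, so there is nothing to re-init on the next alloc.
		 */
		VM_WARN_ON_ONCE(!list_empty(&vma->anon_vma_chain));

		/*
		 * vm_lock is deliberately left alone: it was initialized once by
		 * the ctor and stays valid across reuse.
		 */
		kmem_cache_free(vm_area_cachep, vma);
	}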
On 11/20/24 01:08, Suren Baghdasaryan wrote:
> To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that
> object reuse before RCU grace period is over will be detected inside
> lock_vma_under_rcu().
> lock_vma_under_rcu() enters RCU read section, finds the vma at the
> given address, locks the vma and checks if it got detached or remapped
> to cover a different address range. These last checks are there
> to ensure that the vma was not modified after we found it but before
> locking it.
> vma reuse introduces several new possibilities:
> 1. vma can be reused after it was found but before it is locked;
> 2. vma can be reused and reinitialized (including changing its vm_mm)
> while being locked in vma_start_read();
> 3. vma can be reused and reinitialized after it was found but before
> it is locked, then attached at a new address or to a new mm while being
> read-locked;
> For case #1 current checks will help detecting cases when:
> - vma was reused but not yet added into the tree (detached check)
> - vma was reused at a different address range (address check);
> We are missing the check for vm_mm to ensure the reused vma was not
> attached to a different mm. This patch adds the missing check.
> For case #2, we pass mm to vma_start_read() to prevent access to
> unstable vma->vm_mm.

So we may now be looking at different mm's mm_lock_seq.sequence and return a
false unlocked result, right? I guess the mm validation in
lock_vma_under_rcu() handles that, but maybe the comment of vma_start_read()
needs updating.

> For case #3, we ensure the order in which vma->detached flag and
> vm_start/vm_end/vm_mm are set and checked. vma gets attached after
> vm_start/vm_end/vm_mm were set and lock_vma_under_rcu() should check
> vma->detached before checking vm_start/vm_end/vm_mm. This is required
> because attaching vma happens without vma write-lock, as opposed to
> vma detaching, which requires vma write-lock. This patch adds memory
> barriers inside is_vma_detached() and vma_mark_attached() needed to
> order reads and writes to vma->detached vs vm_start/vm_end/vm_mm.
> After these provisions, SLAB_TYPESAFE_BY_RCU is added to vm_area_cachep.
> This will facilitate vm_area_struct reuse and will minimize the number
> of call_rcu() calls.
> Adding a freeptr_t into vm_area_struct (unioned with vm_start/vm_end)
> could be used to avoids bloating the structure, however currently
> custom free pointers are not supported in combination with a ctor
> (see the comment for kmem_cache_args.freeptr_offset).

I think there's nothing fundamental preventing to support that, there was
just no user of it. We can do it later.

> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -436,6 +436,11 @@ static struct kmem_cache *vm_area_cachep;
>  /* SLAB cache for mm_struct structures (tsk->mm) */
>  static struct kmem_cache *mm_cachep;
>
> +static void vm_area_ctor(void *data)
> +{
> +	vma_lock_init(data);
> +}
> +
>  struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
>  {
>  	struct vm_area_struct *vma;
> @@ -462,8 +467,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
>  	 * orig->shared.rb may be modified concurrently, but the clone
>  	 * will be reinitialized.
>  	 */
> -	data_race(memcpy(new, orig, sizeof(*new)));
> -	vma_lock_init(new);
> +	vma_copy(new, orig);
>  	INIT_LIST_HEAD(&new->anon_vma_chain);
>  #ifdef CONFIG_PER_VMA_LOCK
>  	/* vma is not locked, can't use vma_mark_detached() */

Here we mark it detached but we might have already copied it as attached and
confused a reader?

I think this will be covered by what you said in reply to willy:
"vma_copy() will have to also copy vma members individually."

> @@ -475,32 +479,37 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
>  	return new;
>  }
>
On Wed, Nov 20, 2024 at 2:16 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 11/20/24 01:08, Suren Baghdasaryan wrote:
> > To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that
> > object reuse before RCU grace period is over will be detected inside
> > lock_vma_under_rcu().
> > lock_vma_under_rcu() enters RCU read section, finds the vma at the
> > given address, locks the vma and checks if it got detached or remapped
> > to cover a different address range. These last checks are there
> > to ensure that the vma was not modified after we found it but before
> > locking it.
> > vma reuse introduces several new possibilities:
> > 1. vma can be reused after it was found but before it is locked;
> > 2. vma can be reused and reinitialized (including changing its vm_mm)
> > while being locked in vma_start_read();
> > 3. vma can be reused and reinitialized after it was found but before
> > it is locked, then attached at a new address or to a new mm while being
> > read-locked;
> > For case #1 current checks will help detecting cases when:
> > - vma was reused but not yet added into the tree (detached check)
> > - vma was reused at a different address range (address check);
> > We are missing the check for vm_mm to ensure the reused vma was not
> > attached to a different mm. This patch adds the missing check.
> > For case #2, we pass mm to vma_start_read() to prevent access to
> > unstable vma->vm_mm.
>
> So we may now be looking at different mm's mm_lock_seq.sequence and return a
> false unlocked result, right? I guess the mm validation in
> lock_vma_under_rcu() handles that, but maybe the comment of vma_start_read()
> needs updating.

Correct. I'll add a comment about this.

> > For case #3, we ensure the order in which vma->detached flag and
> > vm_start/vm_end/vm_mm are set and checked. vma gets attached after
> > vm_start/vm_end/vm_mm were set and lock_vma_under_rcu() should check
> > vma->detached before checking vm_start/vm_end/vm_mm. This is required
> > because attaching vma happens without vma write-lock, as opposed to
> > vma detaching, which requires vma write-lock. This patch adds memory
> > barriers inside is_vma_detached() and vma_mark_attached() needed to
> > order reads and writes to vma->detached vs vm_start/vm_end/vm_mm.
> > After these provisions, SLAB_TYPESAFE_BY_RCU is added to vm_area_cachep.
> > This will facilitate vm_area_struct reuse and will minimize the number
> > of call_rcu() calls.
> > Adding a freeptr_t into vm_area_struct (unioned with vm_start/vm_end)
> > could be used to avoids bloating the structure, however currently
> > custom free pointers are not supported in combination with a ctor
> > (see the comment for kmem_cache_args.freeptr_offset).
>
> I think there's nothing fundamental preventing to support that, there was
> just no user of it. We can do it later.

Oh, ok. I can add it back so that we have one user and then when the
mechanism is implemented it can be used for testing. Adding freeptr_t
has no negative effects and will reduce later churn.

>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>

> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -436,6 +436,11 @@ static struct kmem_cache *vm_area_cachep;
> >  /* SLAB cache for mm_struct structures (tsk->mm) */
> >  static struct kmem_cache *mm_cachep;
> >
> > +static void vm_area_ctor(void *data)
> > +{
> > +	vma_lock_init(data);
> > +}
> > +
> >  struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> >  {
> >  	struct vm_area_struct *vma;
> > @@ -462,8 +467,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> >  	 * orig->shared.rb may be modified concurrently, but the clone
> >  	 * will be reinitialized.
> >  	 */
> > -	data_race(memcpy(new, orig, sizeof(*new)));
> > -	vma_lock_init(new);
> > +	vma_copy(new, orig);
> >  	INIT_LIST_HEAD(&new->anon_vma_chain);
> >  #ifdef CONFIG_PER_VMA_LOCK
> >  	/* vma is not locked, can't use vma_mark_detached() */
>
> Here we mark it detached but we might have already copied it as attached and
> confused a reader?

Very true. Thanks for catching this one!

>
> I think this will be covered by what you said in reply to willy:
> "vma_copy() will have to also copy vma members individually."

Yes, I think so. vma_copy() will need to copy most but not all members.
vma->detached will be among those not copied.
Thanks!

>
> > @@ -475,32 +479,37 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> >  	return new;
> >  }
> >
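[To make the agreed direction concrete: vma_copy() would copy members individually, leaving both the lock (ctor-initialized) and the detached flag (managed only by vma_mark_attached()/vma_mark_detached()) untouched. The sketch below is an abridged illustration of that idea, not the code that eventually landed; the member list is deliberately incomplete:]

	/* Abridged, hypothetical member-wise copy: a recycled object can never
	 * be observed as attached before it is fully re-initialized, because
	 * ->detached is simply never copied. Members that may change under a
	 * concurrent writer would still need data_race()/READ_ONCE() care. */
	static inline void vma_copy(struct vm_area_struct *new,
				    struct vm_area_struct *orig)
	{
		new->vm_mm		= orig->vm_mm;
		new->vm_start		= orig->vm_start;
		new->vm_end		= orig->vm_end;
		new->vm_page_prot	= orig->vm_page_prot;
		new->vm_ops		= orig->vm_ops;
		new->vm_pgoff		= orig->vm_pgoff;
		new->vm_file		= orig->vm_file;
		new->vm_private_data	= orig->vm_private_data;
		vm_flags_init(new, orig->vm_flags);	/* vm_flags can't be assigned directly */
		/* ... remaining members copied the same way ... */

		/*
		 * Intentionally not copied:
		 *   new->vm_lock  - initialized once by the ctor, reused as-is
		 *   new->detached - set only by vma_mark_attached()/_detached()
		 */
	}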
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dd1b6190df28..2a4794b7a513 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -257,7 +257,7 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *);
 struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
 void vm_area_free(struct vm_area_struct *);
 /* Use only if VMA has no other users */
-void __vm_area_free(struct vm_area_struct *vma);
+void vm_area_free_unreachable(struct vm_area_struct *vma);
 
 #ifndef CONFIG_MMU
 extern struct rb_root nommu_region_tree;
@@ -690,12 +690,32 @@ static inline void vma_lock_init(struct vm_area_struct *vma)
 	vma->vm_lock_seq = UINT_MAX;
 }
 
+#define VMA_BEFORE_LOCK offsetof(struct vm_area_struct, vm_lock)
+#define VMA_LOCK_END(vma) \
+	(((void *)(vma)) + offsetofend(struct vm_area_struct, vm_lock))
+#define VMA_AFTER_LOCK \
+	(sizeof(struct vm_area_struct) - offsetofend(struct vm_area_struct, vm_lock))
+
+static inline void vma_clear(struct vm_area_struct *vma)
+{
+	/* Preserve vma->vm_lock */
+	memset(vma, 0, VMA_BEFORE_LOCK);
+	memset(VMA_LOCK_END(vma), 0, VMA_AFTER_LOCK);
+}
+
+static inline void vma_copy(struct vm_area_struct *new, struct vm_area_struct *orig)
+{
+	/* Preserve vma->vm_lock */
+	data_race(memcpy(new, orig, VMA_BEFORE_LOCK));
+	data_race(memcpy(VMA_LOCK_END(new), VMA_LOCK_END(orig), VMA_AFTER_LOCK));
+}
+
 /*
  * Try to read-lock a vma. The function is allowed to occasionally yield false
  * locked result to avoid performance overhead, in which case we fall back to
  * using mmap_lock. The function should never yield false unlocked result.
  */
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	/*
 	 * Check before locking. A race might cause false locked result.
@@ -704,7 +724,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * we don't rely on for anything - the mm_lock_seq read against which we
 	 * need ordering is below.
 	 */
-	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
+	if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
 		return false;
 
 	if (unlikely(down_read_trylock(&vma->vm_lock.lock) == 0))
@@ -721,7 +741,7 @@ static inline bool vma_start_read(struct vm_area_struct *vma)
 	 * after it has been unlocked.
 	 * This pairs with RELEASE semantics in vma_end_write_all().
 	 */
-	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&vma->vm_mm->mm_lock_seq))) {
+	if (unlikely(vma->vm_lock_seq == raw_read_seqcount(&mm->mm_lock_seq))) {
 		up_read(&vma->vm_lock.lock);
 		return false;
 	}
@@ -810,7 +830,15 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
 
 static inline void vma_mark_attached(struct vm_area_struct *vma)
 {
-	vma->detached = false;
+	/*
+	 * This pairs with smp_rmb() inside is_vma_detached().
+	 * vma is marked attached after all vma modifications are done and it
+	 * got added into the vma tree. All prior vma modifications should be
+	 * made visible before marking the vma attached.
+	 */
+	smp_wmb();
+	/* This pairs with READ_ONCE() in is_vma_detached(). */
+	WRITE_ONCE(vma->detached, false);
 }
 
 static inline void vma_mark_detached(struct vm_area_struct *vma)
@@ -822,7 +850,18 @@ static inline void vma_mark_detached(struct vm_area_struct *vma)
 
 static inline bool is_vma_detached(struct vm_area_struct *vma)
 {
-	return vma->detached;
+	bool detached;
+
+	/* This pairs with WRITE_ONCE() in vma_mark_attached(). */
+	detached = READ_ONCE(vma->detached);
+	/*
+	 * This pairs with smp_wmb() inside vma_mark_attached() to ensure
+	 * vma->detached is read before vma attributes read later inside
+	 * lock_vma_under_rcu().
+	 */
+	smp_rmb();
+
+	return detached;
 }
 
 static inline void release_fault_lock(struct vm_fault *vmf)
@@ -847,7 +886,11 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_lock_init(struct vm_area_struct *vma) {}
-static inline bool vma_start_read(struct vm_area_struct *vma)
+static inline void vma_clear(struct vm_area_struct *vma)
+		{ memset(vma, 0, sizeof(*vma)); }
+static inline void vma_copy(struct vm_area_struct *new, struct vm_area_struct *orig)
+		{ data_race(memcpy(new, orig, sizeof(*new))); }
+static inline bool vma_start_read(struct mm_struct *mm, struct vm_area_struct *vma)
 		{ return false; }
 static inline void vma_end_read(struct vm_area_struct *vma) {}
 static inline void vma_start_write(struct vm_area_struct *vma) {}
@@ -883,7 +926,7 @@ extern const struct vm_operations_struct vma_dummy_vm_ops;
 
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 {
-	memset(vma, 0, sizeof(*vma));
+	vma_clear(vma);
 	vma->vm_mm = mm;
 	vma->vm_ops = &vma_dummy_vm_ops;
 	INIT_LIST_HEAD(&vma->anon_vma_chain);
@@ -892,7 +935,6 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->detached = true;
 #endif
 	vma_numab_state_init(vma);
-	vma_lock_init(vma);
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5c4bfdcfac72..8f6b0c935c2b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -667,15 +667,10 @@ struct vma_numab_state {
 struct vm_area_struct {
 	/* The first cache line has the info for VMA tree walking. */
 
-	union {
-		struct {
-			/* VMA covers [vm_start; vm_end) addresses within mm */
-			unsigned long vm_start;
-			unsigned long vm_end;
-		};
-#ifdef CONFIG_PER_VMA_LOCK
-		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
-#endif
+	struct {
+		/* VMA covers [vm_start; vm_end) addresses within mm */
+		unsigned long vm_start;
+		unsigned long vm_end;
 	};
 
 	/*
diff --git a/kernel/fork.c b/kernel/fork.c
index f0cec673583c..76c68b041f8a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -436,6 +436,11 @@ static struct kmem_cache *vm_area_cachep;
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
+static void vm_area_ctor(void *data)
+{
+	vma_lock_init(data);
+}
+
 struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
@@ -462,8 +467,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	 * orig->shared.rb may be modified concurrently, but the clone
 	 * will be reinitialized.
 	 */
-	data_race(memcpy(new, orig, sizeof(*new)));
-	vma_lock_init(new);
+	vma_copy(new, orig);
 	INIT_LIST_HEAD(&new->anon_vma_chain);
 #ifdef CONFIG_PER_VMA_LOCK
 	/* vma is not locked, can't use vma_mark_detached() */
@@ -475,32 +479,37 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	return new;
 }
 
-void __vm_area_free(struct vm_area_struct *vma)
+static void __vm_area_free(struct vm_area_struct *vma, bool unreachable)
 {
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * With SLAB_TYPESAFE_BY_RCU, vma can be reused and we need
+	 * vma->detached to be set before vma is returned into the cache.
+	 * This way reused object won't be used by readers until it's
+	 * initialized and reattached.
+	 * If vma is unreachable, there can be no other users and we
+	 * can set vma->detached directly with no risk of a race.
+	 * If vma is reachable, then it should have been already detached
+	 * under vma write-lock or it was never attached.
+	 */
+	if (unreachable)
+		vma->detached = true;
+	else
+		VM_BUG_ON_VMA(!is_vma_detached(vma), vma);
+#endif
 	vma_numab_state_free(vma);
 	free_anon_vma_name(vma);
 	kmem_cache_free(vm_area_cachep, vma);
 }
 
-#ifdef CONFIG_PER_VMA_LOCK
-static void vm_area_free_rcu_cb(struct rcu_head *head)
+void vm_area_free(struct vm_area_struct *vma)
 {
-	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
-						  vm_rcu);
-
-	/* The vma should not be locked while being destroyed. */
-	VM_BUG_ON_VMA(rwsem_is_locked(&vma->vm_lock.lock), vma);
-	__vm_area_free(vma);
+	__vm_area_free(vma, false);
 }
-#endif
 
-void vm_area_free(struct vm_area_struct *vma)
+void vm_area_free_unreachable(struct vm_area_struct *vma)
 {
-#ifdef CONFIG_PER_VMA_LOCK
-	call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
-#else
-	__vm_area_free(vma);
-#endif
+	__vm_area_free(vma, true);
 }
 
 static void account_kernel_stack(struct task_struct *tsk, int account)
@@ -3135,9 +3144,11 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-	vm_area_cachep = KMEM_CACHE(vm_area_struct,
-			SLAB_HWCACHE_ALIGN|SLAB_NO_MERGE|SLAB_PANIC|
-			SLAB_ACCOUNT);
+	vm_area_cachep = kmem_cache_create("vm_area_struct",
+			sizeof(struct vm_area_struct), 0,
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
+			SLAB_ACCOUNT, vm_area_ctor);
+
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/mm/memory.c b/mm/memory.c
index d0197a0c0996..b5fbc71b46bd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6275,10 +6275,16 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma)
 		goto inval;
 
-	if (!vma_start_read(vma))
+	if (!vma_start_read(mm, vma))
 		goto inval;
 
-	/* Check if the VMA got isolated after we found it */
+	/*
+	 * Check if the VMA got isolated after we found it.
+	 * Note: vma we found could have been recycled and is being reattached.
+	 * It's possible to attach a vma while it is read-locked, however a
+	 * read-locked vma can't be detached (detaching requires write-locking).
+	 * Therefore if this check passes, we have an attached and stable vma.
+	 */
 	if (is_vma_detached(vma)) {
 		vma_end_read(vma);
 		count_vm_vma_lock_event(VMA_LOCK_MISS);
@@ -6292,8 +6298,9 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	 * fields are accessible for RCU readers.
 	 */
 
-	/* Check since vm_start/vm_end might change before we lock the VMA */
-	if (unlikely(address < vma->vm_start || address >= vma->vm_end))
+	/* Check if the vma we locked is the right one. */
+	if (unlikely(vma->vm_mm != mm ||
+		     address < vma->vm_start || address >= vma->vm_end))
 		goto inval_end_read;
 
 	rcu_read_unlock();
diff --git a/mm/vma.c b/mm/vma.c
index 73104d434567..050b83df3df2 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -382,7 +382,7 @@ void remove_vma(struct vm_area_struct *vma, bool unreachable)
 		fput(vma->vm_file);
 	mpol_put(vma_policy(vma));
 	if (unreachable)
-		__vm_area_free(vma);
+		vm_area_free_unreachable(vma);
 	else
 		vm_area_free(vma);
 }
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 2fed366d20ef..fd668d6cafc0 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -632,14 +632,15 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void __vm_area_free(struct vm_area_struct *vma)
+static inline void vm_area_free(struct vm_area_struct *vma)
 {
 	free(vma);
 }
 
-static inline void vm_area_free(struct vm_area_struct *vma)
+static inline void vm_area_free_unreachable(struct vm_area_struct *vma)
 {
-	__vm_area_free(vma);
+	vma->detached = true;
+	vm_area_free(vma);
 }
 
 static inline void lru_add_drain(void)
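[Condensing the ordering the hunks above establish into one place may help when reading the diff: the attach side publishes the vma fields before the detached flag, and the reader in lock_vma_under_rcu() consumes them in the opposite order. This is a schematic fragment mirroring vma_mark_attached()/is_vma_detached() from the patch, with locking and error paths elided:]

	/* Attach side (writer): publish fields first, flag last. */
	vma->vm_start = start;
	vma->vm_end   = end;
	vma->vm_mm    = mm;
	/* ... insert the vma into the tree ... */
	smp_wmb();			/* order the stores above before the flag */
	WRITE_ONCE(vma->detached, false);

	/* Reader side (lock_vma_under_rcu(), after vma_start_read() succeeds): */
	if (READ_ONCE(vma->detached))
		goto fallback;		/* not attached, or recycled mid-reinit */
	smp_rmb();			/* order the flag read before the field reads */
	if (vma->vm_mm != mm ||
	    address < vma->vm_start || address >= vma->vm_end)
		goto fallback;		/* recycled for a different mm or range */
	/* Here the vma is read-locked, attached and covers 'address' in 'mm'. */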
To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that
object reuse before RCU grace period is over will be detected inside
lock_vma_under_rcu().
lock_vma_under_rcu() enters RCU read section, finds the vma at the
given address, locks the vma and checks if it got detached or remapped
to cover a different address range. These last checks are there
to ensure that the vma was not modified after we found it but before
locking it.

vma reuse introduces several new possibilities:
1. vma can be reused after it was found but before it is locked;
2. vma can be reused and reinitialized (including changing its vm_mm)
while being locked in vma_start_read();
3. vma can be reused and reinitialized after it was found but before
it is locked, then attached at a new address or to a new mm while being
read-locked;

For case #1 current checks will help detecting cases when:
- vma was reused but not yet added into the tree (detached check)
- vma was reused at a different address range (address check);
We are missing the check for vm_mm to ensure the reused vma was not
attached to a different mm. This patch adds the missing check.

For case #2, we pass mm to vma_start_read() to prevent access to
unstable vma->vm_mm.

For case #3, we ensure the order in which vma->detached flag and
vm_start/vm_end/vm_mm are set and checked. vma gets attached after
vm_start/vm_end/vm_mm were set and lock_vma_under_rcu() should check
vma->detached before checking vm_start/vm_end/vm_mm. This is required
because attaching vma happens without vma write-lock, as opposed to
vma detaching, which requires vma write-lock. This patch adds memory
barriers inside is_vma_detached() and vma_mark_attached() needed to
order reads and writes to vma->detached vs vm_start/vm_end/vm_mm.

After these provisions, SLAB_TYPESAFE_BY_RCU is added to vm_area_cachep.
This will facilitate vm_area_struct reuse and will minimize the number
of call_rcu() calls.

Adding a freeptr_t into vm_area_struct (unioned with vm_start/vm_end)
could be used to avoid bloating the structure, however currently
custom free pointers are not supported in combination with a ctor
(see the comment for kmem_cache_args.freeptr_offset).

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/mm.h               | 60 +++++++++++++++++++++++++++-----
 include/linux/mm_types.h         | 13 +++----
 kernel/fork.c                    | 53 +++++++++++++++++-----------
 mm/memory.c                      | 15 +++++---
 mm/vma.c                         |  2 +-
 tools/testing/vma/vma_internal.h |  7 ++--
 6 files changed, 103 insertions(+), 47 deletions(-)
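[On the freeptr_t remark in the last paragraph: with the kmem_cache_args interface, an in-object free pointer plus a ctor would be declared roughly as below. As the commit message notes, this combination is rejected today (see the kmem_cache_args.freeptr_offset comment), so treat this purely as a sketch of a possible future call site; the vm_freeptr field name is invented for illustration:]

	/*
	 * Hypothetical: only valid once freeptr_offset is allowed together
	 * with a constructor. 'vm_freeptr' would be a freeptr_t unioned with
	 * vm_start/vm_end in struct vm_area_struct.
	 */
	struct kmem_cache_args vma_args = {
		.ctor			= vm_area_ctor,
		.use_freeptr_offset	= true,
		.freeptr_offset		= offsetof(struct vm_area_struct, vm_freeptr),
	};

	vm_area_cachep = kmem_cache_create("vm_area_struct",
					   sizeof(struct vm_area_struct), &vma_args,
					   SLAB_HWCACHE_ALIGN|SLAB_PANIC|
					   SLAB_TYPESAFE_BY_RCU|SLAB_ACCOUNT);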