| Message ID | 20220524060551.80037-7-songmuchun@bytedance.com (mailing list archive) |
| --- | --- |
| State | New |
| Series | Use obj_cgroup APIs to charge the LRU pages |
On Tue, May 24, 2022 at 02:05:46PM +0800, Muchun Song wrote:
> Similar to the lruvec lock, we use the same approach to make the split
> queue lock safe when LRU pages are reparented.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Please, merge this into the previous patch (like Johannes asked
for the lruvec counterpart).

And add:
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!
On Tue, May 24, 2022 at 07:54:35PM -0700, Roman Gushchin wrote:
> On Tue, May 24, 2022 at 02:05:46PM +0800, Muchun Song wrote:
> > Similar to the lruvec lock, we use the same approach to make the split
> > queue lock safe when LRU pages are reparented.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Please, merge this into the previous patch (like Johannes asked
> for the lruvec counterpart).
>

Will do in v5.

> And add:
> Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
>

Thanks Roman.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ea152bde441e..cc596034c487 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -543,9 +543,22 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock(&queue->split_queue_lock);
 
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled in the internal of spin_lock, which can serve
+	 * as RCU read-side critical sections.
+	 */
+	rcu_read_unlock();
+
 	return queue;
 }
 
@@ -554,9 +567,19 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock_irqsave(&queue->split_queue_lock, *flags);
 
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+
+	/* See the comments in folio_split_queue_lock(). */
+	rcu_read_unlock();
+
 	return queue;
 }
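For readers following the retry logic above, here is a plain userspace analogue of the same lock-then-revalidate pattern. It is only a sketch under simplifying assumptions: struct object, struct owner_queue, object_queue_lock() and reparent() are hypothetical stand-ins for the folio/memcg/deferred_split relationships, the recheck compares queue pointers rather than memcgs, and queue lifetime is ignored (the kernel patch holds rcu_read_lock() across the lookup precisely so the queue's backing memcg cannot be freed before its spinlock is taken).

```c
/*
 * Userspace sketch of the lock-then-revalidate pattern used by
 * folio_split_queue_lock() above.  All names are hypothetical
 * stand-ins; queues are never freed here, which is the lifetime
 * problem rcu_read_lock() solves in the real patch.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct owner_queue {
	pthread_spinlock_t lock;
	int nr_items;                   /* stands in for the deferred-split list */
};

struct object {
	/* queue of the owner this object currently belongs to */
	_Atomic(struct owner_queue *) queue;
};

/*
 * Lock the queue the object belongs to.  A concurrent reparent() may
 * switch obj->queue between the lookup and the lock acquisition, so
 * recheck after taking the lock and retry if the association changed,
 * mirroring the retry loop in folio_split_queue_lock().
 */
static struct owner_queue *object_queue_lock(struct object *obj)
{
	struct owner_queue *queue;

retry:
	queue = atomic_load(&obj->queue);
	pthread_spin_lock(&queue->lock);
	if (atomic_load(&obj->queue) != queue) {
		/* reparented while we were spinning; drop the stale lock */
		pthread_spin_unlock(&queue->lock);
		goto retry;
	}
	return queue;
}

/*
 * Move the object to another owner.  Taking the old queue's lock first
 * is what makes the recheck in object_queue_lock() stable once it passes.
 */
static void reparent(struct object *obj, struct owner_queue *new_queue)
{
	struct owner_queue *old = object_queue_lock(obj);

	atomic_store(&obj->queue, new_queue);
	/* a real implementation would also migrate queued work to new_queue */
	pthread_spin_unlock(&old->lock);
}

int main(void)
{
	struct owner_queue a, b;
	struct object obj;
	struct owner_queue *q;

	pthread_spin_init(&a.lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&b.lock, PTHREAD_PROCESS_PRIVATE);
	a.nr_items = b.nr_items = 0;
	atomic_init(&obj.queue, &a);

	q = object_queue_lock(&obj);    /* matches obj's owner at lock time */
	q->nr_items++;
	pthread_spin_unlock(&q->lock);

	reparent(&obj, &b);
	q = object_queue_lock(&obj);    /* now returns &b */
	printf("locked the %s queue\n", q == &b ? "new" : "old");
	pthread_spin_unlock(&q->lock);
	return 0;
}
```

As in the patch, what makes the post-lock recheck stable is that the reparenting side acquires the same per-queue lock before switching the object's owner; the RCU read section in the kernel version only has to cover the window between looking up the queue and acquiring its spinlock.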
Similar to the lruvec lock, we use the same approach to make the split
queue lock safe when LRU pages are reparented.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/huge_memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)