From patchwork Fri Dec 10 02:06:32 2021
X-Patchwork-Submitter: Waiman Long
X-Patchwork-Id: 12668537
From: Waiman Long <longman@redhat.com>
To: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
    linux-mm@kvack.org, Sebastian Andrzej Siewior, Thomas Gleixner,
    Waiman Long
Subject: [PATCH] mm/memcg: Properly handle memcg_stock access for PREEMPT_RT
Date: Thu, 9 Dec 2021 21:06:32 -0500
Message-Id: <20211210020632.150769-1-longman@redhat.com>

Direct calls to local_irq_{save/restore}() and preempt_{enable/disable}()
are not appropriate for PREEMPT_RT. To provide better PREEMPT_RT support,
change local_irq_save()/local_irq_restore() to
local_lock_irqsave()/local_unlock_irqrestore() and add a local_lock_t to
struct memcg_stock_pcp. Also disable the task and interrupt context
optimization for obj_stock, as there is no performance gain in the
PREEMPT_RT case; the task obj_stock is still present there but remains
unused.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 mm/memcontrol.c | 41 ++++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6863a834ed42..c984d3054478 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2109,6 +2109,7 @@ struct obj_stock {
 };
 
 struct memcg_stock_pcp {
+	local_lock_t lock;
 	struct mem_cgroup *cached; /* this never be root cgroup */
 	unsigned int nr_pages;
 	struct obj_stock task_obj;
@@ -2147,29 +2148,28 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
  * The task context object stock can be accessed by disabling preemption only
  * which is cheap in non-preempt kernel. The interrupt context object stock
  * can only be accessed after disabling interrupt. User context code can
  * access interrupt object stock, but not vice versa.
+ *
+ * This task and interrupt context optimization is disabled for PREEMPT_RT
+ * as there is no performance gain in this case.
  */
 static inline struct obj_stock *get_obj_stock(unsigned long *pflags)
 {
-	struct memcg_stock_pcp *stock;
-
-	if (likely(in_task())) {
+	if (likely(in_task()) && !IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		*pflags = 0UL;
 		preempt_disable();
-		stock = this_cpu_ptr(&memcg_stock);
-		return &stock->task_obj;
+		return this_cpu_ptr(&memcg_stock.task_obj);
 	}
-	local_irq_save(*pflags);
-	stock = this_cpu_ptr(&memcg_stock);
-	return &stock->irq_obj;
+	local_lock_irqsave(&memcg_stock.lock, *pflags);
+	return this_cpu_ptr(&memcg_stock.irq_obj);
 }
 
 static inline void put_obj_stock(unsigned long flags)
 {
-	if (likely(in_task()))
+	if (likely(in_task()) && !IS_ENABLED(CONFIG_PREEMPT_RT))
 		preempt_enable();
 	else
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&memcg_stock.lock, flags);
 }
 
 /**
@@ -2192,7 +2192,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	if (nr_pages > MEMCG_CHARGE_BATCH)
 		return ret;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&memcg_stock.lock, flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
@@ -2200,7 +2200,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 		ret = true;
 	}
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&memcg_stock.lock, flags);
 
 	return ret;
 }
@@ -2236,7 +2236,7 @@ static void drain_local_stock(struct work_struct *dummy)
 	 * drain_stock races is that we always operate on local CPU stock
 	 * here with IRQ disabled
 	 */
-	local_irq_save(flags);
+	local_lock_irqsave(&memcg_stock.lock, flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	drain_obj_stock(&stock->irq_obj);
@@ -2245,7 +2245,7 @@ static void drain_local_stock(struct work_struct *dummy)
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&memcg_stock.lock, flags);
 }
 
 /*
@@ -2257,7 +2257,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	struct memcg_stock_pcp *stock;
 	unsigned long flags;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&memcg_stock.lock, flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
 	if (stock->cached != memcg) { /* reset if necessary */
@@ -2270,7 +2270,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 	if (stock->nr_pages > MEMCG_CHARGE_BATCH)
 		drain_stock(stock);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&memcg_stock.lock, flags);
 }
 
 /*
@@ -7059,9 +7059,12 @@ static int __init mem_cgroup_init(void)
 	cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
 				  memcg_hotplug_cpu_dead);
 
-	for_each_possible_cpu(cpu)
-		INIT_WORK(&per_cpu_ptr(&memcg_stock, cpu)->work,
-			  drain_local_stock);
+	for_each_possible_cpu(cpu) {
+		struct memcg_stock_pcp *stock = per_cpu_ptr(&memcg_stock, cpu);
+
+		INIT_WORK(&stock->work, drain_local_stock);
+		local_lock_init(&stock->lock);
+	}
 
 	for_each_node(node) {
 		struct mem_cgroup_tree_per_node *rtpn;
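
For readers unfamiliar with the local_lock API used above, here is a
minimal, self-contained sketch of the pattern this patch applies to
memcg_stock. The struct and function names (pcp_counter,
pcp_counter_inc) are hypothetical, for illustration only; local_lock_t,
INIT_LOCAL_LOCK(), local_lock_irqsave() and local_unlock_irqrestore()
are the real APIs from <linux/local_lock.h>. On a !PREEMPT_RT kernel
local_lock_irqsave() maps to local_irq_save() (plus lockdep
annotations), so the fast path is unchanged; on PREEMPT_RT it takes a
per-CPU spinlock instead, keeping the critical section preemptible
while still serializing access to the per-CPU data.

#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU counter guarded by a local_lock_t. */
struct pcp_counter {
	local_lock_t lock;	/* protects count */
	unsigned long count;
};

static DEFINE_PER_CPU(struct pcp_counter, pcp_counter) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void pcp_counter_inc(void)
{
	unsigned long flags;

	/*
	 * Take the per-CPU lock first, then dereference the per-CPU
	 * pointer, mirroring consume_stock()/refill_stock() above.
	 */
	local_lock_irqsave(&pcp_counter.lock, flags);
	this_cpu_ptr(&pcp_counter)->count++;
	local_unlock_irqrestore(&pcp_counter.lock, flags);
}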