From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: cgroups@vger.kernel.org, linux-mm@kvack.org
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Michal Koutný, Peter Zijlstra, Thomas Gleixner, Vladimir Davydov, Waiman Long, Sebastian Andrzej Siewior
Subject: [PATCH 4/4] mm/memcg: Allow the task_obj optimization only on non-PREEMPTIBLE kernels.
Date: Tue, 25 Jan 2022 17:43:37 +0100
Message-Id: <20220125164337.2071854-5-bigeasy@linutronix.de>
In-Reply-To: <20220125164337.2071854-1-bigeasy@linutronix.de>
References: <20220125164337.2071854-1-bigeasy@linutronix.de>

Based on my understanding, the task_obj optimisation for in_task() only
makes sense on non-preemptible kernels, because there
preempt_disable()/preempt_enable() is optimized away. The optimisation
can therefore be restricted to !CONFIG_PREEMPTION kernels instead of
merely excluding PREEMPT_RT. With CONFIG_PREEMPT_DYNAMIC a
non-preemptible kernel can also be configured, but such kernels always
have preempt_disable()/preempt_enable() compiled in, so the optimisation
probably gains nothing there.

I ran a micro benchmark with interrupts disabled and a loop of
100,000,000 invocations of kfree(kmalloc()). Based on the results it
makes no sense to add an exception for dynamic preemption.

Restrict the optimisation to !CONFIG_PREEMPTION kernels.

Link: https://lore.kernel.org/all/YdX+INO9gQje6d0S@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
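For reference: the reasoning above rests on preempt_disable() and
preempt_enable() collapsing to a plain compiler barrier on kernels
built without preemption support. A simplified sketch modeled on
include/linux/preempt.h (the real header has more variants and debug
hooks; this assumes the common case where a !CONFIG_PREEMPTION build
also has !CONFIG_PREEMPT_COUNT):

  #ifdef CONFIG_PREEMPT_COUNT
  /* preempt-counting kernels maintain a per-CPU preemption counter */
  #define preempt_disable() \
  do { \
  	preempt_count_inc(); \
  	barrier(); \
  } while (0)
  #else
  /* no counter to maintain: just a compiler barrier, no runtime cost */
  #define preempt_disable()	barrier()
  #define preempt_enable()	barrier()
  #endif

The micro benchmark mentioned above was, in spirit, a loop of the
following shape (a hypothetical reconstruction, the actual harness was
not posted; the allocation size is arbitrary):

  unsigned long flags;
  long i;

  /* keep interrupts off so only the allocator paths are measured */
  local_irq_save(flags);
  for (i = 0; i < 100000000L; i++)
  	kfree(kmalloc(8, GFP_ATOMIC));
  local_irq_restore(flags);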
 mm/memcontrol.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2d8be88c00888..20ea8f28ad99b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2030,7 +2030,7 @@ struct memcg_stock_pcp {
 	local_lock_t stock_lock;
 	struct mem_cgroup *cached; /* this never be root cgroup */
 	unsigned int nr_pages;
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	/* Protects only task_obj */
 	local_lock_t task_obj_lock;
 	struct obj_stock task_obj;
@@ -2043,7 +2043,7 @@ struct memcg_stock_pcp {
 };
 static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock) = {
 	.stock_lock = INIT_LOCAL_LOCK(stock_lock),
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	.task_obj_lock = INIT_LOCAL_LOCK(task_obj_lock),
 #endif
 };
@@ -2132,7 +2132,7 @@ static void drain_local_stock(struct work_struct *dummy)
 	 * drain_stock races is that we always operate on local CPU stock
 	 * here with IRQ disabled
 	 */
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	local_lock(&memcg_stock.task_obj_lock);
 	old = drain_obj_stock(&this_cpu_ptr(&memcg_stock)->task_obj, NULL);
 	local_unlock(&memcg_stock.task_obj_lock);
@@ -2741,7 +2741,7 @@ static inline struct obj_stock *get_obj_stock(unsigned long *pflags,
 {
 	struct memcg_stock_pcp *stock;
 
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	if (likely(in_task())) {
 		*pflags = 0UL;
 		*stock_lock_acquried = false;
@@ -2759,7 +2759,7 @@ static inline struct obj_stock *get_obj_stock(unsigned long *pflags,
 
 static inline void put_obj_stock(unsigned long flags, bool stock_lock_acquried)
 {
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	if (likely(!stock_lock_acquried)) {
 		local_unlock(&memcg_stock.task_obj_lock);
 		return;
@@ -3177,7 +3177,7 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
 {
 	struct mem_cgroup *memcg;
 
-#ifndef CONFIG_PREEMPT_RT
+#ifndef CONFIG_PREEMPTION
 	if (in_task() && stock->task_obj.cached_objcg) {
 		memcg = obj_cgroup_memcg(stock->task_obj.cached_objcg);
 		if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))