From patchwork Mon Aug 22 00:17:35 2022
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12950095
Date: Mon, 22 Aug 2022 00:17:35 +0000
In-Reply-To: <20220822001737.4120417-1-shakeelb@google.com>
Message-Id: <20220822001737.4120417-2-shakeelb@google.com>
References: <20220822001737.4120417-1-shakeelb@google.com>
Subject: [PATCH 1/3] mm: page_counter: remove unneeded atomic ops for low/min
From: Shakeel Butt
To: Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song
Cc: Michal Koutný, Eric Dumazet, Soheil Hassas Yeganeh, Feng Tang, Oliver Sang, Andrew Morton, lkp@lists.01.org, cgroups@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Shakeel Butt
For cgroups using low or min protections, propagate_protected_usage() was
performing an atomic xchg() unconditionally. The xchg() is only needed when
the new protection value differs from the old one, so do a cheap atomic read
first and skip the xchg() when nothing has changed.

To evaluate the impact of this optimization, on a 72 CPUs machine, we ran
the following workload in a three-level cgroup hierarchy, with min and low
set appropriately at the top level. More specifically, memory.min was set
to the size of the netperf binary and memory.low to double that.

 $ netserver -6
 # 36 instances of netperf with following params
 $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Results (average throughput of netperf):

 Without (6.0-rc1)	10482.7 Mbps
 With patch		14542.5 Mbps (38.7% improvement)

Signed-off-by: Shakeel Butt
Reported-by: kernel test robot
Acked-by: Soheil Hassas Yeganeh
Reviewed-by: Feng Tang
Acked-by: Roman Gushchin
---
 mm/page_counter.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/page_counter.c b/mm/page_counter.c
index eb156ff5d603..47711aa28161 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -17,24 +17,23 @@ static void propagate_protected_usage(struct page_counter *c,
 				      unsigned long usage)
 {
 	unsigned long protected, old_protected;
-	unsigned long low, min;
 	long delta;
 
 	if (!c->parent)
 		return;
 
-	min = READ_ONCE(c->min);
-	if (min || atomic_long_read(&c->min_usage)) {
-		protected = min(usage, min);
+	protected = min(usage, READ_ONCE(c->min));
+	old_protected = atomic_long_read(&c->min_usage);
+	if (protected != old_protected) {
 		old_protected = atomic_long_xchg(&c->min_usage, protected);
 		delta = protected - old_protected;
 		if (delta)
 			atomic_long_add(delta, &c->parent->children_min_usage);
 	}
 
-	low = READ_ONCE(c->low);
-	if (low || atomic_long_read(&c->low_usage)) {
-		protected = min(usage, low);
+	protected = min(usage, READ_ONCE(c->low));
+	old_protected = atomic_long_read(&c->low_usage);
+	if (protected != old_protected) {
 		old_protected = atomic_long_xchg(&c->low_usage, protected);
 		delta = protected - old_protected;
 		if (delta)