From patchwork Tue May 30 11:40:08 2023
X-Patchwork-Submitter: Abel Wu
X-Patchwork-Id: 13259799
X-Patchwork-Delegate: kuba@kernel.org
From: Abel Wu
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Johannes Weiner , Michal Hocko , Vladimir Davydov , Shakeel Butt , Muchun Song Cc: Simon Horman , netdev@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Abel Wu Subject: [PATCH v4 1/4] net-memcg: Fold dependency into memcg pressure cond Date: Tue, 30 May 2023 19:40:08 +0800 Message-Id: <20230530114011.13368-2-wuyun.abel@bytedance.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: <20230530114011.13368-1-wuyun.abel@bytedance.com> References: <20230530114011.13368-1-wuyun.abel@bytedance.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org The callers of mem_cgroup_under_socket_pressure() should always make sure that (mem_cgroup_sockets_enabled && sk->sk_memcg) is true. So instead of coding around all the callsites, put the dependencies into mem_cgroup_under_socket_pressure() to avoid redundancy and possibly bugs. This change might also introduce slight function call overhead *iff* the function gets expanded in the future. But for now this change doesn't make binaries different (checked by vimdiff) except the one net/ipv4/tcp_input.o (by scripts/bloat-o-meter), which is probably negligible to performance: add/remove: 0/0 grow/shrink: 1/2 up/down: 5/-5 (0) Function old new delta tcp_grow_window 573 578 +5 tcp_try_rmem_schedule 1083 1081 -2 tcp_check_space 324 321 -3 Total: Before=44647, After=44647, chg +0.00% So folding the dependencies into mem_cgroup_under_socket_pressure() is generally a good thing and provides better readablility. 
Signed-off-by: Abel Wu
---
 include/linux/memcontrol.h | 2 ++
 include/net/sock.h         | 3 +--
 include/net/tcp.h          | 3 +--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 222d7370134c..a1aead140ff8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1743,6 +1743,8 @@ void mem_cgroup_sk_alloc(struct sock *sk);
 void mem_cgroup_sk_free(struct sock *sk);
 static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 {
+	if (!mem_cgroup_sockets_enabled || !memcg)
+		return false;
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_pressure)
 		return true;
 	do {
diff --git a/include/net/sock.h b/include/net/sock.h
index 8b7ed7167243..641c9373b44b 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1414,8 +1414,7 @@ static inline bool sk_under_memory_pressure(const struct sock *sk)
 	if (!sk->sk_prot->memory_pressure)
 		return false;
 
-	if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
-	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
+	if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
 		return true;
 
 	return !!*sk->sk_prot->memory_pressure;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 04a31643cda3..3c5e3718b454 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -261,8 +261,7 @@ extern unsigned long tcp_memory_pressure;
 /* optimized version of sk_under_memory_pressure() for TCP sockets */
 static inline bool tcp_under_memory_pressure(const struct sock *sk)
 {
-	if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
-	    mem_cgroup_under_socket_pressure(sk->sk_memcg))
+	if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
 		return true;
 
 	return READ_ONCE(tcp_memory_pressure);
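
For reference, this is roughly how the helper reads with the check folded
in. It is an illustrative sketch based on the hunk above, not part of the
patch; the hierarchy walk in the loop body lies outside the hunk's context
and is reproduced here from mainline of that era, so treat those lines as
an assumption:

	static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
	{
		/* Folded-in dependency: bail out when socket accounting is
		 * disabled or the socket carries no memcg, so callers no
		 * longer need to open-code this check.
		 */
		if (!mem_cgroup_sockets_enabled || !memcg)
			return false;
		/* cgroup v1: the legacy tcp_mem accounting flag. */
		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_pressure)
			return true;
		/* cgroup v2: walk up the hierarchy looking for a memcg whose
		 * socket_pressure window is still active.
		 */
		do {
			if (time_before(jiffies, READ_ONCE(memcg->socket_pressure)))
				return true;
		} while ((memcg = parent_mem_cgroup(memcg)));
		return false;
	}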
From patchwork Tue May 30 11:40:09 2023
X-Patchwork-Submitter: Abel Wu
X-Patchwork-Id: 13259800
X-Patchwork-Delegate: kuba@kernel.org
From: Abel Wu
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    Johannes Weiner, Michal Hocko, Vladimir Davydov, Shakeel Butt,
    Muchun Song
Cc: Simon Horman, netdev@vger.kernel.org, linux-mm@kvack.org,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Abel Wu
Subject: [PATCH v4 2/4] sock: Always take memcg pressure into consideration
Date: Tue, 30 May 2023 19:40:09 +0800
Message-Id: <20230530114011.13368-3-wuyun.abel@bytedance.com>
In-Reply-To: <20230530114011.13368-1-wuyun.abel@bytedance.com>
References: <20230530114011.13368-1-wuyun.abel@bytedance.com>

sk_under_memory_pressure() is called to check whether there is memory
pressure related to this socket. But currently it ignores the net-memcg's
pressure if the socket's protocol does not track global memory pressure,
which can put extra burden on the memcg's compaction or reclaim path
(remember that socket memory is unreclaimable). So always check the
memcg's vm status to help alleviate memstalls when the memcg is under
pressure.
Signed-off-by: Abel Wu
---
 include/net/sock.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 641c9373b44b..b0e5533e5909 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1411,13 +1411,11 @@ static inline bool sk_has_memory_pressure(const struct sock *sk)
 
 static inline bool sk_under_memory_pressure(const struct sock *sk)
 {
-	if (!sk->sk_prot->memory_pressure)
-		return false;
-
 	if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
 		return true;
 
-	return !!*sk->sk_prot->memory_pressure;
+	return sk->sk_prot->memory_pressure &&
+	       *sk->sk_prot->memory_pressure;
 }
 
 static inline long
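
In condensed form, the helper after this patch reads as below. This is a
sketch derived from the hunk above, with comments added only for
illustration:

	static inline bool sk_under_memory_pressure(const struct sock *sk)
	{
		/* The net-memcg check now runs unconditionally, so protocols
		 * that do not track global memory pressure at all
		 * (sk->sk_prot->memory_pressure == NULL) no longer bypass
		 * the memcg's pressure signal.
		 */
		if (mem_cgroup_under_socket_pressure(sk->sk_memcg))
			return true;

		/* Global pressure still only applies to protocols that
		 * actually track it.
		 */
		return sk->sk_prot->memory_pressure &&
		       *sk->sk_prot->memory_pressure;
	}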
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Johannes Weiner , Michal Hocko , Vladimir Davydov , Shakeel Butt , Muchun Song Cc: Simon Horman , netdev@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Abel Wu Subject: [PATCH v4 3/4] sock: Fix misuse of sk_under_memory_pressure() Date: Tue, 30 May 2023 19:40:10 +0800 Message-Id: <20230530114011.13368-4-wuyun.abel@bytedance.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: <20230530114011.13368-1-wuyun.abel@bytedance.com> References: <20230530114011.13368-1-wuyun.abel@bytedance.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org The status of global socket memory pressure is updated when: a) __sk_mem_raise_allocated(): enter: sk_memory_allocated(sk) > sysctl_mem[1] leave: sk_memory_allocated(sk) <= sysctl_mem[0] b) __sk_mem_reduce_allocated(): leave: sk_under_memory_pressure(sk) && sk_memory_allocated(sk) < sysctl_mem[0] So the conditions of leaving global pressure are inconstant, which may lead to the situation that one pressured net-memcg prevents the global pressure from being cleared when there is indeed no global pressure, thus the global constrains are still in effect unexpectedly on the other sockets. This patch fixes this by ignoring the net-memcg's pressure when deciding whether should leave global memory pressure. Fixes: e1aab161e013 ("socket: initial cgroup code.") Signed-off-by: Abel Wu --- include/net/sock.h | 9 +++++++-- net/core/sock.c | 2 +- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/include/net/sock.h b/include/net/sock.h index b0e5533e5909..257706710be5 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1409,13 +1409,18 @@ static inline bool sk_has_memory_pressure(const struct sock *sk) return sk->sk_prot->memory_pressure != NULL; } +static inline bool sk_under_global_memory_pressure(const struct sock *sk) +{ + return sk->sk_prot->memory_pressure && + *sk->sk_prot->memory_pressure; +} + static inline bool sk_under_memory_pressure(const struct sock *sk) { if (mem_cgroup_under_socket_pressure(sk->sk_memcg)) return true; - return sk->sk_prot->memory_pressure && - *sk->sk_prot->memory_pressure; + return sk_under_global_memory_pressure(sk); } static inline long diff --git a/net/core/sock.c b/net/core/sock.c index 5440e67bcfe3..801df091e37a 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -3095,7 +3095,7 @@ void __sk_mem_reduce_allocated(struct sock *sk, int amount) if (mem_cgroup_sockets_enabled && sk->sk_memcg) mem_cgroup_uncharge_skmem(sk->sk_memcg, amount); - if (sk_under_memory_pressure(sk) && + if (sk_under_global_memory_pressure(sk) && (sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0))) sk_leave_memory_pressure(sk); } From patchwork Tue May 30 11:40:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Abel Wu X-Patchwork-Id: 13259802 X-Patchwork-Delegate: kuba@kernel.org Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate 
From patchwork Tue May 30 11:40:11 2023
X-Patchwork-Submitter: Abel Wu
X-Patchwork-Id: 13259802
X-Patchwork-Delegate: kuba@kernel.org
From: Abel Wu
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Johannes Weiner , Michal Hocko , Vladimir Davydov , Shakeel Butt , Muchun Song Cc: Simon Horman , netdev@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, Abel Wu Subject: [PATCH v4 4/4] sock: Remove redundant cond of memcg pressure Date: Tue, 30 May 2023 19:40:11 +0800 Message-Id: <20230530114011.13368-5-wuyun.abel@bytedance.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: <20230530114011.13368-1-wuyun.abel@bytedance.com> References: <20230530114011.13368-1-wuyun.abel@bytedance.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org Now with the previous patch, __sk_mem_raise_allocated() considers the memory pressure of both global and the socket's memcg on a func- wide level, making the condition of memcg's pressure in question redundant. Signed-off-by: Abel Wu --- net/core/sock.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/net/core/sock.c b/net/core/sock.c index 801df091e37a..86735ad2f903 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -3020,9 +3020,15 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind) if (sk_has_memory_pressure(sk)) { u64 alloc; - if (!sk_under_memory_pressure(sk)) + if (!sk_under_global_memory_pressure(sk)) return 1; + alloc = sk_sockets_allocated_read_positive(sk); + + /* If under global pressure, allow the sockets that are below + * average memory usage to raise, trying to be fair among all + * the sockets under global constrains. + */ if (sk_prot_mem_limits(sk, 2) > alloc * sk_mem_pages(sk->sk_wmem_queued + atomic_read(&sk->sk_rmem_alloc) +