
[RFC] memcg: net: improve charging of incoming network traffic

Message ID 20250307055936.3988572-1-shakeel.butt@linux.dev (mailing list archive)
State New
Series [RFC] memcg: net: improve charging of incoming network traffic

Commit Message

Shakeel Butt March 7, 2025, 5:59 a.m. UTC
Memory cgroup accounting is expensive, and to reduce the cost, the kernel
maintains a per-cpu charge cache for a single memcg. If a charge request
comes for a different memcg, the kernel flushes the old memcg's charge
cache, charges the new memcg a fixed amount (64 pages), subtracts the
requested amount, and stores the remainder in the per-cpu charge cache
for the new memcg.
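
Below is a minimal userspace sketch of that single-memcg stock (names
like stock_model, try_consume and recharge are illustrative; the real
code is consume_stock()/refill_stock() in mm/memcontrol.c, with the
per-cpu handling and locking omitted):

#include <stdbool.h>
#include <stdio.h>

#define CHARGE_BATCH 64         /* pages pre-charged to the page counter at once */

struct memcg { int id; };       /* stand-in for struct mem_cgroup */

/* Model of one CPU's charge stock: it holds pre-charged pages for a single memcg. */
struct stock_model {
        struct memcg *cached;   /* memcg the stock currently belongs to */
        unsigned int nr_pages;  /* pre-charged pages still available */
};

/* Stand-ins for the expensive page_counter operations on the memcg hierarchy. */
static void counter_charge(struct memcg *memcg, unsigned int nr_pages)
{
        printf("charge %u pages to memcg %d\n", nr_pages, memcg->id);
}

static void counter_uncharge(struct memcg *memcg, unsigned int nr_pages)
{
        printf("uncharge %u pages from memcg %d\n", nr_pages, memcg->id);
}

/* Fast path: succeeds only if the stock already belongs to @memcg and has room. */
static bool try_consume(struct stock_model *stock, struct memcg *memcg,
                        unsigned int nr_pages)
{
        if (stock->cached == memcg && stock->nr_pages >= nr_pages) {
                stock->nr_pages -= nr_pages;
                return true;
        }
        return false;
}

/* Slow path on a memcg switch: flush the old memcg's stock, pre-charge the new one. */
static void recharge(struct stock_model *stock, struct memcg *memcg,
                     unsigned int nr_pages)
{
        if (stock->cached != memcg) {
                if (stock->cached && stock->nr_pages)
                        counter_uncharge(stock->cached, stock->nr_pages);
                counter_charge(memcg, CHARGE_BATCH);
                stock->cached = memcg;
                stock->nr_pages = CHARGE_BATCH;
        }
        stock->nr_pages -= nr_pages;
}

int main(void)
{
        struct memcg a = { 1 }, b = { 2 };
        struct stock_model stock = { 0 };

        recharge(&stock, &a, 1);                /* memcg switch: pre-charge for memcg A */
        printf("A hits cache: %d\n", try_consume(&stock, &a, 1));       /* 1: served locally */
        printf("B hits cache: %d\n", try_consume(&stock, &b, 1));       /* 0: would flush A's stock */
        return 0;
}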

This mechanism is based on the assumption that the kernel, for locality,
keeps a process on a CPU for a long period of time, so most of the charge
requests from that process will be served by that CPU's local charge
cache.

However, this assumption breaks down for incoming network traffic on a
multi-tenant machine. We are in the process of running multiple workloads
on a single machine, and when such workloads are network heavy, we see a
very high network memory accounting cost. We have observed multiple CPUs
spending almost 100% of their time in net_rx_action, with almost all of
that time spent in memcg accounting of the network traffic.

More precisely, net_rx_action serves packets from multiple workloads and
thus sees an interleaved mix of their packets. Switching the per-cpu
cache between memcgs is very expensive, and we are observing a lot of
such switches on the machine. Almost all the time is spent charging the
new memcg and flushing the old memcg's cache. So we definitely need a
per-cpu cache that supports multiple memcgs for this scenario.

This prototype implements a network-specific, scope-based memcg charge
cache that can hold charge for multiple memcgs. However, this is not the
final design, and I want to start the conversation on this topic with
some open questions below:

1. Should we keep the existing per-cpu single-memcg cache?
2. Should we have a network-specific solution similar to this prototype
   or a more general solution?
3. If we decide to have a multi-memcg charge cache, what should its size
   be? Should it be dynamic or static?
4. Do we really care about performance (throughput) in PREEMPT_RT?

Let me give my opinion on these questions:

A1. We definitely need to evolve the per-cpu charge cache. I am not
    happy with the irq disabling for the memcg charging and stats code.
    I am planning to move towards two sets of stocks, one for in_task()
    and the other for !in_task() (similar to active_memcg handling), and
    with that remove the irq disabling from the charge path. In a
    followup I want to expand this to the obj stocks as well and remove
    the irq disabling from there too (see the sketch after these
    answers).

A2. I think we need a general solution, as I suspect kvfree_rcu_bulk()
    might be in a similar situation. However, I think we can definitely
    use network-specific knowledge to further improve network memory
    charging. For example, we know the kernel uses GFP_ATOMIC for
    charging incoming traffic, which always succeeds. We can exploit
    this knowledge to further improve network charging throughput.

A3. Here I think we need to start simple and make it more sophisticated
    as we see more production/field data from multiple places. This can
    become complicated very easily, for example the eviction policy for
    the memcg charge cache.

A4. I don't think PREEMPT_RT is about throughput; it cares about
    latency, while these memcg charge caches are about throughput. In
    addition, PREEMPT_RT has made the memcg code a lot messier (IMO).
    The PREEMPT_RT kernel should just skip all per-cpu memcg caches,
    including the objcg ones, and that would make the code much simpler.
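
The following is a rough userspace sketch of the two-stock idea from A1
(illustrative only: current_in_task() stands in for the kernel's
in_task(), and the refill path, preemption handling and objcg stocks are
omitted). Task and softirq context each get their own stock, so the fast
path never races with interrupt context and does not need to disable
irqs:

#include <stdbool.h>
#include <stdio.h>

struct memcg { int id; };       /* stand-in for struct mem_cgroup */

struct stock {
        struct memcg *cached;
        unsigned int nr_pages;
};

/* One pair of stocks per CPU: slot 0 for task context, slot 1 for softirq/irq. */
struct cpu_stocks {
        struct stock stock[2];
};

/* Hypothetical stand-in for the kernel's in_task(). */
static bool current_in_task(void)
{
        return true;
}

/*
 * Pick the stock for the current context. Task and non-task contexts never
 * share a stock, so the task-context fast path does not need to disable
 * irqs to protect its stock from interrupt-context users.
 */
static struct stock *current_stock(struct cpu_stocks *cpu)
{
        return &cpu->stock[current_in_task() ? 0 : 1];
}

/* Fast path against the context-local stock only. */
static bool consume_stock(struct cpu_stocks *cpu, struct memcg *memcg,
                          unsigned int nr_pages)
{
        struct stock *stock = current_stock(cpu);

        if (stock->cached == memcg && stock->nr_pages >= nr_pages) {
                stock->nr_pages -= nr_pages;
                return true;
        }
        return false;
}

int main(void)
{
        static struct cpu_stocks cpu;   /* zero-initialized: both stocks empty */
        struct memcg m = { 1 };

        printf("hit: %d\n", consume_stock(&cpu, &m, 1));        /* 0: nothing cached yet */
        return 0;
}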

That is my take, and I would really like opinions and suggestions from
others. BTW, I want to resolve this issue ASAP as it is becoming a
blocker for multi-tenancy for us.

Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 include/linux/memcontrol.h | 37 +++++++++++++++++
 mm/memcontrol.c            | 83 ++++++++++++++++++++++++++++++++++++++
 net/core/dev.c             |  4 ++
 3 files changed, 124 insertions(+)

Comments

Yosry Ahmed March 7, 2025, 7:41 p.m. UTC | #1
On Thu, Mar 06, 2025 at 09:59:36PM -0800, Shakeel Butt wrote:
> Memory cgroup accounting is expensive, and to reduce the cost, the kernel
> maintains a per-cpu charge cache for a single memcg. If a charge request
> comes for a different memcg, the kernel flushes the old memcg's charge
> cache, charges the new memcg a fixed amount (64 pages), subtracts the
> requested amount, and stores the remainder in the per-cpu charge cache
> for the new memcg.
> 
> This mechanism is based on the assumption that the kernel, for locality,
> keeps a process on a CPU for a long period of time, so most of the charge
> requests from that process will be served by that CPU's local charge
> cache.
> 
> However, this assumption breaks down for incoming network traffic on a
> multi-tenant machine. We are in the process of running multiple workloads
> on a single machine, and when such workloads are network heavy, we see a
> very high network memory accounting cost. We have observed multiple CPUs
> spending almost 100% of their time in net_rx_action, with almost all of
> that time spent in memcg accounting of the network traffic.
> 
> More precisely, net_rx_action serves packets from multiple workloads and
> thus sees an interleaved mix of their packets. Switching the per-cpu
> cache between memcgs is very expensive, and we are observing a lot of
> such switches on the machine. Almost all the time is spent charging the
> new memcg and flushing the old memcg's cache. So we definitely need a
> per-cpu cache that supports multiple memcgs for this scenario.

We've internally faced a different situation on machines with a large
number of CPUs where the mod_memcg_state(MEMCG_SOCK) call in
mem_cgroup_[un]charge_skmem() causes latency due to high contention on
the atomic update in memcg_rstat_updated().

In this case, networking performs a lot of charge/uncharge operations,
but because we count the absolute magnitude of the updates in
memcg_rstat_updated(), we reach the threshold quickly. In practice, a
lot of these updates cancel each other out, so the net change in the
stats may not be that large.

However, not using the absolute value of the updates could let updates
to unrelated stats with opposite polarity cancel each other out,
potentially delaying stat flushes.
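
A tiny standalone model of that effect (illustrative names and threshold
value, not the actual memcg_rstat_updated() code): because the pending
counter accumulates the absolute magnitude of every update, 32 one-page
charge/uncharge pairs trigger a flush even though the net MEMCG_SOCK
change is zero.

#include <stdio.h>
#include <stdlib.h>

#define FLUSH_THRESHOLD 64      /* illustrative, not the kernel's actual value */

static long pending_updates;    /* models the counter memcg_rstat_updated() bumps */

/* Record a stat update; flush once enough "error" has accumulated. */
static void rstat_updated(long delta)
{
        pending_updates += labs(delta); /* absolute magnitude, as described above */
        if (pending_updates >= FLUSH_THRESHOLD) {
                printf("flush triggered\n");
                pending_updates = 0;
        }
}

int main(void)
{
        /*
         * 32 charge/uncharge pairs: the net stat change is 0, but the
         * accumulated absolute magnitude reaches 64 and a flush is
         * triggered anyway.
         */
        for (int i = 0; i < 32; i++) {
                rstat_updated(+1);      /* mem_cgroup_charge_skmem() */
                rstat_updated(-1);      /* mem_cgroup_uncharge_skmem() */
        }
        return 0;
}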

I wonder if we can leverage the batching introduced here to fix this
problem as well. For example, if the charging in
mem_cgroup_[un]charge_skmem() is satisfied from this cache, can we avoid
mod_memcg_state() and only update the stats once at the end of batching?

IIUC the current implementation only covers the RX path, so it will
reduce the number of calls to mod_memcg_state(), but it won't prevent
charge/uncharge operations from raising the update counter
unnecessarily. I wonder if the scope of the batching could be increased
so that both TX and RX use the same cache, and charge/uncharge
operations cancel out completely in terms of stat updates.

WDYT?
Shakeel Butt March 7, 2025, 8:12 p.m. UTC | #2
On Fri, Mar 07, 2025 at 07:41:59PM +0000, Yosry Ahmed wrote:
> On Thu, Mar 06, 2025 at 09:59:36PM -0800, Shakeel Butt wrote:
> > Memory cgroup accounting is expensive, and to reduce the cost, the kernel
> > maintains a per-cpu charge cache for a single memcg. If a charge request
> > comes for a different memcg, the kernel flushes the old memcg's charge
> > cache, charges the new memcg a fixed amount (64 pages), subtracts the
> > requested amount, and stores the remainder in the per-cpu charge cache
> > for the new memcg.
> > 
> > This mechanism is based on the assumption that the kernel, for locality,
> > keeps a process on a CPU for a long period of time, so most of the charge
> > requests from that process will be served by that CPU's local charge
> > cache.
> > 
> > However, this assumption breaks down for incoming network traffic on a
> > multi-tenant machine. We are in the process of running multiple workloads
> > on a single machine, and when such workloads are network heavy, we see a
> > very high network memory accounting cost. We have observed multiple CPUs
> > spending almost 100% of their time in net_rx_action, with almost all of
> > that time spent in memcg accounting of the network traffic.
> > 
> > More precisely, net_rx_action serves packets from multiple workloads and
> > thus sees an interleaved mix of their packets. Switching the per-cpu
> > cache between memcgs is very expensive, and we are observing a lot of
> > such switches on the machine. Almost all the time is spent charging the
> > new memcg and flushing the old memcg's cache. So we definitely need a
> > per-cpu cache that supports multiple memcgs for this scenario.
> 
> We've internally faced a different situation on machines with a large
> number of CPUs where the mod_memcg_state(MEMCG_SOCK) call in
> mem_cgroup_[un]charge_skmem() causes latency due to high contention on
> the atomic update in memcg_rstat_updated().

Interesting. At Meta, we are not seeing the latency issue due to
memcg_rstat_updated(), but it is one of the most expensive functions in
our fleet and optimizing it is in our plans.

> 
> In this case, networking performs a lot of charge/uncharge operations,
> but because we count the absolute magnitude of the updates in
> memcg_rstat_updated(), we reach the threshold quickly. In practice, a
> lot of these updates cancel each other out, so the net change in the
> stats may not be that large.
> 
> However, not using the absolute value of the updates could let updates
> to unrelated stats with opposite polarity cancel each other out,
> potentially delaying stat flushes.
> 
> I wonder if we can leverage the batching introduced here to fix this
> problem as well. For example, if the charging in
> mem_cgroup_[un]charge_skmem() is satisfied from this cache, can we avoid
> mod_memcg_state() and only update the stats once at the end of batching?
> 
> IIUC the current implementation only covers the RX path, so it will
> reduce the number of calls to mod_memcg_state(), but it won't prevent
> charge/uncharge operations from raising the update counter
> unnecessarily. I wonder if the scope of the batching could be increased
> so that both TX and RX use the same cache, and charge/uncharge
> operations cancel out completely in terms of stat updates.
> 
> WDYT?

JP (CCed) is currently working on collecting data from our fleet to find
the hottest memcg stats, i.e. the ones with the most updates. I think the
early data show MEMCG_SOCK and MEMCG_KMEM are among the hot ones. JP has
a couple of ideas to improve the situation here, which he will experiment
with and share in due time.

Regarding batching for TX and RX, my intention is to keep the charge
batching general purpose, but batching MEMCG_SOCK for networking with a
scoping API can be done and seems like a good idea. I will do that in a
followup.
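
Purely as an illustration of that scoping idea (none of these names
exist in the kernel; this is a userspace sketch handling a single memcg
per scope): the scope accumulates the net MEMCG_SOCK delta and reports
it once at the end, so cancelling charge/uncharge pairs never reach
memcg_rstat_updated().

#include <stdio.h>

struct memcg { int id; };       /* stand-in for struct mem_cgroup */

/* Hypothetical per-scope accumulator for one memcg's net MEMCG_SOCK delta. */
struct sock_stat_scope {
        struct memcg *memcg;
        long nr_pages;          /* net charged - uncharged pages within the scope */
};

/* Stand-in for the expensive mod_memcg_state(MEMCG_SOCK) update. */
static void mod_memcg_sock_state(struct memcg *memcg, long nr_pages)
{
        printf("memcg %d: MEMCG_SOCK += %ld\n", memcg->id, nr_pages);
}

static void sock_stat_scope_begin(struct sock_stat_scope *scope, struct memcg *memcg)
{
        scope->memcg = memcg;
        scope->nr_pages = 0;
}

/* Called instead of mod_memcg_state() while the scope is active. */
static void sock_stat_scope_add(struct sock_stat_scope *scope, long nr_pages)
{
        scope->nr_pages += nr_pages;
}

/* Flush only the net delta, so cancelling charge/uncharge pairs never hit rstat. */
static void sock_stat_scope_end(struct sock_stat_scope *scope)
{
        if (scope->nr_pages)
                mod_memcg_sock_state(scope->memcg, scope->nr_pages);
}

int main(void)
{
        struct memcg m = { 1 };
        struct sock_stat_scope scope;

        sock_stat_scope_begin(&scope, &m);
        sock_stat_scope_add(&scope, +4);        /* charge 4 pages */
        sock_stat_scope_add(&scope, -4);        /* uncharge 4 pages */
        sock_stat_scope_end(&scope);            /* nothing to report: net delta is 0 */
        return 0;
}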

Thanks for taking a look.

Patch

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 57664e2a8fb7..3aa22b0261be 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1617,6 +1617,30 @@  extern struct static_key_false memcg_sockets_enabled_key;
 #define mem_cgroup_sockets_enabled static_branch_unlikely(&memcg_sockets_enabled_key)
 void mem_cgroup_sk_alloc(struct sock *sk);
 void mem_cgroup_sk_free(struct sock *sk);
+
+struct memcg_skmem_batch {
+	int size;
+	struct mem_cgroup *memcg[MEMCG_CHARGE_BATCH];
+	unsigned int nr_pages[MEMCG_CHARGE_BATCH];
+};
+
+void __mem_cgroup_batch_charge_skmem_begin(struct memcg_skmem_batch *batch);
+void __mem_cgroup_batch_charge_skmem_end(struct memcg_skmem_batch *batch);
+
+static inline void mem_cgroup_batch_charge_skmem_begin(struct memcg_skmem_batch *batch)
+{
+	if (cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
+	   mem_cgroup_sockets_enabled)
+		__mem_cgroup_batch_charge_skmem_begin(batch);
+}
+
+static inline void mem_cgroup_batch_charge_skmem_end(struct memcg_skmem_batch *batch)
+{
+	if (cgroup_subsys_on_dfl(memory_cgrp_subsys) &&
+	   mem_cgroup_sockets_enabled)
+		__mem_cgroup_batch_charge_skmem_end(batch);
+}
+
 static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 {
 #ifdef CONFIG_MEMCG_V1
@@ -1638,6 +1662,19 @@  void reparent_shrinker_deferred(struct mem_cgroup *memcg);
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
 static inline void mem_cgroup_sk_free(struct sock *sk) { };
+
+struct memcg_skmem_batch {};
+
+static inline void mem_cgroup_batch_charge_skmem_begin(
+					struct memcg_skmem_batch *batch)
+{
+}
+
+static inline void mem_cgroup_batch_charge_skmem_end(
+					struct memcg_skmem_batch *batch)
+{
+}
+
 static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 {
 	return false;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 709b16057048..3afca4d055b3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -88,6 +88,7 @@  EXPORT_PER_CPU_SYMBOL_GPL(int_active_memcg);
 
 /* Socket memory accounting disabled? */
 static bool cgroup_memory_nosocket __ro_after_init;
+DEFINE_PER_CPU(struct memcg_skmem_batch *, int_skmem_batch);
 
 /* Kernel memory accounting disabled? */
 static bool cgroup_memory_nokmem __ro_after_init;
@@ -1775,6 +1776,57 @@  static struct obj_cgroup *drain_obj_stock(struct memcg_stock_pcp *stock);
 static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
 				     struct mem_cgroup *root_memcg);
 
+static inline bool consume_batch_stock(struct mem_cgroup *memcg,
+				       unsigned int nr_pages)
+{
+	int i;
+	struct memcg_skmem_batch *batch;
+
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) || in_task() ||
+	    !this_cpu_read(int_skmem_batch))
+		return false;
+
+	batch = this_cpu_read(int_skmem_batch);
+	for (i = 0; i < batch->size; ++i) {
+		if (batch->memcg[i] == memcg) {
+			if (nr_pages <= batch->nr_pages[i]) {
+				batch->nr_pages[i] -= nr_pages;
+				return true;
+			}
+			return false;
+		}
+	}
+	return false;
+}
+
+static inline bool refill_stock_batch(struct mem_cgroup *memcg,
+				      unsigned int nr_pages)
+{
+	int i;
+	struct memcg_skmem_batch *batch;
+
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) || in_task() ||
+	    !this_cpu_read(int_skmem_batch))
+		return false;
+
+	batch = this_cpu_read(int_skmem_batch);
+	for (i = 0; i < batch->size; ++i) {
+		if (memcg == batch->memcg[i]) {
+			batch->nr_pages[i] += nr_pages;
+			return true;
+		}
+	}
+
+	if (i == MEMCG_CHARGE_BATCH)
+		return false;
+
+	/* i == batch->size */
+	batch->memcg[i] = memcg;
+	batch->nr_pages[i] = nr_pages;
+	batch->size++;
+	return true;
+}
+
 /**
  * consume_stock: Try to consume stocked charge on this cpu.
  * @memcg: memcg to consume from.
@@ -1795,6 +1847,9 @@  static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
 	unsigned long flags;
 	bool ret = false;
 
+	if (consume_batch_stock(memcg, nr_pages))
+		return true;
+
 	if (nr_pages > MEMCG_CHARGE_BATCH)
 		return ret;
 
@@ -1887,6 +1942,9 @@  static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
 {
 	unsigned long flags;
 
+	if (refill_stock_batch(memcg, nr_pages))
+		return;
+
 	if (!localtry_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
 		/*
 		 * In case of unlikely failure to lock percpu stock_lock
@@ -4894,6 +4952,31 @@  void mem_cgroup_sk_free(struct sock *sk)
 		css_put(&sk->sk_memcg->css);
 }
 
+void __mem_cgroup_batch_charge_skmem_begin(struct memcg_skmem_batch *batch)
+{
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) || in_task() ||
+	    this_cpu_read(int_skmem_batch))
+		return;
+
+	this_cpu_write(int_skmem_batch, batch);
+}
+
+void __mem_cgroup_batch_charge_skmem_end(struct memcg_skmem_batch *batch)
+{
+	int i;
+
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) || in_task() ||
+	    batch != this_cpu_read(int_skmem_batch))
+		return;
+
+	this_cpu_write(int_skmem_batch, NULL);
+	for (i = 0; i < batch->size; ++i) {
+		if (batch->nr_pages[i])
+			page_counter_uncharge(&batch->memcg[i]->memory,
+					      batch->nr_pages[i]);
+	}
+}
+
 /**
  * mem_cgroup_charge_skmem - charge socket memory
  * @memcg: memcg to charge
diff --git a/net/core/dev.c b/net/core/dev.c
index 0eba6e4f8ccb..846305d019c6 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -7484,9 +7484,12 @@  static __latent_entropy void net_rx_action(void)
 		usecs_to_jiffies(READ_ONCE(net_hotdata.netdev_budget_usecs));
 	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 	int budget = READ_ONCE(net_hotdata.netdev_budget);
+	struct memcg_skmem_batch batch = {};
 	LIST_HEAD(list);
 	LIST_HEAD(repoll);
 
+	mem_cgroup_batch_charge_skmem_begin(&batch);
+
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
 start:
 	sd->in_net_rx_action = true;
@@ -7542,6 +7545,7 @@  static __latent_entropy void net_rx_action(void)
 	net_rps_action_and_irq_enable(sd);
 end:
 	bpf_net_ctx_clear(bpf_net_ctx);
+	mem_cgroup_batch_charge_skmem_end(&batch);
 }
 
 struct netdev_adjacent {