From patchwork Fri Mar 7 11:57:18 2025
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 14006382
From: Sebastian Andrzej Siewior
To: linux-rdma@vger.kernel.org, netdev@vger.kernel.org
Miller" , Andrew Lunn , Eric Dumazet , Ilias Apalodimas , Jakub Kicinski , Jesper Dangaard Brouer , Joe Damato , Leon Romanovsky , Paolo Abeni , Saeed Mahameed , Simon Horman , Tariq Toukan , Thomas Gleixner , Yunsheng Lin , Sebastian Andrzej Siewior Subject: [PATCH net-next v2 1/5] page_pool: Provide an empty page_pool_stats for disabled stats. Date: Fri, 7 Mar 2025 12:57:18 +0100 Message-ID: <20250307115722.705311-2-bigeasy@linutronix.de> In-Reply-To: <20250307115722.705311-1-bigeasy@linutronix.de> References: <20250307115722.705311-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 An empty struct page_pool_stats allows to always add it to structs and pass it to functions like page_pool_ethtool_stats_get() without the need for an ifdef. Provide an empty struct page_pool_stats and page_pool_get_stats() for !CONFIG_PAGE_POOL_STATS builds. Signed-off-by: Sebastian Andrzej Siewior --- include/net/page_pool/helpers.h | 6 ++++++ include/net/page_pool/types.h | 4 ++++ 2 files changed, 10 insertions(+) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 582a3d00cbe23..4622db90f88f2 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -81,6 +81,12 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats) { return data; } + +static inline bool page_pool_get_stats(const struct page_pool *pool, + struct page_pool_stats *stats) +{ + return false; +} #endif /** diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 7f405672b089d..6d55e6cf5d0db 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -140,6 +140,10 @@ struct page_pool_stats { struct page_pool_alloc_stats alloc_stats; struct page_pool_recycle_stats recycle_stats; }; + +#else /* !CONFIG_PAGE_POOL_STATS */ + +struct page_pool_stats { }; #endif /* The whole frag API block must stay within one cacheline. 
From patchwork Fri Mar 7 11:57:19 2025
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 14006384
From: Sebastian Andrzej Siewior
To: linux-rdma@vger.kernel.org, netdev@vger.kernel.org
Miller" , Andrew Lunn , Eric Dumazet , Ilias Apalodimas , Jakub Kicinski , Jesper Dangaard Brouer , Joe Damato , Leon Romanovsky , Paolo Abeni , Saeed Mahameed , Simon Horman , Tariq Toukan , Thomas Gleixner , Yunsheng Lin , Sebastian Andrzej Siewior Subject: [PATCH net-next v2 2/5] page_pool: Add per-queue statistics. Date: Fri, 7 Mar 2025 12:57:19 +0100 Message-ID: <20250307115722.705311-3-bigeasy@linutronix.de> In-Reply-To: <20250307115722.705311-1-bigeasy@linutronix.de> References: <20250307115722.705311-1-bigeasy@linutronix.de> Precedence: bulk X-Mailing-List: linux-rdma@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 The mlx5 driver supports per-channel statistics. To make support generic it is required to have a template to fill the individual channel/ queue. Provide page_pool_ethtool_stats_get_strings_mq() to fill the strings for multiple queue. Signed-off-by: Sebastian Andrzej Siewior --- include/net/page_pool/helpers.h | 5 +++++ net/core/page_pool.c | 23 +++++++++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 4622db90f88f2..a815b0ff97448 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -62,6 +62,7 @@ /* Deprecated driver-facing API, use netlink instead */ int page_pool_ethtool_stats_get_count(void); u8 *page_pool_ethtool_stats_get_strings(u8 *data); +void page_pool_ethtool_stats_get_strings_mq(u8 **data, unsigned int queue); u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats); bool page_pool_get_stats(const struct page_pool *pool, @@ -77,6 +78,10 @@ static inline u8 *page_pool_ethtool_stats_get_strings(u8 *data) return data; } +static inline void page_pool_ethtool_stats_get_strings_mq(u8 **data, unsigned int queue) +{ +} + static inline u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats) { return data; diff --git a/net/core/page_pool.c b/net/core/page_pool.c index f5e908c9e7ad8..2290d80443d1e 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -68,6 +68,20 @@ static const char pp_stats[][ETH_GSTRING_LEN] = { "rx_pp_recycle_released_ref", }; +static const char pp_stats_mq[][ETH_GSTRING_LEN] = { + "rx%d_pp_alloc_fast", + "rx%d_pp_alloc_slow", + "rx%d_pp_alloc_slow_ho", + "rx%d_pp_alloc_empty", + "rx%d_pp_alloc_refill", + "rx%d_pp_alloc_waive", + "rx%d_pp_recycle_cached", + "rx%d_pp_recycle_cache_full", + "rx%d_pp_recycle_ring", + "rx%d_pp_recycle_ring_full", + "rx%d_pp_recycle_released_ref", +}; + /** * page_pool_get_stats() - fetch page pool stats * @pool: pool from which page was allocated @@ -123,6 +137,15 @@ u8 *page_pool_ethtool_stats_get_strings(u8 *data) } EXPORT_SYMBOL(page_pool_ethtool_stats_get_strings); +void page_pool_ethtool_stats_get_strings_mq(u8 **data, unsigned int queue) +{ + int i; + + for (i = 0; i < ARRAY_SIZE(pp_stats_mq); i++) + ethtool_sprintf(data, pp_stats_mq[i], queue); +} +EXPORT_SYMBOL(page_pool_ethtool_stats_get_strings_mq); + int page_pool_ethtool_stats_get_count(void) { return ARRAY_SIZE(pp_stats); From patchwork Fri Mar 7 11:57:20 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sebastian Andrzej Siewior X-Patchwork-Id: 14006385 Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DFB032185A3; Fri, 7 Mar 2025 11:57:30 
From patchwork Fri Mar 7 11:57:20 2025
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 14006385
From: Sebastian Andrzej Siewior
To: linux-rdma@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Andrew Lunn, Eric Dumazet, Ilias Apalodimas,
    Jakub Kicinski, Jesper Dangaard Brouer, Joe Damato, Leon Romanovsky,
    Paolo Abeni, Saeed Mahameed, Simon Horman, Tariq Toukan,
    Thomas Gleixner, Yunsheng Lin, Sebastian Andrzej Siewior
Subject: [PATCH net-next v2 3/5] mlx5: Use generic code for page_pool statistics.
Date: Fri, 7 Mar 2025 12:57:20 +0100
Message-ID: <20250307115722.705311-4-bigeasy@linutronix.de>
In-Reply-To: <20250307115722.705311-1-bigeasy@linutronix.de>
References: <20250307115722.705311-1-bigeasy@linutronix.de>

The statistics gathering code for page_pool statistics has multiple
steps:
- gather statistics from a channel via page_pool_get_stats() into an
  on-stack structure.
- copy this data to the dedicated rq_stats.
- copy the data from rq_stats to the global mlx5e_sw_stats structure,
  merging the per-queue statistics into one counter.
- finally, copy the data in a specific order for the ethtool query
  (both per queue and summed over all queues).

The downside here is that the individual counter types are expected to
be u64, and if something changes, the code breaks. Also, if additional
counters are added to struct page_pool_stats, they are not automatically
picked up by the driver but need to be added manually in all four spots.

Remove the page_pool_stats related descriptions from sw_stats_desc and
rq_stats_desc. Replace the counters in mlx5e_sw_stats and mlx5e_rq_stats
with struct page_pool_stats, which is empty if CONFIG_PAGE_POOL_STATS is
disabled. Let mlx5e_stats_update_stats_rq_page_pool() fetch the
page_pool stats twice: once for the summed-up data and once for the
individual queue. Publish the strings via
page_pool_ethtool_stats_get_strings() and
page_pool_ethtool_stats_get_strings_mq(). Publish the counters via
page_pool_ethtool_stats_get().

Suggested-by: Joe Damato
Signed-off-by: Sebastian Andrzej Siewior
---
 .../ethernet/mellanox/mlx5/core/en_stats.c | 87 ++++---------------
 .../ethernet/mellanox/mlx5/core/en_stats.h | 30 +------
 2 files changed, 19 insertions(+), 98 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 611ec4b6f3709..f99c5574b79b9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -37,9 +37,7 @@
 #include "en/ptp.h"
 #include "en/port.h"
 
-#ifdef CONFIG_PAGE_POOL_STATS
 #include <net/page_pool/helpers.h>
-#endif
 
 void mlx5e_ethtool_put_stat(u64 **data, u64 val)
 {
@@ -196,19 +194,6 @@ static const struct counter_desc sw_stats_desc[] = {
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_arfs_err) },
 #endif
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_recover) },
-#ifdef CONFIG_PAGE_POOL_STATS
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_fast) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_slow) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_slow_high_order) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_empty) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_refill) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_alloc_waive) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_cached) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_cache_full) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_ring_full) },
-	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_pp_recycle_released_ref) },
-#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_packets) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_tls_decrypted_bytes) },
@@ -257,7 +242,7 @@ static const struct counter_desc sw_stats_desc[] = {
 
 static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(sw)
 {
-	return NUM_SW_COUNTERS;
+	return NUM_SW_COUNTERS + page_pool_ethtool_stats_get_count();
 }
 
 static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(sw)
@@ -266,6 +251,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(sw)
 
 	for (i = 0; i < NUM_SW_COUNTERS; i++)
 		ethtool_puts(data, sw_stats_desc[i].format);
+	*data = page_pool_ethtool_stats_get_strings(*data);
 }
 
 static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(sw)
@@ -276,6 +262,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(sw)
 		mlx5e_ethtool_put_stat(data,
 				       MLX5E_READ_CTR64_CPU(&priv->stats.sw,
 							    sw_stats_desc, i));
+	*data = page_pool_ethtool_stats_get(*data, &priv->stats.sw.page_pool_stats);
 }
 
 static void mlx5e_stats_grp_sw_update_stats_xdp_red(struct mlx5e_sw_stats *s,
@@ -377,19 +364,6 @@ static void mlx5e_stats_grp_sw_update_stats_rq_stats(struct mlx5e_sw_stats *s,
 	s->rx_arfs_err += rq_stats->arfs_err;
 #endif
 	s->rx_recover += rq_stats->recover;
-#ifdef CONFIG_PAGE_POOL_STATS
-	s->rx_pp_alloc_fast += rq_stats->pp_alloc_fast;
-	s->rx_pp_alloc_slow += rq_stats->pp_alloc_slow;
-	s->rx_pp_alloc_empty += rq_stats->pp_alloc_empty;
-	s->rx_pp_alloc_refill += rq_stats->pp_alloc_refill;
-	s->rx_pp_alloc_waive += rq_stats->pp_alloc_waive;
-	s->rx_pp_alloc_slow_high_order += rq_stats->pp_alloc_slow_high_order;
-	s->rx_pp_recycle_cached += rq_stats->pp_recycle_cached;
-	s->rx_pp_recycle_cache_full += rq_stats->pp_recycle_cache_full;
-	s->rx_pp_recycle_ring += rq_stats->pp_recycle_ring;
-	s->rx_pp_recycle_ring_full += rq_stats->pp_recycle_ring_full;
-	s->rx_pp_recycle_released_ref += rq_stats->pp_recycle_released_ref;
-#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	s->rx_tls_decrypted_packets += rq_stats->tls_decrypted_packets;
 	s->rx_tls_decrypted_bytes += rq_stats->tls_decrypted_bytes;
@@ -496,34 +470,14 @@ static void mlx5e_stats_grp_sw_update_stats_qos(struct mlx5e_priv *priv,
 	}
 }
 
-#ifdef CONFIG_PAGE_POOL_STATS
-static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
+static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_sw_stats *s,
+						  struct mlx5e_channel *c)
 {
 	struct mlx5e_rq_stats *rq_stats = c->rq.stats;
-	struct page_pool *pool = c->rq.page_pool;
-	struct page_pool_stats stats = { 0 };
 
-	if (!page_pool_get_stats(pool, &stats))
-		return;
-
-	rq_stats->pp_alloc_fast = stats.alloc_stats.fast;
-	rq_stats->pp_alloc_slow = stats.alloc_stats.slow;
-	rq_stats->pp_alloc_slow_high_order = stats.alloc_stats.slow_high_order;
-	rq_stats->pp_alloc_empty = stats.alloc_stats.empty;
-	rq_stats->pp_alloc_waive = stats.alloc_stats.waive;
-	rq_stats->pp_alloc_refill = stats.alloc_stats.refill;
-
-	rq_stats->pp_recycle_cached = stats.recycle_stats.cached;
-	rq_stats->pp_recycle_cache_full = stats.recycle_stats.cache_full;
-	rq_stats->pp_recycle_ring = stats.recycle_stats.ring;
-	rq_stats->pp_recycle_ring_full = stats.recycle_stats.ring_full;
-	rq_stats->pp_recycle_released_ref = stats.recycle_stats.released_refcnt;
+	page_pool_get_stats(c->rq.page_pool, &s->page_pool_stats);
+	page_pool_get_stats(c->rq.page_pool, &rq_stats->page_pool_stats);
 }
-#else
-static void mlx5e_stats_update_stats_rq_page_pool(struct mlx5e_channel *c)
-{
-}
-#endif
 
 static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
 {
@@ -532,15 +486,13 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(sw)
 
 	memset(s, 0, sizeof(*s));
 
-	for (i = 0; i < priv->channels.num; i++) /* for active channels only */
-		mlx5e_stats_update_stats_rq_page_pool(priv->channels.c[i]);
-
 	for (i = 0; i < priv->stats_nch; i++) {
 		struct mlx5e_channel_stats *channel_stats =
 			priv->channel_stats[i];
 		int j;
 
+		mlx5e_stats_update_stats_rq_page_pool(s, priv->channels.c[i]);
 		mlx5e_stats_grp_sw_update_stats_rq_stats(s, &channel_stats->rq);
 		mlx5e_stats_grp_sw_update_stats_xdpsq(s, &channel_stats->rq_xdpsq);
 		mlx5e_stats_grp_sw_update_stats_ch_stats(s, &channel_stats->ch);
@@ -2086,19 +2038,6 @@ static const struct counter_desc rq_stats_desc[] = {
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, arfs_err) },
 #endif
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, recover) },
-#ifdef CONFIG_PAGE_POOL_STATS
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_fast) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_slow) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_slow_high_order) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_empty) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_refill) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_alloc_waive) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_cached) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_cache_full) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_ring) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_ring_full) },
-	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, pp_recycle_released_ref) },
-#endif
 #ifdef CONFIG_MLX5_EN_TLS
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_packets) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, tls_decrypted_bytes) },
@@ -2446,7 +2385,8 @@ static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(channels)
 	       (NUM_RQ_XDPSQ_STATS * max_nch) +
 	       (NUM_XDPSQ_STATS * max_nch) +
 	       (NUM_XSKRQ_STATS * max_nch * priv->xsk.ever_used) +
-	       (NUM_XSKSQ_STATS * max_nch * priv->xsk.ever_used);
+	       (NUM_XSKSQ_STATS * max_nch * priv->xsk.ever_used) +
+	       page_pool_ethtool_stats_get_count() * max_nch;
 }
 
 static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
@@ -2462,6 +2402,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(channels)
 	for (i = 0; i < max_nch; i++) {
 		for (j = 0; j < NUM_RQ_STATS; j++)
 			ethtool_sprintf(data, rq_stats_desc[j].format, i);
+		page_pool_ethtool_stats_get_strings_mq(data, i);
 		for (j = 0; j < NUM_XSKRQ_STATS * is_xsk; j++)
 			ethtool_sprintf(data, xskrq_stats_desc[j].format, i);
 		for (j = 0; j < NUM_RQ_XDPSQ_STATS; j++)
@@ -2496,11 +2437,13 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(channels)
 					ch_stats_desc, j));
 
 	for (i = 0; i < max_nch; i++) {
+		struct mlx5e_rq_stats *rq_stats = &priv->channel_stats[i]->rq;
+
 		for (j = 0; j < NUM_RQ_STATS; j++)
 			mlx5e_ethtool_put_stat(
-				data, MLX5E_READ_CTR64_CPU(
-					      &priv->channel_stats[i]->rq,
+				data, MLX5E_READ_CTR64_CPU(rq_stats,
 							   rq_stats_desc, j));
+		*data = page_pool_ethtool_stats_get(*data, &rq_stats->page_pool_stats);
 		for (j = 0; j < NUM_XSKRQ_STATS * is_xsk; j++)
 			mlx5e_ethtool_put_stat(
 				data, MLX5E_READ_CTR64_CPU(
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index 5961c569cfe01..aebf4838a76c9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -33,6 +33,8 @@
 #ifndef __MLX5_EN_STATS_H__
 #define __MLX5_EN_STATS_H__
 
+#include <net/page_pool/types.h>
+
 #define MLX5E_READ_CTR64_CPU(ptr, dsc, i) \
 	(*(u64 *)((char *)ptr + dsc[i].offset))
 #define MLX5E_READ_CTR64_BE(ptr, dsc, i) \
@@ -215,19 +217,7 @@ struct mlx5e_sw_stats {
 	u64 ch_aff_change;
 	u64 ch_force_irq;
 	u64 ch_eq_rearm;
-#ifdef CONFIG_PAGE_POOL_STATS
-	u64 rx_pp_alloc_fast;
-	u64 rx_pp_alloc_slow;
-	u64 rx_pp_alloc_slow_high_order;
-	u64 rx_pp_alloc_empty;
-	u64 rx_pp_alloc_refill;
-	u64 rx_pp_alloc_waive;
-	u64 rx_pp_recycle_cached;
-	u64 rx_pp_recycle_cache_full;
-	u64 rx_pp_recycle_ring;
-	u64 rx_pp_recycle_ring_full;
-	u64 rx_pp_recycle_released_ref;
-#endif
+	struct page_pool_stats page_pool_stats;
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tx_tls_encrypted_packets;
 	u64 tx_tls_encrypted_bytes;
@@ -381,19 +371,7 @@ struct mlx5e_rq_stats {
 	u64 arfs_err;
 #endif
 	u64 recover;
-#ifdef CONFIG_PAGE_POOL_STATS
-	u64 pp_alloc_fast;
-	u64 pp_alloc_slow;
-	u64 pp_alloc_slow_high_order;
-	u64 pp_alloc_empty;
-	u64 pp_alloc_refill;
-	u64 pp_alloc_waive;
-	u64 pp_recycle_cached;
-	u64 pp_recycle_cache_full;
-	u64 pp_recycle_ring;
-	u64 pp_recycle_ring_full;
-	u64 pp_recycle_released_ref;
-#endif
+	struct page_pool_stats page_pool_stats;
 #ifdef CONFIG_MLX5_EN_TLS
 	u64 tls_decrypted_packets;
 	u64 tls_decrypted_bytes;
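
The conversion boils down to the following generic pattern, sketched
here with hypothetical "foo" names rather than the mlx5 ones:

/* Embed struct page_pool_stats in both the per-queue and the summed-up
 * stats and let page_pool_get_stats() accumulate into each; the caller
 * zeroes the summed structure before iterating over the queues.
 */
#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>

struct foo_rq_stats {
	struct page_pool_stats page_pool_stats;
};

struct foo_sw_stats {
	struct page_pool_stats page_pool_stats;
};

static void foo_update_rq_page_pool(struct foo_sw_stats *s,
				    struct foo_rq_stats *rq_stats,
				    const struct page_pool *pool)
{
	/* Fetch twice: once into the all-queues sum, once for this queue. */
	page_pool_get_stats(pool, &s->page_pool_stats);
	page_pool_get_stats(pool, &rq_stats->page_pool_stats);
}
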
From patchwork Fri Mar 7 11:57:21 2025
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 14006386
From: Sebastian Andrzej Siewior
To: linux-rdma@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Andrew Lunn, Eric Dumazet, Ilias Apalodimas,
    Jakub Kicinski, Jesper Dangaard Brouer, Joe Damato, Leon Romanovsky,
    Paolo Abeni, Saeed Mahameed, Simon Horman, Tariq Toukan,
    Thomas Gleixner, Yunsheng Lin, Sebastian Andrzej Siewior
Subject: [PATCH net-next v2 4/5] page_pool: Convert page_pool_recycle_stats to u64_stats_t.
Date: Fri, 7 Mar 2025 12:57:21 +0100
Message-ID: <20250307115722.705311-5-bigeasy@linutronix.de>
In-Reply-To: <20250307115722.705311-1-bigeasy@linutronix.de>
References: <20250307115722.705311-1-bigeasy@linutronix.de>

Using u64 for statistics can lead to inconsistency on 32-bit
architectures because an update and a read require accessing two 32-bit
values. This can be avoided by using u64_stats_t for the counters and
u64_stats_sync for the required synchronisation on 32-bit platforms. The
synchronisation is a NOP on 64-bit architectures.

Use u64_stats_t for the counters in page_pool_recycle_stats. Add
U64_STATS_ZERO, a static initializer for u64_stats_t.

Signed-off-by: Sebastian Andrzej Siewior
---
 Documentation/networking/page_pool.rst |  6 +--
 include/linux/u64_stats_sync.h         |  5 +++
 include/net/page_pool/types.h          | 13 ++++---
 net/core/page_pool.c                   | 52 ++++++++++++++++++--------
 net/core/page_pool_user.c              | 10 ++---
 5 files changed, 58 insertions(+), 28 deletions(-)

diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 9d958128a57cb..5215fd51a334a 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -181,11 +181,11 @@ Stats
 
     #ifdef CONFIG_PAGE_POOL_STATS
     /* retrieve stats */
-    struct page_pool_stats stats = { 0 };
+    struct page_pool_stats stats = { };
     if (page_pool_get_stats(page_pool, &stats)) {
             /* perhaps the driver reports statistics with ethtool */
-            ethtool_print_allocation_stats(&stats.alloc_stats);
-            ethtool_print_recycle_stats(&stats.recycle_stats);
+            ethtool_print_allocation_stats(u64_stats_read(&stats.alloc_stats));
+            ethtool_print_recycle_stats(u64_stats_read(&stats.recycle_stats));
     }
     #endif
diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
index 457879938fc19..086bd4a51cfe9 100644
--- a/include/linux/u64_stats_sync.h
+++ b/include/linux/u64_stats_sync.h
@@ -94,6 +94,8 @@ static inline void u64_stats_inc(u64_stats_t *p)
 	local64_inc(&p->v);
 }
 
+#define U64_STATS_ZERO(_member, _name) {}
+
 static inline void u64_stats_init(struct u64_stats_sync *syncp) { }
 static inline void __u64_stats_update_begin(struct u64_stats_sync *syncp) { }
 static inline void __u64_stats_update_end(struct u64_stats_sync *syncp) { }
@@ -141,6 +143,9 @@ static inline void u64_stats_inc(u64_stats_t *p)
 		seqcount_init(&__s->seq);			\
 	} while (0)
 
+#define U64_STATS_ZERO(_member, _name) \
+	_member.seq = SEQCNT_ZERO(#_name#_member.seq)
+
 static inline void __u64_stats_update_begin(struct u64_stats_sync *syncp)
 {
 	preempt_disable_nested();
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 6d55e6cf5d0db..daf989d01436e 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -6,6 +6,7 @@
 #include <linux/dma-direction.h>
 #include <linux/ptr_ring.h>
 #include <linux/types.h>
+#include <linux/u64_stats_sync.h>
 #include <net/netmem.h>
 
 #define PP_FLAG_DMA_MAP		BIT(0) /* Should page_pool do the DMA
@@ -114,6 +115,7 @@ struct page_pool_alloc_stats {
 
 /**
  * struct page_pool_recycle_stats - recycling (freeing) statistics
+ * @syncp: synchronisation point for updates.
  * @cached: recycling placed page in the page pool cache
  * @cache_full: page pool cache was full
  * @ring: page placed into the ptr ring
@@ -121,11 +123,12 @@ struct page_pool_alloc_stats {
  * @released_refcnt: page released (and not recycled) because refcnt > 1
  */
 struct page_pool_recycle_stats {
-	u64 cached;
-	u64 cache_full;
-	u64 ring;
-	u64 ring_full;
-	u64 released_refcnt;
+	struct u64_stats_sync syncp;
+	u64_stats_t cached;
+	u64_stats_t cache_full;
+	u64_stats_t ring;
+	u64_stats_t ring_full;
+	u64_stats_t released_refcnt;
 };
 
 /**
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 2290d80443d1e..312bdc5b5a8bf 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -37,21 +37,27 @@ DEFINE_STATIC_KEY_FALSE(page_pool_mem_providers);
 #define BIAS_MAX	(LONG_MAX >> 1)
 
 #ifdef CONFIG_PAGE_POOL_STATS
-static DEFINE_PER_CPU(struct page_pool_recycle_stats, pp_system_recycle_stats);
+static DEFINE_PER_CPU(struct page_pool_recycle_stats, pp_system_recycle_stats) = {
+	U64_STATS_ZERO(.syncp, pp_system_recycle_stats),
+};
 
 /* alloc_stat_inc is intended to be used in softirq context */
 #define alloc_stat_inc(pool, __stat)	(pool->alloc_stats.__stat++)
 /* recycle_stat_inc is safe to use when preemption is possible. */
 #define recycle_stat_inc(pool, __stat)						\
 	do {									\
-		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
-		this_cpu_inc(s->__stat);					\
+		struct page_pool_recycle_stats *s = this_cpu_ptr(pool->recycle_stats); \
+		u64_stats_update_begin(&s->syncp);				\
+		u64_stats_inc(&s->__stat);					\
+		u64_stats_update_end(&s->syncp);				\
 	} while (0)
 
 #define recycle_stat_add(pool, __stat, val)					\
 	do {									\
-		struct page_pool_recycle_stats __percpu *s = pool->recycle_stats; \
-		this_cpu_add(s->__stat, val);					\
+		struct page_pool_recycle_stats *s = this_cpu_ptr(pool->recycle_stats); \
+		u64_stats_update_begin(&s->syncp);				\
+		u64_stats_add(&s->__stat, val);					\
+		u64_stats_update_end(&s->syncp);				\
 	} while (0)
 
 static const char pp_stats[][ETH_GSTRING_LEN] = {
@@ -96,6 +102,7 @@ static const char pp_stats_mq[][ETH_GSTRING_LEN] = {
 bool page_pool_get_stats(const struct page_pool *pool,
 			 struct page_pool_stats *stats)
 {
+	unsigned int start;
 	int cpu = 0;
 
 	if (!stats)
@@ -110,14 +117,24 @@ bool page_pool_get_stats(const struct page_pool *pool,
 	stats->alloc_stats.waive += pool->alloc_stats.waive;
 
 	for_each_possible_cpu(cpu) {
+		u64 cached, cache_full, ring, ring_full, released_refcnt;
 		const struct page_pool_recycle_stats *pcpu =
 			per_cpu_ptr(pool->recycle_stats, cpu);
 
-		stats->recycle_stats.cached += pcpu->cached;
-		stats->recycle_stats.cache_full += pcpu->cache_full;
-		stats->recycle_stats.ring += pcpu->ring;
-		stats->recycle_stats.ring_full += pcpu->ring_full;
-		stats->recycle_stats.released_refcnt += pcpu->released_refcnt;
+		do {
+			start = u64_stats_fetch_begin(&pcpu->syncp);
+			cached = u64_stats_read(&pcpu->cached);
+			cache_full = u64_stats_read(&pcpu->cache_full);
+			ring = u64_stats_read(&pcpu->ring);
+			ring_full = u64_stats_read(&pcpu->ring_full);
+			released_refcnt = u64_stats_read(&pcpu->released_refcnt);
+		} while (u64_stats_fetch_retry(&pcpu->syncp, start));
+
+		u64_stats_add(&stats->recycle_stats.cached, cached);
+		u64_stats_add(&stats->recycle_stats.cache_full, cache_full);
+		u64_stats_add(&stats->recycle_stats.ring, ring);
+		u64_stats_add(&stats->recycle_stats.ring_full, ring_full);
+		u64_stats_add(&stats->recycle_stats.released_refcnt, released_refcnt);
 	}
 
 	return true;
@@ -162,11 +179,11 @@ u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 	*data++ = pool_stats->alloc_stats.empty;
 	*data++ = pool_stats->alloc_stats.refill;
 	*data++ = pool_stats->alloc_stats.waive;
-	*data++ = pool_stats->recycle_stats.cached;
-	*data++ = pool_stats->recycle_stats.cache_full;
-	*data++ = pool_stats->recycle_stats.ring;
-	*data++ = pool_stats->recycle_stats.ring_full;
-	*data++ = pool_stats->recycle_stats.released_refcnt;
+	*data++ = u64_stats_read(&pool_stats->recycle_stats.cached);
+	*data++ = u64_stats_read(&pool_stats->recycle_stats.cache_full);
+	*data++ = u64_stats_read(&pool_stats->recycle_stats.ring);
+	*data++ = u64_stats_read(&pool_stats->recycle_stats.ring_full);
+	*data++ = u64_stats_read(&pool_stats->recycle_stats.released_refcnt);
 
 	return data;
 }
@@ -270,9 +287,14 @@ static int page_pool_init(struct page_pool *pool,
 
 #ifdef CONFIG_PAGE_POOL_STATS
 	if (!(pool->slow.flags & PP_FLAG_SYSTEM_POOL)) {
+		unsigned int cpu;
+
 		pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
 		if (!pool->recycle_stats)
 			return -ENOMEM;
+
+		for_each_possible_cpu(cpu)
+			u64_stats_init(&per_cpu_ptr(pool->recycle_stats, cpu)->syncp);
 	} else {
 		/* For system page pool instance we use a singular stats object
 		 * instead of allocating a separate percpu variable for each
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 6677e0c2e2565..0d038c0c8996d 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -149,15 +149,15 @@ page_pool_nl_stats_fill(struct sk_buff *rsp, const struct page_pool *pool,
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_WAIVE,
 			 stats.alloc_stats.waive) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHED,
-			 stats.recycle_stats.cached) ||
+			 u64_stats_read(&stats.recycle_stats.cached)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHE_FULL,
-			 stats.recycle_stats.cache_full) ||
+			 u64_stats_read(&stats.recycle_stats.cache_full)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_RING,
-			 stats.recycle_stats.ring) ||
+			 u64_stats_read(&stats.recycle_stats.ring)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_RING_FULL,
-			 stats.recycle_stats.ring_full) ||
+			 u64_stats_read(&stats.recycle_stats.ring_full)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_RELEASED_REFCNT,
-			 stats.recycle_stats.released_refcnt))
+			 u64_stats_read(&stats.recycle_stats.released_refcnt)))
 		goto err_cancel_msg;
 
 	genlmsg_end(rsp, hdr);
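
Outside of the diff, the initialisation discipline introduced here looks
as follows; a minimal sketch with hypothetical "foo" names, mirroring
what page_pool_init() now does for percpu recycle stats:

#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

struct foo_recycle_stats {
	struct u64_stats_sync syncp;
	u64_stats_t cached;
};

static int foo_alloc_recycle_stats(struct foo_recycle_stats __percpu **statsp)
{
	struct foo_recycle_stats __percpu *stats;
	unsigned int cpu;

	stats = alloc_percpu(struct foo_recycle_stats);
	if (!stats)
		return -ENOMEM;

	/* Dynamically allocated seqcounts must be initialized on each CPU;
	 * only static instances can use the new U64_STATS_ZERO() initializer.
	 */
	for_each_possible_cpu(cpu)
		u64_stats_init(&per_cpu_ptr(stats, cpu)->syncp);

	*statsp = stats;
	return 0;
}
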
From patchwork Fri Mar 7 11:57:22 2025
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 14006387
From: Sebastian Andrzej Siewior
To: linux-rdma@vger.kernel.org, netdev@vger.kernel.org
Cc: "David S. Miller", Andrew Lunn, Eric Dumazet, Ilias Apalodimas,
    Jakub Kicinski, Jesper Dangaard Brouer, Joe Damato, Leon Romanovsky,
    Paolo Abeni, Saeed Mahameed, Simon Horman, Tariq Toukan,
    Thomas Gleixner, Yunsheng Lin, Sebastian Andrzej Siewior
Subject: [PATCH net-next v2 5/5] page_pool: Convert page_pool_alloc_stats to u64_stats_t.
Date: Fri, 7 Mar 2025 12:57:22 +0100
Message-ID: <20250307115722.705311-6-bigeasy@linutronix.de>
In-Reply-To: <20250307115722.705311-1-bigeasy@linutronix.de>
References: <20250307115722.705311-1-bigeasy@linutronix.de>

Using u64 for statistics can lead to inconsistency on 32-bit
architectures because an update and a read require accessing two 32-bit
values. This can be avoided by using u64_stats_t for the counters and
u64_stats_sync for the required synchronisation on 32-bit platforms. The
synchronisation is a NOP on 64-bit architectures.

Use u64_stats_t for the counters in page_pool_alloc_stats.

Signed-off-by: Sebastian Andrzej Siewior
---
 include/net/page_pool/types.h | 14 ++++++-----
 net/core/page_pool.c          | 47 +++++++++++++++++++++++++----------
 net/core/page_pool_user.c     | 12 ++++-----
 3 files changed, 48 insertions(+), 25 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index daf989d01436e..78984b9286c6b 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -96,6 +96,7 @@ struct page_pool_params {
 #ifdef CONFIG_PAGE_POOL_STATS
 /**
  * struct page_pool_alloc_stats - allocation statistics
+ * @syncp: synchronisation point for updates.
  * @fast: successful fast path allocations
  * @slow: slow path order-0 allocations
  * @slow_high_order: slow path high order allocations
@@ -105,12 +106,13 @@ struct page_pool_params {
  *	the cache due to a NUMA mismatch
  */
 struct page_pool_alloc_stats {
-	u64 fast;
-	u64 slow;
-	u64 slow_high_order;
-	u64 empty;
-	u64 refill;
-	u64 waive;
+	struct u64_stats_sync syncp;
+	u64_stats_t fast;
+	u64_stats_t slow;
+	u64_stats_t slow_high_order;
+	u64_stats_t empty;
+	u64_stats_t refill;
+	u64_stats_t waive;
 };
 
 /**
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 312bdc5b5a8bf..9f4a390964195 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -42,7 +42,14 @@ static DEFINE_PER_CPU(struct page_pool_recycle_stats, pp_system_recycle_stats) =
 };
 
 /* alloc_stat_inc is intended to be used in softirq context */
-#define alloc_stat_inc(pool, __stat)	(pool->alloc_stats.__stat++)
+#define alloc_stat_inc(pool, __stat)					\
+	do {								\
+		struct page_pool_alloc_stats *s = &pool->alloc_stats;	\
+		u64_stats_update_begin(&s->syncp);			\
+		u64_stats_inc(&s->__stat);				\
+		u64_stats_update_end(&s->syncp);			\
+	} while (0)
+
 /* recycle_stat_inc is safe to use when preemption is possible. */
 #define recycle_stat_inc(pool, __stat)					\
 	do {								\
@@ -102,19 +109,32 @@ static const char pp_stats_mq[][ETH_GSTRING_LEN] = {
 bool page_pool_get_stats(const struct page_pool *pool,
 			 struct page_pool_stats *stats)
 {
+	u64 fast, slow, slow_high_order, empty, refill, waive;
+	const struct page_pool_alloc_stats *alloc_stats;
 	unsigned int start;
 	int cpu = 0;
 
 	if (!stats)
 		return false;
 
+	alloc_stats = &pool->alloc_stats;
 	/* The caller is responsible to initialize stats. */
-	stats->alloc_stats.fast += pool->alloc_stats.fast;
-	stats->alloc_stats.slow += pool->alloc_stats.slow;
-	stats->alloc_stats.slow_high_order += pool->alloc_stats.slow_high_order;
-	stats->alloc_stats.empty += pool->alloc_stats.empty;
-	stats->alloc_stats.refill += pool->alloc_stats.refill;
-	stats->alloc_stats.waive += pool->alloc_stats.waive;
+	do {
+		start = u64_stats_fetch_begin(&alloc_stats->syncp);
+		fast = u64_stats_read(&alloc_stats->fast);
+		slow = u64_stats_read(&alloc_stats->slow);
+		slow_high_order = u64_stats_read(&alloc_stats->slow_high_order);
+		empty = u64_stats_read(&alloc_stats->empty);
+		refill = u64_stats_read(&alloc_stats->refill);
+		waive = u64_stats_read(&alloc_stats->waive);
+	} while (u64_stats_fetch_retry(&alloc_stats->syncp, start));
+
+	u64_stats_add(&stats->alloc_stats.fast, fast);
+	u64_stats_add(&stats->alloc_stats.slow, slow);
+	u64_stats_add(&stats->alloc_stats.slow_high_order, slow_high_order);
+	u64_stats_add(&stats->alloc_stats.empty, empty);
+	u64_stats_add(&stats->alloc_stats.refill, refill);
+	u64_stats_add(&stats->alloc_stats.waive, waive);
 
 	for_each_possible_cpu(cpu) {
 		u64 cached, cache_full, ring, ring_full, released_refcnt;
@@ -173,12 +193,12 @@ u64 *page_pool_ethtool_stats_get(u64 *data, const void *stats)
 {
 	const struct page_pool_stats *pool_stats = stats;
 
-	*data++ = pool_stats->alloc_stats.fast;
-	*data++ = pool_stats->alloc_stats.slow;
-	*data++ = pool_stats->alloc_stats.slow_high_order;
-	*data++ = pool_stats->alloc_stats.empty;
-	*data++ = pool_stats->alloc_stats.refill;
-	*data++ = pool_stats->alloc_stats.waive;
+	*data++ = u64_stats_read(&pool_stats->alloc_stats.fast);
+	*data++ = u64_stats_read(&pool_stats->alloc_stats.slow);
+	*data++ = u64_stats_read(&pool_stats->alloc_stats.slow_high_order);
+	*data++ = u64_stats_read(&pool_stats->alloc_stats.empty);
+	*data++ = u64_stats_read(&pool_stats->alloc_stats.refill);
+	*data++ = u64_stats_read(&pool_stats->alloc_stats.waive);
 	*data++ = u64_stats_read(&pool_stats->recycle_stats.cached);
 	*data++ = u64_stats_read(&pool_stats->recycle_stats.cache_full);
 	*data++ = u64_stats_read(&pool_stats->recycle_stats.ring);
@@ -303,6 +323,7 @@ static int page_pool_init(struct page_pool *pool,
 		pool->recycle_stats = &pp_system_recycle_stats;
 		pool->system = true;
 	}
+	u64_stats_init(&pool->alloc_stats.syncp);
 #endif
 
 	if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0) {
diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
index 0d038c0c8996d..c368cb141147f 100644
--- a/net/core/page_pool_user.c
+++ b/net/core/page_pool_user.c
@@ -137,17 +137,17 @@ page_pool_nl_stats_fill(struct sk_buff *rsp, const struct page_pool *pool,
 	nla_nest_end(rsp, nest);
 
 	if (nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_FAST,
-			 stats.alloc_stats.fast) ||
+			 u64_stats_read(&stats.alloc_stats.fast)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW,
-			 stats.alloc_stats.slow) ||
+			 u64_stats_read(&stats.alloc_stats.slow)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_SLOW_HIGH_ORDER,
-			 stats.alloc_stats.slow_high_order) ||
+			 u64_stats_read(&stats.alloc_stats.slow_high_order)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_EMPTY,
-			 stats.alloc_stats.empty) ||
+			 u64_stats_read(&stats.alloc_stats.empty)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_REFILL,
-			 stats.alloc_stats.refill) ||
+			 u64_stats_read(&stats.alloc_stats.refill)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_ALLOC_WAIVE,
-			 stats.alloc_stats.waive) ||
+			 u64_stats_read(&stats.alloc_stats.waive)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHED,
 			 u64_stats_read(&stats.recycle_stats.cached)) ||
 	    nla_put_uint(rsp, NETDEV_A_PAGE_POOL_STATS_RECYCLE_CACHE_FULL,
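
Taken together, the writer and reader sides of the converted counters
follow this scheme; a minimal sketch with hypothetical "foo" names (on
64-bit kernels the begin/end/fetch calls compile away):

#include <linux/u64_stats_sync.h>

struct foo_alloc_stats {
	struct u64_stats_sync syncp;
	u64_stats_t fast;
};

/* Writer side, e.g. the allocation fast path. */
static void foo_count_fast_alloc(struct foo_alloc_stats *s)
{
	u64_stats_update_begin(&s->syncp);
	u64_stats_inc(&s->fast);
	u64_stats_update_end(&s->syncp);
}

/* Reader side: retry until a consistent 64-bit snapshot is observed. */
static u64 foo_read_fast_alloc(const struct foo_alloc_stats *s)
{
	unsigned int start;
	u64 fast;

	do {
		start = u64_stats_fetch_begin(&s->syncp);
		fast = u64_stats_read(&s->fast);
	} while (u64_stats_fetch_retry(&s->syncp, start));

	return fast;
}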