From patchwork Tue Apr 2 09:37:49 2024
X-Patchwork-Submitter: Pavan Chebbi
X-Patchwork-Id: 13613596
X-Patchwork-Delegate: kuba@kernel.org
From: Pavan Chebbi <pavan.chebbi@broadcom.com>
To: michael.chan@broadcom.com
Cc: davem@davemloft.net, edumazet@google.com, gospo@broadcom.com, kuba@kernel.org, netdev@vger.kernel.org, pabeni@redhat.com, Somnath Kotur, Andy Gospodarek, Pavan Chebbi
Subject: [PATCH net-next v2 3/7] bnxt_en: Allocate page pool per numa node
Date: Tue, 2 Apr 2024 02:37:49 -0700
Message-Id: <20240402093753.331120-4-pavan.chebbi@broadcom.com>
In-Reply-To: <20240402093753.331120-1-pavan.chebbi@broadcom.com>
References: <20240402093753.331120-1-pavan.chebbi@broadcom.com>
X-Mailing-List: netdev@vger.kernel.org

From: Somnath Kotur

The driver's page pool allocation code looks at the node local to the
PCIe device to determine where to allocate memory. In scenarios where
the core count per NUMA node is low (fewer than the default ring
count), it makes sense to exhaust page pool allocations on node 0
first and then move on to allocating page pools for the remaining
rings from node 1.
With this patch, and the following configuration on the NIC:

$ ethtool -L ens1f0np0 combined 16

(core count/node = 12, so the first 12 rings land on node#0 and the
last 4 rings on node#1), and traffic redirected to a ring on node#1,
we see a performance improvement of ~20%.

Signed-off-by: Somnath Kotur
Reviewed-by: Andy Gospodarek
Reviewed-by: Michael Chan
Signed-off-by: Pavan Chebbi
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 44b9332c147e..9fca1dee6486 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3559,14 +3559,15 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 }
 
 static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
-				   struct bnxt_rx_ring_info *rxr)
+				   struct bnxt_rx_ring_info *rxr,
+				   int numa_node)
 {
 	struct page_pool_params pp = { 0 };
 
 	pp.pool_size = bp->rx_agg_ring_size;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size;
-	pp.nid = dev_to_node(&bp->pdev->dev);
+	pp.nid = numa_node;
 	pp.napi = &rxr->bnapi->napi;
 	pp.netdev = bp->dev;
 	pp.dev = &bp->pdev->dev;
@@ -3586,7 +3587,8 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 
 static int bnxt_alloc_rx_rings(struct bnxt *bp)
 {
-	int i, rc = 0, agg_rings = 0;
+	int numa_node = dev_to_node(&bp->pdev->dev);
+	int i, rc = 0, agg_rings = 0, cpu;
 
 	if (!bp->rx_ring)
 		return -ENOMEM;
@@ -3597,10 +3599,15 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring;
+		int cpu_node;
 
 		ring = &rxr->rx_ring_struct;
-		rc = bnxt_alloc_rx_page_pool(bp, rxr);
+		cpu = cpumask_local_spread(i, numa_node);
+		cpu_node = cpu_to_node(cpu);
+		netdev_dbg(bp->dev, "Allocating page pool for rx_ring[%d] on numa_node: %d\n",
+			   i, cpu_node);
+		rc = bnxt_alloc_rx_page_pool(bp, rxr, cpu_node);
 		if (rc)
 			return rc;