From patchwork Thu Apr 18 19:51:51 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13635385
X-Patchwork-State: RFC
Date: Thu, 18 Apr 2024 19:51:51 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-2-shailend@google.com>
Subject: [RFC PATCH net-next 1/9] queue_api: define queue api
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com

From: Mina Almasry

This API enables the net stack to reset the queues used for devmem TCP.

Signed-off-by: Mina Almasry
---
 include/linux/netdevice.h   |  3 +++
 include/net/netdev_queues.h | 27 +++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d45f330d083d..5b67dee39818 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1957,6 +1957,7 @@ enum netdev_reg_state {
  *	@sysfs_rx_queue_group:	Space for optional per-rx queue attributes
  *	@rtnl_link_ops:	Rtnl_link_ops
  *	@stat_ops:	Optional ops for queue-aware statistics
+ *	@queue_mgmt_ops:	Optional ops for queue management
  *
  *	@gso_max_size:	Maximum size of generic segmentation offload
  *	@tso_max_size:	Device (as in HW) limit on the max TSO request size
@@ -2340,6 +2341,8 @@ struct net_device {
 
 	const struct netdev_stat_ops	*stat_ops;
 
+	const struct netdev_queue_mgmt_ops	*queue_mgmt_ops;
+
 	/* for setting kernel sock attribute on TCP connection setup */
 #define GSO_MAX_SEGS		65535u
 #define GSO_LEGACY_MAX_SIZE	65536u
diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h
index 1ec408585373..337df0860ae6 100644
--- a/include/net/netdev_queues.h
+++ b/include/net/netdev_queues.h
@@ -60,6 +60,33 @@ struct netdev_stat_ops {
 				struct netdev_queue_stats_tx *tx);
 };
 
+/**
+ * struct netdev_queue_mgmt_ops - netdev ops for queue management
+ *
+ * @ndo_queue_mem_alloc: Allocate memory for an RX queue. The memory returned
+ *			 in the form of a void* can be passed to
+ *			 ndo_queue_mem_free() for freeing or to ndo_queue_start
+ *			 to create an RX queue with this memory.
+ *
+ * @ndo_queue_mem_free:	Free memory from an RX queue.
+ *
+ * @ndo_queue_start:	Start an RX queue at the specified index.
+ *
+ * @ndo_queue_stop:	Stop the RX queue at the specified index.
+ */
+struct netdev_queue_mgmt_ops {
+	void *			(*ndo_queue_mem_alloc)(struct net_device *dev,
+						       int idx);
+	void			(*ndo_queue_mem_free)(struct net_device *dev,
+						      void *queue_mem);
+	int			(*ndo_queue_start)(struct net_device *dev,
+						   int idx,
+						   void *queue_mem);
+	int			(*ndo_queue_stop)(struct net_device *dev,
+						   int idx,
+						   void **out_queue_mem);
+};
+
 /**
  * DOC: Lockless queue stopping / waking helpers.
  *
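[Editor's note] To make the intended use of the four hooks concrete, here is a self-contained user-space sketch of the restart sequence they enable. Everything below (`toy_*`, `netdev_restart_rx_queue`, the 4-queue device) is an invented stand-in, not kernel code; the point is only the order of operations: allocate the replacement memory before stopping the queue, so the queue is down as briefly as possible, and fall back to the old memory if the restart fails.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct net_device;

/* Same shape as the ops table added by this patch. */
struct netdev_queue_mgmt_ops {
	void *(*ndo_queue_mem_alloc)(struct net_device *dev, int idx);
	void (*ndo_queue_mem_free)(struct net_device *dev, void *queue_mem);
	int (*ndo_queue_start)(struct net_device *dev, int idx, void *queue_mem);
	int (*ndo_queue_stop)(struct net_device *dev, int idx, void **out_queue_mem);
};

/* Toy device: one opaque memory blob per queue. */
struct net_device {
	const struct netdev_queue_mgmt_ops *queue_mgmt_ops;
	void *queue_mem[4];
};

static void *toy_mem_alloc(struct net_device *dev, int idx)
{
	(void)dev; (void)idx;
	return malloc(64);
}

static void toy_mem_free(struct net_device *dev, void *mem)
{
	(void)dev;
	free(mem);
}

static int toy_start(struct net_device *dev, int idx, void *mem)
{
	dev->queue_mem[idx] = mem;
	return 0;
}

static int toy_stop(struct net_device *dev, int idx, void **out_mem)
{
	*out_mem = dev->queue_mem[idx];
	dev->queue_mem[idx] = NULL;
	return 0;
}

static const struct netdev_queue_mgmt_ops toy_ops = {
	.ndo_queue_mem_alloc = toy_mem_alloc,
	.ndo_queue_mem_free = toy_mem_free,
	.ndo_queue_start = toy_start,
	.ndo_queue_stop = toy_stop,
};

/* The restart sequence the stack can drive over the ops. */
static int netdev_restart_rx_queue(struct net_device *dev, int idx)
{
	const struct netdev_queue_mgmt_ops *ops = dev->queue_mgmt_ops;
	void *new_mem, *old_mem;
	int err;

	new_mem = ops->ndo_queue_mem_alloc(dev, idx);
	if (!new_mem)
		return -1;

	err = ops->ndo_queue_stop(dev, idx, &old_mem);
	if (err) {
		ops->ndo_queue_mem_free(dev, new_mem);
		return err;
	}

	err = ops->ndo_queue_start(dev, idx, new_mem);
	if (err) {
		/* Try to restore the queue with its old memory. */
		ops->ndo_queue_start(dev, idx, old_mem);
		ops->ndo_queue_mem_free(dev, new_mem);
		return err;
	}

	ops->ndo_queue_mem_free(dev, old_mem);
	return 0;
}

/* Restart queue 2 on a toy device; returns 0 on success. */
static int demo(void)
{
	struct net_device dev = { .queue_mgmt_ops = &toy_ops };
	int err;

	dev.queue_mem[2] = toy_mem_alloc(&dev, 2);
	err = netdev_restart_rx_queue(&dev, 2);
	toy_mem_free(&dev, dev.queue_mem[2]);
	return err;
}
```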
From patchwork Thu Apr 18 19:51:52 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13635386
X-Patchwork-State: RFC
Date: Thu, 18 Apr 2024 19:51:52 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-3-shailend@google.com>
Subject: [RFC PATCH net-next 2/9] gve: Make the RX free queue funcs idempotent
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com, Shailend Chand

Although this is not fixing any existing double free bug, making these
functions idempotent allows for a simpler implementation of future ndo
hooks that act on a single queue.

Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_rx.c | 29 ++++++++++++++++--------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index cd727e55ae0f..fde7503940f7 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -30,6 +30,9 @@ static void gve_rx_unfill_pages(struct gve_priv *priv,
 	u32 slots = rx->mask + 1;
 	int i;
 
+	if (!rx->data.page_info)
+		return;
+
 	if (rx->data.raw_addressing) {
 		for (i = 0; i < slots; i++)
 			gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i],
@@ -70,20 +73,26 @@ static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
 	int idx = rx->q_num;
 	size_t bytes;
 
-	bytes = sizeof(struct gve_rx_desc) * cfg->ring_size;
-	dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
-	rx->desc.desc_ring = NULL;
+	if (rx->desc.desc_ring) {
+		bytes = sizeof(struct gve_rx_desc) * cfg->ring_size;
+		dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus);
+		rx->desc.desc_ring = NULL;
+	}
 
-	dma_free_coherent(dev, sizeof(*rx->q_resources),
-			  rx->q_resources, rx->q_resources_bus);
-	rx->q_resources = NULL;
+	if (rx->q_resources) {
+		dma_free_coherent(dev, sizeof(*rx->q_resources),
+				  rx->q_resources, rx->q_resources_bus);
+		rx->q_resources = NULL;
+	}
 
 	gve_rx_unfill_pages(priv, rx, cfg);
 
-	bytes = sizeof(*rx->data.data_ring) * slots;
-	dma_free_coherent(dev, bytes, rx->data.data_ring,
-			  rx->data.data_bus);
-	rx->data.data_ring = NULL;
+	if (rx->data.data_ring) {
+		bytes = sizeof(*rx->data.data_ring) * slots;
+		dma_free_coherent(dev, bytes, rx->data.data_ring,
+				  rx->data.data_bus);
+		rx->data.data_ring = NULL;
+	}
 
 	kvfree(rx->qpl_copy_pool);
 	rx->qpl_copy_pool = NULL;
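[Editor's note] For readers unfamiliar with the pattern: "idempotent" here just means every teardown step is guarded by a NULL check and NULLs the pointer afterward, so the function is safe to call on a partially-allocated ring or to call twice. A minimal user-space sketch, with an invented `toy_rx_ring` and plain `malloc`/`free` standing in for the DMA allocations:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for a ring with two resources; not the real gve structs. */
struct toy_rx_ring {
	void *desc_ring;
	void *data_ring;
};

/* Idempotent teardown: each free is guarded by a NULL check and followed
 * by NULLing the pointer, so calling this twice (or on a ring that was
 * never fully allocated) is harmless. */
static void toy_rx_free_ring(struct toy_rx_ring *rx)
{
	if (rx->desc_ring) {
		free(rx->desc_ring);
		rx->desc_ring = NULL;
	}
	if (rx->data_ring) {
		free(rx->data_ring);
		rx->data_ring = NULL;
	}
}
```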
From patchwork Thu Apr 18 19:51:53 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13635387
X-Patchwork-State: RFC
Date: Thu, 18 Apr 2024 19:51:53 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-4-shailend@google.com>
Subject: [RFC PATCH net-next 3/9] gve: Add adminq funcs to add/remove a single Rx queue
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com, Shailend Chand

This allows for implementing future ndo hooks that act on a single
queue.

Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 79 ++++++++++++++------
 drivers/net/ethernet/google/gve/gve_adminq.h |  2 +
 2 files changed, 58 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index b2b619aa2310..1b066c92d812 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -630,14 +630,15 @@ int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_que
 	return gve_adminq_kick_and_wait(priv);
 }
 
-static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+static void gve_adminq_get_create_rx_queue_cmd(struct gve_priv *priv,
+					       union gve_adminq_command *cmd,
+					       u32 queue_index)
 {
 	struct gve_rx_ring *rx = &priv->rx[queue_index];
-	union gve_adminq_command cmd;
 
-	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
-	cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) {
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE);
+	cmd->create_rx_queue = (struct gve_adminq_create_rx_queue) {
 		.queue_id = cpu_to_be32(queue_index),
 		.ntfy_id = cpu_to_be32(rx->ntfy_id),
 		.queue_resources_addr = cpu_to_be64(rx->q_resources_bus),
@@ -648,13 +649,13 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 		u32 qpl_id = priv->queue_format == GVE_GQI_RDA_FORMAT ?
 			GVE_RAW_ADDRESSING_QPL_ID : rx->data.qpl->id;
 
-		cmd.create_rx_queue.rx_desc_ring_addr =
+		cmd->create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->desc.bus),
-		cmd.create_rx_queue.rx_data_ring_addr =
+		cmd->create_rx_queue.rx_data_ring_addr =
 			cpu_to_be64(rx->data.data_bus),
-		cmd.create_rx_queue.index = cpu_to_be32(queue_index);
-		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
+		cmd->create_rx_queue.index = cpu_to_be32(queue_index);
+		cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
+		cmd->create_rx_queue.packet_buffer_size = cpu_to_be16(rx->packet_buffer_size);
 	} else {
 		u32 qpl_id = 0;
 
@@ -662,25 +663,39 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
 			qpl_id = GVE_RAW_ADDRESSING_QPL_ID;
 		else
 			qpl_id = rx->dqo.qpl->id;
-		cmd.create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
-		cmd.create_rx_queue.rx_desc_ring_addr =
+		cmd->create_rx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
+		cmd->create_rx_queue.rx_desc_ring_addr =
 			cpu_to_be64(rx->dqo.complq.bus);
-		cmd.create_rx_queue.rx_data_ring_addr =
+		cmd->create_rx_queue.rx_data_ring_addr =
 			cpu_to_be64(rx->dqo.bufq.bus);
-		cmd.create_rx_queue.packet_buffer_size =
+		cmd->create_rx_queue.packet_buffer_size =
 			cpu_to_be16(priv->data_buffer_size_dqo);
-		cmd.create_rx_queue.rx_buff_ring_size =
+		cmd->create_rx_queue.rx_buff_ring_size =
 			cpu_to_be16(priv->rx_desc_cnt);
-		cmd.create_rx_queue.enable_rsc =
+		cmd->create_rx_queue.enable_rsc =
 			!!(priv->dev->features & NETIF_F_LRO);
 		if (priv->header_split_enabled)
-			cmd.create_rx_queue.header_buffer_size =
+			cmd->create_rx_queue.header_buffer_size =
 				cpu_to_be16(priv->header_buf_size);
 	}
+}
+
+static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+
+	gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index);
 	return gve_adminq_issue_cmd(priv, &cmd);
 }
 
+int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+
+	gve_adminq_get_create_rx_queue_cmd(priv, &cmd, queue_index);
+	return gve_adminq_execute_cmd(priv, &cmd);
+}
+
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues)
 {
 	int err;
@@ -727,17 +742,22 @@ int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_qu
 	return gve_adminq_kick_and_wait(priv);
 }
 
+static void gve_adminq_make_destroy_rx_queue_cmd(union gve_adminq_command *cmd,
+						 u32 queue_index)
+{
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
+	cmd->destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
+		.queue_id = cpu_to_be32(queue_index),
+	};
+}
+
 static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
 {
 	union gve_adminq_command cmd;
 	int err;
 
-	memset(&cmd, 0, sizeof(cmd));
-	cmd.opcode = cpu_to_be32(GVE_ADMINQ_DESTROY_RX_QUEUE);
-	cmd.destroy_rx_queue = (struct gve_adminq_destroy_rx_queue) {
-		.queue_id = cpu_to_be32(queue_index),
-	};
-
+	gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index);
 	err = gve_adminq_issue_cmd(priv, &cmd);
 	if (err)
 		return err;
@@ -745,6 +765,19 @@ static int gve_adminq_destroy_rx_queue(struct gve_priv *priv, u32 queue_index)
 	return 0;
 }
 
+int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index)
+{
+	union gve_adminq_command cmd;
+	int err;
+
+	gve_adminq_make_destroy_rx_queue_cmd(&cmd, queue_index);
+	err = gve_adminq_execute_cmd(priv, &cmd);
+	if (err)
+		return err;
+
+	return 0;
+}
+
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 {
 	int err;
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index beedf2353847..e64f0dbe744d 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -451,7 +451,9 @@ int gve_adminq_configure_device_resources(struct gve_priv *priv,
 int gve_adminq_deconfigure_device_resources(struct gve_priv *priv);
 int gve_adminq_create_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
 int gve_adminq_destroy_tx_queues(struct gve_priv *priv, u32 start_id, u32 num_queues);
+int gve_adminq_create_single_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_create_rx_queues(struct gve_priv *priv, u32 num_queues);
+int gve_adminq_destroy_single_rx_queue(struct gve_priv *priv, u32 queue_index);
 int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 queue_id);
 int gve_adminq_register_page_list(struct gve_priv *priv,
 				  struct gve_queue_page_list *qpl);
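[Editor's note] The shape of this refactor — one builder that fills in the command, plus two thin submitters (the batched `gve_adminq_issue_cmd()` path vs. the immediate `gve_adminq_execute_cmd()` path) — can be sketched in miniature. All `toy_*` names below are invented for illustration and only model the structure, not the real adminq:

```c
#include <assert.h>
#include <string.h>

/* Toy command; the real one is a union of per-opcode structs. */
struct toy_cmd {
	unsigned int opcode;
	unsigned int queue_id;
};

#define TOY_CREATE_RX_QUEUE 4u

/* Shared builder: both submission paths get an identical command. */
static void toy_make_create_rx_cmd(struct toy_cmd *cmd, unsigned int queue_index)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->opcode = TOY_CREATE_RX_QUEUE;
	cmd->queue_id = queue_index;
}

static int batched_cmds;	/* queued, kicked later with others */
static int executed_cmds;	/* submitted and waited on immediately */

/* Batched path, analogous to the existing gve_adminq_create_rx_queue(). */
static int toy_create_rx_queue(unsigned int queue_index)
{
	struct toy_cmd cmd;

	toy_make_create_rx_cmd(&cmd, queue_index);
	batched_cmds++;
	return cmd.opcode == TOY_CREATE_RX_QUEUE ? 0 : -1;
}

/* Immediate path, analogous to the new gve_adminq_create_single_rx_queue(). */
static int toy_create_single_rx_queue(unsigned int queue_index)
{
	struct toy_cmd cmd;

	toy_make_create_rx_cmd(&cmd, queue_index);
	executed_cmds++;
	return cmd.opcode == TOY_CREATE_RX_QUEUE ? 0 : -1;
}
```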
From patchwork Thu Apr 18 19:51:54 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13635388
X-Patchwork-State: RFC
Date: Thu, 18 Apr 2024 19:51:54 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-5-shailend@google.com>
Subject: [RFC PATCH net-next 4/9] gve: Make gve_turn(up|down) ignore stopped queues
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com, Shailend Chand

Currently the queues are either all live or all dead, toggling from one
state to the other via the ndo open and stop hooks. The future addition
of single-queue ndo hooks changes this, and thus gve_turnup and
gve_turndown should evolve to account for a state where some queues are
live and some aren't.
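[Editor's note] The guard this patch adds can be sketched with toy types (the `toy_*` names and flags are invented; they only mirror the idea of `gve_rx_was_added_to_block()` gating `napi_enable()`): a device-wide turnup now skips any queue that was never attached to a notify block, so it is safe to run while one queue is individually stopped.

```c
#include <assert.h>

/* Toy queue: a flag saying whether it was ever added to a notify block
 * (i.e. is live) and a napi-like enabled flag. Not the real gve structs. */
struct toy_queue {
	int added_to_block;
	int napi_enabled;
};

/* Like the patched gve_turnup: skip queues that were never started.
 * Returns how many queues were actually enabled. */
static int toy_turnup(struct toy_queue *qs, int n)
{
	int enabled = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (!qs[i].added_to_block)
			continue;
		qs[i].napi_enabled = 1;
		enabled++;
	}
	return enabled;
}
```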
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_main.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index a515e5af843c..8e875b598e78 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1965,12 +1965,16 @@ static void gve_turndown(struct gve_priv *priv)
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_tx_was_added_to_block(priv, idx))
+			continue;
 		napi_disable(&block->napi);
 	}
 	for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) {
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_rx_was_added_to_block(priv, idx))
+			continue;
 		napi_disable(&block->napi);
 	}
 
@@ -1993,6 +1997,9 @@ static void gve_turnup(struct gve_priv *priv)
 		int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_tx_was_added_to_block(priv, idx))
+			continue;
+
 		napi_enable(&block->napi);
 		if (gve_is_gqi(priv)) {
 			iowrite32be(0, gve_irq_doorbell(priv, block));
@@ -2005,6 +2012,9 @@ static void gve_turnup(struct gve_priv *priv)
 		int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
 		struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
 
+		if (!gve_rx_was_added_to_block(priv, idx))
+			continue;
+
 		napi_enable(&block->napi);
 		if (gve_is_gqi(priv)) {
 			iowrite32be(0, gve_irq_doorbell(priv, block));

From patchwork Thu Apr 18 19:51:55 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13635389
X-Patchwork-State: RFC
Date: Thu, 18 Apr 2024 19:51:55 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-6-shailend@google.com>
Subject: [RFC PATCH net-next 5/9] gve: Make gve_turnup work for nonempty queues
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com, Shailend Chand

gVNIC has a requirement that all queues have to be quiesced before any
queue is operated on (created or destroyed). To enable the
implementation of future ndo hooks that work on a single queue, we need
to evolve gve_turnup to account for queues already having some
unprocessed descriptors in the ring.

Say rxq 4 is being stopped and started via the queue api. Due to gve's
requirement of quiescence, queues 0 through 3 are not processing their
rings while queue 4 is being toggled. Once they are made live, these
queues need to be poked to cause them to check their rings for
descriptors that were written during their brief period of quiescence.

Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_main.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 8e875b598e78..4ab90d3eb1cb 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -2007,6 +2007,13 @@ static void gve_turnup(struct gve_priv *priv)
 			gve_set_itr_coalesce_usecs_dqo(priv, block,
 						       priv->tx_coalesce_usecs);
 		}
+
+		/* Any descs written by the NIC before this barrier will be
+		 * handled by the one-off napi schedule below. Whereas any
+		 * descs after the barrier will generate interrupts.
+ */ + mb(); + napi_schedule(&block->napi); } for (idx = 0; idx < priv->rx_cfg.num_queues; idx++) { int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); @@ -2022,6 +2029,13 @@ static void gve_turnup(struct gve_priv *priv) gve_set_itr_coalesce_usecs_dqo(priv, block, priv->rx_coalesce_usecs); } + + /* Any descs written by the NIC before this barrier will be + * handled by the one-off napi schedule below. Whereas any + * descs after the barrier will generate interrupts. + */ + mb(); + napi_schedule(&block->napi); } gve_set_napi_enabled(priv); From patchwork Thu Apr 18 19:51:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13635390 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EEAF91A0AFA for ; Thu, 18 Apr 2024 19:52:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713469949; cv=none; b=TCV2GM7FqmV0rl/KyIJ+/8llUnRSawjajUnoZ5Mie9Z1nGfcms4aTYWJK/usmTVbbUDs2uDrMZFEIr678cQh/ywOlQpekA5BW1mPOhfESpKiTemIhA4cuVmXpipVPCtTqzeeHtRxbjAxbpc1tCrBbUVvxWZFQvxuD1n3Y+W9gjw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713469949; c=relaxed/simple; bh=9VNZiN/LhNydhx1ESFWqspNdurMS5WG84UMK12gTx1Y=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=UM6k6Kg3JL7G0vjTRC/BmwdiaspxPC8MA7WI6ufYMqdd8pIlMWi89zly70yfgPuxrqwBjj0Gt6GwIQqR6U05sv/0/7QoSW3ayFCWxhUF0uXLhfC3UZzN9Pr0c98bjv+M2AV2UG6AxFmi+GnsjVdKklIVuPiVynVLcGM9PrwlPrI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass 
Date: Thu, 18 Apr 2024 19:51:56 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
References: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-7-shailend@google.com>
Subject: [RFC PATCH net-next 6/9] gve: Avoid rescheduling napi if on wrong cpu
From: Shailend Chand <shailend@google.com>
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com,
 Shailend Chand <shailend@google.com>
X-Patchwork-State: RFC

In order to make possible the implementation of per-queue ndo hooks,
gve_turnup was changed in a previous patch to account for queues already
having some unprocessed descriptors: it does a one-off napi_schedule to
handle them. If conditions of consistent high traffic persist in the
immediate aftermath of this, the poll routine for a queue can be "stuck"
on the cpu on which the ndo hooks ran, instead of the cpu its irq has
affinity with.

This situation is exacerbated by the fact that the ndo hooks for all the
queues are invoked on the same cpu, potentially causing all the napi poll
routines to reside on the same cpu.
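The decision the patch adds to the poll routine can be modeled in plain
userspace C. This is only an illustrative sketch, not driver code: the
function name and harness are made up, and the kernel details (napi core,
irq affinity masks) are reduced to booleans.

```c
#include <stdbool.h>

/* Userspace model of the reschedule decision added to gve's poll routine.
 *
 * Returning the full budget tells the napi core "poll me again on this
 * cpu". Returning anything less lets napi complete and re-arm the irq, so
 * the next interrupt pulls the poll routine back onto the cpu its irq has
 * affinity with.
 */
int poll_return_value(int work_done, int budget,
		      bool reschedule, bool on_home_cpu)
{
	if (reschedule) {
		/* Keep polling here only if this already is the cpu the
		 * queue's irq has affinity with.
		 */
		if (on_home_cpu)
			return budget;
		/* Off the home cpu: make sure we report less than budget
		 * so napi completes and the irq is re-armed.
		 */
		if (work_done == budget)
			work_done--;
	}
	return work_done;
}
```

A busy queue polled on the wrong cpu thus reports `budget - 1`, completes,
and is bounced back to its home cpu by the re-armed interrupt instead of
being rescheduled in place.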
A self-correcting mechanism in the poll method itself solves this
problem.

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve.h      |  1 +
 drivers/net/ethernet/google/gve/gve_main.c | 33 ++++++++++++++++++++--
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index e97633b68e25..9f6a897c87cb 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -610,6 +610,7 @@ struct gve_notify_block {
 	struct gve_priv *priv;
 	struct gve_tx_ring *tx; /* tx rings on this block */
 	struct gve_rx_ring *rx; /* rx rings on this block */
+	u32 irq;
 };

 /* Tracks allowed and current queue settings */
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 4ab90d3eb1cb..c348dff7cca6 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -253,6 +254,18 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
 	return IRQ_HANDLED;
 }

+static int gve_is_napi_on_home_cpu(struct gve_priv *priv, u32 irq)
+{
+	int cpu_curr = smp_processor_id();
+	const struct cpumask *aff_mask;
+
+	aff_mask = irq_get_effective_affinity_mask(irq);
+	if (unlikely(!aff_mask))
+		return 1;
+
+	return cpumask_test_cpu(cpu_curr, aff_mask);
+}
+
 int gve_napi_poll(struct napi_struct *napi, int budget)
 {
 	struct gve_notify_block *block;
@@ -322,8 +335,21 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
 		reschedule |= work_done == budget;
 	}

-	if (reschedule)
-		return budget;
+	if (reschedule) {
+		/* Reschedule by returning budget only if already on the correct
+		 * cpu.
+		 */
+		if (likely(gve_is_napi_on_home_cpu(priv, block->irq)))
+			return budget;
+
+		/* If not on the cpu with which this queue's irq has affinity
+		 * with, we avoid rescheduling napi and arm the irq instead so
+		 * that napi gets rescheduled back eventually onto the right
+		 * cpu.
+		 */
+		if (work_done == budget)
+			work_done--;
+	}

 	if (likely(napi_complete_done(napi, work_done))) {
 		/* Enable interrupts again.
@@ -428,6 +454,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 				  "Failed to receive msix vector %d\n", i);
 			goto abort_with_some_ntfy_blocks;
 		}
+		block->irq = priv->msix_vectors[msix_idx].vector;
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      get_cpu_mask(i % active_cpus));
 		block->irq_db_index = &priv->irq_db_indices[i].index;
@@ -441,6 +468,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
+		block->irq = 0;
 	}
 	kvfree(priv->ntfy_blocks);
 	priv->ntfy_blocks = NULL;
@@ -474,6 +502,7 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
 		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
 				      NULL);
 		free_irq(priv->msix_vectors[msix_idx].vector, block);
+		block->irq = 0;
 	}
 	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
 	kvfree(priv->ntfy_blocks);

From patchwork Thu Apr 18 19:51:57 2024
X-Patchwork-Submitter: Shailend Chand <shailend@google.com>
X-Patchwork-Id: 13635391
X-Patchwork-Delegate: kuba@kernel.org
Date: Thu, 18 Apr 2024 19:51:57 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
References: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-8-shailend@google.com>
Subject: [RFC PATCH net-next 7/9] gve: Reset Rx ring state in the ring-stop funcs
From: Shailend Chand <shailend@google.com>
To: netdev@vger.kernel.org
Cc: almasrymina@google.com,
 davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
 pabeni@redhat.com, willemb@google.com,
 Shailend Chand <shailend@google.com>
X-Patchwork-State: RFC

This does not fix any existing bug. In anticipation of the ndo queue api
hooks that alloc/free/start/stop a single Rx queue, the already existing
per-queue stop functions are being made more robust. Specifically for
this use case:

	rx_queue_n.stop() + rx_queue_n.start()

Note that this is not the sequence used in devmem tcp (the first place
these new ndo hooks would be used). There the use case is:

	new_queue.alloc() + old_queue.stop() + new_queue.start() + old_queue.free()

Signed-off-by: Shailend Chand <shailend@google.com>
---
 drivers/net/ethernet/google/gve/gve_rx.c     |  48 +++++++--
 drivers/net/ethernet/google/gve/gve_rx_dqo.c | 102 +++++++++++++++----
 2 files changed, 120 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index fde7503940f7..1d235caab4c5 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -54,6 +54,41 @@ static void gve_rx_unfill_pages(struct gve_priv *priv,
 	rx->data.page_info = NULL;
 }

+static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
+{
+	ctx->skb_head = NULL;
+	ctx->skb_tail = NULL;
+	ctx->total_size = 0;
+	ctx->frag_cnt = 0;
+	ctx->drop_pkt = false;
+}
+
+static void gve_rx_init_ring_state_gqi(struct gve_rx_ring *rx)
+{
+	rx->desc.seqno = 1;
+	rx->cnt = 0;
+	gve_rx_ctx_clear(&rx->ctx);
+}
+
+static void gve_rx_reset_ring_gqi(struct gve_priv *priv, int idx)
+{
+	struct gve_rx_ring *rx = &priv->rx[idx];
+	const u32 slots = priv->rx_desc_cnt;
+	size_t size;
+
+	/* Reset desc ring */
+	if (rx->desc.desc_ring) {
+		size = slots * sizeof(rx->desc.desc_ring[0]);
+		memset(rx->desc.desc_ring, 0, size);
+	}
+
+	/* Reset q_resources */
+	if (rx->q_resources)
+		memset(rx->q_resources, 0, sizeof(*rx->q_resources));
+
+	gve_rx_init_ring_state_gqi(rx);
+}
+
 void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -63,6 +98,7 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)

 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
+	gve_rx_reset_ring_gqi(priv, idx);
 }

 static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
@@ -226,15 +262,6 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 	return err;
 }

-static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx)
-{
-	ctx->skb_head = NULL;
-	ctx->skb_tail = NULL;
-	ctx->total_size = 0;
-	ctx->frag_cnt = 0;
-	ctx->drop_pkt = false;
-}
-
 void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -313,9 +340,8 @@ static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 		err = -ENOMEM;
 		goto abort_with_q_resources;
 	}
-	rx->cnt = 0;
 	rx->db_threshold = slots / 2;
-	rx->desc.seqno = 1;
+	gve_rx_init_ring_state_gqi(rx);

 	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 	gve_rx_ctx_clear(&rx->ctx);
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 15108407b54f..dc2c6bd92e82 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -211,6 +211,82 @@ static void gve_rx_free_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx)
 	}
 }

+static void gve_rx_init_ring_state_dqo(struct gve_rx_ring *rx,
+				       const u32 buffer_queue_slots,
+				       const u32 completion_queue_slots)
+{
+	int i;
+
+	/* Set buffer queue state */
+	rx->dqo.bufq.mask = buffer_queue_slots - 1;
+	rx->dqo.bufq.head = 0;
+	rx->dqo.bufq.tail = 0;
+
+	/* Set completion queue state */
+	rx->dqo.complq.num_free_slots = completion_queue_slots;
+	rx->dqo.complq.mask = completion_queue_slots - 1;
+	rx->dqo.complq.cur_gen_bit = 0;
+	rx->dqo.complq.head = 0;
+
+	/* Set RX SKB context */
+	rx->ctx.skb_head = NULL;
+	rx->ctx.skb_tail = NULL;
+
+	/* Set up linked list of buffer IDs */
+	if (rx->dqo.buf_states) {
+		for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
+			rx->dqo.buf_states[i].next = i + 1;
+		rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
+	}
+
+	rx->dqo.free_buf_states = 0;
+	rx->dqo.recycled_buf_states.head = -1;
+	rx->dqo.recycled_buf_states.tail = -1;
+	rx->dqo.used_buf_states.head = -1;
+	rx->dqo.used_buf_states.tail = -1;
+}
+
+static void gve_rx_reset_ring_dqo(struct gve_priv *priv, int idx)
+{
+	struct gve_rx_ring *rx = &priv->rx[idx];
+	size_t size;
+	int i;
+
+	const u32 buffer_queue_slots = priv->rx_desc_cnt;
+	const u32 completion_queue_slots = priv->rx_desc_cnt;
+
+	/* Reset buffer queue */
+	if (rx->dqo.bufq.desc_ring) {
+		size = sizeof(rx->dqo.bufq.desc_ring[0]) *
+			buffer_queue_slots;
+		memset(rx->dqo.bufq.desc_ring, 0, size);
+	}
+
+	/* Reset completion queue */
+	if (rx->dqo.complq.desc_ring) {
+		size = sizeof(rx->dqo.complq.desc_ring[0]) *
+			completion_queue_slots;
+		memset(rx->dqo.complq.desc_ring, 0, size);
+	}
+
+	/* Reset q_resources */
+	if (rx->q_resources)
+		memset(rx->q_resources, 0, sizeof(*rx->q_resources));
+
+	/* Reset buf states */
+	if (rx->dqo.buf_states) {
+		for (i = 0; i < rx->dqo.num_buf_states; i++) {
+			struct gve_rx_buf_state_dqo *bs = &rx->dqo.buf_states[i];
+
+			if (bs->page_info.page)
+				gve_free_page_dqo(priv, bs, !rx->dqo.qpl);
+		}
+	}
+
+	gve_rx_init_ring_state_dqo(rx, buffer_queue_slots,
+				   completion_queue_slots);
+}
+
 void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 {
 	int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx);
@@ -220,6 +296,7 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)

 	gve_remove_napi(priv, ntfy_idx);
 	gve_rx_remove_from_block(priv, idx);
+	gve_rx_reset_ring_dqo(priv, idx);
 }

 static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
@@ -275,10 +352,10 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
 	netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx);
 }

-static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx)
+static int gve_rx_alloc_hdr_bufs(struct gve_priv *priv, struct gve_rx_ring *rx,
+				 const u32 buf_count)
 {
 	struct device *hdev = &priv->pdev->dev;
-	int buf_count = rx->dqo.bufq.mask + 1;

 	rx->dqo.hdr_bufs.data = dma_alloc_coherent(hdev, priv->header_buf_size * buf_count,
 						   &rx->dqo.hdr_bufs.addr, GFP_KERNEL);
@@ -303,7 +380,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t size;
-	int i;

 	const u32 buffer_queue_slots = cfg->ring_size;
 	const u32 completion_queue_slots = cfg->ring_size;
@@ -313,11 +389,6 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	memset(rx, 0, sizeof(*rx));
 	rx->gve = priv;
 	rx->q_num = idx;
-	rx->dqo.bufq.mask = buffer_queue_slots - 1;
-	rx->dqo.complq.num_free_slots = completion_queue_slots;
-	rx->dqo.complq.mask = completion_queue_slots - 1;
-	rx->ctx.skb_head = NULL;
-	rx->ctx.skb_tail = NULL;

 	rx->dqo.num_buf_states = cfg->raw_addressing ?
				 min_t(s16, S16_MAX, buffer_queue_slots * 4) :
@@ -330,19 +401,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,

 	/* Allocate header buffers for header-split */
 	if (cfg->enable_header_split)
-		if (gve_rx_alloc_hdr_bufs(priv, rx))
+		if (gve_rx_alloc_hdr_bufs(priv, rx, buffer_queue_slots))
 			goto err;

-	/* Set up linked list of buffer IDs */
-	for (i = 0; i < rx->dqo.num_buf_states - 1; i++)
-		rx->dqo.buf_states[i].next = i + 1;
-
-	rx->dqo.buf_states[rx->dqo.num_buf_states - 1].next = -1;
-	rx->dqo.recycled_buf_states.head = -1;
-	rx->dqo.recycled_buf_states.tail = -1;
-	rx->dqo.used_buf_states.head = -1;
-	rx->dqo.used_buf_states.tail = -1;
-
 	/* Allocate RX completion queue */
 	size = sizeof(rx->dqo.complq.desc_ring[0]) *
 		completion_queue_slots;
@@ -370,6 +431,9 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	if (!rx->q_resources)
 		goto err;

+	gve_rx_init_ring_state_dqo(rx, buffer_queue_slots,
+				   completion_queue_slots);
+
 	return 0;

 err:

From patchwork Thu Apr 18 19:51:58 2024
X-Patchwork-Submitter: Shailend Chand <shailend@google.com>
X-Patchwork-Id: 13635392
X-Patchwork-Delegate: kuba@kernel.org
Date: Thu, 18 Apr 2024 19:51:58 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
References: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-9-shailend@google.com>
Subject: [RFC PATCH net-next 8/9] gve: Account for stopped queues when reading NIC stats
From: Shailend Chand <shailend@google.com>
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com,
 Shailend Chand <shailend@google.com>
X-Patchwork-State: RFC

We now account for the fact that the NIC might send us stats for a subset
of queues. Without this change, gve_get_ethtool_stats might make an
invalid access on the priv->stats_report->stats array.
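The sentinel pattern the patch relies on can be shown with a small
userspace sketch. This is illustrative only, not the driver code: the
function names and the shape of the "stats report" are made up, but the
idea matches the fix, which pre-fills the qid-to-stats-index map with -1 so
queues absent from the NIC report are skipped instead of read out of
bounds.

```c
/* Build the qid-to-stats-index map. Every slot starts as -1 ("no NIC
 * stats for this queue"); only queues that actually appear in the NIC's
 * report overwrite their slot with a valid base index.
 */
void build_qid_map(int *map, int nqueues, const int *reported_qids,
		   const int *reported_bases, int nreported)
{
	int q;

	for (q = 0; q < nqueues; q++)
		map[q] = -1;	/* sentinel: queue missing from report */
	for (q = 0; q < nreported; q++)
		map[reported_qids[q]] = reported_bases[q];
}

/* Returns the base index into the stats report, or -1 meaning the caller
 * must skip the NIC stats for this ring rather than index the report.
 */
int stats_base(const int *map, int ring)
{
	return map[ring];
}
```

Without the -1 initialization, a stopped queue's slot would hold whatever
kmalloc_array left there, and the reader would index the report array with
garbage.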
Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve_ethtool.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 299206d15c73..2a451aba8328 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -181,12 +181,17 @@ gve_get_ethtool_stats(struct net_device *netdev,
 					  sizeof(int), GFP_KERNEL);
 	if (!rx_qid_to_stats_idx)
 		return;
+	for (ring = 0; ring < priv->rx_cfg.num_queues; ring++)
+		rx_qid_to_stats_idx[ring] = -1;
 	tx_qid_to_stats_idx = kmalloc_array(num_tx_queues,
 					    sizeof(int), GFP_KERNEL);
 	if (!tx_qid_to_stats_idx) {
 		kfree(rx_qid_to_stats_idx);
 		return;
 	}
+	for (ring = 0; ring < num_tx_queues; ring++)
+		tx_qid_to_stats_idx[ring] = -1;
+
 	for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0,
 	     rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0,
 	     rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0,
@@ -308,11 +313,11 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			data[i++] = rx->rx_copybreak_pkt;
 			data[i++] = rx->rx_copied_pkt;
 			/* stats from NIC */
-			if (skip_nic_stats) {
+			stats_idx = rx_qid_to_stats_idx[ring];
+			if (skip_nic_stats || stats_idx == -1) {
 				/* skip NIC rx stats */
 				i += NIC_RX_STATS_REPORT_NUM;
 			} else {
-				stats_idx = rx_qid_to_stats_idx[ring];
 				for (j = 0; j < NIC_RX_STATS_REPORT_NUM; j++) {
 					u64 value =
 					be64_to_cpu(report_stats[stats_idx + j].value);
@@ -383,11 +388,11 @@ gve_get_ethtool_stats(struct net_device *netdev,
 			data[i++] = gve_tx_load_event_counter(priv, tx);
 			data[i++] = tx->dma_mapping_error;
 			/* stats from NIC */
-			if (skip_nic_stats) {
+			stats_idx = tx_qid_to_stats_idx[ring];
+			if (skip_nic_stats || stats_idx == -1) {
 				/* skip NIC tx stats */
 				i += NIC_TX_STATS_REPORT_NUM;
 			} else {
-				stats_idx = tx_qid_to_stats_idx[ring];
 				for (j = 0; j < NIC_TX_STATS_REPORT_NUM; j++) {
 					u64 value =
 					be64_to_cpu(report_stats[stats_idx + j].value);

From patchwork Thu Apr 18 19:51:59 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13635393
X-Patchwork-Delegate: kuba@kernel.org
Date: Thu, 18 Apr 2024 19:51:59 +0000
In-Reply-To: <20240418195159.3461151-1-shailend@google.com>
References: <20240418195159.3461151-1-shailend@google.com>
Message-ID: <20240418195159.3461151-10-shailend@google.com>
Subject: [RFC PATCH net-next 9/9] gve: Implement queue api
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: almasrymina@google.com, davem@davemloft.net, edumazet@google.com,
 kuba@kernel.org, pabeni@redhat.com, willemb@google.com, Shailend Chand
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

An api enabling the net stack to reset driver queues is implemented for
gve.

Signed-off-by: Shailend Chand
---
 drivers/net/ethernet/google/gve/gve.h        |   6 +
 drivers/net/ethernet/google/gve/gve_dqo.h    |   6 +
 drivers/net/ethernet/google/gve/gve_main.c   | 143 +++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_rx.c     |  12 +-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  12 +-
 5 files changed, 167 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 9f6a897c87cb..d752e525bde7 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1147,6 +1147,12 @@ bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx);
 void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx);
 int gve_rx_poll(struct gve_notify_block *block, int budget);
 bool gve_rx_work_pending(struct gve_rx_ring *rx);
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx);
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings(struct gve_priv *priv);
 int gve_rx_alloc_rings_gqi(struct gve_priv *priv,
			    struct gve_rx_alloc_rings_cfg *cfg);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index b81584829c40..e83773fb891f 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -44,6 +44,12 @@ void gve_tx_free_rings_dqo(struct gve_priv *priv,
			    struct gve_tx_alloc_rings_cfg *cfg);
 void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx);
 void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx);
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx);
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg);
 int gve_rx_alloc_rings_dqo(struct gve_priv *priv,
			    struct gve_rx_alloc_rings_cfg *cfg);
 void gve_rx_free_rings_dqo(struct gve_priv *priv,
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index c348dff7cca6..5e652958f10f 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "gve.h"
@@ -2070,6 +2071,15 @@ static void gve_turnup(struct gve_priv *priv)
 	gve_set_napi_enabled(priv);
 }

+static void gve_turnup_and_check_status(struct gve_priv *priv)
+{
+	u32 status;
+
+	gve_turnup(priv);
+	status = ioread32be(&priv->reg_bar0->device_status);
+	gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status);
+}
+
 static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct gve_notify_block *block;
@@ -2530,6 +2540,138 @@ static void gve_write_version(u8 __iomem *driver_version_register)
 	writeb('\n', driver_version_register);
 }

+static int gve_rx_queue_stop(struct net_device *dev, int idx,
+			     void **out_per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_ring *rx;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
+		return -ERANGE;
+
+	/* Destroying queue 0 while other queues exist is not supported in DQO */
+	if (!gve_is_gqi(priv) && idx == 0)
+		return -ERANGE;
+
+	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
+	if (!rx)
+		return -ENOMEM;
+	*rx = priv->rx[idx];
+
+	/* Single-queue destruction requires quiescence on all queues */
+	gve_turndown(priv);
+
+	/* This failure will trigger a reset - no need to clean up */
+	err = gve_adminq_destroy_single_rx_queue(priv, idx);
+	if (err) {
+		kvfree(rx);
+		return err;
+	}
+
+	if (gve_is_gqi(priv))
+		gve_rx_stop_ring_gqi(priv, idx);
+	else
+		gve_rx_stop_ring_dqo(priv, idx);
+
+	/* Turn the unstopped queues back up */
+	gve_turnup_and_check_status(priv);
+
+	*out_per_q_mem = rx;
+	return 0;
+}
+
+static void gve_rx_queue_mem_free(struct net_device *dev, void *per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_alloc_rings_cfg cfg = {0};
+	struct gve_rx_ring *rx;
+
+	gve_rx_get_curr_alloc_cfg(priv, &cfg);
+	rx = (struct gve_rx_ring *)per_q_mem;
+	if (!rx)
+		return;
+
+	if (gve_is_gqi(priv))
+		gve_rx_free_ring_gqi(priv, rx, &cfg);
+	else
+		gve_rx_free_ring_dqo(priv, rx, &cfg);
+
+	kvfree(per_q_mem);
+}
+
+static void *gve_rx_queue_mem_alloc(struct net_device *dev, int idx)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_alloc_rings_cfg cfg = {0};
+	struct gve_rx_ring *rx;
+	int err;
+
+	gve_rx_get_curr_alloc_cfg(priv, &cfg);
+	if (idx < 0 || idx >= cfg.qcfg->max_queues)
+		return NULL;
+
+	rx = kvzalloc(sizeof(*rx), GFP_KERNEL);
+	if (!rx)
+		return NULL;
+
+	if (gve_is_gqi(priv))
+		err = gve_rx_alloc_ring_gqi(priv, &cfg, rx, idx);
+	else
+		err = gve_rx_alloc_ring_dqo(priv, &cfg, rx, idx);
+
+	if (err) {
+		kvfree(rx);
+		return NULL;
+	}
+	return rx;
+}
+
+static int gve_rx_queue_start(struct net_device *dev, int idx, void *per_q_mem)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_rx_ring *rx;
+	int err;
+
+	if (!priv->rx)
+		return -EAGAIN;
+	if (idx < 0 || idx >= priv->rx_cfg.max_queues)
+		return -ERANGE;
+	rx = (struct gve_rx_ring *)per_q_mem;
+	priv->rx[idx] = *rx;
+
+	/* Single-queue creation requires quiescence on all queues */
+	gve_turndown(priv);
+
+	if (gve_is_gqi(priv))
+		gve_rx_start_ring_gqi(priv, idx);
+	else
+		gve_rx_start_ring_dqo(priv, idx);
+
+	/* This failure will trigger a reset - no need to clean up */
+	err = gve_adminq_create_single_rx_queue(priv, idx);
+	if (err)
+		return err;
+
+	if (gve_is_gqi(priv))
+		gve_rx_write_doorbell(priv, &priv->rx[idx]);
+	else
+		gve_rx_post_buffers_dqo(&priv->rx[idx]);
+
+	/* Turn the unstopped queues back up */
+	gve_turnup_and_check_status(priv);
+	return 0;
+}
+
+static const struct netdev_queue_mgmt_ops gve_queue_mgmt_ops = {
+	.ndo_queue_mem_alloc	= gve_rx_queue_mem_alloc,
+	.ndo_queue_mem_free	= gve_rx_queue_mem_free,
+	.ndo_queue_start	= gve_rx_queue_start,
+	.ndo_queue_stop		= gve_rx_queue_stop,
+};
+
 static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 {
 	int max_tx_queues, max_rx_queues;
@@ -2584,6 +2726,7 @@ static int gve_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	pci_set_drvdata(pdev, dev);
 	dev->ethtool_ops = &gve_ethtool_ops;
 	dev->netdev_ops = &gve_netdev_ops;
+	dev->queue_mgmt_ops = &gve_queue_mgmt_ops;

 	/* Set default and supported features.
	 *
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 1d235caab4c5..307bf97d4778 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -101,8 +101,8 @@ void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx)
 	gve_rx_reset_ring_gqi(priv, idx);
 }

-static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
-				 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg)
 {
 	struct device *dev = &priv->pdev->dev;
 	u32 slots = rx->mask + 1;
@@ -270,10 +270,10 @@ void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx)
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll);
 }

-static int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
-				 struct gve_rx_alloc_rings_cfg *cfg,
-				 struct gve_rx_ring *rx,
-				 int idx)
+int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
 	u32 slots = cfg->ring_size;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index dc2c6bd92e82..dcbc37118870 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -299,8 +299,8 @@ void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx)
 	gve_rx_reset_ring_dqo(priv, idx);
 }

-static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
-				 struct gve_rx_alloc_rings_cfg *cfg)
+void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct gve_rx_alloc_rings_cfg *cfg)
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t completion_queue_slots;
@@ -373,10 +373,10 @@ void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx)
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
 }

-static int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
-				 struct gve_rx_alloc_rings_cfg *cfg,
-				 struct gve_rx_ring *rx,
-				 int idx)
+int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
+			  struct gve_rx_alloc_rings_cfg *cfg,
+			  struct gve_rx_ring *rx,
+			  int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
 	size_t size;