From patchwork Mon Jan 22 18:26:27 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13526041
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 22 Jan 2024 18:26:27 +0000
In-Reply-To: <20240122182632.1102721-1-shailend@google.com>
References: <20240122182632.1102721-1-shailend@google.com>
Message-ID: <20240122182632.1102721-2-shailend@google.com>
Subject: [PATCH net-next 1/6] gve: Define config structs for queue allocation
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Shailend Chand, Willem de Bruijn, Jeroen de Borst

Queue allocation functions currently can only allocate into priv and
free memory in priv. These new structs would be passed into the queue
functions in a subsequent change to make them capable of returning
newly allocated resources and not just writing them into priv. They
also make it possible to allocate resources for queues with a different
config than that of the currently active queues.

Signed-off-by: Shailend Chand
Reviewed-by: Willem de Bruijn
Reviewed-by: Jeroen de Borst
Reviewed-by: Simon Horman
---
 drivers/net/ethernet/google/gve/gve.h | 49 +++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index b80349154604..00e7f8b06d89 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -622,6 +622,55 @@ struct gve_ptype_lut {
         struct gve_ptype ptypes[GVE_NUM_PTYPES];
 };
 
+/* Parameters for allocating queue page lists */
+struct gve_qpls_alloc_cfg {
+        struct gve_qpl_config *qpl_cfg;
+        struct gve_queue_config *tx_cfg;
+        struct gve_queue_config *rx_cfg;
+
+        u16 num_xdp_queues;
+        bool raw_addressing;
+        bool is_gqi;
+
+        /* Allocated resources are returned here */
+        struct gve_queue_page_list *qpls;
+};
+
+/* Parameters for allocating resources for tx queues */
+struct gve_tx_alloc_rings_cfg {
+        struct gve_queue_config *qcfg;
+
+        /* qpls and qpl_cfg must already be allocated */
+        struct gve_queue_page_list *qpls;
+        struct gve_qpl_config *qpl_cfg;
+
+        u16 ring_size;
+        u16 start_idx;
+        u16 num_rings;
+        bool raw_addressing;
+
+        /* Allocated resources are returned here */
+        struct gve_tx_ring *tx;
+};
+
+/* Parameters for allocating resources for rx queues */
+struct gve_rx_alloc_rings_cfg {
+        /* tx config is also needed to determine QPL ids */
+        struct gve_queue_config *qcfg;
+        struct gve_queue_config *qcfg_tx;
+
+        /* qpls and qpl_cfg must already be allocated */
+        struct gve_queue_page_list *qpls;
+        struct gve_qpl_config *qpl_cfg;
+
+        u16 ring_size;
+        bool raw_addressing;
+        bool enable_header_split;
+
+        /* Allocated resources are returned here */
+        struct gve_rx_ring *rx;
+};
+
 /* GVE_QUEUE_FORMAT_UNSPECIFIED must be zero since 0 is the default value
  * when the entire configure_device_resources command is zeroed out and the
  * queue_format is not specified.
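The "Allocated resources are returned here" comments capture the key idea of
these structs: the caller fills in a config describing what to allocate, and
the allocation routine hands the result back through the config rather than
writing it into the driver's long-lived priv state. A minimal standalone C
sketch of that pattern, using hypothetical toy types instead of the gve
definitions, looks like this:

/* Illustrative sketch only: toy stand-in types, not the gve driver's own. */
#include <stdio.h>
#include <stdlib.h>

struct toy_tx_ring {
        unsigned int size;      /* number of descriptors */
        void *descs;            /* descriptor memory */
};

struct toy_tx_alloc_cfg {
        /* inputs describing what to allocate */
        unsigned int num_rings;
        unsigned int ring_size;

        /* output: allocated resources are returned here */
        struct toy_tx_ring *tx;
};

static int toy_tx_alloc_rings(struct toy_tx_alloc_cfg *cfg)
{
        struct toy_tx_ring *tx;
        unsigned int i;

        tx = calloc(cfg->num_rings, sizeof(*tx));
        if (!tx)
                return -1;

        for (i = 0; i < cfg->num_rings; i++) {
                tx[i].size = cfg->ring_size;
                tx[i].descs = calloc(cfg->ring_size, 64);
                if (!tx[i].descs)
                        goto free_rings;
        }

        /* returned to the caller through the cfg, not written into "priv" */
        cfg->tx = tx;
        return 0;

free_rings:
        while (i--)
                free(tx[i].descs);
        free(tx);
        return -1;
}

int main(void)
{
        struct toy_tx_alloc_cfg cfg = { .num_rings = 4, .ring_size = 256 };
        unsigned int i;

        if (toy_tx_alloc_rings(&cfg))
                return 1;

        printf("allocated %u rings of %u descriptors each\n",
               cfg.num_rings, cfg.tx[0].size);

        for (i = 0; i < cfg.num_rings; i++)
                free(cfg.tx[i].descs);
        free(cfg.tx);
        return 0;
}

Because nothing is written into long-lived state until the caller chooses to
install the result, a failed allocation can leave whatever is currently
active untouched.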
From patchwork Mon Jan 22 18:26:28 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13526042
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 22 Jan 2024 18:26:28 +0000
In-Reply-To: <20240122182632.1102721-1-shailend@google.com>
References: <20240122182632.1102721-1-shailend@google.com>
Message-ID: <20240122182632.1102721-3-shailend@google.com>
Subject: [PATCH net-next 2/6] gve: Refactor napi add and remove functions
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Shailend Chand, Willem de Bruijn, Jeroen de Borst

This change makes the napi poll functions non-static and moves the
gve_(add|remove)_napi functions to gve_utils.c, to make possible future
"start queue" hooks in the datapath files.

Signed-off-by: Shailend Chand
Reviewed-by: Willem de Bruijn
Reviewed-by: Jeroen de Borst
Reviewed-by: Simon Horman
---
 drivers/net/ethernet/google/gve/gve.h       |  3 +++
 drivers/net/ethernet/google/gve/gve_dqo.h   |  2 ++
 drivers/net/ethernet/google/gve/gve_main.c  | 20 +++-----------------
 drivers/net/ethernet/google/gve/gve_utils.c | 15 +++++++++++++++
 drivers/net/ethernet/google/gve/gve_utils.h |  3 +++
 5 files changed, 26 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 00e7f8b06d89..ba6819ac600e 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1085,6 +1085,9 @@ static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
         return gve_xdp_tx_queue_id(priv, 0);
 }
 
+/* gqi napi handler defined in gve_main.c */
+int gve_napi_poll(struct napi_struct *napi, int budget);
+
 /* buffers */
 int gve_alloc_page(struct gve_priv *priv, struct device *dev,
                    struct page **page, dma_addr_t *dma,
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index c36b93f0de15..8058b09f8e3e 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -93,4 +93,6 @@ gve_set_itr_coalesce_usecs_dqo(struct gve_priv *priv,
         gve_write_irq_doorbell_dqo(priv, block,
                                    gve_setup_itr_interval_dqo(usecs));
 }
+
+int gve_napi_poll_dqo(struct napi_struct *napi, int budget);
 #endif /* _GVE_DQO_H_ */
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 619bf63ec935..e07048cd249e 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -22,6 +22,7 @@
 #include "gve_dqo.h"
 #include "gve_adminq.h"
 #include "gve_register.h"
+#include "gve_utils.h"
 
 #define GVE_DEFAULT_RX_COPYBREAK        (256)
 
@@ -252,7 +253,7 @@ static irqreturn_t gve_intr_dqo(int irq, void *arg)
         return IRQ_HANDLED;
 }
 
-static int gve_napi_poll(struct napi_struct *napi, int budget)
+int gve_napi_poll(struct napi_struct *napi, int budget)
 {
         struct gve_notify_block *block;
         __be32 __iomem *irq_doorbell;
@@ -302,7 +303,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
         return work_done;
 }
 
-static int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
+int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
 {
         struct gve_notify_block *block =
                 container_of(napi, struct gve_notify_block, napi);
@@ -581,21 +582,6 @@ static void gve_teardown_device_resources(struct gve_priv *priv)
         gve_clear_device_resources_ok(priv);
 }
 
-static void gve_add_napi(struct gve_priv *priv, int ntfy_idx,
-                         int (*gve_poll)(struct napi_struct *, int))
-{
-        struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
-
-        netif_napi_add(priv->dev, &block->napi, gve_poll);
-}
-
-static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
-{
-        struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
-
-        netif_napi_del(&block->napi);
-}
-
 static int gve_register_xdp_qpls(struct gve_priv *priv)
 {
         int start_id;
diff --git a/drivers/net/ethernet/google/gve/gve_utils.c b/drivers/net/ethernet/google/gve/gve_utils.c
index 26e08d753270..974a75623789 100644
--- a/drivers/net/ethernet/google/gve/gve_utils.c
+++ b/drivers/net/ethernet/google/gve/gve_utils.c
@@ -81,3 +81,18 @@ void gve_dec_pagecnt_bias(struct gve_rx_slot_page_info *page_info)
                 page_ref_add(page_info->page, INT_MAX - pagecount);
         }
 }
+
+void gve_add_napi(struct gve_priv *priv, int ntfy_idx,
+                  int (*gve_poll)(struct napi_struct *, int))
+{
+        struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+        netif_napi_add(priv->dev, &block->napi, gve_poll);
+}
+
+void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
+{
+        struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+        netif_napi_del(&block->napi);
+}
diff --git a/drivers/net/ethernet/google/gve/gve_utils.h b/drivers/net/ethernet/google/gve/gve_utils.h
index 324fd98a6112..924516e9eaae 100644
--- a/drivers/net/ethernet/google/gve/gve_utils.h
+++ b/drivers/net/ethernet/google/gve/gve_utils.h
@@ -23,5 +23,8 @@ struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi,
 /* Decrement pagecnt_bias. Set it back to INT_MAX if it reached zero.
  */
 void gve_dec_pagecnt_bias(struct gve_rx_slot_page_info *page_info);
 
+void gve_add_napi(struct gve_priv *priv, int ntfy_idx,
+                  int (*gve_poll)(struct napi_struct *, int));
+void gve_remove_napi(struct gve_priv *priv, int ntfy_idx);
 #endif /* _GVE_UTILS_H */

From patchwork Mon Jan 22 18:26:29 2024
X-Patchwork-Submitter: Shailend Chand
X-Patchwork-Id: 13526044
X-Patchwork-Delegate: kuba@kernel.org
Date: Mon, 22 Jan 2024 18:26:29 +0000
In-Reply-To: <20240122182632.1102721-1-shailend@google.com>
References: <20240122182632.1102721-1-shailend@google.com>
Message-ID: <20240122182632.1102721-4-shailend@google.com>
Subject: [PATCH net-next 3/6] gve: Switch to config-aware queue allocation
From: Shailend Chand
To: netdev@vger.kernel.org
Cc: davem@davemloft.net, kuba@kernel.org, Shailend Chand, Willem de Bruijn, Jeroen de Borst

The new config-aware functions will help achieve the goal of being able
to allocate resources for new queues while there already are active
queues serving traffic. These new functions work off of arbitrary queue
allocation configs rather than just the currently active config in
priv, and they return the newly allocated resources instead of writing
them into priv.
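As a rough illustration of the flow this enables, the following standalone
sketch (again with hypothetical toy types, not the driver's own APIs)
allocates a complete new set of rings against a new config while the old set
stays in place, and only frees and replaces the old set once the new
allocation has succeeded:

/* Illustrative sketch only: toy types standing in for priv/ring state. */
#include <stdlib.h>

struct toy_ring {
        void *descs;
};

struct toy_alloc_cfg {
        unsigned int num_rings;         /* input */
        struct toy_ring *rings;         /* output */
};

struct toy_priv {
        unsigned int num_rings;
        struct toy_ring *rings;         /* currently active rings */
};

static int toy_alloc_rings(struct toy_alloc_cfg *cfg)
{
        cfg->rings = calloc(cfg->num_rings, sizeof(*cfg->rings));
        return cfg->rings ? 0 : -1;
}

static void toy_free_rings(struct toy_ring *rings)
{
        free(rings);
}

/* Resize the queue set: the active rings are not touched until the
 * allocation against the new config has fully succeeded.
 */
static int toy_resize(struct toy_priv *priv, unsigned int new_num_rings)
{
        struct toy_alloc_cfg new_cfg = { .num_rings = new_num_rings };

        if (toy_alloc_rings(&new_cfg))
                return -1;              /* old rings keep serving traffic */

        toy_free_rings(priv->rings);    /* tear down the old set */
        priv->rings = new_cfg.rings;    /* install the new set */
        priv->num_rings = new_num_rings;
        return 0;
}

int main(void)
{
        struct toy_priv priv = { 0 };

        if (toy_resize(&priv, 8))
                return 1;
        toy_free_rings(priv.rings);
        return 0;
}

The patch below applies this idea to the gve datapaths by threading the
config structs from patch 1 through the tx/rx/QPL allocation paths.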
Signed-off-by: Shailend Chand Reviewed-by: Willem de Bruijn Reviewed-by: Jeroen de Borst Reviewed-by: Simon Horman --- drivers/net/ethernet/google/gve/gve.h | 92 +-- drivers/net/ethernet/google/gve/gve_dqo.h | 16 +- drivers/net/ethernet/google/gve/gve_main.c | 601 +++++++++++-------- drivers/net/ethernet/google/gve/gve_rx.c | 116 ++-- drivers/net/ethernet/google/gve/gve_rx_dqo.c | 91 ++- drivers/net/ethernet/google/gve/gve_tx.c | 128 ++-- drivers/net/ethernet/google/gve/gve_tx_dqo.c | 108 +++- drivers/net/ethernet/google/gve/gve_utils.c | 16 + drivers/net/ethernet/google/gve/gve_utils.h | 2 + 9 files changed, 745 insertions(+), 425 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index ba6819ac600e..fd290f3ad6ec 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -966,14 +966,14 @@ static inline bool gve_is_qpl(struct gve_priv *priv) priv->queue_format == GVE_DQO_QPL_FORMAT; } -/* Returns the number of tx queue page lists - */ -static inline u32 gve_num_tx_qpls(struct gve_priv *priv) +/* Returns the number of tx queue page lists */ +static inline u32 gve_num_tx_qpls(const struct gve_queue_config *tx_cfg, + int num_xdp_queues, + bool is_qpl) { - if (!gve_is_qpl(priv)) + if (!is_qpl) return 0; - - return priv->tx_cfg.num_queues + priv->num_xdp_queues; + return tx_cfg->num_queues + num_xdp_queues; } /* Returns the number of XDP tx queue page lists @@ -986,14 +986,13 @@ static inline u32 gve_num_xdp_qpls(struct gve_priv *priv) return priv->num_xdp_queues; } -/* Returns the number of rx queue page lists - */ -static inline u32 gve_num_rx_qpls(struct gve_priv *priv) +/* Returns the number of rx queue page lists */ +static inline u32 gve_num_rx_qpls(const struct gve_queue_config *rx_cfg, + bool is_qpl) { - if (!gve_is_qpl(priv)) + if (!is_qpl) return 0; - - return priv->rx_cfg.num_queues; + return rx_cfg->num_queues; } static inline u32 gve_tx_qpl_id(struct gve_priv *priv, int tx_qid) @@ -1006,59 +1005,59 @@ static inline u32 gve_rx_qpl_id(struct gve_priv *priv, int rx_qid) return priv->tx_cfg.max_queues + rx_qid; } +/* Returns the index into priv->qpls where a certain rx queue's QPL resides */ +static inline u32 gve_get_rx_qpl_id(const struct gve_queue_config *tx_cfg, int rx_qid) +{ + return tx_cfg->max_queues + rx_qid; +} + static inline u32 gve_tx_start_qpl_id(struct gve_priv *priv) { return gve_tx_qpl_id(priv, 0); } -static inline u32 gve_rx_start_qpl_id(struct gve_priv *priv) +/* Returns the index into priv->qpls where the first rx queue's QPL resides */ +static inline u32 gve_rx_start_qpl_id(const struct gve_queue_config *tx_cfg) { - return gve_rx_qpl_id(priv, 0); + return gve_get_rx_qpl_id(tx_cfg, 0); } -/* Returns a pointer to the next available tx qpl in the list of qpls - */ +/* Returns a pointer to the next available tx qpl in the list of qpls */ static inline -struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_priv *priv, int tx_qid) +struct gve_queue_page_list *gve_assign_tx_qpl(struct gve_tx_alloc_rings_cfg *cfg, + int tx_qid) { - int id = gve_tx_qpl_id(priv, tx_qid); - /* QPL already in use */ - if (test_bit(id, priv->qpl_cfg.qpl_id_map)) + if (test_bit(tx_qid, cfg->qpl_cfg->qpl_id_map)) return NULL; - - set_bit(id, priv->qpl_cfg.qpl_id_map); - return &priv->qpls[id]; + set_bit(tx_qid, cfg->qpl_cfg->qpl_id_map); + return &cfg->qpls[tx_qid]; } -/* Returns a pointer to the next available rx qpl in the list of qpls - */ +/* Returns a pointer to the next available rx qpl in the 
list of qpls */ static inline -struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_priv *priv, int rx_qid) +struct gve_queue_page_list *gve_assign_rx_qpl(struct gve_rx_alloc_rings_cfg *cfg, + int rx_qid) { - int id = gve_rx_qpl_id(priv, rx_qid); - + int id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx_qid); /* QPL already in use */ - if (test_bit(id, priv->qpl_cfg.qpl_id_map)) + if (test_bit(id, cfg->qpl_cfg->qpl_id_map)) return NULL; - - set_bit(id, priv->qpl_cfg.qpl_id_map); - return &priv->qpls[id]; + set_bit(id, cfg->qpl_cfg->qpl_id_map); + return &cfg->qpls[id]; } -/* Unassigns the qpl with the given id - */ -static inline void gve_unassign_qpl(struct gve_priv *priv, int id) +/* Unassigns the qpl with the given id */ +static inline void gve_unassign_qpl(struct gve_qpl_config *qpl_cfg, int id) { - clear_bit(id, priv->qpl_cfg.qpl_id_map); + clear_bit(id, qpl_cfg->qpl_id_map); } -/* Returns the correct dma direction for tx and rx qpls - */ +/* Returns the correct dma direction for tx and rx qpls */ static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv, int id) { - if (id < gve_rx_start_qpl_id(priv)) + if (id < gve_rx_start_qpl_id(&priv->tx_cfg)) return DMA_TO_DEVICE; else return DMA_FROM_DEVICE; @@ -1103,8 +1102,12 @@ int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx, void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid); bool gve_tx_poll(struct gve_notify_block *block, int budget); bool gve_xdp_poll(struct gve_notify_block *block, int budget); -int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings); -void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings); +int gve_tx_alloc_rings_gqi(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg); +void gve_tx_free_rings_gqi(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg); +void gve_tx_start_ring_gqi(struct gve_priv *priv, int idx); +void gve_tx_stop_ring_gqi(struct gve_priv *priv, int idx); u32 gve_tx_load_event_counter(struct gve_priv *priv, struct gve_tx_ring *tx); bool gve_tx_clean_pending(struct gve_priv *priv, struct gve_tx_ring *tx); @@ -1113,7 +1116,12 @@ void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx); int gve_rx_poll(struct gve_notify_block *block, int budget); bool gve_rx_work_pending(struct gve_rx_ring *rx); int gve_rx_alloc_rings(struct gve_priv *priv); -void gve_rx_free_rings_gqi(struct gve_priv *priv); +int gve_rx_alloc_rings_gqi(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg); +void gve_rx_free_rings_gqi(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg); +void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx); +void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx); /* Reset */ void gve_schedule_reset(struct gve_priv *priv); int gve_reset(struct gve_priv *priv, bool attempt_teardown); diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h index 8058b09f8e3e..b81584829c40 100644 --- a/drivers/net/ethernet/google/gve/gve_dqo.h +++ b/drivers/net/ethernet/google/gve/gve_dqo.h @@ -38,10 +38,18 @@ netdev_features_t gve_features_check_dqo(struct sk_buff *skb, netdev_features_t features); bool gve_tx_poll_dqo(struct gve_notify_block *block, bool do_clean); int gve_rx_poll_dqo(struct gve_notify_block *block, int budget); -int gve_tx_alloc_rings_dqo(struct gve_priv *priv); -void gve_tx_free_rings_dqo(struct gve_priv *priv); -int gve_rx_alloc_rings_dqo(struct gve_priv *priv); -void gve_rx_free_rings_dqo(struct gve_priv *priv); +int 
gve_tx_alloc_rings_dqo(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg); +void gve_tx_free_rings_dqo(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg); +void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx); +void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx); +int gve_rx_alloc_rings_dqo(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg); +void gve_rx_free_rings_dqo(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg); +void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx); +void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx); int gve_clean_tx_done_dqo(struct gve_priv *priv, struct gve_tx_ring *tx, struct napi_struct *napi); void gve_rx_post_buffers_dqo(struct gve_rx_ring *rx); diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index e07048cd249e..8049a47c38fc 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -582,61 +582,102 @@ static void gve_teardown_device_resources(struct gve_priv *priv) gve_clear_device_resources_ok(priv); } +static int gve_unregister_qpl(struct gve_priv *priv, u32 i) +{ + int err; + + err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id); + if (err) { + netif_err(priv, drv, priv->dev, + "Failed to unregister queue page list %d\n", + priv->qpls[i].id); + return err; + } + + priv->num_registered_pages -= priv->qpls[i].num_entries; + return 0; +} + +static int gve_register_qpl(struct gve_priv *priv, u32 i) +{ + int num_rx_qpls; + int pages; + int err; + + /* Rx QPLs succeed Tx QPLs in the priv->qpls array. */ + num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); + if (i >= gve_rx_start_qpl_id(&priv->tx_cfg) + num_rx_qpls) { + netif_err(priv, drv, priv->dev, + "Cannot register nonexisting QPL at index %d\n", i); + return -EINVAL; + } + + pages = priv->qpls[i].num_entries; + + if (pages + priv->num_registered_pages > priv->max_registered_pages) { + netif_err(priv, drv, priv->dev, + "Reached max number of registered pages %llu > %llu\n", + pages + priv->num_registered_pages, + priv->max_registered_pages); + return -EINVAL; + } + + err = gve_adminq_register_page_list(priv, &priv->qpls[i]); + if (err) { + netif_err(priv, drv, priv->dev, + "failed to register queue page list %d\n", + priv->qpls[i].id); + /* This failure will trigger a reset - no need to clean + * up + */ + return err; + } + + priv->num_registered_pages += pages; + return 0; +} + static int gve_register_xdp_qpls(struct gve_priv *priv) { int start_id; int err; int i; - start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv)); + start_id = gve_xdp_tx_start_queue_id(priv); for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_adminq_register_page_list(priv, &priv->qpls[i]); - if (err) { - netif_err(priv, drv, priv->dev, - "failed to register queue page list %d\n", - priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean - * up - */ + err = gve_register_qpl(priv, i); + /* This failure will trigger a reset - no need to clean up */ + if (err) return err; - } } return 0; } static int gve_register_qpls(struct gve_priv *priv) { + int num_tx_qpls, num_rx_qpls; int start_id; int err; int i; - start_id = gve_tx_start_qpl_id(priv); - for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) { - err = gve_adminq_register_page_list(priv, &priv->qpls[i]); - if (err) { - netif_err(priv, drv, priv->dev, - "failed to register queue page list %d\n", - priv->qpls[i].id); - /* This 
failure will trigger a reset - no need to clean - * up - */ + num_tx_qpls = gve_num_tx_qpls(&priv->tx_cfg, gve_num_xdp_qpls(priv), + gve_is_qpl(priv)); + num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); + + for (i = 0; i < num_tx_qpls; i++) { + err = gve_register_qpl(priv, i); + if (err) return err; - } } - start_id = gve_rx_start_qpl_id(priv); - for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) { - err = gve_adminq_register_page_list(priv, &priv->qpls[i]); - if (err) { - netif_err(priv, drv, priv->dev, - "failed to register queue page list %d\n", - priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean - * up - */ + /* there might be a gap between the tx and rx qpl ids */ + start_id = gve_rx_start_qpl_id(&priv->tx_cfg); + for (i = 0; i < num_rx_qpls; i++) { + err = gve_register_qpl(priv, start_id + i); + if (err) return err; - } } + return 0; } @@ -646,48 +687,40 @@ static int gve_unregister_xdp_qpls(struct gve_priv *priv) int err; int i; - start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv)); + start_id = gve_xdp_tx_start_queue_id(priv); for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean up */ - if (err) { - netif_err(priv, drv, priv->dev, - "Failed to unregister queue page list %d\n", - priv->qpls[i].id); + err = gve_unregister_qpl(priv, i); + /* This failure will trigger a reset - no need to clean */ + if (err) return err; - } } return 0; } static int gve_unregister_qpls(struct gve_priv *priv) { + int num_tx_qpls, num_rx_qpls; int start_id; int err; int i; - start_id = gve_tx_start_qpl_id(priv); - for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) { - err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean up */ - if (err) { - netif_err(priv, drv, priv->dev, - "Failed to unregister queue page list %d\n", - priv->qpls[i].id); + num_tx_qpls = gve_num_tx_qpls(&priv->tx_cfg, gve_num_xdp_qpls(priv), + gve_is_qpl(priv)); + num_rx_qpls = gve_num_rx_qpls(&priv->rx_cfg, gve_is_qpl(priv)); + + for (i = 0; i < num_tx_qpls; i++) { + err = gve_unregister_qpl(priv, i); + /* This failure will trigger a reset - no need to clean */ + if (err) return err; - } } - start_id = gve_rx_start_qpl_id(priv); - for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) { - err = gve_adminq_unregister_page_list(priv, priv->qpls[i].id); - /* This failure will trigger a reset - no need to clean up */ - if (err) { - netif_err(priv, drv, priv->dev, - "Failed to unregister queue page list %d\n", - priv->qpls[i].id); + start_id = gve_rx_start_qpl_id(&priv->tx_cfg); + for (i = 0; i < num_rx_qpls; i++) { + err = gve_unregister_qpl(priv, start_id + i); + /* This failure will trigger a reset - no need to clean */ + if (err) return err; - } } return 0; } @@ -762,120 +795,124 @@ static int gve_create_rings(struct gve_priv *priv) return 0; } -static void add_napi_init_xdp_sync_stats(struct gve_priv *priv, - int (*napi_poll)(struct napi_struct *napi, - int budget)) +static void init_xdp_sync_stats(struct gve_priv *priv) { int start_id = gve_xdp_tx_start_queue_id(priv); int i; - /* Add xdp tx napi & init sync stats*/ + /* Init stats */ for (i = start_id; i < start_id + priv->num_xdp_queues; i++) { int ntfy_idx = gve_tx_idx_to_ntfy(priv, i); u64_stats_init(&priv->tx[i].statss); priv->tx[i].ntfy_id = ntfy_idx; - gve_add_napi(priv, ntfy_idx, napi_poll); } } 
-static void add_napi_init_sync_stats(struct gve_priv *priv, - int (*napi_poll)(struct napi_struct *napi, - int budget)) +static void gve_init_sync_stats(struct gve_priv *priv) { int i; - /* Add tx napi & init sync stats*/ - for (i = 0; i < gve_num_tx_queues(priv); i++) { - int ntfy_idx = gve_tx_idx_to_ntfy(priv, i); - + for (i = 0; i < priv->tx_cfg.num_queues; i++) u64_stats_init(&priv->tx[i].statss); - priv->tx[i].ntfy_id = ntfy_idx; - gve_add_napi(priv, ntfy_idx, napi_poll); - } - /* Add rx napi & init sync stats*/ - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - int ntfy_idx = gve_rx_idx_to_ntfy(priv, i); + /* Init stats for XDP TX queues */ + init_xdp_sync_stats(priv); + + for (i = 0; i < priv->rx_cfg.num_queues; i++) u64_stats_init(&priv->rx[i].statss); - priv->rx[i].ntfy_id = ntfy_idx; - gve_add_napi(priv, ntfy_idx, napi_poll); +} + +static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg) +{ + cfg->qcfg = &priv->tx_cfg; + cfg->raw_addressing = !gve_is_qpl(priv); + cfg->qpls = priv->qpls; + cfg->qpl_cfg = &priv->qpl_cfg; + cfg->ring_size = priv->tx_desc_cnt; + cfg->start_idx = 0; + cfg->num_rings = gve_num_tx_queues(priv); + cfg->tx = priv->tx; +} + +static void gve_tx_stop_rings(struct gve_priv *priv, int start_id, int num_rings) +{ + int i; + + if (!priv->tx) + return; + + for (i = start_id; i < start_id + num_rings; i++) { + if (gve_is_gqi(priv)) + gve_tx_stop_ring_gqi(priv, i); + else + gve_tx_stop_ring_dqo(priv, i); } } -static void gve_tx_free_rings(struct gve_priv *priv, int start_id, int num_rings) +static void gve_tx_start_rings(struct gve_priv *priv, int start_id, + int num_rings) { - if (gve_is_gqi(priv)) { - gve_tx_free_rings_gqi(priv, start_id, num_rings); - } else { - gve_tx_free_rings_dqo(priv); + int i; + + for (i = start_id; i < start_id + num_rings; i++) { + if (gve_is_gqi(priv)) + gve_tx_start_ring_gqi(priv, i); + else + gve_tx_start_ring_dqo(priv, i); } } static int gve_alloc_xdp_rings(struct gve_priv *priv) { - int start_id; + struct gve_tx_alloc_rings_cfg cfg = {0}; int err = 0; if (!priv->num_xdp_queues) return 0; - start_id = gve_xdp_tx_start_queue_id(priv); - err = gve_tx_alloc_rings(priv, start_id, priv->num_xdp_queues); + gve_tx_get_curr_alloc_cfg(priv, &cfg); + cfg.start_idx = gve_xdp_tx_start_queue_id(priv); + cfg.num_rings = priv->num_xdp_queues; + + err = gve_tx_alloc_rings_gqi(priv, &cfg); if (err) return err; - add_napi_init_xdp_sync_stats(priv, gve_napi_poll); + + gve_tx_start_rings(priv, cfg.start_idx, cfg.num_rings); + init_xdp_sync_stats(priv); return 0; } -static int gve_alloc_rings(struct gve_priv *priv) +static int gve_alloc_rings(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) { int err; - /* Setup tx rings */ - priv->tx = kvcalloc(priv->tx_cfg.max_queues, sizeof(*priv->tx), - GFP_KERNEL); - if (!priv->tx) - return -ENOMEM; - if (gve_is_gqi(priv)) - err = gve_tx_alloc_rings(priv, 0, gve_num_tx_queues(priv)); + err = gve_tx_alloc_rings_gqi(priv, tx_alloc_cfg); else - err = gve_tx_alloc_rings_dqo(priv); + err = gve_tx_alloc_rings_dqo(priv, tx_alloc_cfg); if (err) - goto free_tx; - - /* Setup rx rings */ - priv->rx = kvcalloc(priv->rx_cfg.max_queues, sizeof(*priv->rx), - GFP_KERNEL); - if (!priv->rx) { - err = -ENOMEM; - goto free_tx_queue; - } + return err; if (gve_is_gqi(priv)) - err = gve_rx_alloc_rings(priv); + err = gve_rx_alloc_rings_gqi(priv, rx_alloc_cfg); else - err = gve_rx_alloc_rings_dqo(priv); + err = 
gve_rx_alloc_rings_dqo(priv, rx_alloc_cfg); if (err) - goto free_rx; - - if (gve_is_gqi(priv)) - add_napi_init_sync_stats(priv, gve_napi_poll); - else - add_napi_init_sync_stats(priv, gve_napi_poll_dqo); + goto free_tx; return 0; -free_rx: - kvfree(priv->rx); - priv->rx = NULL; -free_tx_queue: - gve_tx_free_rings(priv, 0, gve_num_tx_queues(priv)); free_tx: - kvfree(priv->tx); - priv->tx = NULL; + if (gve_is_gqi(priv)) + gve_tx_free_rings_gqi(priv, tx_alloc_cfg); + else + gve_tx_free_rings_dqo(priv, tx_alloc_cfg); return err; } @@ -923,52 +960,30 @@ static int gve_destroy_rings(struct gve_priv *priv) return 0; } -static void gve_rx_free_rings(struct gve_priv *priv) -{ - if (gve_is_gqi(priv)) - gve_rx_free_rings_gqi(priv); - else - gve_rx_free_rings_dqo(priv); -} - static void gve_free_xdp_rings(struct gve_priv *priv) { - int ntfy_idx, start_id; - int i; + struct gve_tx_alloc_rings_cfg cfg = {0}; + + gve_tx_get_curr_alloc_cfg(priv, &cfg); + cfg.start_idx = gve_xdp_tx_start_queue_id(priv); + cfg.num_rings = priv->num_xdp_queues; - start_id = gve_xdp_tx_start_queue_id(priv); if (priv->tx) { - for (i = start_id; i < start_id + priv->num_xdp_queues; i++) { - ntfy_idx = gve_tx_idx_to_ntfy(priv, i); - gve_remove_napi(priv, ntfy_idx); - } - gve_tx_free_rings(priv, start_id, priv->num_xdp_queues); + gve_tx_stop_rings(priv, cfg.start_idx, cfg.num_rings); + gve_tx_free_rings_gqi(priv, &cfg); } } -static void gve_free_rings(struct gve_priv *priv) +static void gve_free_rings(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *tx_cfg, + struct gve_rx_alloc_rings_cfg *rx_cfg) { - int num_tx_queues = gve_num_tx_queues(priv); - int ntfy_idx; - int i; - - if (priv->tx) { - for (i = 0; i < num_tx_queues; i++) { - ntfy_idx = gve_tx_idx_to_ntfy(priv, i); - gve_remove_napi(priv, ntfy_idx); - } - gve_tx_free_rings(priv, 0, num_tx_queues); - kvfree(priv->tx); - priv->tx = NULL; - } - if (priv->rx) { - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - ntfy_idx = gve_rx_idx_to_ntfy(priv, i); - gve_remove_napi(priv, ntfy_idx); - } - gve_rx_free_rings(priv); - kvfree(priv->rx); - priv->rx = NULL; + if (gve_is_gqi(priv)) { + gve_tx_free_rings_gqi(priv, tx_cfg); + gve_rx_free_rings_gqi(priv, rx_cfg); + } else { + gve_tx_free_rings_dqo(priv, tx_cfg); + gve_rx_free_rings_dqo(priv, rx_cfg); } } @@ -990,21 +1005,13 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev, return 0; } -static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id, - int pages) +static int gve_alloc_queue_page_list(struct gve_priv *priv, + struct gve_queue_page_list *qpl, + u32 id, int pages) { - struct gve_queue_page_list *qpl = &priv->qpls[id]; int err; int i; - if (pages + priv->num_registered_pages > priv->max_registered_pages) { - netif_err(priv, drv, priv->dev, - "Reached max number of registered pages %llu > %llu\n", - pages + priv->num_registered_pages, - priv->max_registered_pages); - return -EINVAL; - } - qpl->id = id; qpl->num_entries = 0; qpl->pages = kvcalloc(pages, sizeof(*qpl->pages), GFP_KERNEL); @@ -1025,7 +1032,6 @@ static int gve_alloc_queue_page_list(struct gve_priv *priv, u32 id, return -ENOMEM; qpl->num_entries++; } - priv->num_registered_pages += pages; return 0; } @@ -1039,9 +1045,10 @@ void gve_free_page(struct device *dev, struct page *page, dma_addr_t dma, put_page(page); } -static void gve_free_queue_page_list(struct gve_priv *priv, u32 id) +static void gve_free_queue_page_list(struct gve_priv *priv, + struct gve_queue_page_list *qpl, + int id) { - struct gve_queue_page_list *qpl = &priv->qpls[id]; 
int i; if (!qpl->pages) @@ -1058,19 +1065,30 @@ static void gve_free_queue_page_list(struct gve_priv *priv, u32 id) free_pages: kvfree(qpl->pages); qpl->pages = NULL; - priv->num_registered_pages -= qpl->num_entries; } -static int gve_alloc_xdp_qpls(struct gve_priv *priv) +static void gve_free_n_qpls(struct gve_priv *priv, + struct gve_queue_page_list *qpls, + int start_id, + int num_qpls) +{ + int i; + + for (i = start_id; i < start_id + num_qpls; i++) + gve_free_queue_page_list(priv, &qpls[i], i); +} + +static int gve_alloc_n_qpls(struct gve_priv *priv, + struct gve_queue_page_list *qpls, + int page_count, + int start_id, + int num_qpls) { - int start_id; - int i, j; int err; + int i; - start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv)); - for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) { - err = gve_alloc_queue_page_list(priv, i, - priv->tx_pages_per_qpl); + for (i = start_id; i < start_id + num_qpls; i++) { + err = gve_alloc_queue_page_list(priv, &qpls[i], i, page_count); if (err) goto free_qpls; } @@ -1078,95 +1096,89 @@ static int gve_alloc_xdp_qpls(struct gve_priv *priv) return 0; free_qpls: - for (j = start_id; j <= i; j++) - gve_free_queue_page_list(priv, j); + /* Must include the failing QPL too for gve_alloc_queue_page_list fails + * without cleaning up. + */ + gve_free_n_qpls(priv, qpls, start_id, i - start_id + 1); return err; } -static int gve_alloc_qpls(struct gve_priv *priv) +static int gve_alloc_qpls(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *cfg) { - int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues; + int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues; + int rx_start_id, tx_num_qpls, rx_num_qpls; + struct gve_queue_page_list *qpls; int page_count; - int start_id; - int i, j; int err; - if (!gve_is_qpl(priv)) + if (cfg->raw_addressing) return 0; - priv->qpls = kvcalloc(max_queues, sizeof(*priv->qpls), GFP_KERNEL); - if (!priv->qpls) + qpls = kvcalloc(max_queues, sizeof(*qpls), GFP_KERNEL); + if (!qpls) return -ENOMEM; - start_id = gve_tx_start_qpl_id(priv); - page_count = priv->tx_pages_per_qpl; - for (i = start_id; i < start_id + gve_num_tx_qpls(priv); i++) { - err = gve_alloc_queue_page_list(priv, i, - page_count); - if (err) - goto free_qpls; + cfg->qpl_cfg->qpl_map_size = BITS_TO_LONGS(max_queues) * + sizeof(unsigned long) * BITS_PER_BYTE; + cfg->qpl_cfg->qpl_id_map = kvcalloc(BITS_TO_LONGS(max_queues), + sizeof(unsigned long), GFP_KERNEL); + if (!cfg->qpl_cfg->qpl_id_map) { + err = -ENOMEM; + goto free_qpl_array; } - start_id = gve_rx_start_qpl_id(priv); + /* Allocate TX QPLs */ + page_count = priv->tx_pages_per_qpl; + tx_num_qpls = gve_num_tx_qpls(cfg->tx_cfg, cfg->num_xdp_queues, + gve_is_qpl(priv)); + err = gve_alloc_n_qpls(priv, qpls, page_count, 0, tx_num_qpls); + if (err) + goto free_qpl_map; + /* Allocate RX QPLs */ + rx_start_id = gve_rx_start_qpl_id(cfg->tx_cfg); /* For GQI_QPL number of pages allocated have 1:1 relationship with * number of descriptors. For DQO, number of pages required are * more than descriptors (because of out of order completions). */ - page_count = priv->queue_format == GVE_GQI_QPL_FORMAT ? 
- priv->rx_data_slot_cnt : priv->rx_pages_per_qpl; - for (i = start_id; i < start_id + gve_num_rx_qpls(priv); i++) { - err = gve_alloc_queue_page_list(priv, i, - page_count); - if (err) - goto free_qpls; - } - - priv->qpl_cfg.qpl_map_size = BITS_TO_LONGS(max_queues) * - sizeof(unsigned long) * BITS_PER_BYTE; - priv->qpl_cfg.qpl_id_map = kvcalloc(BITS_TO_LONGS(max_queues), - sizeof(unsigned long), GFP_KERNEL); - if (!priv->qpl_cfg.qpl_id_map) { - err = -ENOMEM; - goto free_qpls; - } + page_count = cfg->is_gqi ? priv->rx_data_slot_cnt : priv->rx_pages_per_qpl; + rx_num_qpls = gve_num_rx_qpls(cfg->rx_cfg, gve_is_qpl(priv)); + err = gve_alloc_n_qpls(priv, qpls, page_count, rx_start_id, rx_num_qpls); + if (err) + goto free_tx_qpls; + cfg->qpls = qpls; return 0; -free_qpls: - for (j = 0; j <= i; j++) - gve_free_queue_page_list(priv, j); - kvfree(priv->qpls); - priv->qpls = NULL; +free_tx_qpls: + gve_free_n_qpls(priv, qpls, 0, tx_num_qpls); +free_qpl_map: + kvfree(cfg->qpl_cfg->qpl_id_map); + cfg->qpl_cfg->qpl_id_map = NULL; +free_qpl_array: + kvfree(qpls); return err; } -static void gve_free_xdp_qpls(struct gve_priv *priv) -{ - int start_id; - int i; - - start_id = gve_tx_qpl_id(priv, gve_xdp_tx_start_queue_id(priv)); - for (i = start_id; i < start_id + gve_num_xdp_qpls(priv); i++) - gve_free_queue_page_list(priv, i); -} - -static void gve_free_qpls(struct gve_priv *priv) +static void gve_free_qpls(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *cfg) { - int max_queues = priv->tx_cfg.max_queues + priv->rx_cfg.max_queues; + int max_queues = cfg->tx_cfg->max_queues + cfg->rx_cfg->max_queues; + struct gve_queue_page_list *qpls = cfg->qpls; int i; - if (!priv->qpls) + if (!qpls) return; - kvfree(priv->qpl_cfg.qpl_id_map); - priv->qpl_cfg.qpl_id_map = NULL; + kvfree(cfg->qpl_cfg->qpl_id_map); + cfg->qpl_cfg->qpl_id_map = NULL; for (i = 0; i < max_queues; i++) - gve_free_queue_page_list(priv, i); + gve_free_queue_page_list(priv, &qpls[i], i); - kvfree(priv->qpls); - priv->qpls = NULL; + kvfree(qpls); + cfg->qpls = NULL; } /* Use this to schedule a reset when the device is capable of continuing @@ -1277,8 +1289,72 @@ static void gve_drain_page_cache(struct gve_priv *priv) } } +static void gve_qpls_get_curr_alloc_cfg(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *cfg) +{ + cfg->raw_addressing = !gve_is_qpl(priv); + cfg->is_gqi = gve_is_gqi(priv); + cfg->num_xdp_queues = priv->num_xdp_queues; + cfg->qpl_cfg = &priv->qpl_cfg; + cfg->tx_cfg = &priv->tx_cfg; + cfg->rx_cfg = &priv->rx_cfg; + cfg->qpls = priv->qpls; +} + +static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg) +{ + cfg->qcfg = &priv->rx_cfg; + cfg->qcfg_tx = &priv->tx_cfg; + cfg->raw_addressing = !gve_is_qpl(priv); + cfg->qpls = priv->qpls; + cfg->qpl_cfg = &priv->qpl_cfg; + cfg->ring_size = priv->rx_desc_cnt; + cfg->rx = priv->rx; +} + +static void gve_get_curr_alloc_cfgs(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *qpls_alloc_cfg, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +{ + gve_qpls_get_curr_alloc_cfg(priv, qpls_alloc_cfg); + gve_tx_get_curr_alloc_cfg(priv, tx_alloc_cfg); + gve_rx_get_curr_alloc_cfg(priv, rx_alloc_cfg); +} + +static void gve_rx_start_rings(struct gve_priv *priv, int num_rings) +{ + int i; + + for (i = 0; i < num_rings; i++) { + if (gve_is_gqi(priv)) + gve_rx_start_ring_gqi(priv, i); + else + gve_rx_start_ring_dqo(priv, i); + } +} + +static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings) +{ + int i; 
+ + if (!priv->rx) + return; + + for (i = 0; i < num_rings; i++) { + if (gve_is_gqi(priv)) + gve_rx_stop_ring_gqi(priv, i); + else + gve_rx_stop_ring_dqo(priv, i); + } +} + static int gve_open(struct net_device *dev) { + struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; + struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; + struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(dev); int err; @@ -1287,14 +1363,22 @@ static int gve_open(struct net_device *dev) else priv->num_xdp_queues = 0; - err = gve_alloc_qpls(priv); + gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + err = gve_alloc_qpls(priv, &qpls_alloc_cfg); if (err) return err; - - err = gve_alloc_rings(priv); + priv->qpls = qpls_alloc_cfg.qpls; + tx_alloc_cfg.qpls = priv->qpls; + rx_alloc_cfg.qpls = priv->qpls; + err = gve_alloc_rings(priv, &tx_alloc_cfg, &rx_alloc_cfg); if (err) goto free_qpls; + gve_tx_start_rings(priv, 0, tx_alloc_cfg.num_rings); + gve_rx_start_rings(priv, rx_alloc_cfg.qcfg->num_queues); + gve_init_sync_stats(priv); + err = netif_set_real_num_tx_queues(dev, priv->tx_cfg.num_queues); if (err) goto free_rings; @@ -1333,9 +1417,11 @@ static int gve_open(struct net_device *dev) return 0; free_rings: - gve_free_rings(priv); + gve_tx_stop_rings(priv, 0, tx_alloc_cfg.num_rings); + gve_rx_stop_rings(priv, rx_alloc_cfg.qcfg->num_queues); + gve_free_rings(priv, &tx_alloc_cfg, &rx_alloc_cfg); free_qpls: - gve_free_qpls(priv); + gve_free_qpls(priv, &qpls_alloc_cfg); return err; reset: @@ -1354,6 +1440,9 @@ static int gve_open(struct net_device *dev) static int gve_close(struct net_device *dev) { + struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; + struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; + struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(dev); int err; @@ -1372,8 +1461,14 @@ static int gve_close(struct net_device *dev) del_timer_sync(&priv->stats_report_timer); gve_unreg_xdp_info(priv); - gve_free_rings(priv); - gve_free_qpls(priv); + + gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + gve_tx_stop_rings(priv, 0, tx_alloc_cfg.num_rings); + gve_rx_stop_rings(priv, rx_alloc_cfg.qcfg->num_queues); + gve_free_rings(priv, &tx_alloc_cfg, &rx_alloc_cfg); + gve_free_qpls(priv, &qpls_alloc_cfg); + priv->interface_down_cnt++; return 0; @@ -1390,8 +1485,11 @@ static int gve_close(struct net_device *dev) static int gve_remove_xdp_queues(struct gve_priv *priv) { + int qpl_start_id; int err; + qpl_start_id = gve_xdp_tx_start_queue_id(priv); + err = gve_destroy_xdp_rings(priv); if (err) return err; @@ -1402,18 +1500,22 @@ static int gve_remove_xdp_queues(struct gve_priv *priv) gve_unreg_xdp_info(priv); gve_free_xdp_rings(priv); - gve_free_xdp_qpls(priv); + + gve_free_n_qpls(priv, priv->qpls, qpl_start_id, gve_num_xdp_qpls(priv)); priv->num_xdp_queues = 0; return 0; } static int gve_add_xdp_queues(struct gve_priv *priv) { + int start_id; int err; - priv->num_xdp_queues = priv->tx_cfg.num_queues; + priv->num_xdp_queues = priv->rx_cfg.num_queues; - err = gve_alloc_xdp_qpls(priv); + start_id = gve_xdp_tx_start_queue_id(priv); + err = gve_alloc_n_qpls(priv, priv->qpls, priv->tx_pages_per_qpl, + start_id, gve_num_xdp_qpls(priv)); if (err) goto err; @@ -1438,7 +1540,7 @@ static int gve_add_xdp_queues(struct gve_priv *priv) free_xdp_rings: gve_free_xdp_rings(priv); free_xdp_qpls: - gve_free_xdp_qpls(priv); + gve_free_n_qpls(priv, priv->qpls, start_id, gve_num_xdp_qpls(priv)); err: priv->num_xdp_queues = 0; return 
err; @@ -2037,6 +2139,8 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) goto err; } + priv->num_registered_pages = 0; + if (skip_describe_device) goto setup_device; @@ -2066,7 +2170,6 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) if (!gve_is_gqi(priv)) netif_set_tso_max_size(priv->dev, GVE_DQO_TX_MAX); - priv->num_registered_pages = 0; priv->rx_copybreak = GVE_DEFAULT_RX_COPYBREAK; /* gvnic has one Notification Block per MSI-x vector, except for the * management vector diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 7a8dc5386fff..c294a1595b6a 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -23,7 +23,9 @@ static void gve_rx_free_buffer(struct device *dev, gve_free_page(dev, page_info->page, dma, DMA_FROM_DEVICE); } -static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx) +static void gve_rx_unfill_pages(struct gve_priv *priv, + struct gve_rx_ring *rx, + struct gve_rx_alloc_rings_cfg *cfg) { u32 slots = rx->mask + 1; int i; @@ -36,7 +38,7 @@ static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx) for (i = 0; i < slots; i++) page_ref_sub(rx->data.page_info[i].page, rx->data.page_info[i].pagecnt_bias - 1); - gve_unassign_qpl(priv, rx->data.qpl->id); + gve_unassign_qpl(cfg->qpl_cfg, rx->data.qpl->id); rx->data.qpl = NULL; for (i = 0; i < rx->qpl_copy_pool_mask + 1; i++) { @@ -49,16 +51,26 @@ static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx) rx->data.page_info = NULL; } -static void gve_rx_free_ring(struct gve_priv *priv, int idx) +void gve_rx_stop_ring_gqi(struct gve_priv *priv, int idx) +{ + int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); + + if (!gve_rx_was_added_to_block(priv, idx)) + return; + + gve_remove_napi(priv, ntfy_idx); + gve_rx_remove_from_block(priv, idx); +} + +static void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx, + struct gve_rx_alloc_rings_cfg *cfg) { - struct gve_rx_ring *rx = &priv->rx[idx]; struct device *dev = &priv->pdev->dev; u32 slots = rx->mask + 1; + int idx = rx->q_num; size_t bytes; - gve_rx_remove_from_block(priv, idx); - - bytes = sizeof(struct gve_rx_desc) * priv->rx_desc_cnt; + bytes = sizeof(struct gve_rx_desc) * cfg->ring_size; dma_free_coherent(dev, bytes, rx->desc.desc_ring, rx->desc.bus); rx->desc.desc_ring = NULL; @@ -66,7 +78,7 @@ static void gve_rx_free_ring(struct gve_priv *priv, int idx) rx->q_resources, rx->q_resources_bus); rx->q_resources = NULL; - gve_rx_unfill_pages(priv, rx); + gve_rx_unfill_pages(priv, rx, cfg); bytes = sizeof(*rx->data.data_ring) * slots; dma_free_coherent(dev, bytes, rx->data.data_ring, @@ -108,7 +120,8 @@ static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev, return 0; } -static int gve_prefill_rx_pages(struct gve_rx_ring *rx) +static int gve_rx_prefill_pages(struct gve_rx_ring *rx, + struct gve_rx_alloc_rings_cfg *cfg) { struct gve_priv *priv = rx->gve; u32 slots; @@ -127,7 +140,7 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx) return -ENOMEM; if (!rx->data.raw_addressing) { - rx->data.qpl = gve_assign_rx_qpl(priv, rx->q_num); + rx->data.qpl = gve_assign_rx_qpl(cfg, rx->q_num); if (!rx->data.qpl) { kvfree(rx->data.page_info); rx->data.page_info = NULL; @@ -185,7 +198,7 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx) page_ref_sub(rx->data.page_info[i].page, rx->data.page_info[i].pagecnt_bias - 1); - 
gve_unassign_qpl(priv, rx->data.qpl->id); + gve_unassign_qpl(cfg->qpl_cfg, rx->data.qpl->id); rx->data.qpl = NULL; return err; @@ -207,13 +220,23 @@ static void gve_rx_ctx_clear(struct gve_rx_ctx *ctx) ctx->drop_pkt = false; } -static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) +void gve_rx_start_ring_gqi(struct gve_priv *priv, int idx) +{ + int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); + + gve_rx_add_to_block(priv, idx); + gve_add_napi(priv, ntfy_idx, gve_napi_poll); +} + +static int gve_rx_alloc_ring_gqi(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg, + struct gve_rx_ring *rx, + int idx) { - struct gve_rx_ring *rx = &priv->rx[idx]; struct device *hdev = &priv->pdev->dev; + u32 slots = priv->rx_data_slot_cnt; int filled_pages; size_t bytes; - u32 slots; int err; netif_dbg(priv, drv, priv->dev, "allocating rx ring\n"); @@ -223,9 +246,8 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) rx->gve = priv; rx->q_num = idx; - slots = priv->rx_data_slot_cnt; rx->mask = slots - 1; - rx->data.raw_addressing = priv->queue_format == GVE_GQI_RDA_FORMAT; + rx->data.raw_addressing = cfg->raw_addressing; /* alloc rx data ring */ bytes = sizeof(*rx->data.data_ring) * slots; @@ -246,7 +268,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) goto abort_with_slots; } - filled_pages = gve_prefill_rx_pages(rx); + filled_pages = gve_rx_prefill_pages(rx, cfg); if (filled_pages < 0) { err = -ENOMEM; goto abort_with_copy_pool; @@ -269,7 +291,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) (unsigned long)rx->data.data_bus); /* alloc rx desc ring */ - bytes = sizeof(struct gve_rx_desc) * priv->rx_desc_cnt; + bytes = sizeof(struct gve_rx_desc) * cfg->ring_size; rx->desc.desc_ring = dma_alloc_coherent(hdev, bytes, &rx->desc.bus, GFP_KERNEL); if (!rx->desc.desc_ring) { @@ -277,15 +299,11 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) goto abort_with_q_resources; } rx->cnt = 0; - rx->db_threshold = priv->rx_desc_cnt / 2; + rx->db_threshold = slots / 2; rx->desc.seqno = 1; - /* Allocating half-page buffers allows page-flipping which is faster - * than copying or allocating new pages. 
- */ rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE; gve_rx_ctx_clear(&rx->ctx); - gve_rx_add_to_block(priv, idx); return 0; @@ -294,7 +312,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) rx->q_resources, rx->q_resources_bus); rx->q_resources = NULL; abort_filled: - gve_rx_unfill_pages(priv, rx); + gve_rx_unfill_pages(priv, rx, cfg); abort_with_copy_pool: kvfree(rx->qpl_copy_pool); rx->qpl_copy_pool = NULL; @@ -306,36 +324,58 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) return err; } -int gve_rx_alloc_rings(struct gve_priv *priv) +int gve_rx_alloc_rings_gqi(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg) { + struct gve_rx_ring *rx; int err = 0; - int i; + int i, j; - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - err = gve_rx_alloc_ring(priv, i); + if (!cfg->raw_addressing && !cfg->qpls) { + netif_err(priv, drv, priv->dev, + "Cannot alloc QPL ring before allocing QPLs\n"); + return -EINVAL; + } + + rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), + GFP_KERNEL); + if (!rx) + return -ENOMEM; + + for (i = 0; i < cfg->qcfg->num_queues; i++) { + err = gve_rx_alloc_ring_gqi(priv, cfg, &rx[i], i); if (err) { netif_err(priv, drv, priv->dev, "Failed to alloc rx ring=%d: err=%d\n", i, err); - break; + goto cleanup; } } - /* Unallocate if there was an error */ - if (err) { - int j; - for (j = 0; j < i; j++) - gve_rx_free_ring(priv, j); - } + cfg->rx = rx; + return 0; + +cleanup: + for (j = 0; j < i; j++) + gve_rx_free_ring_gqi(priv, &rx[j], cfg); + kvfree(rx); return err; } -void gve_rx_free_rings_gqi(struct gve_priv *priv) +void gve_rx_free_rings_gqi(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg) { + struct gve_rx_ring *rx = cfg->rx; int i; - for (i = 0; i < priv->rx_cfg.num_queues; i++) - gve_rx_free_ring(priv, i); + if (!rx) + return; + + for (i = 0; i < cfg->qcfg->num_queues; i++) + gve_rx_free_ring_gqi(priv, &rx[i], cfg); + + kvfree(rx); + cfg->rx = NULL; } void gve_rx_write_doorbell(struct gve_priv *priv, struct gve_rx_ring *rx) diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c index f281e42a7ef9..8e6aeb5b3ed4 100644 --- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c @@ -199,20 +199,30 @@ static int gve_alloc_page_dqo(struct gve_rx_ring *rx, return 0; } -static void gve_rx_free_ring_dqo(struct gve_priv *priv, int idx) +void gve_rx_stop_ring_dqo(struct gve_priv *priv, int idx) +{ + int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); + + if (!gve_rx_was_added_to_block(priv, idx)) + return; + + gve_remove_napi(priv, ntfy_idx); + gve_rx_remove_from_block(priv, idx); +} + +static void gve_rx_free_ring_dqo(struct gve_priv *priv, struct gve_rx_ring *rx, + struct gve_rx_alloc_rings_cfg *cfg) { - struct gve_rx_ring *rx = &priv->rx[idx]; struct device *hdev = &priv->pdev->dev; size_t completion_queue_slots; size_t buffer_queue_slots; + int idx = rx->q_num; size_t size; int i; completion_queue_slots = rx->dqo.complq.mask + 1; buffer_queue_slots = rx->dqo.bufq.mask + 1; - gve_rx_remove_from_block(priv, idx); - if (rx->q_resources) { dma_free_coherent(hdev, sizeof(*rx->q_resources), rx->q_resources, rx->q_resources_bus); @@ -226,7 +236,7 @@ static void gve_rx_free_ring_dqo(struct gve_priv *priv, int idx) gve_free_page_dqo(priv, bs, !rx->dqo.qpl); } if (rx->dqo.qpl) { - gve_unassign_qpl(priv, rx->dqo.qpl->id); + gve_unassign_qpl(cfg->qpl_cfg, rx->dqo.qpl->id); rx->dqo.qpl = NULL; } @@ -251,17 +261,26 @@ static void 
gve_rx_free_ring_dqo(struct gve_priv *priv, int idx) netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx); } -static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx) +void gve_rx_start_ring_dqo(struct gve_priv *priv, int idx) +{ + int ntfy_idx = gve_rx_idx_to_ntfy(priv, idx); + + gve_rx_add_to_block(priv, idx); + gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo); +} + +static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg, + struct gve_rx_ring *rx, + int idx) { - struct gve_rx_ring *rx = &priv->rx[idx]; struct device *hdev = &priv->pdev->dev; size_t size; int i; - const u32 buffer_queue_slots = - priv->queue_format == GVE_DQO_RDA_FORMAT ? - priv->options_dqo_rda.rx_buff_ring_entries : priv->rx_desc_cnt; - const u32 completion_queue_slots = priv->rx_desc_cnt; + const u32 buffer_queue_slots = cfg->raw_addressing ? + priv->options_dqo_rda.rx_buff_ring_entries : cfg->ring_size; + const u32 completion_queue_slots = cfg->ring_size; netif_dbg(priv, drv, priv->dev, "allocating rx ring DQO\n"); @@ -274,7 +293,7 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx) rx->ctx.skb_head = NULL; rx->ctx.skb_tail = NULL; - rx->dqo.num_buf_states = priv->queue_format == GVE_DQO_RDA_FORMAT ? + rx->dqo.num_buf_states = cfg->raw_addressing ? min_t(s16, S16_MAX, buffer_queue_slots * 4) : priv->rx_pages_per_qpl; rx->dqo.buf_states = kvcalloc(rx->dqo.num_buf_states, @@ -308,8 +327,8 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx) if (!rx->dqo.bufq.desc_ring) goto err; - if (priv->queue_format != GVE_DQO_RDA_FORMAT) { - rx->dqo.qpl = gve_assign_rx_qpl(priv, rx->q_num); + if (!cfg->raw_addressing) { + rx->dqo.qpl = gve_assign_rx_qpl(cfg, rx->q_num); if (!rx->dqo.qpl) goto err; rx->dqo.next_qpl_page_idx = 0; @@ -320,12 +339,10 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx) if (!rx->q_resources) goto err; - gve_rx_add_to_block(priv, idx); - return 0; err: - gve_rx_free_ring_dqo(priv, idx); + gve_rx_free_ring_dqo(priv, rx, cfg); return -ENOMEM; } @@ -337,13 +354,26 @@ void gve_rx_write_doorbell_dqo(const struct gve_priv *priv, int queue_idx) iowrite32(rx->dqo.bufq.tail, &priv->db_bar2[index]); } -int gve_rx_alloc_rings_dqo(struct gve_priv *priv) +int gve_rx_alloc_rings_dqo(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg) { - int err = 0; + struct gve_rx_ring *rx; + int err; int i; - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - err = gve_rx_alloc_ring_dqo(priv, i); + if (!cfg->raw_addressing && !cfg->qpls) { + netif_err(priv, drv, priv->dev, + "Cannot alloc QPL ring before allocing QPLs\n"); + return -EINVAL; + } + + rx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_rx_ring), + GFP_KERNEL); + if (!rx) + return -ENOMEM; + + for (i = 0; i < cfg->qcfg->num_queues; i++) { + err = gve_rx_alloc_ring_dqo(priv, cfg, &rx[i], i); if (err) { netif_err(priv, drv, priv->dev, "Failed to alloc rx ring=%d: err=%d\n", @@ -352,21 +382,30 @@ int gve_rx_alloc_rings_dqo(struct gve_priv *priv) } } + cfg->rx = rx; return 0; err: for (i--; i >= 0; i--) - gve_rx_free_ring_dqo(priv, i); - + gve_rx_free_ring_dqo(priv, &rx[i], cfg); + kvfree(rx); return err; } -void gve_rx_free_rings_dqo(struct gve_priv *priv) +void gve_rx_free_rings_dqo(struct gve_priv *priv, + struct gve_rx_alloc_rings_cfg *cfg) { + struct gve_rx_ring *rx = cfg->rx; int i; - for (i = 0; i < priv->rx_cfg.num_queues; i++) - gve_rx_free_ring_dqo(priv, i); + if (!rx) + return; + + for (i = 0; i < cfg->qcfg->num_queues; i++) + gve_rx_free_ring_dqo(priv, 
&rx[i], cfg); + + kvfree(rx); + cfg->rx = NULL; } void gve_rx_post_buffers_dqo(struct gve_rx_ring *rx) diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index 07ba124780df..4b9853adc113 100644 --- a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -196,29 +196,36 @@ static int gve_clean_xdp_done(struct gve_priv *priv, struct gve_tx_ring *tx, static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx, u32 to_do, bool try_to_wake); -static void gve_tx_free_ring(struct gve_priv *priv, int idx) +void gve_tx_stop_ring_gqi(struct gve_priv *priv, int idx) { + int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx); struct gve_tx_ring *tx = &priv->tx[idx]; + + if (!gve_tx_was_added_to_block(priv, idx)) + return; + + gve_remove_napi(priv, ntfy_idx); + gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); + netdev_tx_reset_queue(tx->netdev_txq); + gve_tx_remove_from_block(priv, idx); +} + +static void gve_tx_free_ring_gqi(struct gve_priv *priv, struct gve_tx_ring *tx, + struct gve_tx_alloc_rings_cfg *cfg) +{ struct device *hdev = &priv->pdev->dev; + int idx = tx->q_num; size_t bytes; u32 slots; - gve_tx_remove_from_block(priv, idx); slots = tx->mask + 1; - if (tx->q_num < priv->tx_cfg.num_queues) { - gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); - netdev_tx_reset_queue(tx->netdev_txq); - } else { - gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt); - } - dma_free_coherent(hdev, sizeof(*tx->q_resources), tx->q_resources, tx->q_resources_bus); tx->q_resources = NULL; if (!tx->raw_addressing) { gve_tx_fifo_release(priv, &tx->tx_fifo); - gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); + gve_unassign_qpl(cfg->qpl_cfg, tx->tx_fifo.qpl->id); tx->tx_fifo.qpl = NULL; } @@ -232,11 +239,23 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx) netif_dbg(priv, drv, priv->dev, "freed tx queue %d\n", idx); } -static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) +void gve_tx_start_ring_gqi(struct gve_priv *priv, int idx) { + int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx); struct gve_tx_ring *tx = &priv->tx[idx]; + + gve_tx_add_to_block(priv, idx); + + tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx); + gve_add_napi(priv, ntfy_idx, gve_napi_poll); +} + +static int gve_tx_alloc_ring_gqi(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg, + struct gve_tx_ring *tx, + int idx) +{ struct device *hdev = &priv->pdev->dev; - u32 slots = priv->tx_desc_cnt; size_t bytes; /* Make sure everything is zeroed to start */ @@ -245,23 +264,23 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) spin_lock_init(&tx->xdp_lock); tx->q_num = idx; - tx->mask = slots - 1; + tx->mask = cfg->ring_size - 1; /* alloc metadata */ - tx->info = vcalloc(slots, sizeof(*tx->info)); + tx->info = vcalloc(cfg->ring_size, sizeof(*tx->info)); if (!tx->info) return -ENOMEM; /* alloc tx queue */ - bytes = sizeof(*tx->desc) * slots; + bytes = sizeof(*tx->desc) * cfg->ring_size; tx->desc = dma_alloc_coherent(hdev, bytes, &tx->bus, GFP_KERNEL); if (!tx->desc) goto abort_with_info; - tx->raw_addressing = priv->queue_format == GVE_GQI_RDA_FORMAT; - tx->dev = &priv->pdev->dev; + tx->raw_addressing = cfg->raw_addressing; + tx->dev = hdev; if (!tx->raw_addressing) { - tx->tx_fifo.qpl = gve_assign_tx_qpl(priv, idx); + tx->tx_fifo.qpl = gve_assign_tx_qpl(cfg, idx); if (!tx->tx_fifo.qpl) goto abort_with_desc; /* map Tx FIFO */ @@ -277,12 +296,6 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) if (!tx->q_resources) 
goto abort_with_fifo; - netif_dbg(priv, drv, priv->dev, "tx[%d]->bus=%lx\n", idx, - (unsigned long)tx->bus); - if (idx < priv->tx_cfg.num_queues) - tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx); - gve_tx_add_to_block(priv, idx); - return 0; abort_with_fifo: @@ -290,7 +303,7 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) gve_tx_fifo_release(priv, &tx->tx_fifo); abort_with_qpl: if (!tx->raw_addressing) - gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); + gve_unassign_qpl(cfg->qpl_cfg, tx->tx_fifo.qpl->id); abort_with_desc: dma_free_coherent(hdev, bytes, tx->desc, tx->bus); tx->desc = NULL; @@ -300,36 +313,73 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) return -ENOMEM; } -int gve_tx_alloc_rings(struct gve_priv *priv, int start_id, int num_rings) +int gve_tx_alloc_rings_gqi(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg) { + struct gve_tx_ring *tx = cfg->tx; int err = 0; - int i; + int i, j; + + if (!cfg->raw_addressing && !cfg->qpls) { + netif_err(priv, drv, priv->dev, + "Cannot alloc QPL ring before allocing QPLs\n"); + return -EINVAL; + } + + if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { + netif_err(priv, drv, priv->dev, + "Cannot alloc more than the max num of Tx rings\n"); + return -EINVAL; + } + + if (cfg->start_idx == 0) { + tx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_tx_ring), + GFP_KERNEL); + if (!tx) + return -ENOMEM; + } else if (!tx) { + netif_err(priv, drv, priv->dev, + "Cannot alloc tx rings from a nonzero start idx without tx array\n"); + return -EINVAL; + } - for (i = start_id; i < start_id + num_rings; i++) { - err = gve_tx_alloc_ring(priv, i); + for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) { + err = gve_tx_alloc_ring_gqi(priv, cfg, &tx[i], i); if (err) { netif_err(priv, drv, priv->dev, "Failed to alloc tx ring=%d: err=%d\n", i, err); - break; + goto cleanup; } } - /* Unallocate if there was an error */ - if (err) { - int j; - for (j = start_id; j < i; j++) - gve_tx_free_ring(priv, j); - } + cfg->tx = tx; + return 0; + +cleanup: + for (j = 0; j < i; j++) + gve_tx_free_ring_gqi(priv, &tx[j], cfg); + if (cfg->start_idx == 0) + kvfree(tx); return err; } -void gve_tx_free_rings_gqi(struct gve_priv *priv, int start_id, int num_rings) +void gve_tx_free_rings_gqi(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg) { + struct gve_tx_ring *tx = cfg->tx; int i; - for (i = start_id; i < start_id + num_rings; i++) - gve_tx_free_ring(priv, i); + if (!tx) + return; + + for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) + gve_tx_free_ring_gqi(priv, &tx[i], cfg); + + if (cfg->start_idx == 0) { + kvfree(tx); + cfg->tx = NULL; + } } /* gve_tx_avail - Calculates the number of slots available in the ring diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c index f59c4710f118..bc34b6cd3a3e 100644 --- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c +++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c @@ -188,13 +188,27 @@ static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx) } } -static void gve_tx_free_ring_dqo(struct gve_priv *priv, int idx) +void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx) { + int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx); struct gve_tx_ring *tx = &priv->tx[idx]; - struct device *hdev = &priv->pdev->dev; - size_t bytes; + if (!gve_tx_was_added_to_block(priv, idx)) + return; + + gve_remove_napi(priv, ntfy_idx); + gve_clean_tx_done_dqo(priv, tx, /*napi=*/NULL); + 
netdev_tx_reset_queue(tx->netdev_txq); + gve_tx_clean_pending_packets(tx); gve_tx_remove_from_block(priv, idx); +} + +static void gve_tx_free_ring_dqo(struct gve_priv *priv, struct gve_tx_ring *tx, + struct gve_tx_alloc_rings_cfg *cfg) +{ + struct device *hdev = &priv->pdev->dev; + int idx = tx->q_num; + size_t bytes; if (tx->q_resources) { dma_free_coherent(hdev, sizeof(*tx->q_resources), @@ -223,7 +237,7 @@ static void gve_tx_free_ring_dqo(struct gve_priv *priv, int idx) tx->dqo.tx_qpl_buf_next = NULL; if (tx->dqo.qpl) { - gve_unassign_qpl(priv, tx->dqo.qpl->id); + gve_unassign_qpl(cfg->qpl_cfg, tx->dqo.qpl->id); tx->dqo.qpl = NULL; } @@ -253,9 +267,22 @@ static int gve_tx_qpl_buf_init(struct gve_tx_ring *tx) return 0; } -static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, int idx) +void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx) { + int ntfy_idx = gve_tx_idx_to_ntfy(priv, idx); struct gve_tx_ring *tx = &priv->tx[idx]; + + gve_tx_add_to_block(priv, idx); + + tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx); + gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo); +} + +static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg, + struct gve_tx_ring *tx, + int idx) +{ struct device *hdev = &priv->pdev->dev; int num_pending_packets; size_t bytes; @@ -263,12 +290,11 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, int idx) memset(tx, 0, sizeof(*tx)); tx->q_num = idx; - tx->dev = &priv->pdev->dev; - tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx); + tx->dev = hdev; atomic_set_release(&tx->dqo_compl.hw_tx_head, 0); /* Queue sizes must be a power of 2 */ - tx->mask = priv->tx_desc_cnt - 1; + tx->mask = cfg->ring_size - 1; tx->dqo.complq_mask = priv->queue_format == GVE_DQO_RDA_FORMAT ? priv->options_dqo_rda.tx_comp_ring_entries - 1 : tx->mask; @@ -327,8 +353,8 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, int idx) if (!tx->q_resources) goto err; - if (gve_is_qpl(priv)) { - tx->dqo.qpl = gve_assign_tx_qpl(priv, idx); + if (!cfg->raw_addressing) { + tx->dqo.qpl = gve_assign_tx_qpl(cfg, idx); if (!tx->dqo.qpl) goto err; @@ -336,22 +362,45 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv, int idx) goto err; } - gve_tx_add_to_block(priv, idx); - return 0; err: - gve_tx_free_ring_dqo(priv, idx); + gve_tx_free_ring_dqo(priv, tx, cfg); return -ENOMEM; } -int gve_tx_alloc_rings_dqo(struct gve_priv *priv) +int gve_tx_alloc_rings_dqo(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg) { + struct gve_tx_ring *tx = cfg->tx; int err = 0; - int i; + int i, j; - for (i = 0; i < priv->tx_cfg.num_queues; i++) { - err = gve_tx_alloc_ring_dqo(priv, i); + if (!cfg->raw_addressing && !cfg->qpls) { + netif_err(priv, drv, priv->dev, + "Cannot alloc QPL ring before allocing QPLs\n"); + return -EINVAL; + } + + if (cfg->start_idx + cfg->num_rings > cfg->qcfg->max_queues) { + netif_err(priv, drv, priv->dev, + "Cannot alloc more than the max num of Tx rings\n"); + return -EINVAL; + } + + if (cfg->start_idx == 0) { + tx = kvcalloc(cfg->qcfg->max_queues, sizeof(struct gve_tx_ring), + GFP_KERNEL); + if (!tx) + return -ENOMEM; + } else if (!tx) { + netif_err(priv, drv, priv->dev, + "Cannot alloc tx rings from a nonzero start idx without tx array\n"); + return -EINVAL; + } + + for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) { + err = gve_tx_alloc_ring_dqo(priv, cfg, &tx[i], i); if (err) { netif_err(priv, drv, priv->dev, "Failed to alloc tx ring=%d: err=%d\n", @@ -360,27 +409,32 @@ int gve_tx_alloc_rings_dqo(struct 
gve_priv *priv) } } + cfg->tx = tx; return 0; err: - for (i--; i >= 0; i--) - gve_tx_free_ring_dqo(priv, i); - + for (j = 0; j < i; j++) + gve_tx_free_ring_dqo(priv, &tx[j], cfg); + if (cfg->start_idx == 0) + kvfree(tx); return err; } -void gve_tx_free_rings_dqo(struct gve_priv *priv) +void gve_tx_free_rings_dqo(struct gve_priv *priv, + struct gve_tx_alloc_rings_cfg *cfg) { + struct gve_tx_ring *tx = cfg->tx; int i; - for (i = 0; i < priv->tx_cfg.num_queues; i++) { - struct gve_tx_ring *tx = &priv->tx[i]; + if (!tx) + return; - gve_clean_tx_done_dqo(priv, tx, /*napi=*/NULL); - netdev_tx_reset_queue(tx->netdev_txq); - gve_tx_clean_pending_packets(tx); + for (i = cfg->start_idx; i < cfg->start_idx + cfg->num_rings; i++) + gve_tx_free_ring_dqo(priv, &tx[i], cfg); - gve_tx_free_ring_dqo(priv, i); + if (cfg->start_idx == 0) { + kvfree(tx); + cfg->tx = NULL; } } diff --git a/drivers/net/ethernet/google/gve/gve_utils.c b/drivers/net/ethernet/google/gve/gve_utils.c index 974a75623789..535b1796b91d 100644 --- a/drivers/net/ethernet/google/gve/gve_utils.c +++ b/drivers/net/ethernet/google/gve/gve_utils.c @@ -8,6 +8,14 @@ #include "gve_adminq.h" #include "gve_utils.h" +bool gve_tx_was_added_to_block(struct gve_priv *priv, int queue_idx) +{ + struct gve_notify_block *block = + &priv->ntfy_blocks[gve_tx_idx_to_ntfy(priv, queue_idx)]; + + return block->tx != NULL; +} + void gve_tx_remove_from_block(struct gve_priv *priv, int queue_idx) { struct gve_notify_block *block = @@ -30,6 +38,14 @@ void gve_tx_add_to_block(struct gve_priv *priv, int queue_idx) queue_idx); } +bool gve_rx_was_added_to_block(struct gve_priv *priv, int queue_idx) +{ + struct gve_notify_block *block = + &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_idx)]; + + return block->rx != NULL; +} + void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx) { struct gve_notify_block *block = diff --git a/drivers/net/ethernet/google/gve/gve_utils.h b/drivers/net/ethernet/google/gve/gve_utils.h index 924516e9eaae..277921a629f7 100644 --- a/drivers/net/ethernet/google/gve/gve_utils.h +++ b/drivers/net/ethernet/google/gve/gve_utils.h @@ -11,9 +11,11 @@ #include "gve.h" +bool gve_tx_was_added_to_block(struct gve_priv *priv, int queue_idx); void gve_tx_remove_from_block(struct gve_priv *priv, int queue_idx); void gve_tx_add_to_block(struct gve_priv *priv, int queue_idx); +bool gve_rx_was_added_to_block(struct gve_priv *priv, int queue_idx); void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx); void gve_rx_add_to_block(struct gve_priv *priv, int queue_idx); From patchwork Mon Jan 22 18:26:30 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13526043 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8CF5D4EB3F for ; Mon, 22 Jan 2024 18:27:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1705948042; cv=none; b=si7pg5SlErpQfUzEKvx7kwXDv6d/sXE0tCcGGMvWniRqF7OVQIoTCI0YHYySfkDQsr8Kvnp7fAF/yYIKBCYS9xajjm7b+YLaIikaTcc2SptgL6KiUEDcypmtpXjDq+g8npCgroII7frUgoMblFzabUdk9hr30O8aI3gvVw0zM+A= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1705948042; 
Date: Mon, 22 Jan 2024 18:26:30 +0000 In-Reply-To: <20240122182632.1102721-1-shailend@google.com> X-Mailing-List: netdev@vger.kernel.org Mime-Version: 1.0 References: <20240122182632.1102721-1-shailend@google.com> Message-ID: <20240122182632.1102721-5-shailend@google.com> Subject: [PATCH net-next 4/6] gve: Refactor gve_open and gve_close From: Shailend Chand To:
netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, Shailend Chand , Willem de Bruijn , Jeroen de Borst X-Patchwork-Delegate: kuba@kernel.org gve_open is rewritten to be composed of two funcs: gve_queues_mem_alloc and gve_queues_start. The former only allocates queue resources without doing anything to install the queues, which is taken up by the latter. Similarly gve_close is split into gve_queues_stop and gve_queues_mem_free. Separating the acts of queue resource allocation and making the queue become live help with subsequent changes that aim to not take down the datapath when applying new configurations. Signed-off-by: Shailend Chand Reviewed-by: Willem de Bruijn Reviewed-by: Jeroen de Borst --- drivers/net/ethernet/google/gve/gve_main.c | 159 +++++++++++++++------ 1 file changed, 119 insertions(+), 40 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 8049a47c38fc..3515484e4cea 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1350,45 +1350,99 @@ static void gve_rx_stop_rings(struct gve_priv *priv, int num_rings) } } -static int gve_open(struct net_device *dev) +static void gve_queues_mem_free(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *qpls_alloc_cfg, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +{ + gve_free_rings(priv, tx_alloc_cfg, rx_alloc_cfg); + gve_free_qpls(priv, qpls_alloc_cfg); +} + +static int gve_queues_mem_alloc(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *qpls_alloc_cfg, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +{ + int err; + + err = gve_alloc_qpls(priv, qpls_alloc_cfg); + if (err) { + netif_err(priv, drv, priv->dev, "Failed to alloc QPLs\n"); + return err; + } + tx_alloc_cfg->qpls = qpls_alloc_cfg->qpls; + rx_alloc_cfg->qpls = qpls_alloc_cfg->qpls; + err = gve_alloc_rings(priv, tx_alloc_cfg, rx_alloc_cfg); + if (err) { + netif_err(priv, drv, priv->dev, "Failed to alloc rings\n"); + goto free_qpls; + } + + return 0; + +free_qpls: + gve_free_qpls(priv, qpls_alloc_cfg); + return err; +} + +static void gve_queues_mem_remove(struct gve_priv *priv) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; - struct gve_priv *priv = netdev_priv(dev); + + gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + gve_queues_mem_free(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + priv->qpls = NULL; + priv->tx = NULL; + priv->rx = NULL; +} + +/* The passed-in queue memory is stored into priv and the queues are made live. + * No memory is allocated. Passed-in memory is freed on errors. 
+ */ +static int gve_queues_start(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *qpls_alloc_cfg, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +{ + struct net_device *dev = priv->dev; int err; + /* Record new resources into priv */ + priv->qpls = qpls_alloc_cfg->qpls; + priv->tx = tx_alloc_cfg->tx; + priv->rx = rx_alloc_cfg->rx; + + /* Record new configs into priv */ + priv->qpl_cfg = *qpls_alloc_cfg->qpl_cfg; + priv->tx_cfg = *tx_alloc_cfg->qcfg; + priv->rx_cfg = *rx_alloc_cfg->qcfg; + priv->tx_desc_cnt = tx_alloc_cfg->ring_size; + priv->rx_desc_cnt = rx_alloc_cfg->ring_size; + if (priv->xdp_prog) priv->num_xdp_queues = priv->rx_cfg.num_queues; else priv->num_xdp_queues = 0; - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); - err = gve_alloc_qpls(priv, &qpls_alloc_cfg); - if (err) - return err; - priv->qpls = qpls_alloc_cfg.qpls; - tx_alloc_cfg.qpls = priv->qpls; - rx_alloc_cfg.qpls = priv->qpls; - err = gve_alloc_rings(priv, &tx_alloc_cfg, &rx_alloc_cfg); - if (err) - goto free_qpls; - - gve_tx_start_rings(priv, 0, tx_alloc_cfg.num_rings); - gve_rx_start_rings(priv, rx_alloc_cfg.qcfg->num_queues); + gve_tx_start_rings(priv, 0, tx_alloc_cfg->num_rings); + gve_rx_start_rings(priv, rx_alloc_cfg->qcfg->num_queues); gve_init_sync_stats(priv); err = netif_set_real_num_tx_queues(dev, priv->tx_cfg.num_queues); if (err) - goto free_rings; + goto stop_and_free_rings; err = netif_set_real_num_rx_queues(dev, priv->rx_cfg.num_queues); if (err) - goto free_rings; + goto stop_and_free_rings; err = gve_reg_xdp_info(priv, dev); if (err) - goto free_rings; + goto stop_and_free_rings; err = gve_register_qpls(priv); if (err) @@ -1416,29 +1470,22 @@ static int gve_open(struct net_device *dev) priv->interface_up_cnt++; return 0; -free_rings: - gve_tx_stop_rings(priv, 0, tx_alloc_cfg.num_rings); - gve_rx_stop_rings(priv, rx_alloc_cfg.qcfg->num_queues); - gve_free_rings(priv, &tx_alloc_cfg, &rx_alloc_cfg); -free_qpls: - gve_free_qpls(priv, &qpls_alloc_cfg); - return err; - reset: - /* This must have been called from a reset due to the rtnl lock - * so just return at this point. - */ if (gve_get_reset_in_progress(priv)) - return err; - /* Otherwise reset before returning */ + goto stop_and_free_rings; gve_reset_and_teardown(priv, true); /* if this fails there is nothing we can do so just ignore the return */ gve_reset_recovery(priv, false); /* return the original error */ return err; +stop_and_free_rings: + gve_tx_stop_rings(priv, 0, gve_num_tx_queues(priv)); + gve_rx_stop_rings(priv, priv->rx_cfg.num_queues); + gve_queues_mem_remove(priv); + return err; } -static int gve_close(struct net_device *dev) +static int gve_open(struct net_device *dev) { struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; @@ -1446,7 +1493,30 @@ static int gve_close(struct net_device *dev) struct gve_priv *priv = netdev_priv(dev); int err; - netif_carrier_off(dev); + gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + + err = gve_queues_mem_alloc(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + if (err) + return err; + + /* No need to free on error: ownership of resources is lost after + * calling gve_queues_start. 
+ */ + err = gve_queues_start(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + if (err) + return err; + + return 0; +} + +static int gve_queues_stop(struct gve_priv *priv) +{ + int err; + + netif_carrier_off(priv->dev); if (gve_get_device_rings_ok(priv)) { gve_turndown(priv); gve_drain_page_cache(priv); @@ -1462,12 +1532,8 @@ static int gve_close(struct net_device *dev) gve_unreg_xdp_info(priv); - gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, - &tx_alloc_cfg, &rx_alloc_cfg); - gve_tx_stop_rings(priv, 0, tx_alloc_cfg.num_rings); - gve_rx_stop_rings(priv, rx_alloc_cfg.qcfg->num_queues); - gve_free_rings(priv, &tx_alloc_cfg, &rx_alloc_cfg); - gve_free_qpls(priv, &qpls_alloc_cfg); + gve_tx_stop_rings(priv, 0, gve_num_tx_queues(priv)); + gve_rx_stop_rings(priv, priv->rx_cfg.num_queues); priv->interface_down_cnt++; return 0; @@ -1483,6 +1549,19 @@ static int gve_close(struct net_device *dev) return gve_reset_recovery(priv, false); } +static int gve_close(struct net_device *dev) +{ + struct gve_priv *priv = netdev_priv(dev); + int err; + + err = gve_queues_stop(priv); + if (err) + return err; + + gve_queues_mem_remove(priv); + return 0; +} + static int gve_remove_xdp_queues(struct gve_priv *priv) { int qpl_start_id; From patchwork Mon Jan 22 18:26:31 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13526045 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yb1-f202.google.com (mail-yb1-f202.google.com [209.85.219.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id EE8A94F1E9 for ; Mon, 22 Jan 2024 18:27:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.219.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1705948043; cv=none; b=eX0Ov0cC8ZCYu8IJZ8dMz5UP74jTNisNV/6Uff9kNMYA4xRHsc9iip9xdQqxnJEt16De6F7ZEe7p+B9x42D2mbvCqRm7pCwD3cqJPY4zGa8ffM6gULGqh9Dk4FlKTN8EONOEDqHFT1tmN0YWqMC0i0k1Fzx493OoS0kSA5GKBj4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1705948043; c=relaxed/simple; bh=7mmgC/+B6cq+Ipqa7gj8UbT1rFFuY2CkQ/djnd69pBU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=tdxVoUQHFpzBp5ek/7y5U2VMAGxbqzvp8ygo6wzc4lJmjyMO/iNahpQLwIe3qCib2XppJt6n59lZ7zFPt9CChZ9c57HTSTucMYPaShA8NX8k9ONIcmPryV/LTcCXKMJXf3QlXeWhI3DUt69WCvUQwHNQevNyVMyM7n17UPKpXuo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=VLL8wLbW; arc=none smtp.client-ip=209.85.219.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="VLL8wLbW" Received: by mail-yb1-f202.google.com with SMTP id 3f1490d57ef6-dbdfb8fed1bso4583826276.2 for ; Mon, 22 Jan 2024 10:27:21 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1705948041; x=1706552841; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to 
Date: Mon, 22 Jan 2024 18:26:31 +0000 In-Reply-To: <20240122182632.1102721-1-shailend@google.com> X-Mailing-List: netdev@vger.kernel.org Mime-Version: 1.0 References: <20240122182632.1102721-1-shailend@google.com> Message-ID: <20240122182632.1102721-6-shailend@google.com> Subject: [PATCH net-next 5/6] gve: Alloc before freeing when adjusting queues From: Shailend Chand To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, Shailend Chand , Willem de Bruijn , Jeroen de Borst X-Patchwork-Delegate: kuba@kernel.org Previously, existing queues were being freed before the resources for the new queues were being allocated. This would take down the interface if someone were to attempt to change queue counts under a resource crunch.
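For readability, the ordering this patch establishes can be condensed as below. This is an illustrative sketch distilled from the gve_adjust_config() hunk in the diff that follows (error logging and the final gve_turndown() fallback are elided); it is not additional driver code.

static int gve_adjust_config(struct gve_priv *priv,
			     struct gve_qpls_alloc_cfg *qpls_alloc_cfg,
			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
{
	int err;

	/* Allocate resources for the new configuration first; if this
	 * fails, the currently running queues are left untouched.
	 */
	err = gve_queues_mem_alloc(priv, qpls_alloc_cfg,
				   tx_alloc_cfg, rx_alloc_cfg);
	if (err)
		return err;

	/* Only then tear down the device and free the old resources. */
	err = gve_close(priv->dev);
	if (err) {
		gve_queues_mem_free(priv, qpls_alloc_cfg,
				    tx_alloc_cfg, rx_alloc_cfg);
		return err;
	}

	/* Bring the device back up on the new resources; gve_queues_start()
	 * owns them from here on, even if it fails.
	 */
	return gve_queues_start(priv, qpls_alloc_cfg,
				tx_alloc_cfg, rx_alloc_cfg);
}

Both gve_adjust_queues() in this patch and gve_set_features() in the next one funnel through this helper, so an allocation failure no longer takes down the running interface.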
Signed-off-by: Shailend Chand Reviewed-by: Willem de Bruijn Reviewed-by: Jeroen de Borst --- drivers/net/ethernet/google/gve/gve_main.c | 89 ++++++++++++++++------ 1 file changed, 67 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 3515484e4cea..78c9cdf1511f 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1869,42 +1869,87 @@ static int gve_xdp(struct net_device *dev, struct netdev_bpf *xdp) } } +static int gve_adjust_config(struct gve_priv *priv, + struct gve_qpls_alloc_cfg *qpls_alloc_cfg, + struct gve_tx_alloc_rings_cfg *tx_alloc_cfg, + struct gve_rx_alloc_rings_cfg *rx_alloc_cfg) +{ + int err; + + /* Allocate resources for the new confiugration */ + err = gve_queues_mem_alloc(priv, qpls_alloc_cfg, + tx_alloc_cfg, rx_alloc_cfg); + if (err) { + netif_err(priv, drv, priv->dev, + "Adjust config failed to alloc new queues"); + return err; + } + + /* Teardown the device and free existing resources */ + err = gve_close(priv->dev); + if (err) { + netif_err(priv, drv, priv->dev, + "Adjust config failed to close old queues"); + gve_queues_mem_free(priv, qpls_alloc_cfg, + tx_alloc_cfg, rx_alloc_cfg); + return err; + } + + /* Bring the device back up again with the new resources. */ + err = gve_queues_start(priv, qpls_alloc_cfg, + tx_alloc_cfg, rx_alloc_cfg); + if (err) { + netif_err(priv, drv, priv->dev, + "Adjust config failed to start new queues, !!! DISABLING ALL QUEUES !!!\n"); + /* No need to free on error: ownership of resources is lost after + * calling gve_queues_start. + */ + gve_turndown(priv); + return err; + } + + return 0; +} + int gve_adjust_queues(struct gve_priv *priv, struct gve_queue_config new_rx_config, struct gve_queue_config new_tx_config) { + struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; + struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; + struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; + struct gve_qpl_config new_qpl_cfg; int err; - if (netif_carrier_ok(priv->dev)) { - /* To make this process as simple as possible we teardown the - * device, set the new configuration, and then bring the device - * up again. - */ - err = gve_close(priv->dev); - /* we have already tried to reset in close, - * just fail at this point - */ - if (err) - return err; - priv->tx_cfg = new_tx_config; - priv->rx_cfg = new_rx_config; + gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); - err = gve_open(priv->dev); - if (err) - goto err; + /* qpl_cfg is not read-only, it contains a map that gets updated as + * rings are allocated, which is why we cannot use the yet unreleased + * one in priv. + */ + qpls_alloc_cfg.qpl_cfg = &new_qpl_cfg; + tx_alloc_cfg.qpl_cfg = &new_qpl_cfg; + rx_alloc_cfg.qpl_cfg = &new_qpl_cfg; + + /* Relay the new config from ethtool */ + qpls_alloc_cfg.tx_cfg = &new_tx_config; + tx_alloc_cfg.qcfg = &new_tx_config; + rx_alloc_cfg.qcfg_tx = &new_tx_config; + qpls_alloc_cfg.rx_cfg = &new_rx_config; + rx_alloc_cfg.qcfg = &new_rx_config; + tx_alloc_cfg.num_rings = new_tx_config.num_queues; - return 0; + if (netif_carrier_ok(priv->dev)) { + err = gve_adjust_config(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + return err; } /* Set the config for the next up. */ priv->tx_cfg = new_tx_config; priv->rx_cfg = new_rx_config; return 0; -err: - netif_err(priv, drv, priv->dev, - "Adjust queues failed! !!! 
DISABLING ALL QUEUES !!!\n"); - gve_turndown(priv); - return err; } static void gve_turndown(struct gve_priv *priv) From patchwork Mon Jan 22 18:26:32 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shailend Chand X-Patchwork-Id: 13526046 X-Patchwork-Delegate: kuba@kernel.org Received: from mail-yw1-f201.google.com (mail-yw1-f201.google.com [209.85.128.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 877974EB34 for ; Mon, 22 Jan 2024 18:27:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1705948044; cv=none; b=g8S8wxRw2AA2GpHzGE+qp8lzgMLBqULK3GiKt18KroYRs1X105EcbHr+55uAngqH8rdFBoLMweFhYC3c7hTD1pUvTL1kAYymfSVoDPEb7zjZedgbiJ+6ainOzr0tUJPQN0rJkt7CUeePxv7hHIfZKLgwJbHO76FbIosHIqqjowA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1705948044; c=relaxed/simple; bh=AoWisGCCyxI21UFrT75HOcEOrhmNkEdYKooDXVMNsrg=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=tFZQ2rQ3fYqNI2EiD+oLZM2PpcU6aSXH7SCh2g0mCsctzcmM4YFRfPVl8dWi45j71LEflwl8stNAqe4xrS2pdGTxJyz+8WNXddz2I3PeLvBNEuRJxNiJ+YwORm5x2aXEuXPbCe/HyBFCmFV9FZUxY0qQG+bK7vyoF/PFFA9xu00= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=hCfTQgH8; arc=none smtp.client-ip=209.85.128.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--shailend.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="hCfTQgH8" Received: by mail-yw1-f201.google.com with SMTP id 00721157ae682-5efe82b835fso55710327b3.0 for ; Mon, 22 Jan 2024 10:27:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1705948042; x=1706552842; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=R02rVSahn7nXV4ulzVCybKwH4H/mMYLbLGUAL9YTiVI=; b=hCfTQgH80ru5tBSYErahppnE5/uvlFNXaIHtgfFcFtLCapGmZn8BXQcZNP5NdcDb+S NEJZ+DlQcxYeLlAvRnk3wdsYsyA2zu0QOOE74hNXHfjdmYWBmy+dNHHOvx2oz/80HKqg rsIQ2Y92uQ44qatQL1amwyQtGX2YoKMEzQSrbap2bV9LDUznZ3IRWQY0E97cRwo0CypP fByDipbWG0wLBFH5VJkKtU3QCJ6XIskbVZFtFCQOiHNR6y050LjqBLD2mWqqUqFsMwWJ gpMy/Y2qeMGRAeGROHFp9sJO90wg6uf5x6CElu/GhwEOHerk+LY5Z0KLI2WTT1y3bQwW 5Osw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1705948042; x=1706552842; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=R02rVSahn7nXV4ulzVCybKwH4H/mMYLbLGUAL9YTiVI=; b=C4cTv0wQf0So1cd1ZvKh8F9L91Aw40uSFc0k9EmVWomAhOA5ff9ZLegQi6r+O5Xp9h OJ9s94ll1WZhbqzDecAvZUameHUlWrnoQTL9sPPYwmiuNxnqwC37sYN1kpWftv+J9TM9 4uPRjqcsxhumTYn3RAz5FEjvQ+AN/6lILjpgqkBINx04tL1E32LtIG19PKBrp4TiZvzB cnXoMGPfD/LZ919z4jrXQpq0hSUDXYn5klKQ9tUWc8p6iYqI3qnCF3p5cp4m9xMeyZzN w3Amn6uek8N1RyROQKkwajX6RSVUaAHS8dopmWoB/wTDqdcBt8qgcbmJoxP0Y5uze1NQ J/8Q== X-Gm-Message-State: 
AOJu0YxXS4YpX+TlIJDiGVXuBiedTmEiWmZorh51ra8nM4iQkSKOAW0n 0pJmIIRs5Rjn+olvB13CWUYELmnN24k35jo+sS1OZSYpj7eCx7td4kV3ZkDNJ0gja3+eKx5aq7Z uFTOJq51nGOMFvAw5lsJiaPIHWgo5o9iw7HhuOEdx7pIF55tMAT//OZoyCfluMWOVBT6NLO1C7p 5T9frj/BFFbp2kRcbRIALIr0nNA1w76uKWPRl4xkD3EHc= X-Google-Smtp-Source: AGHT+IF/m+7mZMj7pnzkvoJGEMbDp88WTnKhc6YXkMo5FrxkVmoO4WHGLQ8hHQCzzD/sEoKFIrolWS2RgThXgQ== X-Received: from shailendkvm.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:2648]) (user=shailend job=sendgmr) by 2002:a81:9808:0:b0:5d3:a348:b0c0 with SMTP id p8-20020a819808000000b005d3a348b0c0mr2240300ywg.5.1705948042669; Mon, 22 Jan 2024 10:27:22 -0800 (PST) Date: Mon, 22 Jan 2024 18:26:32 +0000 In-Reply-To: <20240122182632.1102721-1-shailend@google.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20240122182632.1102721-1-shailend@google.com> X-Mailer: git-send-email 2.43.0.429.g432eaa2c6b-goog Message-ID: <20240122182632.1102721-7-shailend@google.com> Subject: [PATCH net-next 6/6] gve: Alloc before freeing when changing features From: Shailend Chand To: netdev@vger.kernel.org Cc: davem@davemloft.net, kuba@kernel.org, Shailend Chand , Willem de Bruijn , Jeroen de Borst X-Patchwork-Delegate: kuba@kernel.org Previously, existing queues were being freed before the resources for the new queues were being allocated. This would take down the interface if someone were to attempt to change feature flags under a resource crunch. Signed-off-by: Shailend Chand Reviewed-by: Willem de Bruijn Reviewed-by: Jeroen de Borst --- drivers/net/ethernet/google/gve/gve_main.c | 41 +++++++++++----------- 1 file changed, 21 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 78c9cdf1511f..db6d9ae7cd78 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -2069,36 +2069,37 @@ static int gve_set_features(struct net_device *netdev, netdev_features_t features) { const netdev_features_t orig_features = netdev->features; + struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; + struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; + struct gve_qpls_alloc_cfg qpls_alloc_cfg = {0}; struct gve_priv *priv = netdev_priv(netdev); + struct gve_qpl_config new_qpl_cfg; int err; + gve_get_curr_alloc_cfgs(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + /* qpl_cfg is not read-only, it contains a map that gets updated as + * rings are allocated, which is why we cannot use the yet unreleased + * one in priv. + */ + qpls_alloc_cfg.qpl_cfg = &new_qpl_cfg; + tx_alloc_cfg.qpl_cfg = &new_qpl_cfg; + rx_alloc_cfg.qpl_cfg = &new_qpl_cfg; + if ((netdev->features & NETIF_F_LRO) != (features & NETIF_F_LRO)) { netdev->features ^= NETIF_F_LRO; if (netif_carrier_ok(netdev)) { - /* To make this process as simple as possible we - * teardown the device, set the new configuration, - * and then bring the device up again. - */ - err = gve_close(netdev); - /* We have already tried to reset in close, just fail - * at this point. - */ - if (err) - goto err; - - err = gve_open(netdev); - if (err) - goto err; + err = gve_adjust_config(priv, &qpls_alloc_cfg, + &tx_alloc_cfg, &rx_alloc_cfg); + if (err) { + /* Revert the change on error. */ + netdev->features = orig_features; + return err; + } } } return 0; -err: - /* Reverts the change on error. */ - netdev->features = orig_features; - netif_err(priv, drv, netdev, - "Set features failed! !!! 
DISABLING ALL QUEUES !!!\n"); - return err; } static const struct net_device_ops gve_netdev_ops = {