From patchwork Tue Jul 18 07:57:08 2023
X-Patchwork-Submitter: Markus Schneider-Pargmann
X-Patchwork-Id: 13316848
X-Patchwork-Delegate: kuba@kernel.org
From: Markus Schneider-Pargmann
To: Marc Kleine-Budde, Chandrasekar Ramakrishnan, Wolfgang Grandegger
Cc: Vincent MAILHOL, Simon Horman, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, linux-can@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Julien Panis,
    Markus Schneider-Pargmann
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , linux-can@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Julien Panis , Markus Schneider-Pargmann Subject: [PATCH v5 12/12] can: m_can: Implement transmit submission coalescing Date: Tue, 18 Jul 2023 09:57:08 +0200 Message-Id: <20230718075708.958094-13-msp@baylibre.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230718075708.958094-1-msp@baylibre.com> References: <20230718075708.958094-1-msp@baylibre.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,RCVD_IN_DNSWL_BLOCKED,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net X-Patchwork-Delegate: kuba@kernel.org m_can supports submitting multiple transmits with one register write. This is an interesting option to reduce the number of SPI transfers for peripheral chips. The m_can_tx_op is extended with a bool that signals if it is the last transmission and the submit should be executed immediately. The worker then writes the skb to the FIFO and submits it only if the submit bool is set. If it isn't set, the worker will write the next skb which is waiting in the workqueue to the FIFO, etc. Signed-off-by: Markus Schneider-Pargmann Reviewed-by: Simon Horman --- Notes: Notes: - I put this behind the tx-frames ethtool coalescing option as we do wait before submitting packages but it is something different than the tx-frames-irq option. I am not sure if this is the correct option, please let me know. drivers/net/can/m_can/m_can.c | 55 ++++++++++++++++++++++++++++++++--- drivers/net/can/m_can/m_can.h | 6 ++++ 2 files changed, 57 insertions(+), 4 deletions(-) diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c index b775ee8e5ff5..aa8c5a8445de 100644 --- a/drivers/net/can/m_can/m_can.c +++ b/drivers/net/can/m_can/m_can.c @@ -1515,6 +1515,9 @@ static int m_can_start(struct net_device *dev) if (ret) return ret; + netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0), + cdev->tx_max_coalesced_frames); + cdev->can.state = CAN_STATE_ERROR_ACTIVE; m_can_enable_all_interrupts(cdev); @@ -1811,8 +1814,13 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev, */ can_put_echo_skb(skb, dev, putidx, frame_len); - /* Enable TX FIFO element to start transfer */ - m_can_write(cdev, M_CAN_TXBAR, (1 << putidx)); + if (cdev->is_peripheral) { + /* Delay enabling TX FIFO element */ + cdev->tx_peripheral_submit |= BIT(putidx); + } else { + /* Enable TX FIFO element to start transfer */ + m_can_write(cdev, M_CAN_TXBAR, BIT(putidx)); + } cdev->tx_fifo_putidx = (++cdev->tx_fifo_putidx >= cdev->can.echo_skb_max ? 
 drivers/net/can/m_can/m_can.c | 55 ++++++++++++++++++++++++++++++++---
 drivers/net/can/m_can/m_can.h |  6 ++++
 2 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
index b775ee8e5ff5..aa8c5a8445de 100644
--- a/drivers/net/can/m_can/m_can.c
+++ b/drivers/net/can/m_can/m_can.c
@@ -1515,6 +1515,9 @@ static int m_can_start(struct net_device *dev)
 	if (ret)
 		return ret;
 
+	netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0),
+				       cdev->tx_max_coalesced_frames);
+
 	cdev->can.state = CAN_STATE_ERROR_ACTIVE;
 
 	m_can_enable_all_interrupts(cdev);
@@ -1811,8 +1814,13 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev,
 		 */
 		can_put_echo_skb(skb, dev, putidx, frame_len);
 
-		/* Enable TX FIFO element to start transfer */
-		m_can_write(cdev, M_CAN_TXBAR, (1 << putidx));
+		if (cdev->is_peripheral) {
+			/* Delay enabling TX FIFO element */
+			cdev->tx_peripheral_submit |= BIT(putidx);
+		} else {
+			/* Enable TX FIFO element to start transfer */
+			m_can_write(cdev, M_CAN_TXBAR, BIT(putidx));
+		}
 		cdev->tx_fifo_putidx = (++cdev->tx_fifo_putidx >= cdev->can.echo_skb_max ?
 					0 : cdev->tx_fifo_putidx);
 	}
@@ -1825,6 +1833,17 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev,
 	return NETDEV_TX_BUSY;
 }
 
+static void m_can_tx_submit(struct m_can_classdev *cdev)
+{
+	if (cdev->version == 30)
+		return;
+	if (!cdev->is_peripheral)
+		return;
+
+	m_can_write(cdev, M_CAN_TXBAR, cdev->tx_peripheral_submit);
+	cdev->tx_peripheral_submit = 0;
+}
+
 static void m_can_tx_work_queue(struct work_struct *ws)
 {
 	struct m_can_tx_op *op = container_of(ws, struct m_can_tx_op, work);
@@ -1833,11 +1852,15 @@ static void m_can_tx_work_queue(struct work_struct *ws)
 
 	op->skb = NULL;
 	m_can_tx_handler(cdev, skb);
+	if (op->submit)
+		m_can_tx_submit(cdev);
 }
 
-static void m_can_tx_queue_skb(struct m_can_classdev *cdev, struct sk_buff *skb)
+static void m_can_tx_queue_skb(struct m_can_classdev *cdev, struct sk_buff *skb,
+			       bool submit)
 {
 	cdev->tx_ops[cdev->next_tx_op].skb = skb;
+	cdev->tx_ops[cdev->next_tx_op].submit = submit;
 	queue_work(cdev->tx_wq, &cdev->tx_ops[cdev->next_tx_op].work);
 
 	++cdev->next_tx_op;
@@ -1849,6 +1872,7 @@ static netdev_tx_t m_can_start_peripheral_xmit(struct m_can_classdev *cdev,
 						struct sk_buff *skb)
 {
 	netdev_tx_t err;
+	bool submit;
 
 	if (cdev->can.state == CAN_STATE_BUS_OFF) {
 		m_can_clean(cdev->net);
@@ -1859,7 +1883,15 @@ static netdev_tx_t m_can_start_peripheral_xmit(struct m_can_classdev *cdev,
 	if (err != NETDEV_TX_OK)
 		return err;
 
-	m_can_tx_queue_skb(cdev, skb);
+	++cdev->nr_txs_without_submit;
+	if (cdev->nr_txs_without_submit >= cdev->tx_max_coalesced_frames ||
+	    !netdev_xmit_more()) {
+		cdev->nr_txs_without_submit = 0;
+		submit = true;
+	} else {
+		submit = false;
+	}
+	m_can_tx_queue_skb(cdev, skb, submit);
 
 	return NETDEV_TX_OK;
 }
@@ -1991,6 +2023,7 @@ static int m_can_get_coalesce(struct net_device *dev,
 
 	ec->rx_max_coalesced_frames_irq = cdev->rx_max_coalesced_frames_irq;
 	ec->rx_coalesce_usecs_irq = cdev->rx_coalesce_usecs_irq;
+	ec->tx_max_coalesced_frames = cdev->tx_max_coalesced_frames;
 	ec->tx_max_coalesced_frames_irq = cdev->tx_max_coalesced_frames_irq;
 	ec->tx_coalesce_usecs_irq = cdev->tx_coalesce_usecs_irq;
 
@@ -2035,6 +2068,18 @@ static int m_can_set_coalesce(struct net_device *dev,
 		netdev_err(dev, "tx-frames-irq and tx-usecs-irq can only be set together\n");
 		return -EINVAL;
 	}
+	if (ec->tx_max_coalesced_frames > cdev->mcfg[MRAM_TXE].num) {
+		netdev_err(dev, "tx-frames %u greater than the TX event FIFO %u\n",
+			   ec->tx_max_coalesced_frames,
+			   cdev->mcfg[MRAM_TXE].num);
+		return -EINVAL;
+	}
+	if (ec->tx_max_coalesced_frames > cdev->mcfg[MRAM_TXB].num) {
+		netdev_err(dev, "tx-frames %u greater than the TX FIFO %u\n",
+			   ec->tx_max_coalesced_frames,
+			   cdev->mcfg[MRAM_TXB].num);
+		return -EINVAL;
+	}
 	if (ec->rx_coalesce_usecs_irq != 0 && ec->tx_coalesce_usecs_irq != 0 &&
 	    ec->rx_coalesce_usecs_irq != ec->tx_coalesce_usecs_irq) {
 		netdev_err(dev, "rx-usecs-irq %u needs to be equal to tx-usecs-irq %u if both are enabled\n",
@@ -2045,6 +2090,7 @@ static int m_can_set_coalesce(struct net_device *dev,
 
 	cdev->rx_max_coalesced_frames_irq = ec->rx_max_coalesced_frames_irq;
 	cdev->rx_coalesce_usecs_irq = ec->rx_coalesce_usecs_irq;
+	cdev->tx_max_coalesced_frames = ec->tx_max_coalesced_frames;
 	cdev->tx_max_coalesced_frames_irq = ec->tx_max_coalesced_frames_irq;
 	cdev->tx_coalesce_usecs_irq = ec->tx_coalesce_usecs_irq;
 
@@ -2062,6 +2108,7 @@ static const struct ethtool_ops m_can_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ |
 		ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ |
 		ETHTOOL_COALESCE_TX_USECS_IRQ |
+		ETHTOOL_COALESCE_TX_MAX_FRAMES |
 		ETHTOOL_COALESCE_TX_MAX_FRAMES_IRQ,
 	.get_ts_info = ethtool_op_get_ts_info,
 	.get_coalesce = m_can_get_coalesce,
diff --git a/drivers/net/can/m_can/m_can.h b/drivers/net/can/m_can/m_can.h
index 5c182aece15c..54af26a94042 100644
--- a/drivers/net/can/m_can/m_can.h
+++ b/drivers/net/can/m_can/m_can.h
@@ -74,6 +74,7 @@ struct m_can_tx_op {
 	struct m_can_classdev *cdev;
 	struct work_struct work;
 	struct sk_buff *skb;
+	bool submit;
 };
 
 struct m_can_classdev {
@@ -103,6 +104,7 @@ struct m_can_classdev {
 	u32 active_interrupts;
 	u32 rx_max_coalesced_frames_irq;
 	u32 rx_coalesce_usecs_irq;
+	u32 tx_max_coalesced_frames;
 	u32 tx_max_coalesced_frames_irq;
 	u32 tx_coalesce_usecs_irq;
 
@@ -117,6 +119,10 @@ struct m_can_classdev {
 	int tx_fifo_size;
 	int next_tx_op;
 
+	int nr_txs_without_submit;
+	/* bitfield of fifo elements that will be submitted together */
+	u32 tx_peripheral_submit;
+
 	struct mram_cfg mcfg[MRAM_CFG_NUM];
 };