From patchwork Wed Mar 15 11:05:46 2023
X-Patchwork-Submitter: Markus Schneider-Pargmann
X-Patchwork-Id: 13175633
X-Patchwork-Delegate: kuba@kernel.org
From: Markus Schneider-Pargmann
To: Marc Kleine-Budde, Chandrasekar Ramakrishnan, Wolfgang Grandegger
Cc: Vincent MAILHOL, Simon Horman, linux-can@vger.kernel.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 Markus Schneider-Pargmann
Subject: [PATCH v3 16/16] can: m_can: Implement transmit submission coalescing
Date: Wed, 15 Mar 2023 12:05:46 +0100
Message-Id: <20230315110546.2518305-17-msp@baylibre.com>
In-Reply-To: <20230315110546.2518305-1-msp@baylibre.com>
References: <20230315110546.2518305-1-msp@baylibre.com>

m_can supports submitting multiple transmits with a single register
write. This is an interesting option for peripheral chips as it reduces
the number of SPI transfers.

struct m_can_tx_op is extended with a bool that signals whether this is
the last queued transmission, in which case the submit should be
executed immediately.

The worker then writes the skb to the FIFO and triggers the actual
transmission only if the submit bool is set. If it isn't set, the worker
continues with the next skb waiting in the workqueue and writes it to
the FIFO as well, and so on.

Signed-off-by: Markus Schneider-Pargmann
---

Notes:
    - I ran into lost messages in the receive FIFO when using this
      implementation. I suspect this only shows up with my test setup in
      loopback mode, possibly combined with insufficient CPU power.

    - I put this behind the tx-frames ethtool coalescing option as we do
      wait before submitting frames, but it is something different from
      the tx-frames-irq option. I am not sure if this is the correct
      option, please let me know.
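    - For illustration only (the interface name and frame count below
      are example assumptions, not part of this patch), the new
      parameter would be set and read back through the standard ethtool
      coalescing interface:

          ethtool -C can0 tx-frames 8   # coalesce up to 8 TX submissions
          ethtool -c can0               # show current coalescing settings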
 drivers/net/can/m_can/m_can.c | 55 ++++++++++++++++++++++++++++++++---
 drivers/net/can/m_can/m_can.h |  6 ++++
 2 files changed, 57 insertions(+), 4 deletions(-)

diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
index 63d6e95717e3..1f758894e122 100644
--- a/drivers/net/can/m_can/m_can.c
+++ b/drivers/net/can/m_can/m_can.c
@@ -1508,6 +1508,9 @@ static int m_can_start(struct net_device *dev)
 	if (ret)
 		return ret;
 
+	netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0),
+				       cdev->tx_max_coalesced_frames);
+
 	cdev->can.state = CAN_STATE_ERROR_ACTIVE;
 
 	m_can_enable_all_interrupts(cdev);
@@ -1818,8 +1821,13 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev,
 		 */
 		can_put_echo_skb(skb, dev, putidx, frame_len);
 
-		/* Enable TX FIFO element to start transfer */
-		m_can_write(cdev, M_CAN_TXBAR, (1 << putidx));
+		if (cdev->is_peripheral) {
+			/* Delay enabling TX FIFO element */
+			cdev->tx_peripheral_submit |= BIT(putidx);
+		} else {
+			/* Enable TX FIFO element to start transfer */
+			m_can_write(cdev, M_CAN_TXBAR, BIT(putidx));
+		}
 		cdev->tx_fifo_putidx = (++cdev->tx_fifo_putidx >= cdev->can.echo_skb_max ?
					0 : cdev->tx_fifo_putidx);
 	}
@@ -1832,6 +1840,17 @@ static netdev_tx_t m_can_tx_handler(struct m_can_classdev *cdev,
 	return NETDEV_TX_BUSY;
 }
 
+static void m_can_tx_submit(struct m_can_classdev *cdev)
+{
+	if (cdev->version == 30)
+		return;
+	if (!cdev->is_peripheral)
+		return;
+
+	m_can_write(cdev, M_CAN_TXBAR, cdev->tx_peripheral_submit);
+	cdev->tx_peripheral_submit = 0;
+}
+
 static void m_can_tx_work_queue(struct work_struct *ws)
 {
 	struct m_can_tx_op *op = container_of(ws, struct m_can_tx_op, work);
@@ -1840,11 +1859,15 @@ static void m_can_tx_work_queue(struct work_struct *ws)
 
 	op->skb = NULL;
 	m_can_tx_handler(cdev, skb);
+	if (op->submit)
+		m_can_tx_submit(cdev);
 }
 
-static void m_can_tx_queue_skb(struct m_can_classdev *cdev, struct sk_buff *skb)
+static void m_can_tx_queue_skb(struct m_can_classdev *cdev, struct sk_buff *skb,
+			       bool submit)
 {
 	cdev->tx_ops[cdev->next_tx_op].skb = skb;
+	cdev->tx_ops[cdev->next_tx_op].submit = submit;
 	queue_work(cdev->tx_wq, &cdev->tx_ops[cdev->next_tx_op].work);
 
 	++cdev->next_tx_op;
@@ -1856,6 +1879,7 @@ static netdev_tx_t m_can_start_peripheral_xmit(struct m_can_classdev *cdev,
 					       struct sk_buff *skb)
 {
 	netdev_tx_t err;
+	bool submit;
 
 	if (cdev->can.state == CAN_STATE_BUS_OFF) {
 		m_can_clean(cdev->net);
@@ -1866,7 +1890,15 @@ static netdev_tx_t m_can_start_peripheral_xmit(struct m_can_classdev *cdev,
 	if (err != NETDEV_TX_OK)
 		return err;
 
-	m_can_tx_queue_skb(cdev, skb);
+	++cdev->nr_txs_without_submit;
+	if (cdev->nr_txs_without_submit >= cdev->tx_max_coalesced_frames ||
+	    !netdev_xmit_more()) {
+		cdev->nr_txs_without_submit = 0;
+		submit = true;
+	} else {
+		submit = false;
+	}
+	m_can_tx_queue_skb(cdev, skb, submit);
 
 	return NETDEV_TX_OK;
 }
@@ -1998,6 +2030,7 @@ static int m_can_get_coalesce(struct net_device *dev,
 
 	ec->rx_max_coalesced_frames_irq = cdev->rx_max_coalesced_frames_irq;
 	ec->rx_coalesce_usecs_irq = cdev->rx_coalesce_usecs_irq;
+	ec->tx_max_coalesced_frames = cdev->tx_max_coalesced_frames;
 	ec->tx_max_coalesced_frames_irq = cdev->tx_max_coalesced_frames_irq;
 	ec->tx_coalesce_usecs_irq = cdev->tx_coalesce_usecs_irq;
 
@@ -2042,6 +2075,18 @@ static int m_can_set_coalesce(struct net_device *dev,
 		netdev_err(dev, "tx-frames-irq and tx-usecs-irq can only be set together\n");
 		return -EINVAL;
 	}
+	if (ec->tx_max_coalesced_frames > cdev->mcfg[MRAM_TXE].num) {
+		netdev_err(dev, "tx-frames %u greater than the TX event FIFO %u\n",
+			   ec->tx_max_coalesced_frames,
+			   cdev->mcfg[MRAM_TXE].num);
+		return -EINVAL;
+	}
+	if (ec->tx_max_coalesced_frames > cdev->mcfg[MRAM_TXB].num) {
+		netdev_err(dev, "tx-frames %u greater than the TX FIFO %u\n",
+			   ec->tx_max_coalesced_frames,
+			   cdev->mcfg[MRAM_TXB].num);
+		return -EINVAL;
+	}
 	if (ec->rx_coalesce_usecs_irq != 0 && ec->tx_coalesce_usecs_irq != 0 &&
 	    ec->rx_coalesce_usecs_irq != ec->tx_coalesce_usecs_irq) {
 		netdev_err(dev, "rx-usecs-irq %u needs to be equal to tx-usecs-irq %u if both are enabled\n",
@@ -2052,6 +2097,7 @@ static int m_can_set_coalesce(struct net_device *dev,
 
 	cdev->rx_max_coalesced_frames_irq = ec->rx_max_coalesced_frames_irq;
 	cdev->rx_coalesce_usecs_irq = ec->rx_coalesce_usecs_irq;
+	cdev->tx_max_coalesced_frames = ec->tx_max_coalesced_frames;
 	cdev->tx_max_coalesced_frames_irq = ec->tx_max_coalesced_frames_irq;
 	cdev->tx_coalesce_usecs_irq = ec->tx_coalesce_usecs_irq;
 
@@ -2069,6 +2115,7 @@ static const struct ethtool_ops m_can_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ |
 		ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ |
 		ETHTOOL_COALESCE_TX_USECS_IRQ |
+		ETHTOOL_COALESCE_TX_MAX_FRAMES |
 		ETHTOOL_COALESCE_TX_MAX_FRAMES_IRQ,
 	.get_ts_info = ethtool_op_get_ts_info,
 	.get_coalesce = m_can_get_coalesce,
diff --git a/drivers/net/can/m_can/m_can.h b/drivers/net/can/m_can/m_can.h
index e230cf320a6c..a2a6d10015fd 100644
--- a/drivers/net/can/m_can/m_can.h
+++ b/drivers/net/can/m_can/m_can.h
@@ -74,6 +74,7 @@ struct m_can_tx_op {
 	struct m_can_classdev *cdev;
 	struct work_struct work;
 	struct sk_buff *skb;
+	bool submit;
 };
 
 struct m_can_classdev {
@@ -103,6 +104,7 @@ struct m_can_classdev {
 	u32 active_interrupts;
 	u32 rx_max_coalesced_frames_irq;
 	u32 rx_coalesce_usecs_irq;
+	u32 tx_max_coalesced_frames;
 	u32 tx_max_coalesced_frames_irq;
 	u32 tx_coalesce_usecs_irq;
 
@@ -117,6 +119,10 @@ struct m_can_classdev {
 	int tx_fifo_size;
 	int next_tx_op;
 
+	int nr_txs_without_submit;
+	/* bitfield of fifo elements that will be submitted together */
+	u32 tx_peripheral_submit;
+
 	struct mram_cfg mcfg[MRAM_CFG_NUM];
 };