From patchwork Fri Jan 28 10:49:37 2022
X-Patchwork-Submitter: Tobias Waldekranz
X-Patchwork-Id: 12728203
From: Tobias Waldekranz
To: davem@davemloft.net, kuba@kernel.org
Cc: netdev@vger.kernel.org, Andrew Lunn, Vivien Didelot, Florian Fainelli, Vladimir Oltean, linux-kernel@vger.kernel.org
Subject: [PATCH v2 net-next 1/2] net: dsa: mv88e6xxx: Improve performance of busy bit polling
Date: Fri, 28 Jan 2022 11:49:37 +0100
Message-Id: <20220128104938.2211441-2-tobias@waldekranz.com>
In-Reply-To: <20220128104938.2211441-1-tobias@waldekranz.com>
Organization: Westermo

Avoid a long delay when a busy bit is still set and has to be polled
again.
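The strategy the patch adopts - a tight poll against a coarse deadline, with a guaranteed minimum number of attempts so that a slow first read cannot cause a spurious timeout - can be sketched in plain userspace C. This is an illustration only, not the kernel code: `read_status()` is a hypothetical stand-in for the MDIO accessors, and `clock_gettime()` stands in for jiffies-based timekeeping.

```c
#define _POSIX_C_SOURCE 199309L
#include <errno.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Spin until read_status() reports the busy bit clear. Even if the
 * 50ms deadline has already passed, always make at least two
 * attempts before declaring a timeout.
 */
static int wait_not_busy(int (*read_status)(uint16_t *val), uint16_t busy_mask)
{
	uint64_t deadline = now_ns() + 50 * 1000000ull; /* 50 ms */
	uint16_t data;
	int i;

	for (i = 0; now_ns() < deadline || i < 2; i++) {
		if (read_status(&data))
			return -EIO;
		if (!(data & busy_mask))
			return 0;
	}
	return -ETIMEDOUT;
}
```

The `i < 2` term mirrors the patch's `(i < 2)` clause: the deadline only bounds the loop once a minimum number of polls has been made, which is what makes the tight loop safe on a slow bus.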
Measurements on a system with two Opals (6097F) and one Agate (6352)
show that even with this much tighter loop, we have about a 50% chance
of the bit being cleared on the first poll; all other accesses see the
bit cleared on the second poll.

On a standard MDIO bus running MDC at 2.5MHz, a single access with 32
bits of preamble plus 32 bits of data takes 64*(1/2.5MHz) = 25.6us.
This means that mv88e6xxx_smi_direct_wait took 26us + CPU overhead in
the fast scenario, but 26us + 1500us + 26us + CPU overhead in the slow
case - bringing the average close to 1ms. With this change in place,
the slow case is closer to 2*26us + CPU overhead, with the average well
below 100us - a 10x improvement.

This translates to real-world wins. On a 3-chip 20-port system, the
modprobe time drops by 86%:

Before:
root@coronet:~# time modprobe mv88e6xxx
real	0m 15.99s
user	0m 0.00s
sys	0m 1.52s

After:
root@coronet:~# time modprobe mv88e6xxx
real	0m 2.21s
user	0m 0.00s
sys	0m 1.54s

Signed-off-by: Tobias Waldekranz
Reviewed-by: Andrew Lunn
---
 drivers/net/dsa/mv88e6xxx/chip.c | 10 +++++++---
 drivers/net/dsa/mv88e6xxx/smi.c  |  8 ++++++--
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index 58ca684d73f7..de8a568a8c53 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -86,12 +86,16 @@ int mv88e6xxx_write(struct mv88e6xxx_chip *chip, int addr, int reg, u16 val)
 int mv88e6xxx_wait_mask(struct mv88e6xxx_chip *chip, int addr, int reg,
 			u16 mask, u16 val)
 {
+	const unsigned long timeout = jiffies + msecs_to_jiffies(50);
 	u16 data;
 	int err;
 	int i;
 
-	/* There's no bus specific operation to wait for a mask */
-	for (i = 0; i < 16; i++) {
+	/* There's no bus specific operation to wait for a mask. Even
+	 * if the initial poll takes longer than 50ms, always do at
+	 * least one more attempt.
+	 */
+	for (i = 0; time_before(jiffies, timeout) || (i < 2); i++) {
 		err = mv88e6xxx_read(chip, addr, reg, &data);
 		if (err)
 			return err;
@@ -99,7 +103,7 @@ int mv88e6xxx_wait_mask(struct mv88e6xxx_chip *chip, int addr, int reg,
 		if ((data & mask) == val)
 			return 0;
 
-		usleep_range(1000, 2000);
+		cpu_relax();
 	}
 
 	dev_err(chip->dev, "Timeout while waiting for switch\n");
diff --git a/drivers/net/dsa/mv88e6xxx/smi.c b/drivers/net/dsa/mv88e6xxx/smi.c
index 282fe08db050..466d2aaa9fcb 100644
--- a/drivers/net/dsa/mv88e6xxx/smi.c
+++ b/drivers/net/dsa/mv88e6xxx/smi.c
@@ -55,11 +55,15 @@ static int mv88e6xxx_smi_direct_write(struct mv88e6xxx_chip *chip,
 static int mv88e6xxx_smi_direct_wait(struct mv88e6xxx_chip *chip,
 				     int dev, int reg, int bit, int val)
 {
+	const unsigned long timeout = jiffies + msecs_to_jiffies(50);
 	u16 data;
 	int err;
 	int i;
 
-	for (i = 0; i < 16; i++) {
+	/* Even if the initial poll takes longer than 50ms, always do
+	 * at least one more attempt.
+	 */
+	for (i = 0; time_before(jiffies, timeout) || (i < 2); i++) {
 		err = mv88e6xxx_smi_direct_read(chip, dev, reg, &data);
 		if (err)
 			return err;
@@ -67,7 +71,7 @@ static int mv88e6xxx_smi_direct_wait(struct mv88e6xxx_chip *chip,
 		if (!!(data & BIT(bit)) == !!val)
 			return 0;
 
-		usleep_range(1000, 2000);
+		cpu_relax();
 	}
 
 	return -ETIMEDOUT;
 }

From patchwork Fri Jan 28 10:49:38 2022
X-Patchwork-Submitter: Tobias Waldekranz
X-Patchwork-Id: 12728204
From: Tobias Waldekranz
To: davem@davemloft.net, kuba@kernel.org
Cc: netdev@vger.kernel.org, Andrew Lunn, Vivien Didelot, Florian Fainelli, Vladimir Oltean, linux-kernel@vger.kernel.org
Subject: [PATCH v2 net-next 2/2] net: dsa: mv88e6xxx: Improve indirect addressing performance
Date: Fri, 28 Jan 2022 11:49:38 +0100
Message-Id: <20220128104938.2211441-3-tobias@waldekranz.com>
In-Reply-To: <20220128104938.2211441-1-tobias@waldekranz.com>
Organization: Westermo

Before this change, both the read and write callbacks would start out
by asserting that the chip's busy flag was cleared. However, both
callbacks also made sure to wait for the clearing of the busy bit
before returning - making the initial check superfluous. The only time
it would ever have an effect was if the busy bit was initially set for
some reason.

With that in mind, perform an initial check of the busy bit once at
setup, after which both read and write can rely on the previous
operation to have waited for the bit to clear.
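The shape of this refactor - hoisting a per-operation precondition check into an optional, one-time init hook on the ops structure - can be sketched outside the kernel. The types below are deliberately simplified stand-ins for `struct mv88e6xxx_bus_ops`, not the driver's actual definitions.

```c
#include <stddef.h>

struct bus_ops {
	int (*read)(int reg, unsigned short *val);
	int (*write)(int reg, unsigned short val);
	int (*init)(void);	/* optional: may be NULL */
};

/* Run the hook once at setup. Afterwards, read/write may assume the
 * device is idle on entry, because every operation waits for idle
 * before returning - the invariant the commit message describes.
 */
static int bus_attach(const struct bus_ops *ops)
{
	if (ops->init)
		return ops->init();
	return 0;
}
```

Keeping the hook optional means bus variants that have no such precondition (here, the direct SMI ops) need no changes at all.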
This cuts the number of operations on the underlying MDIO bus by 25%.

Signed-off-by: Tobias Waldekranz
Reviewed-by: Andrew Lunn
---
 drivers/net/dsa/mv88e6xxx/chip.h |  1 +
 drivers/net/dsa/mv88e6xxx/smi.c  | 24 ++++++++++++++----------
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
index 8271b8aa7b71..438cee853d07 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.h
+++ b/drivers/net/dsa/mv88e6xxx/chip.h
@@ -392,6 +392,7 @@ struct mv88e6xxx_chip {
 struct mv88e6xxx_bus_ops {
 	int (*read)(struct mv88e6xxx_chip *chip, int addr, int reg, u16 *val);
 	int (*write)(struct mv88e6xxx_chip *chip, int addr, int reg, u16 val);
+	int (*init)(struct mv88e6xxx_chip *chip);
 };
 
 struct mv88e6xxx_mdio_bus {
diff --git a/drivers/net/dsa/mv88e6xxx/smi.c b/drivers/net/dsa/mv88e6xxx/smi.c
index 466d2aaa9fcb..dfb72a29626b 100644
--- a/drivers/net/dsa/mv88e6xxx/smi.c
+++ b/drivers/net/dsa/mv88e6xxx/smi.c
@@ -108,11 +108,6 @@ static int mv88e6xxx_smi_indirect_read(struct mv88e6xxx_chip *chip,
 {
 	int err;
 
-	err = mv88e6xxx_smi_direct_wait(chip, chip->sw_addr,
-					MV88E6XXX_SMI_CMD, 15, 0);
-	if (err)
-		return err;
-
 	err = mv88e6xxx_smi_direct_write(chip, chip->sw_addr,
 					 MV88E6XXX_SMI_CMD,
 					 MV88E6XXX_SMI_CMD_BUSY |
@@ -136,11 +131,6 @@ static int mv88e6xxx_smi_indirect_write(struct mv88e6xxx_chip *chip,
 {
 	int err;
 
-	err = mv88e6xxx_smi_direct_wait(chip, chip->sw_addr,
-					MV88E6XXX_SMI_CMD, 15, 0);
-	if (err)
-		return err;
-
 	err = mv88e6xxx_smi_direct_write(chip, chip->sw_addr,
 					 MV88E6XXX_SMI_DATA, data);
 	if (err)
@@ -159,9 +149,20 @@ static int mv88e6xxx_smi_indirect_write(struct mv88e6xxx_chip *chip,
 					MV88E6XXX_SMI_CMD, 15, 0);
 }
 
+static int mv88e6xxx_smi_indirect_init(struct mv88e6xxx_chip *chip)
+{
+	/* Ensure that the chip starts out in the ready state. As both
+	 * reads and writes always ensure this on return, they can
+	 * safely depend on the chip not being busy on entry.
+	 */
+	return mv88e6xxx_smi_direct_wait(chip, chip->sw_addr,
+					 MV88E6XXX_SMI_CMD, 15, 0);
+}
+
 static const struct mv88e6xxx_bus_ops mv88e6xxx_smi_indirect_ops = {
 	.read = mv88e6xxx_smi_indirect_read,
 	.write = mv88e6xxx_smi_indirect_write,
+	.init = mv88e6xxx_smi_indirect_init,
 };
 
 int mv88e6xxx_smi_init(struct mv88e6xxx_chip *chip,
@@ -179,5 +180,8 @@ int mv88e6xxx_smi_init(struct mv88e6xxx_chip *chip,
 	chip->bus = bus;
 	chip->sw_addr = sw_addr;
 
+	if (chip->smi_ops->init)
+		return chip->smi_ops->init(chip);
+
 	return 0;
 }
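As a sanity check of the 25% figure: an indirect access previously cost four underlying MDIO transactions (entry wait, command write, completion wait, data transfer), assuming each wait completes in a single read; with the entry wait removed it needs three. A trivial helper makes the arithmetic explicit - the function name is illustrative, not part of the driver.

```c
/* Percentage of MDIO transactions saved when an operation that took
 * `before` transactions now takes `after` (integer arithmetic, so
 * the result is truncated toward zero).
 */
int mdio_transactions_saved_pct(int before, int after)
{
	return 100 * (before - after) / before;
}
```

With `before = 4` and `after = 3` this yields 25, matching the claim in the commit message.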