From patchwork Wed Nov 4 14:00:21 2020
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
 Lukas Wunner, "David S. Miller", Mika Westerberg, netdev@vger.kernel.org
Subject: [PATCH 01/10] thunderbolt: Do not clear USB4 router protocol adapter IFC and ISE bits
Date: Wed, 4 Nov 2020 17:00:21 +0300
Message-Id: <20201104140030.6853-2-mika.westerberg@linux.intel.com>
In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com>

These fields are marked as vendor defined in the USB4 spec and should
not be modified by software, so only clear them when we are dealing
with pre-USB4 hardware.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/path.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/path.c b/drivers/thunderbolt/path.c
index 03e7b714deab..7c2c45d9ba4a 100644
--- a/drivers/thunderbolt/path.c
+++ b/drivers/thunderbolt/path.c
@@ -406,10 +406,17 @@ static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index,
 
 	if (!hop.pending) {
 		if (clear_fc) {
-			/* Clear flow control */
-			hop.ingress_fc = 0;
+			/*
+			 * Clear flow control. Protocol adapters
+			 * IFC and ISE bits are vendor defined
+			 * in the USB4 spec so we clear them
+			 * only for pre-USB4 adapters.
+			 */
+			if (!tb_switch_is_usb4(port->sw)) {
+				hop.ingress_fc = 0;
+				hop.ingress_shared_buffer = 0;
+			}
 			hop.egress_fc = 0;
-			hop.ingress_shared_buffer = 0;
 			hop.egress_shared_buffer = 0;
 
 			return tb_port_write(port, &hop, TB_CFG_HOPS,

From patchwork Wed Nov 4 14:00:23 2020
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
 Lukas Wunner, "David S. Miller", Mika Westerberg, netdev@vger.kernel.org
Subject: [PATCH 03/10] thunderbolt: Create XDomain devices for loops back to the host
Date: Wed, 4 Nov 2020 17:00:23 +0300
Message-Id: <20201104140030.6853-4-mika.westerberg@linux.intel.com>
In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com>

It is perfectly possible to have loops back from the routers to the
host, or even from one host port to another. Instead of ignoring these,
we create XDomain devices for each. This allows creating services such
as the DMA traffic test that is used in manufacturing, for example.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/xdomain.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index e2866248f389..7c61d2aeaac9 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -960,10 +960,8 @@ static void tb_xdomain_get_uuid(struct work_struct *work)
 		return;
 	}
 
-	if (uuid_equal(&uuid, xd->local_uuid)) {
+	if (uuid_equal(&uuid, xd->local_uuid))
 		dev_dbg(&xd->dev, "intra-domain loop detected\n");
-		return;
-	}
 
 	/*
 	 * If the UUID is different, there is another domain connected

From patchwork Wed Nov 4 14:00:24 2020
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
 Lukas Wunner, "David S. Miller", Mika Westerberg, netdev@vger.kernel.org
Subject: [PATCH 04/10] thunderbolt: Add link_speed and link_width to XDomain
Date: Wed, 4 Nov 2020 17:00:24 +0300
Message-Id: <20201104140030.6853-5-mika.westerberg@linux.intel.com>
In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com>

From: Isaac Hazan

Link speed and link width are needed for checking expected values when
using a loopback service.
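
A service driver can then verify the negotiated link against the values
its test setup expects. A minimal sketch in C (check_link() and its
expected_* parameters are hypothetical illustrations; only the
@link_speed and @link_width fields come from this patch):

  #include <linux/thunderbolt.h>

  /* Return 0 when the XDomain link matches the expected parameters */
  static int check_link(const struct tb_xdomain *xd,
                        unsigned int expected_speed,
                        unsigned int expected_width)
  {
          /* @link_speed is in Gb/s, @link_width is 1 (single) or 2 (bonded) */
          if (expected_speed && xd->link_speed != expected_speed)
                  return -EINVAL;
          if (expected_width && xd->link_width != expected_width)
                  return -EINVAL;
          return 0;
  }

This mirrors the checks the DMA traffic test driver later in the series
performs before starting a test run.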
Signed-off-by: Isaac Hazan Signed-off-by: Mika Westerberg --- .../ABI/testing/sysfs-bus-thunderbolt | 28 ++++++++ drivers/thunderbolt/switch.c | 9 ++- drivers/thunderbolt/tb.h | 1 + drivers/thunderbolt/xdomain.c | 65 +++++++++++++++++++ include/linux/thunderbolt.h | 4 ++ 5 files changed, 106 insertions(+), 1 deletion(-) diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt index 0b4ab9e4b8f4..a91b4b24496e 100644 --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt @@ -1,3 +1,31 @@ +What: /sys/bus/thunderbolt/devices//rx_speed +Date: Feb 2021 +KernelVersion: 5.11 +Contact: Isaac Hazan +Description: This attribute reports the XDomain RX speed per lane. + All RX lanes run at the same speed. + +What: /sys/bus/thunderbolt/devices//rx_lanes +Date: Feb 2021 +KernelVersion: 5.11 +Contact: Isaac Hazan +Description: This attribute reports the number of RX lanes the XDomain + is using simultaneously through its upstream port. + +What: /sys/bus/thunderbolt/devices//tx_speed +Date: Feb 2021 +KernelVersion: 5.11 +Contact: Isaac Hazan +Description: This attribute reports the XDomain TX speed per lane. + All TX lanes run at the same speed. + +What: /sys/bus/thunderbolt/devices//tx_lanes +Date: Feb 2021 +KernelVersion: 5.11 +Contact: Isaac Hazan +Description: This attribute reports number of TX lanes the XDomain + is using simultaneously through its upstream port. + What: /sys/bus/thunderbolt/devices/.../domainX/boot_acl Date: Jun 2018 KernelVersion: 4.17 diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index c73bbfe69ba1..05a360901790 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -932,7 +932,14 @@ int tb_port_get_link_speed(struct tb_port *port) return speed == LANE_ADP_CS_1_CURRENT_SPEED_GEN3 ? 20 : 10; } -static int tb_port_get_link_width(struct tb_port *port) +/** + * tb_port_get_link_width() - Get current link width + * @port: Port to check (USB4 or CIO) + * + * Returns link width. Return values can be 1 (Single-Lane), 2 (Dual-Lane) + * or negative errno in case of failure. 
+ */ +int tb_port_get_link_width(struct tb_port *port) { u32 val; int ret; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 8ea360b0ff77..09658d07460e 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -864,6 +864,7 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end, (p) = tb_next_port_on_path((src), (dst), (p))) int tb_port_get_link_speed(struct tb_port *port); +int tb_port_get_link_width(struct tb_port *port); int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec); int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap); diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c index 7c61d2aeaac9..26dc1fc886e5 100644 --- a/drivers/thunderbolt/xdomain.c +++ b/drivers/thunderbolt/xdomain.c @@ -941,6 +941,43 @@ static void tb_xdomain_restore_paths(struct tb_xdomain *xd) } } +static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd) +{ + return tb_to_switch(xd->dev.parent); +} + +static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd) +{ + bool change = false; + struct tb_port *port; + int ret; + + port = tb_port_at(xd->route, tb_xdomain_parent(xd)); + + ret = tb_port_get_link_speed(port); + if (ret < 0) + return ret; + + if (xd->link_speed != ret) + change = true; + + xd->link_speed = ret; + + ret = tb_port_get_link_width(port); + if (ret < 0) + return ret; + + if (xd->link_width != ret) + change = true; + + xd->link_width = ret; + + if (change) + kobject_uevent(&xd->dev.kobj, KOBJ_CHANGE); + + return 0; +} + static void tb_xdomain_get_uuid(struct work_struct *work) { struct tb_xdomain *xd = container_of(work, typeof(*xd), @@ -1052,6 +1089,8 @@ static void tb_xdomain_get_properties(struct work_struct *work) xd->properties = dir; xd->property_block_gen = gen; + tb_xdomain_update_link_attributes(xd); + tb_xdomain_restore_paths(xd); mutex_unlock(&xd->lock); @@ -1158,9 +1197,35 @@ static ssize_t unique_id_show(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RO(unique_id); +static ssize_t speed_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev); + + return sprintf(buf, "%u.0 Gb/s\n", xd->link_speed); +} + +static DEVICE_ATTR(rx_speed, 0444, speed_show, NULL); +static DEVICE_ATTR(tx_speed, 0444, speed_show, NULL); + +static ssize_t lanes_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct tb_xdomain *xd = container_of(dev, struct tb_xdomain, dev); + + return sprintf(buf, "%u\n", xd->link_width); +} + +static DEVICE_ATTR(rx_lanes, 0444, lanes_show, NULL); +static DEVICE_ATTR(tx_lanes, 0444, lanes_show, NULL); + static struct attribute *xdomain_attrs[] = { &dev_attr_device.attr, &dev_attr_device_name.attr, + &dev_attr_rx_lanes.attr, + &dev_attr_rx_speed.attr, + &dev_attr_tx_lanes.attr, + &dev_attr_tx_speed.attr, &dev_attr_unique_id.attr, &dev_attr_vendor.attr, &dev_attr_vendor_name.attr, diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index 5db2b11ab085..e441af88ed77 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -179,6 +179,8 @@ void tb_unregister_property_dir(const char *key, struct tb_property_dir *dir); * @lock: Lock to serialize access to the following fields of this structure * @vendor_name: Name of the vendor (or %NULL if not known) * @device_name: Name of the device (or %NULL if not known) + * @link_speed: Speed of the link in Gb/s + * 
@link_width: Width of the link (1 or 2)
 * @is_unplugged: The XDomain is unplugged
 * @resume: The XDomain is being resumed
 * @needs_uuid: If the XDomain does not have @remote_uuid it will be
@@ -223,6 +225,8 @@ struct tb_xdomain {
 	struct mutex lock;
 	const char *vendor_name;
 	const char *device_name;
+	unsigned int link_speed;
+	unsigned int link_width;
 	bool is_unplugged;
 	bool resume;
 	bool needs_uuid;

From patchwork Wed Nov 4 14:00:25 2020
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
 Lukas Wunner, "David S. Miller", Mika Westerberg, netdev@vger.kernel.org
Subject: [PATCH 05/10] thunderbolt: Add functions for enabling and disabling lane bonding on XDomain
Date: Wed, 4 Nov 2020 17:00:25 +0300
Message-Id: <20201104140030.6853-6-mika.westerberg@linux.intel.com>
In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com>

From: Isaac Hazan

These can be used by service drivers to enable and disable lane bonding
as needed.
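
As a usage sketch (the caller and run_transfer() are hypothetical; only
tb_xdomain_lane_bonding_enable() and tb_xdomain_lane_bonding_disable()
come from this patch), a service driver that wants the full bandwidth
for a transfer could do:

  #include <linux/thunderbolt.h>

  static void do_bonded_transfer(struct tb_xdomain *xd)
  {
          /* Try to bond both lanes; fall back to a single lane on failure */
          int ret = tb_xdomain_lane_bonding_enable(xd);

          if (ret)
                  dev_warn(&xd->dev, "lane bonding not available: %d\n", ret);

          run_transfer(xd);                /* hypothetical service work */

          /* Return the link to its default unbonded state */
          if (!ret)
                  tb_xdomain_lane_bonding_disable(xd);
  }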
Signed-off-by: Isaac Hazan Signed-off-by: Mika Westerberg --- drivers/thunderbolt/switch.c | 24 +++++++++++-- drivers/thunderbolt/tb.h | 3 ++ drivers/thunderbolt/xdomain.c | 66 +++++++++++++++++++++++++++++++++++ include/linux/thunderbolt.h | 2 ++ 4 files changed, 92 insertions(+), 3 deletions(-) diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c index 05a360901790..cdfd8cccfe19 100644 --- a/drivers/thunderbolt/switch.c +++ b/drivers/thunderbolt/switch.c @@ -503,12 +503,13 @@ static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port) /** * tb_port_state() - get connectedness state of a port + * @port: the port to check * * The port must have a TB_CAP_PHY (i.e. it should be a real port). * * Return: Returns an enum tb_port_state on success or an error code on failure. */ -static int tb_port_state(struct tb_port *port) +int tb_port_state(struct tb_port *port) { struct tb_cap_phy phy; int res; @@ -1008,7 +1009,16 @@ static int tb_port_set_link_width(struct tb_port *port, unsigned int width) port->cap_phy + LANE_ADP_CS_1, 1); } -static int tb_port_lane_bonding_enable(struct tb_port *port) +/** + * tb_port_lane_bonding_enable() - Enable bonding on port + * @port: port to enable + * + * Enable bonding by setting the link width of the port and the + * other port in case of dual link port. + * + * Return: %0 in case of success and negative errno in case of error + */ +int tb_port_lane_bonding_enable(struct tb_port *port) { int ret; @@ -1038,7 +1048,15 @@ static int tb_port_lane_bonding_enable(struct tb_port *port) return 0; } -static void tb_port_lane_bonding_disable(struct tb_port *port) +/** + * tb_port_lane_bonding_disable() - Disable bonding on port + * @port: port to disable + * + * Disable bonding by setting the link width of the port and the + * other port in case of dual link port. + * + */ +void tb_port_lane_bonding_disable(struct tb_port *port) { port->dual_link_port->bonded = false; port->bonded = false; diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index 09658d07460e..aa7e2dc66059 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -865,6 +865,9 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end, int tb_port_get_link_speed(struct tb_port *port); int tb_port_get_link_width(struct tb_port *port); +int tb_port_state(struct tb_port *port); +int tb_port_lane_bonding_enable(struct tb_port *port); +void tb_port_lane_bonding_disable(struct tb_port *port); int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec); int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap); diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c index 26dc1fc886e5..63889fbd8156 100644 --- a/drivers/thunderbolt/xdomain.c +++ b/drivers/thunderbolt/xdomain.c @@ -8,6 +8,7 @@ */ #include +#include #include #include #include @@ -21,6 +22,7 @@ #define XDOMAIN_UUID_RETRIES 10 #define XDOMAIN_PROPERTIES_RETRIES 60 #define XDOMAIN_PROPERTIES_CHANGED_RETRIES 10 +#define XDOMAIN_BONDING_WAIT 100 /* ms */ struct xdomain_request_work { struct work_struct work; @@ -1442,6 +1444,70 @@ void tb_xdomain_remove(struct tb_xdomain *xd) device_unregister(&xd->dev); } +/** + * tb_xdomain_lane_bonding_enable() - Enable lane bonding on XDomain + * @xd: XDomain connection + * + * Lane bonding is disabled by default for XDomains. This function tries + * to enable bonding by first enabling the port and waiting for the CL0 + * state. 
+ * + * Return: %0 in case of success and negative errno in case of error. + */ +int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd) +{ + struct tb_port *port; + int ret; + + port = tb_port_at(xd->route, tb_xdomain_parent(xd)); + if (!port->dual_link_port) + return -ENODEV; + + ret = tb_port_enable(port->dual_link_port); + if (ret) + return ret; + + ret = tb_wait_for_port(port->dual_link_port, true); + if (ret < 0) + return ret; + if (!ret) + return -ENOTCONN; + + ret = tb_port_lane_bonding_enable(port); + if (ret) { + tb_port_warn(port, "failed to enable lane bonding\n"); + return ret; + } + + tb_xdomain_update_link_attributes(xd); + + dev_dbg(&xd->dev, "lane bonding enabled\n"); + return 0; +} +EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_enable); + +/** + * tb_xdomain_lane_bonding_disable() - Disable lane bonding + * @xd: XDomain connection + * + * Lane bonding is disabled by default for XDomains. If bonding has been + * enabled, this function can be used to disable it. + */ +void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd) +{ + struct tb_port *port; + + port = tb_port_at(xd->route, tb_xdomain_parent(xd)); + if (port->dual_link_port) { + tb_port_lane_bonding_disable(port); + tb_port_disable(port->dual_link_port); + tb_xdomain_update_link_attributes(xd); + + dev_dbg(&xd->dev, "lane bonding disabled\n"); + } +} +EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable); + /** * tb_xdomain_enable_paths() - Enable DMA paths for XDomain connection * @xd: XDomain connection diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index e441af88ed77..0a747f92847e 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -247,6 +247,8 @@ struct tb_xdomain { u8 depth; }; +int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd); +void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd); int tb_xdomain_enable_paths(struct tb_xdomain *xd, u16 transmit_path, u16 transmit_ring, u16 receive_path, u16 receive_ring); From patchwork Wed Nov 4 14:00:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 11880739 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27C97C2D0A3 for ; Wed, 4 Nov 2020 14:01:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D6FE521734 for ; Wed, 4 Nov 2020 14:00:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729584AbgKDOA6 (ORCPT ); Wed, 4 Nov 2020 09:00:58 -0500 Received: from mga07.intel.com ([134.134.136.100]:26915 "EHLO mga07.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730233AbgKDOAk (ORCPT ); Wed, 4 Nov 2020 09:00:40 -0500 IronPort-SDR: YnYIHzJtxFBAWtAO+qBrLeJ2uJja5WRy1Cw5iTjPDztqErsKZPD2X15Mn4LYZ/SNYFma5NvIMg w11dNDwnYq3Q== X-IronPort-AV: E=McAfee;i="6000,8403,9794"; a="233379916" X-IronPort-AV: E=Sophos;i="5.77,451,1596524400"; d="scan'208";a="233379916" X-Amp-Result: SKIPPED(no attachment in message) 
X-Amp-File-Uploaded: False Received: from fmsmga006.fm.intel.com ([10.253.24.20]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2020 06:00:37 -0800 IronPort-SDR: E08XnRhgs7DIt4N3u2GIuJUM9WT0vWpvjZ+hBJLS2tboRkw9HSK9CVvnRvAEMrWV2wyvUc7hZv jhGVm0VsyO0A== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,451,1596524400"; d="scan'208";a="527585247" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga006.fm.intel.com with ESMTP; 04 Nov 2020 06:00:34 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id 188E989F; Wed, 4 Nov 2020 16:00:31 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Lukas Wunner , "David S . Miller" , Mika Westerberg , netdev@vger.kernel.org Subject: [PATCH 06/10] thunderbolt: Create debugfs directory automatically for services Date: Wed, 4 Nov 2020 17:00:26 +0300 Message-Id: <20201104140030.6853-7-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com> References: <20201104140030.6853-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This allows service drivers to use it as parent directory if they need to add their own debugfs entries. Signed-off-by: Mika Westerberg --- drivers/thunderbolt/debugfs.c | 24 ++++++++++++++++++++++++ drivers/thunderbolt/tb.h | 4 ++++ drivers/thunderbolt/xdomain.c | 3 +++ include/linux/thunderbolt.h | 4 ++++ 4 files changed, 35 insertions(+) diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c index ed65d2b13964..a80278fc50af 100644 --- a/drivers/thunderbolt/debugfs.c +++ b/drivers/thunderbolt/debugfs.c @@ -691,6 +691,30 @@ void tb_switch_debugfs_remove(struct tb_switch *sw) debugfs_remove_recursive(sw->debugfs_dir); } +/** + * tb_service_debugfs_init() - Add debugfs directory for service + * @svc: Thunderbolt service pointer + * + * Adds debugfs directory for service. + */ +void tb_service_debugfs_init(struct tb_service *svc) +{ + svc->debugfs_dir = debugfs_create_dir(dev_name(&svc->dev), + tb_debugfs_root); +} + +/** + * tb_service_debugfs_remove() - Remove service debugfs directory + * @svc: Thunderbolt service pointer + * + * Removes the previously created debugfs directory for @svc. 
+ */ +void tb_service_debugfs_remove(struct tb_service *svc) +{ + debugfs_remove(svc->debugfs_dir); + svc->debugfs_dir = NULL; +} + void tb_debugfs_init(void) { tb_debugfs_root = debugfs_create_dir("thunderbolt", NULL); diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h index aa7e2dc66059..08af4fe642c0 100644 --- a/drivers/thunderbolt/tb.h +++ b/drivers/thunderbolt/tb.h @@ -1029,11 +1029,15 @@ void tb_debugfs_init(void); void tb_debugfs_exit(void); void tb_switch_debugfs_init(struct tb_switch *sw); void tb_switch_debugfs_remove(struct tb_switch *sw); +void tb_service_debugfs_init(struct tb_service *svc); +void tb_service_debugfs_remove(struct tb_service *svc); #else static inline void tb_debugfs_init(void) { } static inline void tb_debugfs_exit(void) { } static inline void tb_switch_debugfs_init(struct tb_switch *sw) { } static inline void tb_switch_debugfs_remove(struct tb_switch *sw) { } +static inline void tb_service_debugfs_init(struct tb_service *svc) { } +static inline void tb_service_debugfs_remove(struct tb_service *svc) { } #endif #ifdef CONFIG_USB4_KUNIT_TEST diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c index 63889fbd8156..c36b29736cd0 100644 --- a/drivers/thunderbolt/xdomain.c +++ b/drivers/thunderbolt/xdomain.c @@ -777,6 +777,7 @@ static void tb_service_release(struct device *dev) struct tb_service *svc = container_of(dev, struct tb_service, dev); struct tb_xdomain *xd = tb_service_parent(svc); + tb_service_debugfs_remove(svc); ida_simple_remove(&xd->service_ids, svc->id); kfree(svc->key); kfree(svc); @@ -891,6 +892,8 @@ static void enumerate_services(struct tb_xdomain *xd) svc->dev.parent = &xd->dev; dev_set_name(&svc->dev, "%s.%d", dev_name(&xd->dev), svc->id); + tb_service_debugfs_init(svc); + if (device_register(&svc->dev)) { put_device(&svc->dev); break; diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index 0a747f92847e..a844fd5d96ab 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -350,6 +350,9 @@ void tb_unregister_protocol_handler(struct tb_protocol_handler *handler); * @prtcvers: Protocol version from the properties directory * @prtcrevs: Protocol software revision from the properties directory * @prtcstns: Protocol settings mask from the properties directory + * @debugfs_dir: Pointer to the service debugfs directory. Always created + * when debugfs is enabled. Can be used by service drivers to + * add their own entries under the service. * * Each domain exposes set of services it supports as collection of * properties. 
For each service there will be one corresponding
@@ -363,6 +366,7 @@ struct tb_service {
 	u32 prtcvers;
 	u32 prtcrevs;
 	u32 prtcstns;
+	struct dentry *debugfs_dir;
 };
 
 static inline struct tb_service *tb_service_get(struct tb_service *svc)

From patchwork Wed Nov 4 14:00:27 2020
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet, Yehezkel Bernat, Andreas Noever, Isaac Hazan,
 Lukas Wunner, "David S. Miller", Mika Westerberg, netdev@vger.kernel.org
Subject: [PATCH 07/10] thunderbolt: Make it possible to allocate one directional DMA tunnel
Date: Wed, 4 Nov 2020 17:00:27 +0300
Message-Id: <20201104140030.6853-8-mika.westerberg@linux.intel.com>
In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com>

With DMA tunnels it is possible that the service using it does not
require bi-directional paths, so make RX and TX optional (but of course
at least one of them needs to be set).
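
For example, a receive-only tunnel can now be allocated inside the
driver by passing 0 for the transmit ring and path. A sketch (the tb,
nhi, dst, receive_ring and receive_path values are placeholders taken
from the caller):

  /* RX-only DMA tunnel: only the "DMA RX" path is created */
  struct tb_tunnel *tunnel;

  tunnel = tb_tunnel_alloc_dma(tb, nhi, dst, 0, 0, receive_ring,
                               receive_path);
  if (!tunnel)
          return -ENOMEM;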
Signed-off-by: Mika Westerberg --- drivers/thunderbolt/tunnel.c | 50 ++++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 19 deletions(-) diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c index 829b6ccdd5d4..dcdf9c7a9cae 100644 --- a/drivers/thunderbolt/tunnel.c +++ b/drivers/thunderbolt/tunnel.c @@ -34,9 +34,6 @@ #define TB_DP_AUX_PATH_OUT 1 #define TB_DP_AUX_PATH_IN 2 -#define TB_DMA_PATH_OUT 0 -#define TB_DMA_PATH_IN 1 - static const char * const tb_tunnel_names[] = { "PCI", "DP", "DMA", "USB3" }; #define __TB_TUNNEL_PRINT(level, tunnel, fmt, arg...) \ @@ -829,10 +826,10 @@ static void tb_dma_init_path(struct tb_path *path, unsigned int isb, * @nhi: Host controller port * @dst: Destination null port which the other domain is connected to * @transmit_ring: NHI ring number used to send packets towards the - * other domain + * other domain. Set to %0 if TX path is not needed. * @transmit_path: HopID used for transmitting packets * @receive_ring: NHI ring number used to receive packets from the - * other domain + * other domain. Set to %0 if RX path is not needed. * @reveive_path: HopID used for receiving packets * * Return: Returns a tb_tunnel on success or NULL on failure. @@ -843,10 +840,19 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, int receive_path) { struct tb_tunnel *tunnel; + size_t npaths = 0, i = 0; struct tb_path *path; u32 credits; - tunnel = tb_tunnel_alloc(tb, 2, TB_TUNNEL_DMA); + if (receive_ring) + npaths++; + if (transmit_ring) + npaths++; + + if (WARN_ON(!npaths)) + return NULL; + + tunnel = tb_tunnel_alloc(tb, npaths, TB_TUNNEL_DMA); if (!tunnel) return NULL; @@ -856,22 +862,28 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi, credits = tb_dma_credits(nhi); - path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, "DMA RX"); - if (!path) { - tb_tunnel_free(tunnel); - return NULL; + if (receive_ring) { + path = tb_path_alloc(tb, dst, receive_path, nhi, receive_ring, 0, + "DMA RX"); + if (!path) { + tb_tunnel_free(tunnel); + return NULL; + } + tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL, + credits); + tunnel->paths[i++] = path; } - tb_dma_init_path(path, TB_PATH_NONE, TB_PATH_SOURCE | TB_PATH_INTERNAL, - credits); - tunnel->paths[TB_DMA_PATH_IN] = path; - path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, "DMA TX"); - if (!path) { - tb_tunnel_free(tunnel); - return NULL; + if (transmit_ring) { + path = tb_path_alloc(tb, nhi, transmit_ring, dst, transmit_path, 0, + "DMA TX"); + if (!path) { + tb_tunnel_free(tunnel); + return NULL; + } + tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits); + tunnel->paths[i++] = path; } - tb_dma_init_path(path, TB_PATH_SOURCE, TB_PATH_ALL, credits); - tunnel->paths[TB_DMA_PATH_OUT] = path; return tunnel; } From patchwork Wed Nov 4 14:00:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 11880719 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id 3838AC2D0A3 for ; Wed, 4 Nov 2020 14:00:52 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D240D221E2 for ; Wed, 4 Nov 2020 14:00:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730281AbgKDOAu (ORCPT ); Wed, 4 Nov 2020 09:00:50 -0500 Received: from mga18.intel.com ([134.134.136.126]:48401 "EHLO mga18.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730252AbgKDOAo (ORCPT ); Wed, 4 Nov 2020 09:00:44 -0500 IronPort-SDR: wF+RQaemNDjmdF1IircpR5p16EqKbn6XjKxHvjYpQKWHlm9dVPCV7wxQ3Me9Jxd96Y8MpZRhyJ S9605bcpsrdA== X-IronPort-AV: E=McAfee;i="6000,8403,9794"; a="156992566" X-IronPort-AV: E=Sophos;i="5.77,451,1596524400"; d="scan'208";a="156992566" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga001.fm.intel.com ([10.253.24.23]) by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2020 06:00:37 -0800 IronPort-SDR: BEQPrM7N9DrAqe++bQETEawCLSsl/9lT4w6a8MR3btxZumS1A94V+WbFjLlcKx1phSHi71vUwu UMe/U1Vw8ZLA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,451,1596524400"; d="scan'208";a="426672237" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga001.fm.intel.com with ESMTP; 04 Nov 2020 06:00:34 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id 29BE9956; Wed, 4 Nov 2020 16:00:31 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Lukas Wunner , "David S . Miller" , Mika Westerberg , netdev@vger.kernel.org Subject: [PATCH 08/10] thunderbolt: Add support for end-to-end flow control Date: Wed, 4 Nov 2020 17:00:28 +0300 Message-Id: <20201104140030.6853-9-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com> References: <20201104140030.6853-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org USB4 spec defines end-to-end (E2E) flow control that can be used between hosts to prevent overflow of a RX ring. We previously had this partially implemented but that code was removed with commit 53f13319d131 ("thunderbolt: Get rid of E2E workaround") with the idea that we add it back properly if there ever is need. Now that we are going to add DMA traffic test driver (in subsequent patches) this can be useful. For this reason we modify tb_ring_alloc_rx/tx() so that they accept RING_FLAG_E2E and configure the hardware ring accordingly. The RX side also requires passing TX HopID (e2e_tx_hop) used in the credit grant packets. Signed-off-by: Mika Westerberg Cc: David S. 
Miller --- drivers/net/thunderbolt.c | 2 +- drivers/thunderbolt/ctl.c | 4 ++-- drivers/thunderbolt/nhi.c | 36 ++++++++++++++++++++++++++++++++---- include/linux/thunderbolt.h | 8 +++++++- 4 files changed, 42 insertions(+), 8 deletions(-) diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c index 3160443ef3b9..d7b5f87eaa15 100644 --- a/drivers/net/thunderbolt.c +++ b/drivers/net/thunderbolt.c @@ -866,7 +866,7 @@ static int tbnet_open(struct net_device *dev) eof_mask = BIT(TBIP_PDF_FRAME_END); ring = tb_ring_alloc_rx(xd->tb->nhi, -1, TBNET_RING_SIZE, - RING_FLAG_FRAME, sof_mask, eof_mask, + RING_FLAG_FRAME, 0, sof_mask, eof_mask, tbnet_start_poll, net); if (!ring) { netdev_err(dev, "failed to allocate Rx ring\n"); diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c index 9894b8f63064..1d86e27a0ef3 100644 --- a/drivers/thunderbolt/ctl.c +++ b/drivers/thunderbolt/ctl.c @@ -628,8 +628,8 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, event_cb cb, void *cb_data) if (!ctl->tx) goto err; - ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0xffff, - 0xffff, NULL, NULL); + ctl->rx = tb_ring_alloc_rx(nhi, 0, 10, RING_FLAG_NO_SUSPEND, 0, 0xffff, + 0xffff, NULL, NULL); if (!ctl->rx) goto err; diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index d3b6178d841a..9829e98174c3 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -483,7 +483,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring) static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size, bool transmit, unsigned int flags, - u16 sof_mask, u16 eof_mask, + int e2e_tx_hop, u16 sof_mask, u16 eof_mask, void (*start_poll)(void *), void *poll_data) { @@ -506,6 +506,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size, ring->is_tx = transmit; ring->size = size; ring->flags = flags; + ring->e2e_tx_hop = e2e_tx_hop; ring->sof_mask = sof_mask; ring->eof_mask = eof_mask; ring->head = 0; @@ -550,7 +551,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size, struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size, unsigned int flags) { - return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, NULL, NULL); + return tb_ring_alloc(nhi, hop, size, true, flags, 0, 0, 0, NULL, NULL); } EXPORT_SYMBOL_GPL(tb_ring_alloc_tx); @@ -560,6 +561,7 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx); * @hop: HopID (ring) to allocate. Pass %-1 for automatic allocation. 
* @size: Number of entries in the ring * @flags: Flags for the ring + * @e2e_tx_hop: Transmit HopID when E2E is enabled in @flags * @sof_mask: Mask of PDF values that start a frame * @eof_mask: Mask of PDF values that end a frame * @start_poll: If not %NULL the ring will call this function when an @@ -568,10 +570,11 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx); * @poll_data: Optional data passed to @start_poll */ struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size, - unsigned int flags, u16 sof_mask, u16 eof_mask, + unsigned int flags, int e2e_tx_hop, + u16 sof_mask, u16 eof_mask, void (*start_poll)(void *), void *poll_data) { - return tb_ring_alloc(nhi, hop, size, false, flags, sof_mask, eof_mask, + return tb_ring_alloc(nhi, hop, size, false, flags, e2e_tx_hop, sof_mask, eof_mask, start_poll, poll_data); } EXPORT_SYMBOL_GPL(tb_ring_alloc_rx); @@ -618,6 +621,31 @@ void tb_ring_start(struct tb_ring *ring) ring_iowrite32options(ring, sof_eof_mask, 4); ring_iowrite32options(ring, flags, 0); } + + /* + * Now that the ring valid bit is set we can configure E2E if + * enabled for the ring. + */ + if (ring->flags & RING_FLAG_E2E) { + if (!ring->is_tx) { + u32 hop; + + hop = ring->e2e_tx_hop << REG_RX_OPTIONS_E2E_HOP_SHIFT; + hop &= REG_RX_OPTIONS_E2E_HOP_MASK; + flags |= hop; + + dev_dbg(&ring->nhi->pdev->dev, + "enabling E2E for %s %d with TX HopID %d\n", + RING_TYPE(ring), ring->hop, ring->e2e_tx_hop); + } else { + dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n", + RING_TYPE(ring), ring->hop); + } + + flags |= RING_FLAG_E2E_FLOW_CONTROL; + ring_iowrite32options(ring, flags, 0); + } + ring_interrupt_active(ring, true); ring->running = true; err: diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h index a844fd5d96ab..034dccf93955 100644 --- a/include/linux/thunderbolt.h +++ b/include/linux/thunderbolt.h @@ -481,6 +481,8 @@ struct tb_nhi { * @irq: MSI-X irq number if the ring uses MSI-X. %0 otherwise. * @vector: MSI-X vector number the ring uses (only set if @irq is > 0) * @flags: Ring specific flags + * @e2e_tx_hop: Transmit HopID when E2E is enabled. Only applicable to + * RX ring. For TX ring this should be set to %0. 
* @sof_mask: Bit mask used to detect start of frame PDF * @eof_mask: Bit mask used to detect end of frame PDF * @start_poll: Called when ring interrupt is triggered to start @@ -504,6 +506,7 @@ struct tb_ring { int irq; u8 vector; unsigned int flags; + int e2e_tx_hop; u16 sof_mask; u16 eof_mask; void (*start_poll)(void *data); @@ -514,6 +517,8 @@ struct tb_ring { #define RING_FLAG_NO_SUSPEND BIT(0) /* Configure the ring to be in frame mode */ #define RING_FLAG_FRAME BIT(1) +/* Enable end-to-end flow control */ +#define RING_FLAG_E2E BIT(2) struct ring_frame; typedef void (*ring_cb)(struct tb_ring *, struct ring_frame *, bool canceled); @@ -562,7 +567,8 @@ struct ring_frame { struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size, unsigned int flags); struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size, - unsigned int flags, u16 sof_mask, u16 eof_mask, + unsigned int flags, int e2e_tx_hop, + u16 sof_mask, u16 eof_mask, void (*start_poll)(void *), void *poll_data); void tb_ring_start(struct tb_ring *ring); void tb_ring_stop(struct tb_ring *ring); From patchwork Wed Nov 4 14:00:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Mika Westerberg X-Patchwork-Id: 11880733 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_PATCH,MAILING_LIST_MULTI,SIGNED_OFF_BY, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5F1BDC55179 for ; Wed, 4 Nov 2020 14:01:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EF3CF22280 for ; Wed, 4 Nov 2020 14:01:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730314AbgKDOA7 (ORCPT ); Wed, 4 Nov 2020 09:00:59 -0500 Received: from mga14.intel.com ([192.55.52.115]:57917 "EHLO mga14.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730232AbgKDOAl (ORCPT ); Wed, 4 Nov 2020 09:00:41 -0500 IronPort-SDR: JrlTIzC/mxgtB3AUeiwillKetW1B19INzcokCXU6GC6wgSjU6XHaMJfjW/t7VQxsin4NMiK40w +zYPhrTmK91A== X-IronPort-AV: E=McAfee;i="6000,8403,9794"; a="168431447" X-IronPort-AV: E=Sophos;i="5.77,451,1596524400"; d="scan'208";a="168431447" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 04 Nov 2020 06:00:37 -0800 IronPort-SDR: RHYrOeqbaxpH3t/xVWj1uAFaXquuMqGKfpW5VJiKQTBHOvV2RUbYmjkg29WGPhYok8PO0nzvN9 EIWnMs2AcwYQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.77,451,1596524400"; d="scan'208";a="358060381" Received: from black.fi.intel.com ([10.237.72.28]) by fmsmga002.fm.intel.com with ESMTP; 04 Nov 2020 06:00:34 -0800 Received: by black.fi.intel.com (Postfix, from userid 1001) id 33C8FA17; Wed, 4 Nov 2020 16:00:31 +0200 (EET) From: Mika Westerberg To: linux-usb@vger.kernel.org Cc: Michael Jamet , Yehezkel Bernat , Andreas Noever , Isaac Hazan , Lukas Wunner , "David S . 
Miller" , Mika Westerberg , netdev@vger.kernel.org Subject: [PATCH 09/10] thunderbolt: Add DMA traffic test driver Date: Wed, 4 Nov 2020 17:00:29 +0300 Message-Id: <20201104140030.6853-10-mika.westerberg@linux.intel.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com> References: <20201104140030.6853-1-mika.westerberg@linux.intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Isaac Hazan This driver allows sending DMA traffic over XDomain connection. Specifically over a loopback connection using either a Thunderbolt/USB4 cable that is connected back to the host router port, or a special loopback dongle that has RX and TX lines crossed. This can be useful at manufacturing floor to check whether Thunderbolt/USB4 ports are functional. The driver exposes debugfs directory under the XDomain service that can be used to configure the driver, start the test and check the results. If a loopback dongle is used the steps to send and receive 1000 packets can be done like: # modprobe thunderbolt_dma_test # echo 1000 > /sys/kernel/debug/thunderbolt//dma_test/packets_to_receive # echo 1000 > /sys/kernel/debug/thunderbolt//dma_test/packets_to_send # echo 1 > /sys/kernel/debug/thunderbolt//dma_test/test # cat /sys/kernel/debug/thunderbolt//dma_test/status When a cable is connected back to host then there are two Thunderbolt services, one is configured for receiving (does not matter which one): # modprobe thunderbolt_dma_test # echo 1000 > /sys/kernel/debug/thunderbolt//dma_test/packets_to_receive # echo 1 > /sys/kernel/debug/thunderbolt//dma_test/test The other one for sending: # echo 1000 > /sys/kernel/debug/thunderbolt//dma_test/packets_to_send # echo 1 > /sys/kernel/debug/thunderbolt//dma_test/test Results can be read from both services status attributes. Signed-off-by: Isaac Hazan Signed-off-by: Mika Westerberg --- drivers/thunderbolt/Kconfig | 13 + drivers/thunderbolt/Makefile | 3 + drivers/thunderbolt/dma_test.c | 736 +++++++++++++++++++++++++++++++++ 3 files changed, 752 insertions(+) create mode 100644 drivers/thunderbolt/dma_test.c diff --git a/drivers/thunderbolt/Kconfig b/drivers/thunderbolt/Kconfig index 7fc058f81d00..4bfec8a28064 100644 --- a/drivers/thunderbolt/Kconfig +++ b/drivers/thunderbolt/Kconfig @@ -31,4 +31,17 @@ config USB4_KUNIT_TEST bool "KUnit tests" depends on KUNIT=y +config USB4_DMA_TEST + tristate "DMA traffic test driver" + depends on DEBUG_FS + help + This allows sending and receiving DMA traffic through loopback + connection. Loopback connection can be done by either special + dongle that has TX/RX lines crossed, or by simply connecting a + cable back to the host. Only enable this if you know what you + are doing. Normal users and distro kernels should say N here. + + To compile this driver a module, choose M here. The module will be + called thunderbolt_dma_test. 
+ endif # USB4 diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile index 571537371072..7aa48f6c41d9 100644 --- a/drivers/thunderbolt/Makefile +++ b/drivers/thunderbolt/Makefile @@ -7,3 +7,6 @@ thunderbolt-objs += nvm.o retimer.o quirks.o thunderbolt-${CONFIG_ACPI} += acpi.o thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o thunderbolt-${CONFIG_USB4_KUNIT_TEST} += test.o + +thunderbolt_dma_test-${CONFIG_USB4_DMA_TEST} += dma_test.o +obj-$(CONFIG_USB4_DMA_TEST) += thunderbolt_dma_test.o diff --git a/drivers/thunderbolt/dma_test.c b/drivers/thunderbolt/dma_test.c new file mode 100644 index 000000000000..f924423fa180 --- /dev/null +++ b/drivers/thunderbolt/dma_test.c @@ -0,0 +1,736 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * DMA traffic test driver + * + * Copyright (C) 2020, Intel Corporation + * Authors: Isaac Hazan + * Mika Westerberg + */ + +#include +#include +#include +#include +#include +#include + +#define DMA_TEST_HOPID 8 +#define DMA_TEST_TX_RING_SIZE 64 +#define DMA_TEST_RX_RING_SIZE 256 +#define DMA_TEST_FRAME_SIZE SZ_4K +#define DMA_TEST_DATA_PATTERN 0x0123456789abcdefLL +#define DMA_TEST_MAX_PACKETS 1000 + +enum dma_test_frame_pdf { + DMA_TEST_PDF_FRAME_START = 1, + DMA_TEST_PDF_FRAME_END, +}; + +struct dma_test_frame { + struct dma_test *dma_test; + void *data; + struct ring_frame frame; +}; + +enum dma_test_test_error { + DMA_TEST_NO_ERROR, + DMA_TEST_INTERRUPTED, + DMA_TEST_BUFFER_ERROR, + DMA_TEST_DMA_ERROR, + DMA_TEST_CONFIG_ERROR, + DMA_TEST_SPEED_ERROR, + DMA_TEST_WIDTH_ERROR, + DMA_TEST_BONDING_ERROR, + DMA_TEST_PACKET_ERROR, +}; + +static const char * const dma_test_error_names[] = { + [DMA_TEST_NO_ERROR] = "no errors", + [DMA_TEST_INTERRUPTED] = "interrupted by signal", + [DMA_TEST_BUFFER_ERROR] = "no memory for packet buffers", + [DMA_TEST_DMA_ERROR] = "DMA ring setup failed", + [DMA_TEST_CONFIG_ERROR] = "configuration is not valid", + [DMA_TEST_SPEED_ERROR] = "unexpected link speed", + [DMA_TEST_WIDTH_ERROR] = "unexpected link width", + [DMA_TEST_BONDING_ERROR] = "lane bonding configuration error", + [DMA_TEST_PACKET_ERROR] = "packet check failed", +}; + +enum dma_test_result { + DMA_TEST_NOT_RUN, + DMA_TEST_SUCCESS, + DMA_TEST_FAIL, +}; + +static const char * const dma_test_result_names[] = { + [DMA_TEST_NOT_RUN] = "not run", + [DMA_TEST_SUCCESS] = "success", + [DMA_TEST_FAIL] = "failed", +}; + +/** + * struct dma_test - DMA test device driver private data + * @svc: XDomain service the driver is bound to + * @xd: XDomain the service belongs to + * @rx_ring: Software ring holding RX frames + * @tx_ring: Software ring holding TX frames + * @packets_to_send: Number of packets to send + * @packets_to_receive: Number of packets to receive + * @packets_sent: Actual number of packets sent + * @packets_received: Actual number of packets received + * @link_speed: Expected link speed (Gb/s), %0 to use whatever is negotiated + * @link_width: Expected link width (Gb/s), %0 to use whatever is negotiated + * @crc_errors: Number of CRC errors during the test run + * @buffer_overflow_errors: Number of buffer overflow errors during the test + * run + * @result: Result of the last run + * @error_code: Error code of the last run + * @complete: Used to wait for the Rx to complete + * @lock: Lock serializing access to this structure + * @debugfs_dir: dentry of this dma_test + */ +struct dma_test { + const struct tb_service *svc; + struct tb_xdomain *xd; + struct tb_ring *rx_ring; + struct tb_ring *tx_ring; + unsigned int packets_to_send; + unsigned int 
packets_to_receive; + unsigned int packets_sent; + unsigned int packets_received; + unsigned int link_speed; + unsigned int link_width; + unsigned int crc_errors; + unsigned int buffer_overflow_errors; + enum dma_test_result result; + enum dma_test_test_error error_code; + struct completion complete; + struct mutex lock; + struct dentry *debugfs_dir; +}; + +/* DMA test property directory UUID: 3188cd10-6523-4a5a-a682-fdca07a248d8 */ +static const uuid_t dma_test_dir_uuid = + UUID_INIT(0x3188cd10, 0x6523, 0x4a5a, + 0xa6, 0x82, 0xfd, 0xca, 0x07, 0xa2, 0x48, 0xd8); + +static struct tb_property_dir *dma_test_dir; +static void *dma_test_pattern; + +static void dma_test_free_rings(struct dma_test *dt) +{ + if (dt->rx_ring) { + tb_ring_free(dt->rx_ring); + dt->rx_ring = NULL; + } + if (dt->tx_ring) { + tb_ring_free(dt->tx_ring); + dt->tx_ring = NULL; + } +} + +static int dma_test_start_rings(struct dma_test *dt) +{ + unsigned int flags = RING_FLAG_FRAME; + struct tb_xdomain *xd = dt->xd; + int ret, e2e_tx_hop = 0; + struct tb_ring *ring; + + /* + * If we are both sender and receiver (traffic goes over a + * special loopback dongle) enable E2E flow control. This avoids + * losing packets. + */ + if (dt->packets_to_send && dt->packets_to_receive) + flags |= RING_FLAG_E2E; + + if (dt->packets_to_send) { + ring = tb_ring_alloc_tx(xd->tb->nhi, -1, DMA_TEST_TX_RING_SIZE, + flags); + if (!ring) + return -ENOMEM; + + dt->tx_ring = ring; + e2e_tx_hop = ring->hop; + } + + if (dt->packets_to_receive) { + u16 sof_mask, eof_mask; + + sof_mask = BIT(DMA_TEST_PDF_FRAME_START); + eof_mask = BIT(DMA_TEST_PDF_FRAME_END); + + ring = tb_ring_alloc_rx(xd->tb->nhi, -1, DMA_TEST_RX_RING_SIZE, + flags, e2e_tx_hop, sof_mask, eof_mask, + NULL, NULL); + if (!ring) { + dma_test_free_rings(dt); + return -ENOMEM; + } + + dt->rx_ring = ring; + } + + ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID, + dt->tx_ring ? dt->tx_ring->hop : 0, + DMA_TEST_HOPID, + dt->rx_ring ? 
+	ret = tb_xdomain_enable_paths(dt->xd, DMA_TEST_HOPID,
+				      dt->tx_ring ? dt->tx_ring->hop : 0,
+				      DMA_TEST_HOPID,
+				      dt->rx_ring ? dt->rx_ring->hop : 0);
+	if (ret) {
+		dma_test_free_rings(dt);
+		return ret;
+	}
+
+	if (dt->tx_ring)
+		tb_ring_start(dt->tx_ring);
+	if (dt->rx_ring)
+		tb_ring_start(dt->rx_ring);
+
+	return 0;
+}
+
+static void dma_test_stop_rings(struct dma_test *dt)
+{
+	if (dt->rx_ring)
+		tb_ring_stop(dt->rx_ring);
+	if (dt->tx_ring)
+		tb_ring_stop(dt->tx_ring);
+
+	if (tb_xdomain_disable_paths(dt->xd))
+		dev_warn(&dt->svc->dev, "failed to disable DMA paths\n");
+
+	dma_test_free_rings(dt);
+}
+
+static void dma_test_rx_callback(struct tb_ring *ring, struct ring_frame *frame,
+				 bool canceled)
+{
+	struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
+	struct dma_test *dt = tf->dma_test;
+	struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
+
+	dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
+			 DMA_FROM_DEVICE);
+	kfree(tf->data);
+
+	if (canceled) {
+		kfree(tf);
+		return;
+	}
+
+	dt->packets_received++;
+	dev_dbg(&dt->svc->dev, "packet %u/%u received\n", dt->packets_received,
+		dt->packets_to_receive);
+
+	if (tf->frame.flags & RING_DESC_CRC_ERROR)
+		dt->crc_errors++;
+	if (tf->frame.flags & RING_DESC_BUFFER_OVERRUN)
+		dt->buffer_overflow_errors++;
+
+	kfree(tf);
+
+	if (dt->packets_received == dt->packets_to_receive)
+		complete(&dt->complete);
+}
+
+static int dma_test_submit_rx(struct dma_test *dt, size_t npackets)
+{
+	struct device *dma_dev = tb_ring_dma_device(dt->rx_ring);
+	int i;
+
+	for (i = 0; i < npackets; i++) {
+		struct dma_test_frame *tf;
+		dma_addr_t dma_addr;
+
+		tf = kzalloc(sizeof(*tf), GFP_KERNEL);
+		if (!tf)
+			return -ENOMEM;
+
+		tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+		if (!tf->data) {
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
+					  DMA_FROM_DEVICE);
+		if (dma_mapping_error(dma_dev, dma_addr)) {
+			kfree(tf->data);
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		tf->frame.buffer_phy = dma_addr;
+		tf->frame.callback = dma_test_rx_callback;
+		tf->dma_test = dt;
+		INIT_LIST_HEAD(&tf->frame.list);
+
+		tb_ring_rx(dt->rx_ring, &tf->frame);
+	}
+
+	return 0;
+}
+
+static void dma_test_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
+				 bool canceled)
+{
+	struct dma_test_frame *tf = container_of(frame, typeof(*tf), frame);
+	struct dma_test *dt = tf->dma_test;
+	struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
+
+	dma_unmap_single(dma_dev, tf->frame.buffer_phy, DMA_TEST_FRAME_SIZE,
+			 DMA_TO_DEVICE);
+	kfree(tf->data);
+	kfree(tf);
+}
+
+static int dma_test_submit_tx(struct dma_test *dt, size_t npackets)
+{
+	struct device *dma_dev = tb_ring_dma_device(dt->tx_ring);
+	int i;
+
+	for (i = 0; i < npackets; i++) {
+		struct dma_test_frame *tf;
+		dma_addr_t dma_addr;
+
+		tf = kzalloc(sizeof(*tf), GFP_KERNEL);
+		if (!tf)
+			return -ENOMEM;
+
+		tf->frame.size = 0; /* means 4096 */
+		tf->dma_test = dt;
+
+		tf->data = kzalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+		if (!tf->data) {
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		memcpy(tf->data, dma_test_pattern, DMA_TEST_FRAME_SIZE);
+
+		dma_addr = dma_map_single(dma_dev, tf->data, DMA_TEST_FRAME_SIZE,
+					  DMA_TO_DEVICE);
+		if (dma_mapping_error(dma_dev, dma_addr)) {
+			kfree(tf->data);
+			kfree(tf);
+			return -ENOMEM;
+		}
+
+		tf->frame.buffer_phy = dma_addr;
+		tf->frame.callback = dma_test_tx_callback;
+		tf->frame.sof = DMA_TEST_PDF_FRAME_START;
+		tf->frame.eof = DMA_TEST_PDF_FRAME_END;
+		INIT_LIST_HEAD(&tf->frame.list);
+
+		dt->packets_sent++;
+		dev_dbg(&dt->svc->dev, "packet %u/%u sent\n", dt->packets_sent,
+			dt->packets_to_send);
+
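+		/*
+		 * The ring owns the frame from here on; it is unmapped
+		 * and freed in dma_test_tx_callback() once the NHI has
+		 * transmitted it.
+		 */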
+		tb_ring_tx(dt->tx_ring, &tf->frame);
+	}
+
+	return 0;
+}
+
+#define DMA_TEST_DEBUGFS_ATTR(__fops, __get, __validate, __set)	\
+static int __fops ## _show(void *data, u64 *val)			\
+{									\
+	struct tb_service *svc = data;					\
+	struct dma_test *dt = tb_service_get_drvdata(svc);		\
+	int ret;							\
+									\
+	ret = mutex_lock_interruptible(&dt->lock);			\
+	if (ret)							\
+		return ret;						\
+	__get(dt, val);							\
+	mutex_unlock(&dt->lock);					\
+	return 0;							\
+}									\
+static int __fops ## _store(void *data, u64 val)			\
+{									\
+	struct tb_service *svc = data;					\
+	struct dma_test *dt = tb_service_get_drvdata(svc);		\
+	int ret;							\
+									\
+	ret = __validate(val);						\
+	if (ret)							\
+		return ret;						\
+	ret = mutex_lock_interruptible(&dt->lock);			\
+	if (ret)							\
+		return ret;						\
+	__set(dt, val);							\
+	mutex_unlock(&dt->lock);					\
+	return 0;							\
+}									\
+DEFINE_DEBUGFS_ATTRIBUTE(__fops ## _fops, __fops ## _show,		\
+			 __fops ## _store, "%llu\n")
+
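+/*
+ * Each DMA_TEST_DEBUGFS_ATTR() use below expands to a locked
+ * <name>_show/<name>_store pair plus the <name>_fops that the debugfs
+ * files are created with.
+ */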
+static void lanes_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->link_width;
+}
+
+static int lanes_validate(u64 val)
+{
+	return val > 2 ? -EINVAL : 0;
+}
+
+static void lanes_set(struct dma_test *dt, u64 val)
+{
+	dt->link_width = val;
+}
+DMA_TEST_DEBUGFS_ATTR(lanes, lanes_get, lanes_validate, lanes_set);
+
+static void speed_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->link_speed;
+}
+
+static int speed_validate(u64 val)
+{
+	switch (val) {
+	case 20:
+	case 10:
+	case 0:
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+static void speed_set(struct dma_test *dt, u64 val)
+{
+	dt->link_speed = val;
+}
+DMA_TEST_DEBUGFS_ATTR(speed, speed_get, speed_validate, speed_set);
+
+static void packets_to_receive_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->packets_to_receive;
+}
+
+static int packets_to_receive_validate(u64 val)
+{
+	return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0;
+}
+
+static void packets_to_receive_set(struct dma_test *dt, u64 val)
+{
+	dt->packets_to_receive = val;
+}
+DMA_TEST_DEBUGFS_ATTR(packets_to_receive, packets_to_receive_get,
+		      packets_to_receive_validate, packets_to_receive_set);
+
+static void packets_to_send_get(const struct dma_test *dt, u64 *val)
+{
+	*val = dt->packets_to_send;
+}
+
+static int packets_to_send_validate(u64 val)
+{
+	return val > DMA_TEST_MAX_PACKETS ? -EINVAL : 0;
+}
+
+static void packets_to_send_set(struct dma_test *dt, u64 val)
+{
+	dt->packets_to_send = val;
+}
+DMA_TEST_DEBUGFS_ATTR(packets_to_send, packets_to_send_get,
+		      packets_to_send_validate, packets_to_send_set);
+
+static int dma_test_set_bonding(struct dma_test *dt)
+{
+	switch (dt->link_width) {
+	case 2:
+		return tb_xdomain_lane_bonding_enable(dt->xd);
+	case 1:
+		tb_xdomain_lane_bonding_disable(dt->xd);
+		fallthrough;
+	default:
+		return 0;
+	}
+}
+
+static bool dma_test_validate_config(struct dma_test *dt)
+{
+	if (!dt->packets_to_send && !dt->packets_to_receive)
+		return false;
+	if (dt->packets_to_send && dt->packets_to_receive &&
+	    dt->packets_to_send != dt->packets_to_receive)
+		return false;
+	return true;
+}
+
+static void dma_test_check_errors(struct dma_test *dt, int ret)
+{
+	if (!dt->error_code) {
+		if (dt->link_speed && dt->xd->link_speed != dt->link_speed) {
+			dt->error_code = DMA_TEST_SPEED_ERROR;
+		} else if (dt->link_width &&
+			   dt->xd->link_width != dt->link_width) {
+			dt->error_code = DMA_TEST_WIDTH_ERROR;
+		} else if (dt->packets_to_send != dt->packets_sent ||
+			   dt->packets_to_receive != dt->packets_received ||
+			   dt->crc_errors || dt->buffer_overflow_errors) {
+			dt->error_code = DMA_TEST_PACKET_ERROR;
+		} else {
+			return;
+		}
+	}
+
+	dt->result = DMA_TEST_FAIL;
+}
+
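+/*
+ * Writing 1 to the "test" debugfs attribute runs one iteration:
+ * validate the configuration, apply lane bonding, bring up the DMA
+ * rings, queue the Rx buffers, push the Tx frames and wait for the Rx
+ * side to complete before checking the counters for errors.
+ */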
+static int test_store(void *data, u64 val)
+{
+	struct tb_service *svc = data;
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+	int ret;
+
+	if (val != 1)
+		return -EINVAL;
+
+	ret = mutex_lock_interruptible(&dt->lock);
+	if (ret)
+		return ret;
+
+	dt->packets_sent = 0;
+	dt->packets_received = 0;
+	dt->crc_errors = 0;
+	dt->buffer_overflow_errors = 0;
+	dt->result = DMA_TEST_SUCCESS;
+	dt->error_code = DMA_TEST_NO_ERROR;
+
+	dev_dbg(&svc->dev, "DMA test starting\n");
+	if (dt->link_speed)
+		dev_dbg(&svc->dev, "link_speed: %u Gb/s\n", dt->link_speed);
+	if (dt->link_width)
+		dev_dbg(&svc->dev, "link_width: %u\n", dt->link_width);
+	dev_dbg(&svc->dev, "packets_to_send: %u\n", dt->packets_to_send);
+	dev_dbg(&svc->dev, "packets_to_receive: %u\n", dt->packets_to_receive);
+
+	if (!dma_test_validate_config(dt)) {
+		dev_err(&svc->dev, "invalid test configuration\n");
+		dt->error_code = DMA_TEST_CONFIG_ERROR;
+		goto out_unlock;
+	}
+
+	ret = dma_test_set_bonding(dt);
+	if (ret) {
+		dev_err(&svc->dev, "failed to set lanes\n");
+		dt->error_code = DMA_TEST_BONDING_ERROR;
+		goto out_unlock;
+	}
+
+	ret = dma_test_start_rings(dt);
+	if (ret) {
+		dev_err(&svc->dev, "failed to enable DMA rings\n");
+		dt->error_code = DMA_TEST_DMA_ERROR;
+		goto out_unlock;
+	}
+
+	if (dt->packets_to_receive) {
+		reinit_completion(&dt->complete);
+		ret = dma_test_submit_rx(dt, dt->packets_to_receive);
+		if (ret) {
+			dev_err(&svc->dev, "failed to submit receive buffers\n");
+			dt->error_code = DMA_TEST_BUFFER_ERROR;
+			goto out_stop;
+		}
+	}
+
+	if (dt->packets_to_send) {
+		ret = dma_test_submit_tx(dt, dt->packets_to_send);
+		if (ret) {
+			dev_err(&svc->dev, "failed to submit transmit buffers\n");
+			dt->error_code = DMA_TEST_BUFFER_ERROR;
+			goto out_stop;
+		}
+	}
+
+	if (dt->packets_to_receive) {
+		ret = wait_for_completion_interruptible(&dt->complete);
+		if (ret) {
+			dt->error_code = DMA_TEST_INTERRUPTED;
+			goto out_stop;
+		}
+	}
+
+out_stop:
+	dma_test_stop_rings(dt);
+out_unlock:
+	dma_test_check_errors(dt, ret);
+	mutex_unlock(&dt->lock);
+
+	dev_dbg(&svc->dev, "DMA test %s\n", dma_test_result_names[dt->result]);
+	return ret;
+}
+DEFINE_DEBUGFS_ATTRIBUTE(test_fops, NULL, test_store, "%llu\n");
+
+static int status_show(struct seq_file *s, void *not_used)
+{
+	struct tb_service *svc = s->private;
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+	int ret;
+
+	ret = mutex_lock_interruptible(&dt->lock);
+	if (ret)
+		return ret;
+
+	seq_printf(s, "result: %s\n", dma_test_result_names[dt->result]);
+	if (dt->result == DMA_TEST_NOT_RUN)
+		goto out_unlock;
+
+	seq_printf(s, "packets received: %u\n", dt->packets_received);
+	seq_printf(s, "packets sent: %u\n", dt->packets_sent);
+	seq_printf(s, "CRC errors: %u\n", dt->crc_errors);
+	seq_printf(s, "buffer overflow errors: %u\n",
+		   dt->buffer_overflow_errors);
+	seq_printf(s, "error: %s\n", dma_test_error_names[dt->error_code]);
+
+out_unlock:
+	mutex_unlock(&dt->lock);
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(status);
+
+static void dma_test_debugfs_init(struct tb_service *svc)
+{
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+
+	dt->debugfs_dir = debugfs_create_dir("dma_test", svc->debugfs_dir);
+
+	debugfs_create_file("lanes", 0600, dt->debugfs_dir, svc, &lanes_fops);
+	debugfs_create_file("speed", 0600, dt->debugfs_dir, svc, &speed_fops);
+	debugfs_create_file("packets_to_receive", 0600, dt->debugfs_dir, svc,
+			    &packets_to_receive_fops);
+	debugfs_create_file("packets_to_send", 0600, dt->debugfs_dir, svc,
+			    &packets_to_send_fops);
+	debugfs_create_file("status", 0400, dt->debugfs_dir, svc, &status_fops);
+	debugfs_create_file("test", 0200, dt->debugfs_dir, svc, &test_fops);
+}
+
+static int dma_test_probe(struct tb_service *svc, const struct tb_service_id *id)
+{
+	struct tb_xdomain *xd = tb_service_parent(svc);
+	struct dma_test *dt;
+
+	dt = devm_kzalloc(&svc->dev, sizeof(*dt), GFP_KERNEL);
+	if (!dt)
+		return -ENOMEM;
+
+	dt->svc = svc;
+	dt->xd = xd;
+	mutex_init(&dt->lock);
+	init_completion(&dt->complete);
+
+	tb_service_set_drvdata(svc, dt);
+	dma_test_debugfs_init(svc);
+
+	return 0;
+}
+
+static void dma_test_remove(struct tb_service *svc)
+{
+	struct dma_test *dt = tb_service_get_drvdata(svc);
+
+	mutex_lock(&dt->lock);
+	debugfs_remove_recursive(dt->debugfs_dir);
+	mutex_unlock(&dt->lock);
+}
+
+static int __maybe_unused dma_test_suspend(struct device *dev)
+{
+	/*
+	 * No need to do anything special here. If userspace is writing
+	 * to the test attribute when suspend is started, it comes out
+	 * from wait_for_completion_interruptible() with -ERESTARTSYS and
+	 * the DMA test fails, tearing down the rings. Once userspace is
+	 * thawed the kernel restarts the write syscall, effectively
+	 * re-running the test.
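+	 * (Userspace never observes the -ERESTARTSYS itself; the
+	 * restart of the interrupted write is transparent once the
+	 * task is thawed.)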
+	 */
+	return 0;
+}
+
+static int __maybe_unused dma_test_resume(struct device *dev)
+{
+	return 0;
+}
+
+static const struct dev_pm_ops dma_test_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(dma_test_suspend, dma_test_resume)
+};
+
+static const struct tb_service_id dma_test_ids[] = {
+	{ TB_SERVICE("dma_test", 1) },
+	{ },
+};
+MODULE_DEVICE_TABLE(tbsvc, dma_test_ids);
+
+static struct tb_service_driver dma_test_driver = {
+	.driver = {
+		.owner = THIS_MODULE,
+		.name = "thunderbolt_dma_test",
+		.pm = &dma_test_pm_ops,
+	},
+	.probe = dma_test_probe,
+	.remove = dma_test_remove,
+	.id_table = dma_test_ids,
+};
+
+static int __init dma_test_init(void)
+{
+	u64 data_value = DMA_TEST_DATA_PATTERN;
+	int i, ret;
+
+	dma_test_pattern = kmalloc(DMA_TEST_FRAME_SIZE, GFP_KERNEL);
+	if (!dma_test_pattern)
+		return -ENOMEM;
+
+	/* Fill the whole 4 KB frame with an incrementing 64-bit pattern */
+	for (i = 0; i < DMA_TEST_FRAME_SIZE / sizeof(data_value); i++)
+		((u64 *)dma_test_pattern)[i] = data_value++;
+
+	dma_test_dir = tb_property_create_dir(&dma_test_dir_uuid);
+	if (!dma_test_dir) {
+		ret = -ENOMEM;
+		goto err_free_pattern;
+	}
+
+	tb_property_add_immediate(dma_test_dir, "prtcid", 1);
+	tb_property_add_immediate(dma_test_dir, "prtcvers", 1);
+	tb_property_add_immediate(dma_test_dir, "prtcrevs", 0);
+	tb_property_add_immediate(dma_test_dir, "prtcstns", 0);
+
+	ret = tb_register_property_dir("dma_test", dma_test_dir);
+	if (ret)
+		goto err_free_dir;
+
+	ret = tb_register_service_driver(&dma_test_driver);
+	if (ret)
+		goto err_unregister_dir;
+
+	return 0;
+
+err_unregister_dir:
+	tb_unregister_property_dir("dma_test", dma_test_dir);
+err_free_dir:
+	tb_property_free_dir(dma_test_dir);
+err_free_pattern:
+	kfree(dma_test_pattern);
+
+	return ret;
+}
+module_init(dma_test_init);
+
+static void __exit dma_test_exit(void)
+{
+	tb_unregister_service_driver(&dma_test_driver);
+	tb_unregister_property_dir("dma_test", dma_test_dir);
+	tb_property_free_dir(dma_test_dir);
+	kfree(dma_test_pattern);
+}
+module_exit(dma_test_exit);
+
+MODULE_AUTHOR("Isaac Hazan <isaac.hazan@intel.com>");
+MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
+MODULE_DESCRIPTION("DMA traffic test driver");
+MODULE_LICENSE("GPL v2");
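For reference, a minimal userspace sketch of driving the interface added above: set the packet counts, trigger a run through the "test" attribute (the write blocks until the run finishes), then read back "status". This is not part of the patch; the "0-1.0" service directory name is a made-up placeholder that depends on how the XDomain service enumerates, and it assumes debugfs is mounted and the thunderbolt_dma_test module is loaded.

/* dma_test_run.c - hypothetical loopback test runner */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Assumed path; adjust to the actual service directory. */
#define DMA_TEST_DIR "/sys/kernel/debug/thunderbolt/0-1.0/dma_test"

static int write_attr(const char *attr, const char *val)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path), DMA_TEST_DIR "/%s", attr);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) != (ssize_t)strlen(val)) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	char status[512];
	ssize_t n;
	int fd;

	/* Loopback setup: this end both sends and receives. */
	if (write_attr("packets_to_send", "1000") ||
	    write_attr("packets_to_receive", "1000"))
		return 1;

	/* Kick off one run; returns once Rx completes or a signal hits. */
	if (write_attr("test", "1"))
		return 1;

	fd = open(DMA_TEST_DIR "/status", O_RDONLY);
	if (fd < 0)
		return 1;
	n = read(fd, status, sizeof(status) - 1);
	close(fd);
	if (n < 0)
		return 1;
	status[n] = '\0';
	fputs(status, stdout);
	return 0;
}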
From patchwork Wed Nov 4 14:00:30 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mika Westerberg <mika.westerberg@linux.intel.com>
X-Patchwork-Id: 11880727
X-Patchwork-Delegate: kuba@kernel.org
From: Mika Westerberg <mika.westerberg@linux.intel.com>
To: linux-usb@vger.kernel.org
Cc: Michael Jamet <michael.jamet@intel.com>,
 Yehezkel Bernat <YehezkelShB@gmail.com>,
 Andreas Noever <andreas.noever@gmail.com>,
 Isaac Hazan <isaac.hazan@intel.com>,
 Lukas Wunner <lukas@wunner.de>,
 "David S . Miller" <davem@davemloft.net>,
 Mika Westerberg <mika.westerberg@linux.intel.com>,
 netdev@vger.kernel.org
Subject: [PATCH 10/10] MAINTAINERS: Add Isaac as maintainer of Thunderbolt
 DMA traffic test driver
Date: Wed, 4 Nov 2020 17:00:30 +0300
Message-Id: <20201104140030.6853-11-mika.westerberg@linux.intel.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201104140030.6853-1-mika.westerberg@linux.intel.com>
References: <20201104140030.6853-1-mika.westerberg@linux.intel.com>
MIME-Version: 1.0
Precedence: bulk
List-ID: <netdev.vger.kernel.org>
X-Mailing-List: netdev@vger.kernel.org

From: Isaac Hazan <isaac.hazan@intel.com>

I will be maintaining the Thunderbolt DMA traffic test driver.

Signed-off-by: Isaac Hazan <isaac.hazan@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index b516bb34a8d5..fcd84749defd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17371,6 +17371,12 @@ W:	http://thinkwiki.org/wiki/Ibm-acpi
 T:	git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git
 F:	drivers/platform/x86/thinkpad_acpi.c
 
+THUNDERBOLT DMA TRAFFIC TEST DRIVER
+M:	Isaac Hazan <isaac.hazan@intel.com>
+L:	linux-usb@vger.kernel.org
+S:	Maintained
+F:	drivers/thunderbolt/dma_test.c
+
 THUNDERBOLT DRIVER
 M:	Andreas Noever <andreas.noever@gmail.com>
 M:	Michael Jamet <michael.jamet@intel.com>