From patchwork Fri Jan 15 16:19:06 2021
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 12023193
From: Mathias Nyman
To:
Cc: Mathias Nyman, stable@vger.kernel.org, Ross Zwisler
Subject: [PATCH 1/2] xhci: make sure TRB is fully written before giving it to the controller
Date: Fri, 15 Jan 2021 18:19:06 +0200
Message-Id: <20210115161907.2875631-2-mathias.nyman@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210115161907.2875631-1-mathias.nyman@linux.intel.com>
References: <20210115161907.2875631-1-mathias.nyman@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

Once the command ring doorbell is rung, the xHC controller will parse all
command TRBs on the command ring that have their cycle bit set properly.
If the driver has just started writing the next command TRB to the ring
when the hardware finishes the previous one, the hardware might fetch an
incomplete TRB as long as its cycle bit happens to be set correctly.

A command TRB is 16 bytes (128 bits) long. The driver writes the command
TRB in four 32-bit chunks, with the chunk containing the cycle bit last.
This does not, however, guarantee that the chunks actually get written in
that order.

This was detected in stress testing when canceling URBs with several
connected USB devices. Two consecutive "Set TR Dequeue Pointer" commands
got queued right after each other, and the second one was only partially
written when the controller parsed it, causing the dequeue pointer to be
set to bogus values. This showed up as error messages:

  "Mismatch between completed Set TR Deq Ptr command & xHCI internal state"

The solution is to add a write memory barrier before writing the cycle bit.
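For illustration only, not part of the patch: a minimal sketch of the
publish-with-barrier pattern the fix relies on. The struct and function
names below are hypothetical stand-ins; only wmb() and cpu_to_le32() are
real kernel interfaces, and the actual change lands in queue_trb() in the
diff below.

/*
 * Illustrative sketch. "ring_entry" and "publish_entry" are hypothetical;
 * the real driver does the equivalent in queue_trb() in
 * drivers/usb/host/xhci-ring.c.
 */
#include <linux/types.h>	/* __le32, u32 */
#include <asm/byteorder.h>	/* cpu_to_le32() */
#include <asm/barrier.h>	/* wmb() */

struct ring_entry {
	__le32 field[4];	/* field[3] carries the cycle bit */
};

static void publish_entry(struct ring_entry *e,
			  u32 f0, u32 f1, u32 f2, u32 f3_with_cycle)
{
	e->field[0] = cpu_to_le32(f0);
	e->field[1] = cpu_to_le32(f1);
	e->field[2] = cpu_to_le32(f2);

	/*
	 * Order the payload stores before the store that flips the cycle
	 * bit; otherwise the controller may see a valid cycle bit while
	 * field[0..2] are still stale.
	 */
	wmb();

	e->field[3] = cpu_to_le32(f3_with_cycle);
}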
Cc:
Tested-by: Ross Zwisler
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-ring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 5677b81c0915..cf0c93a90200 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2931,6 +2931,8 @@ static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
 	trb->field[0] = cpu_to_le32(field1);
 	trb->field[1] = cpu_to_le32(field2);
 	trb->field[2] = cpu_to_le32(field3);
+	/* make sure TRB is fully written before giving it to the controller */
+	wmb();
 	trb->field[3] = cpu_to_le32(field4);
 
 	trace_xhci_queue_trb(ring, trb);

From patchwork Fri Jan 15 16:19:07 2021
X-Patchwork-Submitter: Mathias Nyman
X-Patchwork-Id: 12023195
From: Mathias Nyman
To:
Cc: JC Kuo, stable@vger.kernel.org, Mathias Nyman
Subject: [PATCH 2/2] xhci: tegra: Delay for disabling LFPS detector
Date: Fri, 15 Jan 2021 18:19:07 +0200
Message-Id: <20210115161907.2875631-3-mathias.nyman@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210115161907.2875631-1-mathias.nyman@linux.intel.com>
References: <20210115161907.2875631-1-mathias.nyman@linux.intel.com>
X-Mailing-List: linux-usb@vger.kernel.org

From: JC Kuo

Occasionally, we are seeing some SuperSpeed devices resume right after
being directed to U3. This commit adds a 500us delay to ensure the LFPS
detector is disabled before sending the ACK to firmware.

Without the delay, errors like the following are seen when the controller
enters ELPG:

[   16.099363] tegra-xusb 70090000.usb: entering ELPG
[   16.104343] tegra-xusb 70090000.usb: 2-1 isn't suspended: 0x0c001203
[   16.114576] tegra-xusb 70090000.usb: not all ports suspended: -16
[   16.120789] tegra-xusb 70090000.usb: entering ELPG failed

The register write passes through a few flop stages of a 32 kHz clock
domain (one clock period is roughly 31us, so 500us spans well over a
dozen cycles). An NVIDIA ASIC designer reviewed the RTL and suggested
the 500us delay.
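For illustration only, not part of the patch: a sketch of the
"write, wait for the slow clock domain, then acknowledge" pattern the
diff below applies. All names here are hypothetical placeholders; only
usleep_range() is the real kernel API, and the 500-1000us range mirrors
the values used in the patch.

/*
 * Illustrative sketch. struct tegra_ctx, disable_lfps_detector() and
 * ack_firmware_message() are hypothetical stand-ins for the driver's
 * register write and mailbox reply.
 */
#include <linux/delay.h>	/* usleep_range() */

struct tegra_ctx;
void disable_lfps_detector(struct tegra_ctx *ctx);
void ack_firmware_message(struct tegra_ctx *ctx);

static void handle_disable_request(struct tegra_ctx *ctx)
{
	/* Register write that must cross into the 32 kHz clock domain. */
	disable_lfps_detector(ctx);

	/*
	 * Give the write time to propagate through the slow clock
	 * domain's flop stages before telling firmware we are done.
	 */
	usleep_range(500, 1000);

	ack_firmware_message(ctx);
}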
Cc: stable@vger.kernel.org
Signed-off-by: JC Kuo
Signed-off-by: Mathias Nyman
---
 drivers/usb/host/xhci-tegra.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
index 934be1686352..50bb91b6a4b8 100644
--- a/drivers/usb/host/xhci-tegra.c
+++ b/drivers/usb/host/xhci-tegra.c
@@ -623,6 +623,13 @@ static void tegra_xusb_mbox_handle(struct tegra_xusb *tegra,
 							 enable);
 		if (err < 0)
 			break;
+
+		/*
+		 * wait 500us for LFPS detector to be disabled before
+		 * sending ACK
+		 */
+		if (!enable)
+			usleep_range(500, 1000);
 	}
 
 	if (err < 0) {