From patchwork Fri Sep 27 14:44:21 2019
X-Patchwork-Submitter: Karol Herbst
X-Patchwork-Id: 11164663
From: Karol Herbst
To: linux-kernel@vger.kernel.org
Subject: [RFC PATCH] pci: prevent putting pcie devices into lower device states on certain intel bridges
Date: Fri, 27 Sep 2019 16:44:21 +0200
Message-Id: <20190927144421.22608-1-kherbst@redhat.com>
List-Id: Direct Rendering Infrastructure - Development
Cc: Karol Herbst, linux-pci@vger.kernel.org, dri-devel@lists.freedesktop.org,
    nouveau@lists.freedesktop.org, Bjorn Helgaas

Fixes runpm breakage mainly seen on Nvidia GPUs, which are otherwise unable
to resume. Runpm works reliably with this workaround applied.

RFC comment:
We are quite sure that more bridges are affected by this, but so far I have
only tested it on my own machine. I stress-tested runpm by running 5000
runpm cycles with this patch applied and never saw it fail.

I mainly want to start a discussion on whether this is a feasible workaround
or whether we need something better.

I am also sure that the nouveau driver itself isn't at fault, as I am able to
reproduce the same issue by poking PCI registers on the PCIe bridge to put
the GPU into D3cold, the same way the ACPI code does it.

I've written a little Python script to reproduce this issue without needing
to load nouveau:
https://raw.githubusercontent.com/karolherbst/pci-stub-runpm/master/nv_runpm_bug_test.py

Signed-off-by: Karol Herbst
Cc: Bjorn Helgaas
Cc: Lyude Paul
Cc: linux-pci@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: nouveau@lists.freedesktop.org
---
 drivers/pci/pci.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 088fcdc8d2b4..9dbd29ced1ac 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -799,6 +799,42 @@ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
 	return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
 }
 
+/*
+ * Some Intel bridges cause serious issues with runpm if the client device
+ * is put into D1/D2/D3hot before putting the client into D3cold via
+ * platform means (generally ACPI).
+ *
+ * Skipping this makes runpm work perfectly fine on such devices.
+ *
+ * As far as we know, only Skylake and Kaby Lake SoCs are affected.
+ */
+static unsigned short intel_broken_d3_bridges[] = {
+	/* kbl */
+	0x1901,
+};
+
+static inline bool intel_broken_pci_pm(struct pci_bus *bus)
+{
+	struct pci_dev *bridge;
+	int i;
+
+	if (!bus || !bus->self)
+		return false;
+
+	bridge = bus->self;
+	if (bridge->vendor != PCI_VENDOR_ID_INTEL)
+		return false;
+
+	for (i = 0; i < ARRAY_SIZE(intel_broken_d3_bridges); i++) {
+		if (bridge->device == intel_broken_d3_bridges[i]) {
+			pci_err(bridge, "found broken intel bridge\n");
+			return true;
+		}
+	}
+
+	return false;
+}
+
 /**
  * pci_raw_set_power_state - Use PCI PM registers to set the power state of
  * given PCI device
@@ -827,6 +863,9 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
 	if (state < PCI_D0 || state > PCI_D3hot)
 		return -EINVAL;
 
+	if (state != PCI_D0 && intel_broken_pci_pm(dev->bus))
+		return 0;
+
 	/*
 	 * Validate current state:
 	 * Can enter D0 from any state, but if we can only go deeper
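
For anyone who wants to poke at this without pulling the full script, here is
a rough sketch (an illustration only, not the linked reproducer; the BDF is a
placeholder) of the kind of config-space write involved: it walks the
endpoint's capability list through sysfs and flips the PowerState field of
its PMCSR to D3hot, i.e. the very transition the hunk above skips on the
affected bridges. The D3cold/platform part of the sequence is left out here.

#!/usr/bin/env python3
# Illustration only: put a PCI endpoint into D3hot by writing the PowerState
# field of its PMCSR through the sysfs config space file. Needs root; the
# BDF below is a placeholder for the GPU under test.
import struct

GPU_BDF = "0000:01:00.0"  # placeholder, adjust to the device under test
CFG = "/sys/bus/pci/devices/%s/config" % GPU_BDF

def read8(f, off):
    f.seek(off)
    return f.read(1)[0]

def read16(f, off):
    f.seek(off)
    return struct.unpack("<H", f.read(2))[0]

def write16(f, off, val):
    f.seek(off)
    f.write(struct.pack("<H", val))

def find_pm_cap(f):
    # walk the standard capability list, looking for cap ID 0x01 (PCI PM)
    pos = read8(f, 0x34)
    while pos:
        if read8(f, pos) == 0x01:
            return pos
        pos = read8(f, pos + 1)
    raise RuntimeError("no PM capability found")

with open(CFG, "r+b", buffering=0) as f:
    pm = find_pm_cap(f)
    pmcsr = read16(f, pm + 4)                 # PMCSR lives at cap + 4
    write16(f, pm + 4, (pmcsr & ~0x3) | 0x3)  # PowerState bits 1:0 = D3hot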