From patchwork Fri Jan 17 18:47:07 2025
From: Connor Abbott <cwabbott0@gmail.com>
Date: Fri, 17 Jan 2025 13:47:07 -0500
Subject: [PATCH 1/3] iommu/arm-smmu: Fix spurious interrupts with stall-on-fault
Message-Id: <20250117-msm-gpu-fault-fixes-next-v1-1-bc9b332b5d0b@gmail.com>
References: <20250117-msm-gpu-fault-fixes-next-v1-0-bc9b332b5d0b@gmail.com>
In-Reply-To: <20250117-msm-gpu-fault-fixes-next-v1-0-bc9b332b5d0b@gmail.com>
To: Rob Clark, Will Deacon, Robin Murphy, Joerg Roedel, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten
Cc: iommu@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, freedreno@lists.freedesktop.org, Connor Abbott
On some SMMUv2 implementations, including MMU-500, SMMU_CBn_FSR.SS asserts
an interrupt. The only way to clear that bit is to resume the transaction by
writing SMMU_CBn_RESUME, but typically resuming the transaction requires
complex operations (copying in pages, etc.) that can't be done in IRQ
context. drm/msm already has this problem: its fault handler sometimes
schedules a job to dump the GPU state and doesn't resume translation until
that job completes.

Work around this by disabling context fault interrupts until after the
transaction is resumed. Because other context banks can share an IRQ line,
we may still get an interrupt intended for another context bank, but in
that case only SMMU_CBn_FSR.SS will be asserted and we can skip it, which
is accomplished by removing the SS bit from ARM_SMMU_CB_FSR_FAULT.

Signed-off-by: Connor Abbott
---
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 15 +++++++++++++-
 drivers/iommu/arm/arm-smmu/arm-smmu.c      | 32 ++++++++++++++++++++++++++++++
 drivers/iommu/arm/arm-smmu/arm-smmu.h      |  2 +-
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index 59d02687280e8d37b5e944619fcfe4ebd1bd6926..ee2fdf7e79a6d04bc2700e454253c96b573c5569 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -125,12 +125,25 @@ static void qcom_adreno_smmu_resume_translation(const void *cookie, bool termina
 	struct arm_smmu_domain *smmu_domain = (void *)cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	u32 reg = 0;
+	u32 reg = 0, sctlr;
+	unsigned long flags;
 
 	if (terminate)
 		reg |= ARM_SMMU_RESUME_TERMINATE;
 
+	spin_lock_irqsave(&smmu_domain->stall_lock, flags);
+
 	arm_smmu_cb_write(smmu, cfg->cbndx, ARM_SMMU_CB_RESUME, reg);
+
+	/*
+	 * Re-enable interrupts after they were disabled by
+	 * arm_smmu_context_fault().
+	 */
+	sctlr = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR);
+	sctlr |= ARM_SMMU_SCTLR_CFIE;
+	arm_smmu_cb_write(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR, sctlr);
+
+	spin_unlock_irqrestore(&smmu_domain->stall_lock, flags);
 }
 
 static void qcom_adreno_smmu_set_prr_bit(const void *cookie, bool set)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 79afc92e1d8b984dd35c469a3f283ad0c78f3d26..c92de760940ee2872f22dbe1b2519e02766aa143 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -457,12 +457,43 @@ static irqreturn_t arm_smmu_context_fault(int irq, void *dev)
 				      DEFAULT_RATELIMIT_BURST);
 	int idx = smmu_domain->cfg.cbndx;
 	int ret;
+	unsigned long flags;
 
 	arm_smmu_read_context_fault_info(smmu, idx, &cfi);
 
 	if (!(cfi.fsr & ARM_SMMU_CB_FSR_FAULT))
 		return IRQ_NONE;
 
+	/*
+	 * On some implementations FSR.SS asserts a context fault
+	 * interrupt. We do not want this behavior, because resolving the
+	 * original context fault typically requires operations that cannot be
+	 * performed in IRQ context but leaving the stall unacknowledged will
+	 * immediately lead to another spurious interrupt as FSR.SS is still
+	 * set. Work around this by disabling interrupts for this context bank.
+	 * It's expected that interrupts are re-enabled after resuming the
+	 * translation.
+	 *
+	 * We have to do this before report_iommu_fault() so that we don't
+	 * leave interrupts disabled in case the downstream user decides the
+	 * fault can be resolved inside its fault handler.
+	 *
+	 * There is a possible race if there are multiple context banks sharing
+	 * the same interrupt and both signal an interrupt in between writing
+	 * RESUME and SCTLR. We could disable interrupts here before we
+	 * re-enable them in the resume handler, leaving interrupts enabled.
+	 * Lock the write to serialize it with the resume handler.
+	 */
+	if (cfi.fsr & ARM_SMMU_CB_FSR_SS) {
+		u32 val;
+
+		spin_lock_irqsave(&smmu_domain->stall_lock, flags);
+		val = arm_smmu_cb_read(smmu, idx, ARM_SMMU_CB_SCTLR);
+		val &= ~ARM_SMMU_SCTLR_CFIE;
+		arm_smmu_cb_write(smmu, idx, ARM_SMMU_CB_SCTLR, val);
+		spin_unlock_irqrestore(&smmu_domain->stall_lock, flags);
+	}
+
 	ret = report_iommu_fault(&smmu_domain->domain, NULL, cfi.iova,
 		cfi.fsynr & ARM_SMMU_CB_FSYNR0_WNR ?
 		IOMMU_FAULT_WRITE : IOMMU_FAULT_READ);
@@ -921,6 +952,7 @@ static struct iommu_domain *arm_smmu_domain_alloc_paging(struct device *dev)
 
 	mutex_init(&smmu_domain->init_mutex);
 	spin_lock_init(&smmu_domain->cb_lock);
+	spin_lock_init(&smmu_domain->stall_lock);
 
 	return &smmu_domain->domain;
 }

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.h b/drivers/iommu/arm/arm-smmu/arm-smmu.h
index 2dbf3243b5ad2db01e17fb26c26c838942a491be..153fac131b2484d468fd482ffbf130efc8cfb8f6 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.h
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.h
@@ -216,7 +216,6 @@ enum arm_smmu_cbar_type {
 					 ARM_SMMU_CB_FSR_TLBLKF)
 
 #define ARM_SMMU_CB_FSR_FAULT		(ARM_SMMU_CB_FSR_MULTI |	\
-					 ARM_SMMU_CB_FSR_SS |		\
 					 ARM_SMMU_CB_FSR_UUT |		\
 					 ARM_SMMU_CB_FSR_EF |		\
 					 ARM_SMMU_CB_FSR_PF |		\
@@ -384,6 +383,7 @@ struct arm_smmu_domain {
 	enum arm_smmu_domain_stage	stage;
 	struct mutex			init_mutex; /* Protects smmu pointer */
 	spinlock_t			cb_lock; /* Serialises ATS1* ops and TLB syncs */
+	spinlock_t			stall_lock;
 	struct iommu_domain		domain;
 };

From patchwork Fri Jan 17 18:47:08 2025
From: Connor Abbott <cwabbott0@gmail.com>
Date: Fri, 17 Jan 2025 13:47:08 -0500
Subject: [PATCH 2/3] iommu/arm-smmu-qcom: Make set_stall work when the device is on
Message-Id: <20250117-msm-gpu-fault-fixes-next-v1-2-bc9b332b5d0b@gmail.com>
References: <20250117-msm-gpu-fault-fixes-next-v1-0-bc9b332b5d0b@gmail.com>
In-Reply-To: <20250117-msm-gpu-fault-fixes-next-v1-0-bc9b332b5d0b@gmail.com>
To: Rob Clark, Will Deacon, Robin Murphy, Joerg Roedel, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten
Cc: iommu@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, freedreno@lists.freedesktop.org, Connor Abbott

Up until now we have only called the set_stall callback during
initialization, when the device is off.
But we will soon start calling it to
temporarily disable stall-on-fault while the device is on, so handle that
case by checking whether the device is on and, if so, writing SCTLR.

Signed-off-by: Connor Abbott
---
 drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 31 +++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
index ee2fdf7e79a6d04bc2700e454253c96b573c5569..54be27f7b49d78b7542fd714d6aade2b23c65fc0 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
@@ -112,12 +112,37 @@ static void qcom_adreno_smmu_set_stall(const void *cookie, bool enabled)
 {
 	struct arm_smmu_domain *smmu_domain = (void *)cookie;
 	struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
-	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu_domain->smmu);
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct qcom_smmu *qsmmu = to_qcom_smmu(smmu);
+	u32 mask = BIT(cfg->cbndx);
+	bool stall_changed = !!(qsmmu->stall_enabled & mask) != enabled;
 
 	if (enabled)
-		qsmmu->stall_enabled |= BIT(cfg->cbndx);
+		qsmmu->stall_enabled |= mask;
 	else
-		qsmmu->stall_enabled &= ~BIT(cfg->cbndx);
+		qsmmu->stall_enabled &= ~mask;
+
+	/*
+	 * If the device is on and we changed the setting, update the register.
+	 */
+	if (stall_changed && pm_runtime_get_if_active(smmu->dev) > 0) {
+		u32 reg = arm_smmu_cb_read(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR);
+
+		if (enabled)
+			reg |= ARM_SMMU_SCTLR_CFCFG;
+		else
+			reg &= ~ARM_SMMU_SCTLR_CFCFG;
+
+		arm_smmu_cb_write(smmu, cfg->cbndx, ARM_SMMU_CB_SCTLR, reg);
+
+		/*
+		 * If doing this in the context fault handler, make sure the
+		 * update lands before we acknowledge the fault.
+		 */
+		wmb();
+
+		pm_runtime_put_autosuspend(smmu->dev);
+	}
 }
 
 static void qcom_adreno_smmu_resume_translation(const void *cookie, bool terminate)

From patchwork Fri Jan 17 18:47:09 2025
From: Connor Abbott <cwabbott0@gmail.com>
Date: Fri, 17 Jan 2025 13:47:09 -0500
Subject: [PATCH 3/3] drm/msm: Temporarily disable stall-on-fault after a page fault
Message-Id: <20250117-msm-gpu-fault-fixes-next-v1-3-bc9b332b5d0b@gmail.com>
References: <20250117-msm-gpu-fault-fixes-next-v1-0-bc9b332b5d0b@gmail.com>
In-Reply-To: <20250117-msm-gpu-fault-fixes-next-v1-0-bc9b332b5d0b@gmail.com>
To: Rob Clark, Will Deacon, Robin Murphy, Joerg Roedel, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten
Cc: iommu@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, freedreno@lists.freedesktop.org, Connor Abbott

When things go wrong, the GPU is capable of quickly generating millions of
faulting translation requests per second. When that happens, in the
stall-on-fault model each access will stall until it wins the race to
signal the fault and the RESUME register is written. This slows processing
of page faults to a crawl, as the GPU can generate faults much faster than
the CPU can acknowledge them. It also means that all available resources in
the SMMU are saturated waiting for the stalled transactions, so that other
transactions, such as those generated by the GMU (which shares a context
bank with the GPU), cannot proceed. This causes a GMU watchdog timeout,
which leads to a failed reset (because GX cannot collapse when there is a
transaction pending) and a permanently hung GPU.

On older platforms with qcom,smmu-v2, it seems that when one transaction is
stalled subsequent faulting transactions are terminated, which avoids this
problem, but the MMU-500 follows the spec here.

To work around this problem, disable stall-on-fault as soon as we get a
page fault, until a cooldown period after page faults stop. This allows the
GMU some guaranteed time to continue working. We also keep stall-on-fault
disabled so long as the current devcoredump hasn't been deleted, because in
that case we likely won't capture another one if there's a fault.

After this commit HFI messages still occasionally time out, because the
crashdump handler doesn't run fast enough to let the GMU resume, but the
driver seems to recover from it. This will probably go away after the HFI
timeout is increased.
Signed-off-by: Connor Abbott
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c   |  2 ++
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 +++
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 56 ++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h | 21 +++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c         |  9 ++++++
 drivers/gpu/drm/msm/msm_mmu.h           |  1 +
 6 files changed, 92 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 71dca78cd7a5324e9ff5b14f173e2209fa42e196..a559e47af5b549e154fa6c32ef8879dd856531a2 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -131,6 +131,8 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_gpu_enable_iommu_stall(adreno_gpu);
+
 	if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) {
 		ring->cur_ctx_seqno = 0;
 		a5xx_submit_in_rb(gpu, submit);

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0ae29a7c8a4d3f74236a35cc919f69d5c0a384a0..0e63ee62d3eff3e274bae375430efbdf6f8dccf0 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -212,6 +212,8 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_gpu_enable_iommu_stall(adreno_gpu);
+
 	a6xx_set_pagetable(a6xx_gpu, ring, submit);
 
 	get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0),
@@ -335,6 +337,8 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 	struct msm_ringbuffer *ring = submit->ring;
 	unsigned int i, ibs = 0;
 
+	adreno_gpu_enable_iommu_stall(adreno_gpu);
+
 	/*
 	 * Toggle concurrent binning for pagetable switch and set the thread to
 	 * BR since only it can execute the pagetable switch packets.

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 1238f326597808eb28b4c6822cbd41a26e555eb9..6bf834d075219193cce187ec5f55aa691121aad3 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -246,16 +246,65 @@ u64 adreno_private_address_space_size(struct msm_gpu *gpu)
 	return SZ_4G;
 }
 
+void adreno_gpu_enable_iommu_stall(struct adreno_gpu *adreno_gpu)
+{
+	struct msm_gpu *gpu = &adreno_gpu->base;
+	unsigned long flags;
+
+	/*
+	 * Wait until the cooldown period has passed and we would actually
+	 * collect a crashdump to re-enable stall-on-fault.
+	 */
+	spin_lock_irqsave(&adreno_gpu->fault_stall_lock, flags);
+	if (!adreno_gpu->stall_enabled &&
+	    READ_ONCE(adreno_gpu->enable_stall_on_submit) &&
+	    !READ_ONCE(gpu->crashstate)) {
+		adreno_gpu->stall_enabled = true;
+
+		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true);
+	}
+	spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags);
+}
+
+static void fault_stall_handler(struct timer_list *t)
+{
+	struct adreno_gpu *gpu = from_timer(gpu, t, fault_stall_timer);
+
+	WRITE_ONCE(gpu->enable_stall_on_submit, true);
+}
+
+
 #define ARM_SMMU_FSR_TF			BIT(1)
 #define ARM_SMMU_FSR_PF			BIT(3)
 #define ARM_SMMU_FSR_EF			BIT(4)
+#define ARM_SMMU_FSR_SS			BIT(30)
 
 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
 			 u32 scratch[4])
 {
+	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	const char *type = "UNKNOWN";
-	bool do_devcoredump = info && !READ_ONCE(gpu->crashstate);
+	bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) &&
+			      !READ_ONCE(gpu->crashstate);
+	unsigned long irq_flags;
+
+	/*
+	 * In case there is a subsequent storm of pagefaults, disable
+	 * stall-on-fault for at least half a second.
+	 */
+	spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags);
+	if (adreno_gpu->stall_enabled) {
+		adreno_gpu->stall_enabled = false;
+		adreno_gpu->enable_stall_on_submit = false;
+
+		gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false);
+
+	}
+	spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags);
+
+	mod_timer(&adreno_gpu->fault_stall_timer,
+		  round_jiffies_up(jiffies + msecs_to_jiffies(500)));
 
 	/*
 	 * If we aren't going to be resuming later from fault_worker, then do
@@ -1143,6 +1192,11 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 			adreno_gpu->info->inactive_period);
 	pm_runtime_use_autosuspend(dev);
 
+	spin_lock_init(&adreno_gpu->fault_stall_lock);
+	timer_setup(&adreno_gpu->fault_stall_timer, fault_stall_handler, 0);
+	adreno_gpu->enable_stall_on_submit = true;
+	adreno_gpu->stall_enabled = true;
+
 	return msm_gpu_init(drm, pdev, &adreno_gpu->base, &funcs->base,
 			gpu_name, &adreno_gpu_config);
 }

diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index dcf454629ce037b2a8274a6699674ad754ce1f07..c59501afa40c223d02bea3ff9b0dbc309d099317 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -205,6 +205,25 @@ struct adreno_gpu {
 	/* firmware: */
 	const struct firmware *fw[ADRENO_FW_MAX];
 
+	spinlock_t fault_stall_lock;
+
+	struct timer_list fault_stall_timer;
+
+	/**
+	 * enable_stall_on_submit:
+	 *
+	 * Whether to re-enable stall-on-fault on the next submit.
+	 */
+	bool enable_stall_on_submit;
+
+	/**
+	 * stall_enabled:
+	 *
+	 * Whether stall-on-fault is currently enabled.
+	 */
+	bool stall_enabled;
+
+
 	struct {
 		/**
 		 * @rgb565_predicator: Unknown, introduced with A650 family,
@@ -629,6 +648,8 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 			 struct adreno_smmu_fault_info *info, const char *block,
 			 u32 scratch[4]);
 
+void adreno_gpu_enable_iommu_stall(struct adreno_gpu *gpu);
+
 int adreno_read_speedbin(struct device *dev, u32 *speedbin);

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 2a94e82316f95c5f9dcc37ef0a4664a29e3492b2..8d5380e6dcc217c7c209b51527bf15748b3ada71 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -351,6 +351,14 @@ static void msm_iommu_resume_translation(struct msm_mmu *mmu)
 		adreno_smmu->resume_translation(adreno_smmu->cookie, true);
 }
 
+static void msm_iommu_set_stall(struct msm_mmu *mmu, bool enable)
+{
+	struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(mmu->dev);
+
+	if (adreno_smmu->set_stall)
+		adreno_smmu->set_stall(adreno_smmu->cookie, enable);
+}
+
 static void msm_iommu_detach(struct msm_mmu *mmu)
 {
 	struct msm_iommu *iommu = to_msm_iommu(mmu);
@@ -399,6 +407,7 @@ static const struct msm_mmu_funcs funcs = {
 	.unmap = msm_iommu_unmap,
 	.destroy = msm_iommu_destroy,
 	.resume_translation = msm_iommu_resume_translation,
+	.set_stall = msm_iommu_set_stall,
 };
 
 struct msm_mmu *msm_iommu_new(struct device *dev, unsigned long quirks)

diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index 88af4f490881f2a6789ae2d03e1c02d10046331a..2694a356a17904e7572b767b16ed0cee806406cf 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -16,6 +16,7 @@ struct msm_mmu_funcs {
 	int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
 	void (*destroy)(struct msm_mmu *mmu);
 	void (*resume_translation)(struct msm_mmu *mmu);
+	void (*set_stall)(struct msm_mmu *mmu, bool enable);
 };
 
 enum msm_mmu_type {