From patchwork Wed Jul 14 16:52:08 2021
X-Patchwork-Submitter: Jason-JH Lin (林睿祥)
X-Patchwork-Id: 12377493
From: jason-jh.lin <jason-jh.lin@mediatek.com>
Subject: [PATCH v3 5/5] mailbox: cmdq: fix GCE cannot receive hardware event
Date: Thu, 15 Jul 2021 00:52:08 +0800
Message-ID: <20210714165208.2841-6-jason-jh.lin@mediatek.com>
In-Reply-To: <20210714165208.2841-1-jason-jh.lin@mediatek.com>
References: <20210714165208.2841-1-jason-jh.lin@mediatek.com>
X-Mailer: git-send-email 2.18.0
MIME-Version: 1.0

In the GCE hardware event signal transport design, the event rx unit
forwards an event signal to every GCE event merge after receiving it
from the other hardware. Because the GCE event merges have to respond
to the event rx, their clocks must be enabled at that moment. To make
sure every GCE clock is enabled while a hardware event is being
received, each cmdq mailbox should enable or disable the other GCEs'
clocks at the same time.

Signed-off-by: jason-jh.lin <jason-jh.lin@mediatek.com>
---
 drivers/mailbox/mtk-cmdq-mailbox.c | 105 +++++++++++++++++++++++------
 1 file changed, 86 insertions(+), 19 deletions(-)

diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
index fc67888a677c..7e9b0907ec56 100644
--- a/drivers/mailbox/mtk-cmdq-mailbox.c
+++ b/drivers/mailbox/mtk-cmdq-mailbox.c
@@ -19,6 +19,7 @@
 #define CMDQ_OP_CODE_MASK	(0xff << CMDQ_OP_CODE_SHIFT)
 #define CMDQ_NUM_CMD(t)		(t->cmd_buf_size / CMDQ_INST_SIZE)
+#define CMDQ_GCE_NUM_MAX	(2)
 
 #define CMDQ_CURR_IRQ_STATUS	0x10
 #define CMDQ_SYNC_TOKEN_UPDATE	0x68
@@ -73,14 +74,16 @@ struct cmdq {
 	u32			thread_nr;
 	u32			irq_mask;
 	struct cmdq_thread	*thread;
-	struct clk		*clock;
+	struct clk		*clock[CMDQ_GCE_NUM_MAX];
 	bool			suspended;
 	u8			shift_pa;
+	u32			gce_num;
 };
 
 struct gce_plat {
 	u32 thread_nr;
 	u8 shift;
+	u32 gce_num;
 };
 
 u8 cmdq_get_shift_pa(struct mbox_chan *chan)
@@ -120,11 +123,15 @@ static void cmdq_init(struct cmdq *cmdq)
 {
 	int i;
 
-	WARN_ON(clk_enable(cmdq->clock) < 0);
+	for (i = 0; i < cmdq->gce_num; i++)
+		WARN_ON(clk_enable(cmdq->clock[i]) < 0);
+
 	writel(CMDQ_THR_ACTIVE_SLOT_CYCLES, cmdq->base + CMDQ_THR_SLOT_CYCLES);
 	for (i = 0; i <= CMDQ_MAX_EVENT; i++)
 		writel(i, cmdq->base + CMDQ_SYNC_TOKEN_UPDATE);
-	clk_disable(cmdq->clock);
+
+	for (i = 0; i < cmdq->gce_num; i++)
+		clk_disable(cmdq->clock[i]);
 }
 
 static int cmdq_thread_reset(struct cmdq *cmdq, struct cmdq_thread *thread)
@@ -257,8 +264,12 @@ static void cmdq_thread_irq_handler(struct cmdq *cmdq,
 	}
 
 	if (list_empty(&thread->task_busy_list)) {
+		int i;
+
 		cmdq_thread_disable(cmdq, thread);
-		clk_disable(cmdq->clock);
+
+		for (i = 0; i < cmdq->gce_num; i++)
+			clk_disable(cmdq->clock[i]);
 	}
 }
 
@@ -303,7 +314,8 @@ static int cmdq_suspend(struct device *dev)
 	if (task_running)
 		dev_warn(dev, "exist running task(s) in suspend\n");
 
-	clk_unprepare(cmdq->clock);
+	for (i = 0; i < cmdq->gce_num; i++)
+		clk_unprepare(cmdq->clock[i]);
 
 	return 0;
 }
@@ -311,8 +323,11 @@ static int cmdq_suspend(struct device *dev)
 static int cmdq_resume(struct device *dev)
 {
 	struct cmdq *cmdq = dev_get_drvdata(dev);
+	int i;
+
+	for (i = 0; i < cmdq->gce_num; i++)
+		WARN_ON(clk_prepare(cmdq->clock[i]) < 0);
 
-	WARN_ON(clk_prepare(cmdq->clock) < 0);
 	cmdq->suspended = false;
 	return 0;
 }
@@ -320,8 +335,10 @@ static int cmdq_resume(struct device *dev)
 static int cmdq_remove(struct platform_device *pdev)
 {
 	struct cmdq *cmdq = platform_get_drvdata(pdev);
+	int i;
 
-	clk_unprepare(cmdq->clock);
+	for (i = 0; i < cmdq->gce_num; i++)
+		clk_unprepare(cmdq->clock[i]);
 	return 0;
 }
 
@@ -348,7 +365,11 @@ static int cmdq_mbox_send_data(struct mbox_chan *chan, void *data)
 	task->pkt = pkt;
 
 	if (list_empty(&thread->task_busy_list)) {
-		WARN_ON(clk_enable(cmdq->clock) < 0);
+		int i;
+
+		for (i = 0; i < cmdq->gce_num; i++)
+			WARN_ON(clk_enable(cmdq->clock[i]) < 0);
+
 		/*
 		 * The thread reset will clear thread related register to 0,
 		 * including pc, end, priority, irq, suspend and enable. Thus
@@ -401,6 +422,7 @@ static void cmdq_mbox_shutdown(struct mbox_chan *chan)
 	struct cmdq *cmdq = dev_get_drvdata(chan->mbox->dev);
 	struct cmdq_task *task, *tmp;
 	unsigned long flags;
+	int i;
 
 	spin_lock_irqsave(&thread->chan->lock, flags);
 	if (list_empty(&thread->task_busy_list))
@@ -420,7 +442,9 @@ static void cmdq_mbox_shutdown(struct mbox_chan *chan)
 	}
 
 	cmdq_thread_disable(cmdq, thread);
-	clk_disable(cmdq->clock);
+
+	for (i = 0; i < cmdq->gce_num; i++)
+		clk_disable(cmdq->clock[i]);
 done:
 	/*
 	 * The thread->task_busy_list empty means thread already disable. The
@@ -440,6 +464,7 @@ static int cmdq_mbox_flush(struct mbox_chan *chan, unsigned long timeout)
 	struct cmdq_task *task, *tmp;
 	unsigned long flags;
 	u32 enable;
+	int i;
 
 	spin_lock_irqsave(&thread->chan->lock, flags);
 	if (list_empty(&thread->task_busy_list))
@@ -463,7 +488,9 @@ static int cmdq_mbox_flush(struct mbox_chan *chan, unsigned long timeout)
 
 	cmdq_thread_resume(thread);
 	cmdq_thread_disable(cmdq, thread);
-	clk_disable(cmdq->clock);
+
+	for (i = 0; i < cmdq->gce_num; i++)
+		clk_disable(cmdq->clock[i]);
 out:
 	spin_unlock_irqrestore(&thread->chan->lock, flags);
 
@@ -512,6 +539,9 @@ static int cmdq_probe(struct platform_device *pdev)
 	struct cmdq *cmdq;
 	int err, i;
 	struct gce_plat *plat_data;
+	struct device_node *phandle = dev->of_node;
+	struct device_node *node;
+	int alias_id = 0;
 
 	cmdq = devm_kzalloc(dev, sizeof(*cmdq), GFP_KERNEL);
 	if (!cmdq)
@@ -536,6 +566,7 @@ static int cmdq_probe(struct platform_device *pdev)
 
 	cmdq->thread_nr = plat_data->thread_nr;
 	cmdq->shift_pa = plat_data->shift;
+	cmdq->gce_num = plat_data->gce_num;
 	cmdq->irq_mask = GENMASK(cmdq->thread_nr - 1, 0);
 	err = devm_request_irq(dev, cmdq->irq, cmdq_irq_handler, IRQF_SHARED,
 			       "mtk_cmdq", cmdq);
@@ -547,10 +578,24 @@ static int cmdq_probe(struct platform_device *pdev)
 	dev_dbg(dev, "cmdq device: addr:0x%p, va:0x%p, irq:%d\n",
 		dev, cmdq->base, cmdq->irq);
 
-	cmdq->clock = devm_clk_get(dev, "gce");
-	if (IS_ERR(cmdq->clock)) {
-		dev_err(dev, "failed to get gce clk\n");
-		return PTR_ERR(cmdq->clock);
+	if (cmdq->gce_num > 1) {
+		for_each_child_of_node(phandle->parent, node) {
+			alias_id = of_alias_get_id(node, "gce");
+			if (alias_id < cmdq->gce_num) {
+				cmdq->clock[alias_id] = of_clk_get(node, 0);
+				if (IS_ERR(cmdq->clock[alias_id])) {
+					dev_err(dev, "failed to get gce clk: %d\n",
+						alias_id);
+					return PTR_ERR(cmdq->clock[alias_id]);
+				}
+			}
+		}
+	} else {
+		cmdq->clock[alias_id] = devm_clk_get(&pdev->dev, "gce");
+		if (IS_ERR(cmdq->clock[alias_id])) {
+			dev_err(dev, "failed to get gce clk\n");
+			return PTR_ERR(cmdq->clock[alias_id]);
+		}
 	}
 
 	cmdq->mbox.dev = dev;
@@ -586,7 +631,9 @@ static int cmdq_probe(struct platform_device *pdev)
 	}
 
 	platform_set_drvdata(pdev, cmdq);
-	WARN_ON(clk_prepare(cmdq->clock) < 0);
+
+	for (i = 0; i < cmdq->gce_num; i++)
+		WARN_ON(clk_prepare(cmdq->clock[i]) < 0);
 
 	cmdq_init(cmdq);
 
@@ -598,15 +645,35 @@ static const struct dev_pm_ops cmdq_pm_ops = {
 	.resume = cmdq_resume,
 };
 
-static const struct gce_plat gce_plat_v2 = {.thread_nr = 16};
-static const struct gce_plat gce_plat_v3 = {.thread_nr = 24};
-static const struct gce_plat gce_plat_v4 = {.thread_nr = 24, .shift = 3};
+static const struct gce_plat gce_plat_v2 = {
+	.thread_nr = 16,
+	.shift = 0,
+	.gce_num = 1
+};
+
+static const struct gce_plat gce_plat_v3 = {
+	.thread_nr = 24,
+	.shift = 0,
+	.gce_num = 1
+};
+
+static const struct gce_plat gce_plat_v4 = {
+	.thread_nr = 24,
+	.shift = 3,
+	.gce_num = 1
+};
+
+static const struct gce_plat gce_plat_v5 = {
+	.thread_nr = 24,
+	.shift = 3,
+	.gce_num = 2
+};
 
 static const struct of_device_id cmdq_of_ids[] = {
 	{.compatible = "mediatek,mt8173-gce", .data = (void *)&gce_plat_v2},
 	{.compatible = "mediatek,mt8183-gce", .data = (void *)&gce_plat_v3},
 	{.compatible = "mediatek,mt6779-gce", .data = (void *)&gce_plat_v4},
-	{.compatible = "mediatek,mt8195-gce", .data = (void *)&gce_plat_v4},
+	{.compatible = "mediatek,mt8195-gce", .data = (void *)&gce_plat_v5},
 	{}
 };