From: Mauro Carvalho Chehab
Cc: linuxarm@huawei.com, mauro.chehab@huawei.com, Mauro Carvalho Chehab,
 Ezequiel Garcia, Greg Kroah-Hartman, Hans Verkuil, Mauro Carvalho Chehab,
 Philipp Zabel, linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
 linux-rockchip@lists.infradead.org, linux-staging@lists.linux.dev
Subject: [PATCH v2] media: hantro: do a PM resume earlier
Date: Thu, 6 May 2021 16:37:20 +0200

device_run() first enables the clocks and only then tries to resume
runtime PM, checking for errors. If, for some reason, runtime PM cannot
resume, it is better to detect that before the clocks get enabled. So,
change the order inside device_run().
Reviewed-by: Ezequiel Garcia
Fixes: 775fec69008d ("media: add Rockchip VPU JPEG encoder driver")
Signed-off-by: Mauro Carvalho Chehab
---
 drivers/staging/media/hantro/hantro_drv.c | 35 +++++++++++++++--------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/media/hantro/hantro_drv.c b/drivers/staging/media/hantro/hantro_drv.c
index 595e82a82728..eea2009fa17b 100644
--- a/drivers/staging/media/hantro/hantro_drv.c
+++ b/drivers/staging/media/hantro/hantro_drv.c
@@ -56,16 +56,12 @@ dma_addr_t hantro_get_ref(struct hantro_ctx *ctx, u64 ts)
 	return hantro_get_dec_buf_addr(ctx, buf);
 }
 
-static void hantro_job_finish(struct hantro_dev *vpu,
-			      struct hantro_ctx *ctx,
-			      enum vb2_buffer_state result)
+static void hantro_job_finish_no_pm(struct hantro_dev *vpu,
+				    struct hantro_ctx *ctx,
+				    enum vb2_buffer_state result)
 {
 	struct vb2_v4l2_buffer *src, *dst;
 
-	pm_runtime_mark_last_busy(vpu->dev);
-	pm_runtime_put_autosuspend(vpu->dev);
-	clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
-
 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
 
@@ -81,6 +77,18 @@ static void hantro_job_finish(struct hantro_dev *vpu,
 			       result);
 }
 
+static void hantro_job_finish(struct hantro_dev *vpu,
+			      struct hantro_ctx *ctx,
+			      enum vb2_buffer_state result)
+{
+	pm_runtime_mark_last_busy(vpu->dev);
+	pm_runtime_put_autosuspend(vpu->dev);
+
+	clk_bulk_disable(vpu->variant->num_clocks, vpu->clocks);
+
+	hantro_job_finish_no_pm(vpu, ctx, result);
+}
+
 void hantro_irq_done(struct hantro_dev *vpu,
 		     enum vb2_buffer_state result)
 {
@@ -152,11 +160,14 @@ static void device_run(void *priv)
 	src = hantro_get_src_buf(ctx);
 	dst = hantro_get_dst_buf(ctx);
 
-	ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
-	if (ret)
-		goto err_cancel_job;
 	ret = pm_runtime_get_sync(ctx->dev->dev);
-	if (ret < 0)
+	if (ret < 0) {
+		pm_runtime_put_noidle(ctx->dev->dev);
+		goto err_cancel_job;
+	}
+
+	ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
+	if (ret)
 		goto err_cancel_job;
 
 	v4l2_m2m_buf_copy_metadata(src, dst, true);
@@ -165,7 +176,7 @@ static void device_run(void *priv)
 	return;
 
 err_cancel_job:
-	hantro_job_finish(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
+	hantro_job_finish_no_pm(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
 }
 
 static struct v4l2_m2m_ops vpu_m2m_ops = {
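
Not part of the patch itself, just for reference: this is roughly how
device_run() reads once the hunks above are applied. It is assembled
from the diff; the local declarations and the hardware start are only
assumed from context, since the diff does not touch them.

static void device_run(void *priv)
{
	struct hantro_ctx *ctx = priv;
	struct vb2_v4l2_buffer *src, *dst;
	int ret;

	src = hantro_get_src_buf(ctx);
	dst = hantro_get_dst_buf(ctx);

	/*
	 * Resume the device first, so a failed PM resume is detected
	 * before the clocks are ever enabled.
	 */
	ret = pm_runtime_get_sync(ctx->dev->dev);
	if (ret < 0) {
		/* pm_runtime_get_sync() raises the usage count even on failure. */
		pm_runtime_put_noidle(ctx->dev->dev);
		goto err_cancel_job;
	}

	/* Only now enable the clocks for this job. */
	ret = clk_bulk_enable(ctx->dev->variant->num_clocks, ctx->dev->clocks);
	if (ret)
		goto err_cancel_job;

	v4l2_m2m_buf_copy_metadata(src, dst, true);

	/* ... start the hardware run (unchanged by this patch) ... */
	return;

err_cancel_job:
	/* Finish the job without touching PM or clock state. */
	hantro_job_finish_no_pm(ctx->dev, ctx, VB2_BUF_STATE_ERROR);
}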