From patchwork Fri Oct 10 14:27:02 2014
X-Patchwork-Submitter: Nikhil Devshatwar
X-Patchwork-Id: 5066031
From: Nikhil Devshatwar
Subject: [RFC PATCH 3/4] [media] ti-vpe: Do not perform job transaction atomically
Date: Fri, 10 Oct 2014 19:57:02 +0530
Message-ID: <1412951223-4711-4-git-send-email-nikhil.nd@ti.com>
In-Reply-To: <1412951223-4711-1-git-send-email-nikhil.nd@ti.com>
References: <1412951223-4711-1-git-send-email-nikhil.nd@ti.com>
X-Mailing-List: linux-omap@vger.kernel.org

Currently, the VPE driver does not start a job until all of the buffers for a
transaction have been queued. When running multiple contexts, this can
increase processing latency.

An alternative is to keep running the same context for as long as the buffers
for its transaction are ready, and to switch contexts otherwise. This may
increase the number of context switches, but it reduces latency significantly.

With this approach, job_ready succeeds whenever there are buffers on both the
CAPTURE and OUTPUT streams. Processing can start immediately, because the
first two iterations do not need extra source buffers. After each iteration,
all source buffers are shifted and the oldest one is removed.

This also removes the constraint of pre-buffering three buffers before the
STREAMON call when deinterlacing.
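A minimal standalone sketch (not driver code) of the sliding window described
above: processing starts as soon as the first field is available, the first
iterations reuse that field for the still-empty older slots, and after each
iteration the window is shifted so the oldest field is dropped. The three
slots mirror the driver's src_vbs[] usage; the loop, field numbers and
printing are illustrative only.

/*
 * Illustrative sketch only: three-slot sliding window for de-interlacing.
 * Slot 0 holds the newest field, slots 1 and 2 the two previous ones.
 */
#include <stdio.h>

int main(void)
{
	int window[3] = { -1, -1, -1 };		/* -1 means "no field yet" */
	int field;

	for (field = 0; field < 5; field++) {
		window[0] = field;		/* newest dequeued field */
		if (window[1] < 0)
			window[1] = window[0];	/* first iterations reuse it */
		if (window[2] < 0)
			window[2] = window[0];

		printf("iteration %d uses fields {%d, %d, %d}\n",
		       field, window[0], window[1], window[2]);

		/* shift: the oldest field (slot 2) is released */
		window[2] = window[1];
		window[1] = window[0];
		window[0] = -1;
	}
	return 0;
}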
Signed-off-by: Nikhil Devshatwar
---
 drivers/media/platform/ti-vpe/vpe.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/media/platform/ti-vpe/vpe.c b/drivers/media/platform/ti-vpe/vpe.c
index a11044f..e37102b 100644
--- a/drivers/media/platform/ti-vpe/vpe.c
+++ b/drivers/media/platform/ti-vpe/vpe.c
@@ -907,15 +907,14 @@ static struct vpe_ctx *file2ctx(struct file *file)
 static int job_ready(void *priv)
 {
 	struct vpe_ctx *ctx = priv;
-	int needed = ctx->bufs_per_job;
 
-	if (ctx->deinterlacing && ctx->src_vbs[2] == NULL)
-		needed += 2;	/* need additional two most recent fields */
-
-	if (v4l2_m2m_num_src_bufs_ready(ctx->m2m_ctx) < needed)
-		return 0;
-
-	if (v4l2_m2m_num_dst_bufs_ready(ctx->m2m_ctx) < needed)
+	/*
+	 * This check is needed as this might also be called directly from
+	 * vpe_irq. When called by the m2m framework it is always satisfied,
+	 * but from vpe_irq it may fail (src stream with zero buffers).
+	 */
+	if (v4l2_m2m_num_src_bufs_ready(ctx->m2m_ctx) <= 0 ||
+	    v4l2_m2m_num_dst_bufs_ready(ctx->m2m_ctx) <= 0)
 		return 0;
 
 	return 1;
@@ -1124,19 +1123,20 @@ static void device_run(void *priv)
 	struct sc_data *sc = ctx->dev->sc;
 	struct vpe_q_data *d_q_data = &ctx->q_data[Q_DATA_DST];
 
-	if (ctx->deinterlacing && ctx->src_vbs[2] == NULL) {
-		ctx->src_vbs[2] = v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
-		WARN_ON(ctx->src_vbs[2] == NULL);
-		ctx->src_vbs[1] = v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
-		WARN_ON(ctx->src_vbs[1] == NULL);
-	}
-
 	ctx->src_vbs[0] = v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
 	WARN_ON(ctx->src_vbs[0] == NULL);
 	ctx->dst_vb = v4l2_m2m_dst_buf_remove(ctx->m2m_ctx);
 	WARN_ON(ctx->dst_vb == NULL);
 
 	if (ctx->deinterlacing) {
+
+		if (ctx->src_vbs[2] == NULL) {
+			ctx->src_vbs[2] = ctx->src_vbs[0];
+			WARN_ON(ctx->src_vbs[2] == NULL);
+			ctx->src_vbs[1] = ctx->src_vbs[0];
+			WARN_ON(ctx->src_vbs[1] == NULL);
+		}
+
 		/*
 		 * we have output the first 2 frames through line average, we
 		 * now switch to EDI de-interlacer
@@ -1360,7 +1360,7 @@ static irqreturn_t vpe_irq(int irq_vpe, void *data)
 	}
 
 	ctx->bufs_completed++;
-	if (ctx->bufs_completed < ctx->bufs_per_job) {
+	if (ctx->bufs_completed < ctx->bufs_per_job && job_ready(ctx)) {
 		device_run(ctx);
 		goto handled;
 	}
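For clarity, a small self-contained sketch (not driver code) of the scheduling
decision the last hunk introduces in vpe_irq(): the current context is run
again only while it is below its per-job budget and still has at least one
buffer on each queue; otherwise the job finishes so the m2m framework can
schedule another context. The struct and helper names below are made up for
the example.

#include <stdbool.h>
#include <stdio.h>

/* simplified stand-in for struct vpe_ctx and the m2m queue counters */
struct fake_ctx {
	int bufs_completed;	/* buffers finished in the current job */
	int bufs_per_job;	/* per-job budget before yielding */
	int src_ready;		/* buffers queued on the OUTPUT stream */
	int dst_ready;		/* buffers queued on the CAPTURE stream */
};

/* mirrors the relaxed job_ready(): one buffer on each queue is enough */
static bool fake_job_ready(const struct fake_ctx *ctx)
{
	return ctx->src_ready > 0 && ctx->dst_ready > 0;
}

/* the decision taken at the end of the interrupt handler */
static bool run_same_context_again(const struct fake_ctx *ctx)
{
	return ctx->bufs_completed < ctx->bufs_per_job &&
	       fake_job_ready(ctx);
}

int main(void)
{
	struct fake_ctx ctx = {
		.bufs_completed = 1,
		.bufs_per_job = 4,
		.src_ready = 0,		/* source queue ran dry */
		.dst_ready = 2,
	};

	/* budget not exhausted, but no source buffer: yield the device */
	printf("run again: %s\n",
	       run_same_context_again(&ctx) ? "yes" : "no");
	return 0;
}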