From patchwork Fri May 5 15:21:08 2017
From: Kieran Bingham
To: linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linux-renesas-soc@vger.kernel.org, mchehab@kernel.org
Cc: kieran.bingham@ideasonboard.com, Kieran Bingham, Laurent Pinchart
Subject: [PATCH v4 2/4] v4l: vsp1: Postpone frame end handling in event of
 display list race
Date: Fri, 5 May 2017 16:21:08 +0100
X-Mailer: git-send-email 2.7.4

If we try to commit the display list while an update is pending, we have
missed our opportunity: the display list manager will hold the commit
until the next interrupt. In that event, skip the pipeline completion
callback handler so that the pipeline does not mistakenly report frame
completion to the user.
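For illustration only, not part of the patch itself: below is a minimal,
self-contained C sketch of the pattern this change introduces. All names
here (dl_manager, upd_pending, dlm_irq_frame_end, pipeline_frame_end) are
hypothetical stand-ins for the driver's real types and helpers; the point
is that the frame end handler now reports whether a display list actually
completed, and the caller skips completion handling when the commit was
postponed.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for struct vsp1_dl_manager. */
struct dl_manager {
	bool header_mode;	/* header mode is not continuous, never races */
	bool upd_pending;	/* a commit raced with the frame end interrupt */
	void *queued;		/* display list waiting to become active */
	void *active;		/* display list currently being processed */
};

/* Frame end handler: returns true only if a display list completed. */
static bool dlm_irq_frame_end(struct dl_manager *dlm)
{
	if (dlm->header_mode)
		return true;

	/*
	 * The racing commit only takes effect at the next frame end, so
	 * nothing has completed in this interrupt.
	 */
	if (dlm->upd_pending)
		return false;

	if (dlm->queued) {
		dlm->active = dlm->queued;
		dlm->queued = NULL;
		return true;
	}

	return false;
}

/* Pipeline handler: report completion to the user only when real. */
static void pipeline_frame_end(struct dl_manager *dlm, unsigned int *sequence)
{
	if (!dlm_irq_frame_end(dlm))
		return;	/* commit postponed by one frame; report nothing */

	(*sequence)++;	/* ... and notify userspace of the completed frame */
}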
Signed-off-by: Kieran Bingham
Reviewed-by: Laurent Pinchart
Signed-off-by: Laurent Pinchart
---
 drivers/media/platform/vsp1/vsp1_dl.c   | 19 +++++++++++++++++--
 drivers/media/platform/vsp1/vsp1_dl.h   |  2 +-
 drivers/media/platform/vsp1/vsp1_pipe.c | 13 ++++++++++++-
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
index 7d8f37772b56..85fe2b4ae310 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.c
+++ b/drivers/media/platform/vsp1/vsp1_dl.c
@@ -561,9 +561,19 @@ void vsp1_dlm_irq_display_start(struct vsp1_dl_manager *dlm)
 	spin_unlock(&dlm->lock);
 }
 
-void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
+/**
+ * vsp1_dlm_irq_frame_end - Display list handler for the frame end interrupt
+ * @dlm: the display list manager
+ *
+ * Return true if the previous display list has completed at frame end, or false
+ * if it has been delayed by one frame because the display list commit raced
+ * with the frame end interrupt. The function always returns true in header mode
+ * as display list processing is then not continuous and races never occur.
+ */
+bool vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
 {
 	struct vsp1_device *vsp1 = dlm->vsp1;
+	bool completed = false;
 
 	spin_lock(&dlm->lock);
 
@@ -575,8 +585,10 @@ void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
 	 * perform any operation as there can't be any new display list queued
 	 * in that case.
 	 */
-	if (dlm->mode == VSP1_DL_MODE_HEADER)
+	if (dlm->mode == VSP1_DL_MODE_HEADER) {
+		completed = true;
 		goto done;
+	}
 
 	/*
 	 * The UPD bit set indicates that the commit operation raced with the
@@ -594,6 +606,7 @@ void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
 	if (dlm->queued) {
 		dlm->active = dlm->queued;
 		dlm->queued = NULL;
+		completed = true;
 	}
 
 	/*
@@ -614,6 +627,8 @@ void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
 
 done:
 	spin_unlock(&dlm->lock);
+
+	return completed;
 }
 
 /* Hardware Setup */
diff --git a/drivers/media/platform/vsp1/vsp1_dl.h b/drivers/media/platform/vsp1/vsp1_dl.h
index 7131aa3c5978..6ec1380a10af 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.h
+++ b/drivers/media/platform/vsp1/vsp1_dl.h
@@ -28,7 +28,7 @@ struct vsp1_dl_manager *vsp1_dlm_create(struct vsp1_device *vsp1,
 void vsp1_dlm_destroy(struct vsp1_dl_manager *dlm);
 void vsp1_dlm_reset(struct vsp1_dl_manager *dlm);
 void vsp1_dlm_irq_display_start(struct vsp1_dl_manager *dlm);
-void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm);
+bool vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm);
 
 struct vsp1_dl_list *vsp1_dl_list_get(struct vsp1_dl_manager *dlm);
 void vsp1_dl_list_put(struct vsp1_dl_list *dl);
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.c b/drivers/media/platform/vsp1/vsp1_pipe.c
index edebf3fa926f..e817623b84e0 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.c
+++ b/drivers/media/platform/vsp1/vsp1_pipe.c
@@ -330,10 +330,21 @@ bool vsp1_pipeline_ready(struct vsp1_pipeline *pipe)
 
 void vsp1_pipeline_frame_end(struct vsp1_pipeline *pipe)
 {
+	bool completed;
+
 	if (pipe == NULL)
 		return;
 
-	vsp1_dlm_irq_frame_end(pipe->output->dlm);
+	completed = vsp1_dlm_irq_frame_end(pipe->output->dlm);
+	if (!completed) {
+		/*
+		 * If the DL commit raced with the frame end interrupt, the
+		 * commit ends up being postponed by one frame. Return
+		 * immediately without calling the pipeline's frame end handler
+		 * or incrementing the sequence number.
+		 */
+		return;
+	}
 
 	if (pipe->hgo)
 		vsp1_hgo_frame_end(pipe->hgo);