From patchwork Thu Oct 19 12:51:29 2023
X-Patchwork-Submitter: Benjamin Gaignard <benjamin.gaignard@collabora.com>
X-Patchwork-Id: 13429253
From: Benjamin Gaignard <benjamin.gaignard@collabora.com>
To: mchehab@kernel.org, tfiga@chromium.org, m.szyprowski@samsung.com,
 ming.qian@nxp.com, ezequiel@vanguardiasur.com.ar, p.zabel@pengutronix.de,
 gregkh@linuxfoundation.org, hverkuil-cisco@xs4all.nl,
 nicolas.dufresne@collabora.com
Cc: linux-media@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org,
 linux-arm-msm@vger.kernel.org, linux-rockchip@lists.infradead.org,
 linux-staging@lists.linux.dev, kernel@collabora.com,
 Benjamin Gaignard <benjamin.gaignard@collabora.com>
Subject: [PATCH v13 03/56] media: videobuf2: Stop spamming kernel log with
 all queue counters
Date: Thu, 19 Oct 2023 14:51:29 +0200
Message-Id: <20231019125222.21370-4-benjamin.gaignard@collabora.com>
In-Reply-To: <20231019125222.21370-1-benjamin.gaignard@collabora.com>
References: <20231019125222.21370-1-benjamin.gaignard@collabora.com>

Only report unbalanced queue counters to avoid spamming the kernel log
with useless information.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@collabora.com>
---
 .../media/common/videobuf2/videobuf2-core.c | 79 +++++++++++--------
 1 file changed, 44 insertions(+), 35 deletions(-)

diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
index 09be8e026044..47dba2a20d73 100644
--- a/drivers/media/common/videobuf2/videobuf2-core.c
+++ b/drivers/media/common/videobuf2/videobuf2-core.c
@@ -533,25 +533,26 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 
 #ifdef CONFIG_VIDEO_ADV_DEBUG
 	/*
-	 * Check that all the calls were balances during the life-time of this
-	 * queue. If not (or if the debug level is 1 or up), then dump the
-	 * counters to the kernel log.
+	 * Check that all the calls were balanced during the life-time of this
+	 * queue. If not then dump the counters to the kernel log.
 	 */
 	if (q->num_buffers) {
 		bool unbalanced = q->cnt_start_streaming != q->cnt_stop_streaming ||
 				  q->cnt_prepare_streaming != q->cnt_unprepare_streaming ||
 				  q->cnt_wait_prepare != q->cnt_wait_finish;
 
-		if (unbalanced || debug) {
-			pr_info("counters for queue %p:%s\n", q,
-				unbalanced ? " UNBALANCED!" : "");
-			pr_info(" setup: %u start_streaming: %u stop_streaming: %u\n",
-				q->cnt_queue_setup, q->cnt_start_streaming,
-				q->cnt_stop_streaming);
-			pr_info(" prepare_streaming: %u unprepare_streaming: %u\n",
-				q->cnt_prepare_streaming, q->cnt_unprepare_streaming);
-			pr_info(" wait_prepare: %u wait_finish: %u\n",
-				q->cnt_wait_prepare, q->cnt_wait_finish);
+		if (unbalanced) {
+			pr_info("unbalanced counters for queue %p:\n", q);
+			if (q->cnt_start_streaming != q->cnt_stop_streaming)
+				pr_info(" setup: %u start_streaming: %u stop_streaming: %u\n",
+					q->cnt_queue_setup, q->cnt_start_streaming,
+					q->cnt_stop_streaming);
+			if (q->cnt_prepare_streaming != q->cnt_unprepare_streaming)
+				pr_info(" prepare_streaming: %u unprepare_streaming: %u\n",
+					q->cnt_prepare_streaming, q->cnt_unprepare_streaming);
+			if (q->cnt_wait_prepare != q->cnt_wait_finish)
+				pr_info(" wait_prepare: %u wait_finish: %u\n",
+					q->cnt_wait_prepare, q->cnt_wait_finish);
 		}
 		q->cnt_queue_setup = 0;
 		q->cnt_wait_prepare = 0;
@@ -572,29 +573,37 @@ static void __vb2_queue_free(struct vb2_queue *q, unsigned int buffers)
 			  vb->cnt_buf_prepare != vb->cnt_buf_finish ||
 			  vb->cnt_buf_init != vb->cnt_buf_cleanup;
 
-		if (unbalanced || debug) {
-			pr_info(" counters for queue %p, buffer %d:%s\n",
-				q, buffer, unbalanced ? " UNBALANCED!" : "");
-			pr_info(" buf_init: %u buf_cleanup: %u buf_prepare: %u buf_finish: %u\n",
-				vb->cnt_buf_init, vb->cnt_buf_cleanup,
-				vb->cnt_buf_prepare, vb->cnt_buf_finish);
-			pr_info(" buf_out_validate: %u buf_queue: %u buf_done: %u buf_request_complete: %u\n",
-				vb->cnt_buf_out_validate, vb->cnt_buf_queue,
-				vb->cnt_buf_done, vb->cnt_buf_request_complete);
-			pr_info(" alloc: %u put: %u prepare: %u finish: %u mmap: %u\n",
-				vb->cnt_mem_alloc, vb->cnt_mem_put,
-				vb->cnt_mem_prepare, vb->cnt_mem_finish,
-				vb->cnt_mem_mmap);
-			pr_info(" get_userptr: %u put_userptr: %u\n",
-				vb->cnt_mem_get_userptr, vb->cnt_mem_put_userptr);
-			pr_info(" attach_dmabuf: %u detach_dmabuf: %u map_dmabuf: %u unmap_dmabuf: %u\n",
-				vb->cnt_mem_attach_dmabuf, vb->cnt_mem_detach_dmabuf,
-				vb->cnt_mem_map_dmabuf, vb->cnt_mem_unmap_dmabuf);
-			pr_info(" get_dmabuf: %u num_users: %u vaddr: %u cookie: %u\n",
+		if (unbalanced) {
+			pr_info("unbalanced counters for queue %p, buffer %d:\n",
+				q, buffer);
+			if (vb->cnt_buf_init != vb->cnt_buf_cleanup)
+				pr_info(" buf_init: %u buf_cleanup: %u\n",
+					vb->cnt_buf_init, vb->cnt_buf_cleanup);
+			if (vb->cnt_buf_prepare != vb->cnt_buf_finish)
+				pr_info(" buf_prepare: %u buf_finish: %u\n",
+					vb->cnt_buf_prepare, vb->cnt_buf_finish);
+			if (vb->cnt_buf_queue != vb->cnt_buf_done)
+				pr_info(" buf_out_validate: %u buf_queue: %u buf_done: %u buf_request_complete: %u\n",
+					vb->cnt_buf_out_validate, vb->cnt_buf_queue,
+					vb->cnt_buf_done, vb->cnt_buf_request_complete);
+			if (vb->cnt_mem_alloc != vb->cnt_mem_put)
+				pr_info(" alloc: %u put: %u\n",
+					vb->cnt_mem_alloc, vb->cnt_mem_put);
+			if (vb->cnt_mem_prepare != vb->cnt_mem_finish)
+				pr_info(" prepare: %u finish: %u\n",
+					vb->cnt_mem_prepare, vb->cnt_mem_finish);
+			if (vb->cnt_mem_get_userptr != vb->cnt_mem_put_userptr)
+				pr_info(" get_userptr: %u put_userptr: %u\n",
+					vb->cnt_mem_get_userptr, vb->cnt_mem_put_userptr);
+			if (vb->cnt_mem_attach_dmabuf != vb->cnt_mem_detach_dmabuf)
pr_info(" attach_dmabuf: %u detach_dmabuf: %u\n", + vb->cnt_mem_attach_dmabuf, vb->cnt_mem_detach_dmabuf); + if (vb->cnt_mem_map_dmabuf != vb->cnt_mem_unmap_dmabuf) + pr_info(" map_dmabuf: %u unmap_dmabuf: %u\n", + vb->cnt_mem_map_dmabuf, vb->cnt_mem_unmap_dmabuf); + pr_info(" get_dmabuf: %u num_users: %u\n", vb->cnt_mem_get_dmabuf, - vb->cnt_mem_num_users, - vb->cnt_mem_vaddr, - vb->cnt_mem_cookie); + vb->cnt_mem_num_users); } } #endif