From patchwork Mon Jul 13 04:57:10 2020
X-Patchwork-Submitter: Can Guo <cang@codeaurora.org>
X-Patchwork-Id: 11658697
From: Can Guo <cang@codeaurora.org>
To: asutoshd@codeaurora.org, nguyenb@codeaurora.org, hongwus@codeaurora.org,
    rnayak@codeaurora.org, linux-scsi@vger.kernel.org, kernel-team@android.com,
    saravanak@google.com, salyzyn@google.com, cang@codeaurora.org
Cc: Alim Akhtar, Avri Altman, "James E.J. Bottomley", "Martin K. Petersen",
    Stanley Chu, Bean Huo, Bart Van Assche, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v1 2/4] scsi: ufs: Fix imbalanced scsi_block_reqs_cnt caused by ufshcd_hold()
Date: Sun, 12 Jul 2020 21:57:10 -0700
Message-Id: <1594616232-25080-3-git-send-email-cang@codeaurora.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1594616232-25080-1-git-send-email-cang@codeaurora.org>
References: <1594616232-25080-1-git-send-email-cang@codeaurora.org>
X-Mailing-List: linux-scsi@vger.kernel.org

The scsi_block_reqs_cnt increased in ufshcd_hold() is supposed to be
decreased back in ufshcd_ungate_work() in a paired way. However, with
certain ufshcd_hold()/ufshcd_release() call sequences, scsi_block_reqs_cnt
can be increased twice while only one ungate work is queued, leaving the
count imbalanced. To make sure scsi_block_reqs_cnt is handled by
ufshcd_hold() and ufshcd_ungate_work() in a paired way, increase it only
if queue_work() returns true.
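For background, queue_work() returns false when the work item is already
pending, in which case no additional execution of the work function is
scheduled. The sketch below is a minimal illustration of the accounting
hazard and of the pattern this patch adopts; the names blocked_cnt,
unblock_work_fn(), request_block() and request_block_fixed() are made up
for the example and do not appear in the driver.

#include <linux/atomic.h>
#include <linux/workqueue.h>

static atomic_t blocked_cnt = ATOMIC_INIT(0);

/* Runs once per successfully queued work item: one decrement each time. */
static void unblock_work_fn(struct work_struct *work)
{
        atomic_dec(&blocked_cnt);
}

/*
 * Hazard: the counter is bumped even when queue_work() returns false
 * because the work is already pending, so that extra increment never
 * gets a matching decrement.
 */
static void request_block(struct workqueue_struct *wq,
                          struct work_struct *unblock_work)
{
        atomic_inc(&blocked_cnt);
        queue_work(wq, unblock_work);
}

/*
 * Pattern used by this patch: account for the block only when the work
 * item was actually queued, so increments and decrements stay paired.
 */
static void request_block_fixed(struct workqueue_struct *wq,
                                struct work_struct *unblock_work)
{
        if (queue_work(wq, unblock_work))
                atomic_inc(&blocked_cnt);
}

In ufshcd_hold() the counter in question is hba->scsi_block_reqs_cnt and
the work item is the clock-gating ungate work.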
Signed-off-by: Can Guo <cang@codeaurora.org>
---
 drivers/scsi/ufs/ufshcd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index ebf7a95..33214bb 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -1611,12 +1611,12 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
 		 */
 		/* fallthrough */
 	case CLKS_OFF:
-		ufshcd_scsi_block_requests(hba);
 		hba->clk_gating.state = REQ_CLKS_ON;
 		trace_ufshcd_clk_gating(dev_name(hba->dev),
 					hba->clk_gating.state);
-		queue_work(hba->clk_gating.clk_gating_workq,
-			   &hba->clk_gating.ungate_work);
+		if (queue_work(hba->clk_gating.clk_gating_workq,
+			       &hba->clk_gating.ungate_work))
+			ufshcd_scsi_block_requests(hba);
 		/*
 		 * fall through to check if we should wait for this
 		 * work to be done or not.
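
For reference, the pairing requirement comes from how the block/unblock
helpers drive the SCSI midlayer. The approximation below is not verbatim
driver code; it assumes the helpers wrap scsi_block_requests() and
scsi_unblock_requests() around the atomic scsi_block_reqs_cnt counter in
struct ufs_hba, and the example_* names are used here only to keep the
sketch separate from the real symbols.

#include <scsi/scsi_host.h>     /* scsi_block_requests(), scsi_unblock_requests() */
#include "ufshcd.h"             /* struct ufs_hba */

/* Approximate shape of the driver's helpers, for illustration only. */
static void example_scsi_block_requests(struct ufs_hba *hba)
{
        /* The first blocker stops the SCSI host; later ones only count. */
        if (atomic_inc_return(&hba->scsi_block_reqs_cnt) == 1)
                scsi_block_requests(hba->host);
}

static void example_scsi_unblock_requests(struct ufs_hba *hba)
{
        /* The host is released only once every blocker has been undone. */
        if (atomic_dec_and_test(&hba->scsi_block_reqs_cnt))
                scsi_unblock_requests(hba->host);
}

Because ufshcd_ungate_work() performs the unblock once per queued work
item, an increment made when queue_work() returned false would leave
scsi_block_reqs_cnt above zero and the SCSI host blocked indefinitely;
gating the increment on queue_work()'s return value restores the pairing.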