From patchwork Thu Jul 23 02:34:01 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Can Guo <cang@codeaurora.org>
X-Patchwork-Id: 11679605
From: Can Guo <cang@codeaurora.org>
To: asutoshd@codeaurora.org, nguyenb@codeaurora.org, hongwus@codeaurora.org,
    rnayak@codeaurora.org, sh425.lee@samsung.com, linux-scsi@vger.kernel.org,
    kernel-team@android.com, saravanak@google.com, salyzyn@google.com,
    cang@codeaurora.org
Cc: Alim Akhtar, Avri Altman, "James E.J. Bottomley",
    "Martin K. Petersen", Stanley Chu, Bean Huo, Bart Van Assche,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 2/9] scsi: ufs: Fix imbalanced scsi_block_reqs_cnt caused by ufshcd_hold()
Date: Wed, 22 Jul 2020 19:34:01 -0700
Message-Id: <1595471649-25675-3-git-send-email-cang@codeaurora.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1595471649-25675-1-git-send-email-cang@codeaurora.org>
References: <1595471649-25675-1-git-send-email-cang@codeaurora.org>
Sender: linux-scsi-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org

The scsi_block_reqs_cnt increased in ufshcd_hold() is supposed to be
decreased back in ufshcd_ungate_work() in a paired way. However, under
certain ufshcd_hold/release call sequences, scsi_block_reqs_cnt can be
increased twice while only one ungate work is queued. To keep
scsi_block_reqs_cnt balanced between ufshcd_hold() and
ufshcd_ungate_work(), increase it only if queue_work() returns true,
i.e. only when the ungate work was actually queued.
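For reference, the pairing relies on the counter helpers in
drivers/scsi/ufs/ufshcd.c. The snippet below is a simplified sketch for
illustration only (abridged from the driver, not part of this patch):

    /* First holder blocks the SCSI host; last release unblocks it. */
    static void ufshcd_scsi_block_requests(struct ufs_hba *hba)
    {
            if (atomic_inc_return(&hba->scsi_block_reqs_cnt) == 1)
                    scsi_block_requests(hba->host);
    }

    static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba)
    {
            if (atomic_dec_and_test(&hba->scsi_block_reqs_cnt))
                    scsi_unblock_requests(hba->host);
    }

ufshcd_ungate_work() ends with a single ufshcd_scsi_unblock_requests()
call, so each increment made by ufshcd_hold() must correspond to exactly
one queued ungate work.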
Signed-off-by: Can Guo <cang@codeaurora.org>
Reviewed-by: Hongwu Su <hongwus@codeaurora.org>
---
 drivers/scsi/ufs/ufshcd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 99bd3e4..2907828 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -1611,12 +1611,12 @@ int ufshcd_hold(struct ufs_hba *hba, bool async)
 		 */
 		/* fallthrough */
 	case CLKS_OFF:
-		ufshcd_scsi_block_requests(hba);
 		hba->clk_gating.state = REQ_CLKS_ON;
 		trace_ufshcd_clk_gating(dev_name(hba->dev),
 					hba->clk_gating.state);
-		queue_work(hba->clk_gating.clk_gating_workq,
-			   &hba->clk_gating.ungate_work);
+		if (queue_work(hba->clk_gating.clk_gating_workq,
+			       &hba->clk_gating.ungate_work))
+			ufshcd_scsi_block_requests(hba);
 		/*
 		 * fall through to check if we should wait for this
 		 * work to be done or not.
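
Note on the approach: queue_work() returns false when the work item is
already pending, so with this change ufshcd_hold() bumps
scsi_block_reqs_cnt at most once per queued ungate work, matching the
single unblock performed by ufshcd_ungate_work().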