From patchwork Wed Mar 31 07:39:42 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174571
From: Avri Altman
To: "James E. J. Bottomley", "Martin K. Petersen",
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v7 01/11] scsi: ufshpb: Cache HPB Control mode on init Date: Wed, 31 Mar 2021 10:39:42 +0300 Message-Id: <20210331073952.102162-2-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210331073952.102162-1-avri.altman@wdc.com> References: <20210331073952.102162-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org We will use it later, when we'll need to differentiate between device and host control modes. Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshcd.h | 2 ++ drivers/scsi/ufs/ufshpb.c | 8 +++++--- drivers/scsi/ufs/ufshpb.h | 2 ++ 3 files changed, 9 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h index 4dbe9bc60e85..c01f75963750 100644 --- a/drivers/scsi/ufs/ufshcd.h +++ b/drivers/scsi/ufs/ufshcd.h @@ -656,6 +656,7 @@ struct ufs_hba_variant_params { * @hpb_disabled: flag to check if HPB is disabled * @max_hpb_single_cmd: maximum size of single HPB command * @is_legacy: flag to check HPB 1.0 + * @control_mode: either host or device */ struct ufshpb_dev_info { int num_lu; @@ -665,6 +666,7 @@ struct ufshpb_dev_info { bool hpb_disabled; int max_hpb_single_cmd; bool is_legacy; + u8 control_mode; }; #endif diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 86805af9abe7..5285a50b05dd 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -1615,6 +1615,9 @@ static void ufshpb_lu_parameter_init(struct ufs_hba *hba, % (hpb->srgn_mem_size / HPB_ENTRY_SIZE); hpb->pages_per_srgn = DIV_ROUND_UP(hpb->srgn_mem_size, PAGE_SIZE); + + if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) + hpb->is_hcm = true; } static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) @@ -2308,11 +2311,10 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) { struct ufshpb_dev_info *hpb_dev_info = &hba->ufshpb_dev; int version, ret; - u8 hpb_mode; u32 max_hpb_single_cmd = HPB_MULTI_CHUNK_LOW; - hpb_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; - if (hpb_mode == HPB_HOST_CONTROL) { + hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; + if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) { dev_err(hba->dev, "%s: host control mode is not supported.\n", __func__); hpb_dev_info->hpb_disabled = true; diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index b1128b0ce486..7df30340386a 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -228,6 +228,8 @@ struct ufshpb_lu { u32 entries_per_srgn_shift; u32 pages_per_srgn; + bool is_hcm; + struct ufshpb_stats stats; struct ufshpb_params params; From patchwork Wed Mar 31 07:39:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12174575 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: 
from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 700C0C433E1 for ; Wed, 31 Mar 2021 07:40:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 31F3F619B1 for ; Wed, 31 Mar 2021 07:40:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234074AbhCaHkY (ORCPT ); Wed, 31 Mar 2021 03:40:24 -0400 Received: from esa4.hgst.iphmx.com ([216.71.154.42]:28172 "EHLO esa4.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234088AbhCaHkS (ORCPT ); Wed, 31 Mar 2021 03:40:18 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1617176417; x=1648712417; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=DDdf1YZpbbGS2MS0NGmUe99mGJKNugwHoSd4Gb7gXRk=; b=eBu6IEoFnOcDq408s0SSswuymG1xifaBUPgRhPsoyKYVmGq6B6lqs6Ol hq1TlmUlwXC6sC9EDl9gc0ENdAqmRPt+Ra/zLjy52qDi9hVzqKeXEoOPj gEANztL/tXI7AiUBRKo2sDMqF3Z/Yz8kChSgMdYH1YJmpk7EM9XCBDW84 fZYrh8u1p6Mi3DXsW3F2dqxQ5B2StFxTvTlLFBU0lqHmyNpDDl46oBj9m bGM6Q17GQykBob956+1r2ZFa2IWqNcd8IxEQzFDIBF5OZ6HmtdNIKKAZY KJBNChy9QXcDOtEzijeV3yMp/h/PCsJsduZiiNLIM+9Xge9Qzs/404CWn Q==; IronPort-SDR: kveB/wvGATWRG6wqtAEr6htDBFbyMY35A9XGT94qMiyn781t2c5dRKnq/B+BHXebqx4Ub0R90x o7tgHjjbXRHHRKOt2hvVsyO7JKT8n+iHzD93EwkrqM42EvwdWed1ZAx8vZaCXfItLSJAOiFw3o V0IO969iIbfqO5JLIvwu65hdPtkF7zgpY2dhNrgPCzYsK0ZDUCrA1kKyKWQCmNZgJMdvJ77BsU rXVQsDAqCmveJB6LjK3jCTo1LuVtUyMhVfPEufBS71JEFKXFzxgN6zXNvvsZFChByv7T1LJ4I3 vpY= X-IronPort-AV: E=Sophos;i="5.81,293,1610380800"; d="scan'208";a="163338559" Received: from h199-255-45-14.hgst.com (HELO uls-op-cesaep01.wdc.com) ([199.255.45.14]) by ob1.hgst.iphmx.com with ESMTP; 31 Mar 2021 15:40:17 +0800 IronPort-SDR: 0BYfjCapl1yWo/O4m6mF4OC5kDWztEdJuR3iuWz9tyiQw/UjBW+YmfoukxpIWAL5LjsFRi2Fe3 p2K6cuvoJiVv5OXuDUf6VVEi3wAlV2DWxVt98ni2TzlJ8M36yad9FphJ3nAtWMs25C9bhs0cI9 m7vrwv6/oRmkX2n8IB+C0fHb7+09KWtC+VGgOW2tGDMgTEpkok5VV44IBS0voCscbp8xvFPLQW MzoFNvII8g6aQxKX8DWwILhRpxCsrMb8vO8VG0dTys2p0RSQ5P7ooqJVWT0qH5ynGb6YjqMmug Gu4r8qV5+tCLYm9ge6wW/1+i Received: from uls-op-cesaip01.wdc.com ([10.248.3.36]) by uls-op-cesaep01.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 31 Mar 2021 00:22:01 -0700 IronPort-SDR: ETgRQ7kUBK9sqVziU2ODqHQaN1oXB+4HiLga0+NK3lTYWo4RPt+ybYvWjE1w6OMYLJgT9804Jy 8PGKPTqGFyLVA4rz6UjzDNJieePr+NW3/T9cWffdCeNBwSf4QMwqqXuBEjS8m2O2BCyhK1ZlFU jLn428QOoT2ABTM6en+X77K0VZlyDUW7sCHbdTeMBzz2RfcM5GDcFdpRZPXaJOYjjLoGrvQmVp fzYVhcemfUGveKV93BlSbUowCZCMRMImQPg3l9ob7PVpTdN/1ynDpO4n7vI+gWyd7pgW9lFCt2 uD0= WDCIronportException: Internal Received: from bxygm33.sdcorp.global.sandisk.com ([10.0.231.247]) by uls-op-cesaip01.wdc.com with ESMTP; 31 Mar 2021 00:40:13 -0700 From: Avri Altman To: "James E . J . Bottomley" , "Martin K . 
Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v7 02/11] scsi: ufshpb: Add host control mode support to rsp_upiu Date: Wed, 31 Mar 2021 10:39:43 +0300 Message-Id: <20210331073952.102162-3-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210331073952.102162-1-avri.altman@wdc.com> References: <20210331073952.102162-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org In device control mode, the device may recommend the host to either activate or inactivate a region, and the host should follow. Meaning those are not actually recommendations, but more of instructions. On the contrary, in host control mode, the recommendation protocol is slightly changed: a) The device may only recommend the host to update a subregion of an already-active region. And, b) The device may *not* recommend to inactivate a region. Furthermore, in host control mode, the host may choose not to follow any of the device's recommendations. However, in case of a recommendation to update an active and clean subregion, it is better to follow those recommendation because otherwise the host has no other way to know that some internal relocation took place. Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 34 +++++++++++++++++++++++++++++++++- drivers/scsi/ufs/ufshpb.h | 2 ++ 2 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 5285a50b05dd..6111019ca31a 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -166,6 +166,8 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, else set_bit_len = cnt; + set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + if (rgn->rgn_state != HPB_RGN_INACTIVE && srgn->srgn_state == HPB_SRGN_VALID) bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len); @@ -235,6 +237,11 @@ static bool ufshpb_test_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx, return false; } +static inline bool is_rgn_dirty(struct ufshpb_region *rgn) +{ + return test_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); +} + static int ufshpb_fill_ppn_from_page(struct ufshpb_lu *hpb, struct ufshpb_map_ctx *mctx, int pos, int len, u64 *ppn_buf) @@ -712,6 +719,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, struct ufshpb_subregion *srgn) { + struct ufshpb_region *rgn; u32 num_entries = hpb->entries_per_srgn; if (!srgn->mctx) { @@ -725,6 +733,10 @@ static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, num_entries = hpb->last_srgn_entries; bitmap_zero(srgn->mctx->ppn_dirty, num_entries); + + rgn = hpb->rgn_tbl + srgn->rgn_idx; + clear_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags); + return 0; } @@ -1244,6 +1256,18 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb, srgn_i = be16_to_cpu(rsp_field->hpb_active_field[i].active_srgn); + rgn = hpb->rgn_tbl + rgn_i; + if (hpb->is_hcm && + (rgn->rgn_state != HPB_RGN_ACTIVE || is_rgn_dirty(rgn))) { + /* + * in host control mode, subregion activation + * recommendations are only allowed to active regions. 

From patchwork Wed Mar 31 07:39:44 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174577
Subject: [PATCH v7 03/11] scsi: ufshpb: Transform set_dirty to iterate_rgn
Date: Wed, 31 Mar 2021 10:39:44 +0300
Message-Id: <20210331073952.102162-4-avri.altman@wdc.com>

Given a transfer length, set_dirty meticulously runs over all the
entries, across subregions and regions if needed. Currently its only use
is to mark dirty blocks, but soon HCM may profit from it as well, when
managing its read counters.
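
For illustration, a stand-alone sketch of the same walk, using made-up
ENTRIES_PER_SRGN and SRGNS_PER_RGN values rather than the driver's real
geometry:

#include <stdio.h>

#define ENTRIES_PER_SRGN 4096   /* illustrative value, not the driver's */
#define SRGNS_PER_RGN    4      /* illustrative value, not the driver's */

/* Walk every (region, subregion) chunk touched by a transfer of cnt entries. */
static void iterate_rgn(int rgn_idx, int srgn_idx, int srgn_offset, int cnt)
{
        while (cnt > 0) {
                int bitmap_len = ENTRIES_PER_SRGN;
                int len = cnt;

                if (cnt > bitmap_len - srgn_offset)
                        len = bitmap_len - srgn_offset;

                printf("region %d, subregion %d: offset %d, %d entries\n",
                       rgn_idx, srgn_idx, srgn_offset, len);

                cnt -= len;
                srgn_offset = 0;
                if (++srgn_idx == SRGNS_PER_RGN) {  /* roll over to next region */
                        srgn_idx = 0;
                        rgn_idx++;
                }
        }
}

int main(void)
{
        /* A transfer starting near the end of a subregion spills into the next ones. */
        iterate_rgn(0, 3, 4000, 10000);
        return 0;
}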
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 6111019ca31a..252fcfb48862 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -144,13 +144,14 @@ static bool ufshpb_is_hpb_rsp_valid(struct ufs_hba *hba,
         return true;
 }

-static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
-                                 int srgn_idx, int srgn_offset, int cnt)
+static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
+                               int srgn_offset, int cnt, bool set_dirty)
 {
         struct ufshpb_region *rgn;
         struct ufshpb_subregion *srgn;
         int set_bit_len;
         int bitmap_len;
+        unsigned long flags;

 next_srgn:
         rgn = hpb->rgn_tbl + rgn_idx;
@@ -166,11 +167,14 @@ static void ufshpb_set_ppn_dirty(struct ufshpb_lu *hpb, int rgn_idx,
         else
                 set_bit_len = cnt;

-        set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+        if (set_dirty)
+                set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);

-        if (rgn->rgn_state != HPB_RGN_INACTIVE &&
+        spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+        if (set_dirty && rgn->rgn_state != HPB_RGN_INACTIVE &&
             srgn->srgn_state == HPB_SRGN_VALID)
                 bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
+        spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);

         srgn_offset = 0;
         if (++srgn_idx == hpb->srgns_per_rgn) {
                 srgn_idx = 0;
@@ -592,10 +596,8 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)

         /* If command type is WRITE or DISCARD, set bitmap as drity */
         if (ufshpb_is_write_or_discard_cmd(cmd)) {
-                spin_lock_irqsave(&hpb->rgn_state_lock, flags);
-                ufshpb_set_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
-                                     transfer_len);
-                spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+                ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset,
+                                   transfer_len, true);
                 return 0;
         }

From patchwork Wed Mar 31 07:39:45 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174579
Subject: [PATCH v7 04/11] scsi: ufshpb: Add reads counter
Date: Wed, 31 Mar 2021 10:39:45 +0300
Message-Id: <20210331073952.102162-5-avri.altman@wdc.com>

In host control mode, reads are the major source of activation trials.
Keep track of these read counters, for both active and inactive regions.
Reset the read counter upon write - we are only interested in "clean"
reads.

Keep the counters normalized, as we use them as a comparative score to
make various decisions. If, over consecutive normalizations, an active
region has exhausted its reads - inactivate it.

While at it, protect the {active,inactive}_count stats by updating them
in the applicable handlers.
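
For illustration, a stand-alone sketch of the counting scheme under
simplified, made-up types (only ACTIVATION_THRESHOLD matches the patch;
the helpers are hypothetical):

#include <stdio.h>

#define ACTIVATION_THRESHOLD 8   /* matches the patch: 8 "clean" reads */
#define SRGNS_PER_RGN        2   /* illustrative */

struct srgn { unsigned int reads; };
struct rgn  { unsigned int reads; struct srgn srgn_tbl[SRGNS_PER_RGN]; };

/* A read bumps both counters; hitting the threshold triggers activation. */
static int note_read(struct rgn *r, int srgn_idx)
{
        r->srgn_tbl[srgn_idx].reads++;
        r->reads++;
        return r->srgn_tbl[srgn_idx].reads == ACTIVATION_THRESHOLD;
}

/* A write wipes the subregion's score - only "clean" reads count. */
static void note_write(struct rgn *r, int srgn_idx)
{
        r->reads -= r->srgn_tbl[srgn_idx].reads;
        r->srgn_tbl[srgn_idx].reads = 0;
}

/* Normalization halves every subregion score and rebuilds the region total. */
static void normalize(struct rgn *r)
{
        int i;

        r->reads = 0;
        for (i = 0; i < SRGNS_PER_RGN; i++) {
                r->srgn_tbl[i].reads >>= 1;
                r->reads += r->srgn_tbl[i].reads;
        }
}

int main(void)
{
        struct rgn r = { 0 };
        int i;

        for (i = 0; i < 8; i++)
                if (note_read(&r, 0))
                        printf("activate after %d reads\n", i + 1);

        normalize(&r);
        printf("after normalization: srgn=%u rgn=%u\n",
               r.srgn_tbl[0].reads, r.reads);          /* 4 and 4 */

        note_write(&r, 0);
        printf("after write: rgn=%u\n", r.reads);       /* 0 */
        return 0;
}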
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 94 ++++++++++++++++++++++++++++++++++++---
 drivers/scsi/ufs/ufshpb.h |  9 ++++
 2 files changed, 97 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 252fcfb48862..3ab66421dc00 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -16,6 +16,8 @@
 #include "ufshpb.h"
 #include "../sd.h"

+#define ACTIVATION_THRESHOLD 8 /* 8 IOs */
+
 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
 static mempool_t *ufshpb_mctx_pool;
@@ -26,6 +28,9 @@ static int tot_active_srgn_pages;

 static struct workqueue_struct *ufshpb_wq;

+static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,
+                                      int srgn_idx);
+
 bool ufshpb_is_allowed(struct ufs_hba *hba)
 {
         return !(hba->ufshpb_dev.hpb_disabled);
@@ -148,7 +153,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
                                int srgn_offset, int cnt, bool set_dirty)
 {
         struct ufshpb_region *rgn;
-        struct ufshpb_subregion *srgn;
+        struct ufshpb_subregion *srgn, *prev_srgn = NULL;
         int set_bit_len;
         int bitmap_len;
         unsigned long flags;
@@ -167,15 +172,39 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
         else
                 set_bit_len = cnt;

-        if (set_dirty)
-                set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
-
         spin_lock_irqsave(&hpb->rgn_state_lock, flags);
         if (set_dirty && rgn->rgn_state != HPB_RGN_INACTIVE &&
             srgn->srgn_state == HPB_SRGN_VALID)
                 bitmap_set(srgn->mctx->ppn_dirty, srgn_offset, set_bit_len);
         spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);

+        if (hpb->is_hcm && prev_srgn != srgn) {
+                bool activate = false;
+
+                spin_lock(&rgn->rgn_lock);
+                if (set_dirty) {
+                        rgn->reads -= srgn->reads;
+                        srgn->reads = 0;
+                        set_bit(RGN_FLAG_DIRTY, &rgn->rgn_flags);
+                } else {
+                        srgn->reads++;
+                        rgn->reads++;
+                        if (srgn->reads == ACTIVATION_THRESHOLD)
+                                activate = true;
+                }
+                spin_unlock(&rgn->rgn_lock);
+
+                if (activate) {
+                        spin_lock_irqsave(&hpb->rsp_list_lock, flags);
+                        ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
+                        spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
+                        dev_dbg(&hpb->sdev_ufs_lu->sdev_dev,
+                                "activate region %d-%d\n", rgn_idx, srgn_idx);
+                }
+
+                prev_srgn = srgn;
+        }
+
         srgn_offset = 0;
         if (++srgn_idx == hpb->srgns_per_rgn) {
                 srgn_idx = 0;
@@ -604,6 +633,19 @@ int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
         if (!ufshpb_is_support_chunk(hpb, transfer_len))
                 return 0;

+        if (hpb->is_hcm) {
+                /*
+                 * in host control mode, reads are the main source for
+                 * activation trials.
+                 */
+                ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset,
+                                   transfer_len, false);
+
+                /* keep those counters normalized */
+                if (rgn->reads > hpb->entries_per_srgn)
+                        schedule_work(&hpb->ufshpb_normalization_work);
+        }
+
         spin_lock_irqsave(&hpb->rgn_state_lock, flags);
         if (ufshpb_test_ppn_dirty(hpb, rgn_idx, srgn_idx, srgn_offset,
                                    transfer_len)) {
@@ -755,6 +797,8 @@ static void ufshpb_update_active_info(struct ufshpb_lu *hpb, int rgn_idx,

         if (list_empty(&srgn->list_act_srgn))
                 list_add_tail(&srgn->list_act_srgn, &hpb->lh_act_srgn);
+
+        hpb->stats.rb_active_cnt++;
 }

 static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)
@@ -770,6 +814,8 @@ static void ufshpb_update_inactive_info(struct ufshpb_lu *hpb, int rgn_idx)

         if (list_empty(&rgn->list_inact_rgn))
                 list_add_tail(&rgn->list_inact_rgn, &hpb->lh_inact_rgn);
+
+        hpb->stats.rb_inactive_cnt++;
 }

 static void ufshpb_activate_subregion(struct ufshpb_lu *hpb,
@@ -1090,6 +1136,7 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
                         rgn->rgn_idx);
                 goto out;
         }
+
         if (!list_empty(&rgn->list_lru_rgn)) {
                 if (ufshpb_check_srgns_issue_state(hpb, rgn)) {
                         ret = -EBUSY;
@@ -1284,7 +1331,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
                 if (srgn->srgn_state == HPB_SRGN_VALID)
                         srgn->srgn_state = HPB_SRGN_INVALID;
                 spin_unlock(&hpb->rgn_state_lock);
-                hpb->stats.rb_active_cnt++;
         }

         if (hpb->is_hcm) {
@@ -1316,7 +1362,6 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
                 }
                 spin_unlock(&hpb->rgn_state_lock);
-                hpb->stats.rb_inactive_cnt++;
         }

 out:
@@ -1515,6 +1560,36 @@ static void ufshpb_run_inactive_region_list(struct ufshpb_lu *hpb)
         spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
 }

+static void ufshpb_normalization_work_handler(struct work_struct *work)
+{
+        struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu,
+                                             ufshpb_normalization_work);
+        int rgn_idx;
+
+        for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) {
+                struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx;
+                int srgn_idx;
+
+                spin_lock(&rgn->rgn_lock);
+                rgn->reads = 0;
+                for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) {
+                        struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx;
+
+                        srgn->reads >>= 1;
+                        rgn->reads += srgn->reads;
+                }
+                spin_unlock(&rgn->rgn_lock);
+
+                if (rgn->rgn_state != HPB_RGN_ACTIVE || rgn->reads)
+                        continue;
+
+                /* if region is active but has no reads - inactivate it */
+                spin_lock(&hpb->rsp_list_lock);
+                ufshpb_update_inactive_info(hpb, rgn->rgn_idx);
+                spin_unlock(&hpb->rsp_list_lock);
+        }
+}
+
 static void ufshpb_map_work_handler(struct work_struct *work)
 {
         struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, map_work);
@@ -1674,6 +1749,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
                 rgn = rgn_table + rgn_idx;
                 rgn->rgn_idx = rgn_idx;

+                spin_lock_init(&rgn->rgn_lock);
+
                 INIT_LIST_HEAD(&rgn->list_inact_rgn);
                 INIT_LIST_HEAD(&rgn->list_lru_rgn);

@@ -1915,6 +1992,9 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb)
         INIT_LIST_HEAD(&hpb->list_hpb_lu);

         INIT_WORK(&hpb->map_work, ufshpb_map_work_handler);
+        if (hpb->is_hcm)
+                INIT_WORK(&hpb->ufshpb_normalization_work,
+                          ufshpb_normalization_work_handler);

         hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache",
                           sizeof(struct ufshpb_req), 0, 0, NULL);
@@ -2014,6 +2094,8 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb)

 static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
+        if (hpb->is_hcm)
+                cancel_work_sync(&hpb->ufshpb_normalization_work);
         cancel_work_sync(&hpb->map_work);
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 032672114881..87495e59fcf1 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -106,6 +106,10 @@ struct ufshpb_subregion {
         int rgn_idx;
         int srgn_idx;
         bool is_last;
+
+        /* subregion reads - for host mode */
+        unsigned int reads;
+
         /* below information is used by rsp_list */
         struct list_head list_act_srgn;
 };
@@ -123,6 +127,10 @@ struct ufshpb_region {
         struct list_head list_lru_rgn;
         unsigned long rgn_flags;
 #define RGN_FLAG_DIRTY 0
+
+        /* region reads - for host mode */
+        spinlock_t rgn_lock;
+        unsigned int reads;
 };

 #define for_each_sub_region(rgn, i, srgn)        \
@@ -212,6 +220,7 @@ struct ufshpb_lu {

         /* for selecting victim */
         struct victim_select_info lru_info;
+        struct work_struct ufshpb_normalization_work;

         /* pinned region information */
         u32 lu_pinned_start;

From patchwork Wed Mar 31 07:39:46 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174585
Subject: [PATCH v7 05/11] scsi: ufshpb: Make eviction depends on region's reads
Date: Wed, 31 Mar 2021 10:39:46 +0300
Message-Id: <20210331073952.102162-6-avri.altman@wdc.com>

In host mode, eviction is considered an extreme measure. Verify that the
entering region has enough reads, and that the exiting region has far
fewer.
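
With the constants used in this series, ACTIVATION_THRESHOLD is 8,
EVICTION_THRESHOLD is 8 << 5 = 256, and the victim check uses
EVICTION_THRESHOLD >> 1 = 128. A stand-alone sketch of the two checks
(the helper names are made up):

#include <stdbool.h>
#include <stdio.h>

#define ACTIVATION_THRESHOLD 8
#define EVICTION_THRESHOLD   (ACTIVATION_THRESHOLD << 5)   /* 256 reads */

/* The region asking to enter the LRU must be "hot" enough. */
static bool may_enter(unsigned int entering_reads)
{
        return entering_reads >= EVICTION_THRESHOLD;
}

/* The LRU victim must be "cold" enough to be worth evicting. */
static bool may_evict(unsigned int victim_reads)
{
        return victim_reads <= (EVICTION_THRESHOLD >> 1);   /* 128 reads */
}

int main(void)
{
        printf("enter with 300 reads: %d\n", may_enter(300));        /* 1 */
        printf("enter with 100 reads: %d\n", may_enter(100));        /* 0 */
        printf("evict victim with 50 reads: %d\n", may_evict(50));   /* 1 */
        printf("evict victim with 200 reads: %d\n", may_evict(200)); /* 0 */
        return 0;
}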
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index 3ab66421dc00..aefb6dc160ee 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -17,6 +17,7 @@
 #include "../sd.h"

 #define ACTIVATION_THRESHOLD 8 /* 8 IOs */
+#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */

 /* memory management */
 static struct kmem_cache *ufshpb_mctx_cache;
@@ -1057,6 +1058,13 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb)
                 if (ufshpb_check_srgns_issue_state(hpb, rgn))
                         continue;

+                /*
+                 * in host control mode, verify that the exiting region
+                 * has less reads
+                 */
+                if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1))
+                        continue;
+
                 victim_rgn = rgn;
                 break;
         }
@@ -1229,7 +1237,7 @@ static int ufshpb_issue_map_req(struct ufshpb_lu *hpb,

 static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
 {
-        struct ufshpb_region *victim_rgn;
+        struct ufshpb_region *victim_rgn = NULL;
         struct victim_select_info *lru_info = &hpb->lru_info;
         unsigned long flags;
         int ret = 0;
@@ -1256,7 +1264,15 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
                          * It is okay to evict the least recently used region,
                          * because the device could detect this region
                          * by not issuing HPB_READ
+                         *
+                         * in host control mode, verify that the entering
+                         * region has enough reads
                          */
+                        if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) {
+                                ret = -EACCES;
+                                goto out;
+                        }
+
                         victim_rgn = ufshpb_victim_lru_info(hpb);
                         if (!victim_rgn) {
                                 dev_warn(&hpb->sdev_ufs_lu->sdev_dev,

From patchwork Wed Mar 31 07:39:47 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174581
Subject: [PATCH v7 06/11] scsi: ufshpb: Region inactivation in host mode
Date: Wed, 31 Mar 2021 10:39:47 +0300
Message-Id: <20210331073952.102162-7-avri.altman@wdc.com>

In host mode, the host is expected to send HPB-WRITE-BUFFER with
buffer-id = 0x1 when it inactivates a region.

Use the map-requests pool for this, as there is no point in assigning a
designated cache for umap-requests.

Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 35 +++++++++++++++++++++++++++++++----
 drivers/scsi/ufs/ufshpb.h |  1 +
 2 files changed, 32 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index aefb6dc160ee..fcc954f51bcf 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -914,6 +914,7 @@ static int ufshpb_execute_umap_req(struct ufshpb_lu *hpb,

         blk_execute_rq_nowait(NULL, req, 1, ufshpb_umap_req_compl_fn);

+        hpb->stats.umap_req_cnt++;
         return 0;
 }

@@ -1110,18 +1111,37 @@ static int ufshpb_issue_umap_req(struct ufshpb_lu *hpb,
         return -EAGAIN;
 }

+static int ufshpb_issue_umap_single_req(struct ufshpb_lu *hpb,
+                                        struct ufshpb_region *rgn)
+{
+        return ufshpb_issue_umap_req(hpb, rgn);
+}
+
 static int ufshpb_issue_umap_all_req(struct ufshpb_lu *hpb)
 {
         return ufshpb_issue_umap_req(hpb, NULL);
 }

-static void __ufshpb_evict_region(struct ufshpb_lu *hpb,
-                                  struct ufshpb_region *rgn)
+static int __ufshpb_evict_region(struct ufshpb_lu *hpb,
+                                 struct ufshpb_region *rgn)
 {
         struct victim_select_info *lru_info;
         struct ufshpb_subregion *srgn;
         int srgn_idx;

+        lockdep_assert_held(&hpb->rgn_state_lock);
+
+        if (hpb->is_hcm) {
+                unsigned long flags;
+                int ret;
+
+                spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+                ret = ufshpb_issue_umap_single_req(hpb, rgn);
+                spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+                if (ret)
+                        return ret;
+        }
+
         lru_info = &hpb->lru_info;

         dev_dbg(&hpb->sdev_ufs_lu->sdev_dev, "evict region %d\n", rgn->rgn_idx);
@@ -1130,6 +1150,8 @@ static void __ufshpb_evict_region(struct ufshpb_lu *hpb,

         for_each_sub_region(rgn, srgn_idx, srgn)
                 ufshpb_purge_active_subregion(hpb, srgn);
+
+        return 0;
 }

 static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
@@ -1151,7 +1173,7 @@ static int ufshpb_evict_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
                         goto out;
                 }

-                __ufshpb_evict_region(hpb, rgn);
+                ret = __ufshpb_evict_region(hpb, rgn);
         }
 out:
         spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
@@ -1285,7 +1307,9 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn)
                                 "LRU full (%d), choose victim %d\n",
                                 atomic_read(&lru_info->active_cnt),
                                 victim_rgn->rgn_idx);
-                        __ufshpb_evict_region(hpb, victim_rgn);
+                        ret = __ufshpb_evict_region(hpb, victim_rgn);
+                        if (ret)
+                                goto out;
                 }

                 /*
@@ -1856,6 +1880,7 @@ ufshpb_sysfs_attr_show_func(rb_noti_cnt);
 ufshpb_sysfs_attr_show_func(rb_active_cnt);
 ufshpb_sysfs_attr_show_func(rb_inactive_cnt);
 ufshpb_sysfs_attr_show_func(map_req_cnt);
+ufshpb_sysfs_attr_show_func(umap_req_cnt);

 static struct attribute *hpb_dev_stat_attrs[] = {
         &dev_attr_hit_cnt.attr,
@@ -1864,6 +1889,7 @@ static struct attribute *hpb_dev_stat_attrs[] = {
         &dev_attr_rb_active_cnt.attr,
         &dev_attr_rb_inactive_cnt.attr,
         &dev_attr_map_req_cnt.attr,
+        &dev_attr_umap_req_cnt.attr,
         NULL,
 };

@@ -1988,6 +2014,7 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb)
         hpb->stats.rb_active_cnt = 0;
         hpb->stats.rb_inactive_cnt = 0;
         hpb->stats.map_req_cnt = 0;
+        hpb->stats.umap_req_cnt = 0;
 }

 static void ufshpb_param_init(struct ufshpb_lu *hpb)
diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 87495e59fcf1..1ea58c17a4de 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -191,6 +191,7 @@ struct ufshpb_stats {
         u64 rb_inactive_cnt;
         u64 map_req_cnt;
         u64 pre_req_cnt;
+        u64 umap_req_cnt;
 };

 struct ufshpb_lu {

From patchwork Wed Mar 31 07:39:48 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174583
Subject: [PATCH v7 07/11] scsi: ufshpb: Add hpb dev reset response
Date: Wed, 31 Mar 2021 10:39:48 +0300
Message-Id: <20210331073952.102162-8-avri.altman@wdc.com>

The spec does not define the host's recommended response when the device
sends an hpb dev reset response (oper 0x2). We will update all active hpb
regions: mark them, and do the actual update on the next read.
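
For illustration, a stand-alone sketch of that mark-then-refresh scheme,
using plain bit masks and made-up helpers instead of the kernel's
set_bit()/test_and_clear_bit() on bit numbers:

#include <stdio.h>

#define RGN_FLAG_UPDATE (1u << 1)   /* mask form of the patch's bit number 1 */
#define NR_ACTIVE_RGNS  3           /* illustrative */

struct rgn { unsigned int flags; };

/* On HPB_RSP_DEV_RESET, mark every active (LRU) region for a refresh. */
static void dev_reset_handler(struct rgn *lru, int nr)
{
        int i;

        for (i = 0; i < nr; i++)
                lru[i].flags |= RGN_FLAG_UPDATE;
}

/* On the next read of a marked region, clear the mark and re-activate it. */
static int consume_update(struct rgn *r)
{
        if (r->flags & RGN_FLAG_UPDATE) {
                r->flags &= ~RGN_FLAG_UPDATE;
                return 1;   /* caller queues the subregion for (re)activation */
        }
        return 0;
}

int main(void)
{
        struct rgn lru[NR_ACTIVE_RGNS] = { { 0 } };

        dev_reset_handler(lru, NR_ACTIVE_RGNS);
        printf("first read after reset: refresh=%d\n", consume_update(&lru[0])); /* 1 */
        printf("second read:            refresh=%d\n", consume_update(&lru[0])); /* 0 */
        return 0;
}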
Signed-off-by: Avri Altman
---
 drivers/scsi/ufs/ufshpb.c | 32 +++++++++++++++++++++++++++++++-
 drivers/scsi/ufs/ufshpb.h |  1 +
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c
index fcc954f51bcf..1d99099ebd41 100644
--- a/drivers/scsi/ufs/ufshpb.c
+++ b/drivers/scsi/ufs/ufshpb.c
@@ -195,7 +195,8 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx,
                 }
                 spin_unlock(&rgn->rgn_lock);

-                if (activate) {
+                if (activate ||
+                    test_and_clear_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags)) {
                         spin_lock_irqsave(&hpb->rsp_list_lock, flags);
                         ufshpb_update_active_info(hpb, rgn_idx, srgn_idx);
                         spin_unlock_irqrestore(&hpb->rsp_list_lock, flags);
@@ -1412,6 +1413,20 @@ static void ufshpb_rsp_req_region_update(struct ufshpb_lu *hpb,
         queue_work(ufshpb_wq, &hpb->map_work);
 }

+static void ufshpb_dev_reset_handler(struct ufshpb_lu *hpb)
+{
+        struct victim_select_info *lru_info = &hpb->lru_info;
+        struct ufshpb_region *rgn;
+        unsigned long flags;
+
+        spin_lock_irqsave(&hpb->rgn_state_lock, flags);
+
+        list_for_each_entry(rgn, &lru_info->lh_lru_rgn, list_lru_rgn)
+                set_bit(RGN_FLAG_UPDATE, &rgn->rgn_flags);
+
+        spin_unlock_irqrestore(&hpb->rgn_state_lock, flags);
+}
+
 /*
  * This function will parse recommended active subregion information in sense
  * data field of response UPIU with SAM_STAT_GOOD state.
@@ -1486,6 +1501,18 @@ void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
         case HPB_RSP_DEV_RESET:
                 dev_warn(&hpb->sdev_ufs_lu->sdev_dev,
                          "UFS device lost HPB information during PM.\n");
+
+                if (hpb->is_hcm) {
+                        struct scsi_device *sdev;
+
+                        __shost_for_each_device(sdev, hba->host) {
+                                struct ufshpb_lu *h = sdev->hostdata;
+
+                                if (h)
+                                        ufshpb_dev_reset_handler(h);
+                        }
+                }
+
                 break;
         default:
                 dev_notice(&hpb->sdev_ufs_lu->sdev_dev,
@@ -1812,6 +1839,8 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb)
                 } else {
                         rgn->rgn_state = HPB_RGN_INACTIVE;
                 }
+
+                rgn->rgn_flags = 0;
         }

         return 0;
@@ -2139,6 +2168,7 @@ static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb)
 {
         if (hpb->is_hcm)
                 cancel_work_sync(&hpb->ufshpb_normalization_work);
+
         cancel_work_sync(&hpb->map_work);
 }

diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h
index 1ea58c17a4de..b863540e28d6 100644
--- a/drivers/scsi/ufs/ufshpb.h
+++ b/drivers/scsi/ufs/ufshpb.h
@@ -127,6 +127,7 @@ struct ufshpb_region {
         struct list_head list_lru_rgn;
         unsigned long rgn_flags;
 #define RGN_FLAG_DIRTY 0
+#define RGN_FLAG_UPDATE 1

         /* region reads - for host mode */
         spinlock_t rgn_lock;

From patchwork Wed Mar 31 07:39:49 2021
X-Patchwork-Submitter: Avri Altman
X-Patchwork-Id: 12174587
Subject: [PATCH v7 08/11] scsi: ufshpb: Add "Cold" regions timer
Date: Wed, 31 Mar 2021 10:39:49 +0300
Message-Id: <20210331073952.102162-9-avri.altman@wdc.com>

In order not to hang on to "cold" regions, we shall inactivate a region
that has seen no READ access for a predefined amount of time -
READ_TO_MS. For that purpose we shall monitor the active regions list,
polling it every POLLING_INTERVAL_MS. On timeout expiry we shall add the
region to the "to-be-inactivated" list, unless it is clean and has not
exhausted its READ_TO_EXPIRIES - another parameter.

None of this applies to pinned regions.
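
For illustration, a stand-alone sketch of one polling pass over a region,
using plain millisecond integers instead of ktime_t (the constants match
the patch; the helper is made up):

#include <stdbool.h>
#include <stdio.h>

#define READ_TO_MS          1000   /* per-region timeout, as in the patch */
#define READ_TO_EXPIRIES    100    /* re-arms allowed before giving up */
#define POLLING_INTERVAL_MS 200    /* how often the worker polls the LRU */

struct rgn {
        bool dirty;
        long read_timeout;                  /* absolute deadline in ms */
        unsigned int read_timeout_expiries;
};

/* One polling pass: returns true when the region should be inactivated. */
static bool poll_region(struct rgn *r, long now_ms)
{
        if (now_ms <= r->read_timeout)
                return false;                        /* not timed out yet */

        r->read_timeout_expiries--;
        if (r->dirty || r->read_timeout_expiries == 0)
                return true;                         /* move to expired list */

        r->read_timeout = now_ms + READ_TO_MS;       /* clean: re-arm and wait */
        return false;
}

int main(void)
{
        struct rgn clean = { false, READ_TO_MS, READ_TO_EXPIRIES };
        struct rgn dirty = { true,  READ_TO_MS, READ_TO_EXPIRIES };
        long now;

        for (now = POLLING_INTERVAL_MS; ; now += POLLING_INTERVAL_MS) {
                poll_region(&clean, now);
                if (poll_region(&dirty, now)) {
                        printf("dirty region expired at t=%ld ms\n", now);
                        break;
                }
        }
        printf("clean region keeps re-arming; expiries left: %u\n",
               clean.read_timeout_expiries);
        return 0;
}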
Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 74 +++++++++++++++++++++++++++++++++++++-- drivers/scsi/ufs/ufshpb.h | 8 +++++ 2 files changed, 79 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 1d99099ebd41..8dbeaf948afa 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -18,6 +18,9 @@ #define ACTIVATION_THRESHOLD 8 /* 8 IOs */ #define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */ +#define READ_TO_MS 1000 +#define READ_TO_EXPIRIES 100 +#define POLLING_INTERVAL_MS 200 /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -1031,12 +1034,63 @@ static int ufshpb_check_srgns_issue_state(struct ufshpb_lu *hpb, return 0; } +static void ufshpb_read_to_handler(struct work_struct *work) +{ + struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, + ufshpb_read_to_work.work); + struct victim_select_info *lru_info = &hpb->lru_info; + struct ufshpb_region *rgn, *next_rgn; + unsigned long flags; + LIST_HEAD(expired_list); + + if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits)) + return; + + spin_lock_irqsave(&hpb->rgn_state_lock, flags); + + list_for_each_entry_safe(rgn, next_rgn, &lru_info->lh_lru_rgn, + list_lru_rgn) { + bool timedout = ktime_after(ktime_get(), rgn->read_timeout); + + if (timedout) { + rgn->read_timeout_expiries--; + if (is_rgn_dirty(rgn) || + rgn->read_timeout_expiries == 0) + list_add(&rgn->list_expired_rgn, &expired_list); + else + rgn->read_timeout = ktime_add_ms(ktime_get(), + READ_TO_MS); + } + } + + spin_unlock_irqrestore(&hpb->rgn_state_lock, flags); + + list_for_each_entry_safe(rgn, next_rgn, &expired_list, + list_expired_rgn) { + list_del_init(&rgn->list_expired_rgn); + spin_lock_irqsave(&hpb->rsp_list_lock, flags); + ufshpb_update_inactive_info(hpb, rgn->rgn_idx); + spin_unlock_irqrestore(&hpb->rsp_list_lock, flags); + } + + ufshpb_kick_map_work(hpb); + + clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits); + + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); +} + static void ufshpb_add_lru_info(struct victim_select_info *lru_info, struct ufshpb_region *rgn) { rgn->rgn_state = HPB_RGN_ACTIVE; list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); atomic_inc(&lru_info->active_cnt); + if (rgn->hpb->is_hcm) { + rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS); + rgn->read_timeout_expiries = READ_TO_EXPIRIES; + } } static void ufshpb_hit_lru_info(struct victim_select_info *lru_info, @@ -1820,6 +1874,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) INIT_LIST_HEAD(&rgn->list_inact_rgn); INIT_LIST_HEAD(&rgn->list_lru_rgn); + INIT_LIST_HEAD(&rgn->list_expired_rgn); if (rgn_idx == hpb->rgns_per_lu - 1) { srgn_cnt = ((hpb->srgns_per_lu - 1) % @@ -1841,6 +1896,7 @@ static int ufshpb_alloc_region_tbl(struct ufs_hba *hba, struct ufshpb_lu *hpb) } rgn->rgn_flags = 0; + rgn->hpb = hpb; } return 0; @@ -2064,9 +2120,12 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) INIT_LIST_HEAD(&hpb->list_hpb_lu); INIT_WORK(&hpb->map_work, ufshpb_map_work_handler); - if (hpb->is_hcm) + if (hpb->is_hcm) { INIT_WORK(&hpb->ufshpb_normalization_work, ufshpb_normalization_work_handler); + INIT_DELAYED_WORK(&hpb->ufshpb_read_to_work, + ufshpb_read_to_handler); + } hpb->map_req_cache = kmem_cache_create("ufshpb_req_cache", sizeof(struct ufshpb_req), 0, 0, NULL); @@ -2100,6 +2159,10 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu 
*hpb) ufshpb_stat_init(hpb); ufshpb_param_init(hpb); + if (hpb->is_hcm) + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); + return 0; release_pre_req_mempool: @@ -2166,9 +2229,10 @@ static void ufshpb_discard_rsp_lists(struct ufshpb_lu *hpb) static void ufshpb_cancel_jobs(struct ufshpb_lu *hpb) { - if (hpb->is_hcm) + if (hpb->is_hcm) { + cancel_delayed_work_sync(&hpb->ufshpb_read_to_work); cancel_work_sync(&hpb->ufshpb_normalization_work); - + } cancel_work_sync(&hpb->map_work); } @@ -2276,6 +2340,10 @@ void ufshpb_resume(struct ufs_hba *hba) continue; ufshpb_set_state(hpb, HPB_PRESENT); ufshpb_kick_map_work(hpb); + if (hpb->is_hcm) + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(POLLING_INTERVAL_MS)); + } } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index b863540e28d6..448062a94760 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -115,6 +115,7 @@ struct ufshpb_subregion { }; struct ufshpb_region { + struct ufshpb_lu *hpb; struct ufshpb_subregion *srgn_tbl; enum HPB_RGN_STATE rgn_state; int rgn_idx; @@ -132,6 +133,10 @@ struct ufshpb_region { /* region reads - for host mode */ spinlock_t rgn_lock; unsigned int reads; + /* region "cold" timer - for host mode */ + ktime_t read_timeout; + unsigned int read_timeout_expiries; + struct list_head list_expired_rgn; }; #define for_each_sub_region(rgn, i, srgn) \ @@ -223,6 +228,9 @@ struct ufshpb_lu { /* for selecting victim */ struct victim_select_info lru_info; struct work_struct ufshpb_normalization_work; + struct delayed_work ufshpb_read_to_work; + unsigned long work_data_bits; +#define TIMEOUT_WORK_RUNNING 0 /* pinned region information */ u32 lu_pinned_start;
From patchwork Wed Mar 31 07:39:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12174589
From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v7 09/11] scsi: ufshpb: Limit the number of inflight map requests Date: Wed, 31 Mar 2021 10:39:50 +0300 Message-Id: <20210331073952.102162-10-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210331073952.102162-1-avri.altman@wdc.com> References: <20210331073952.102162-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org
In host control mode the host is the originator of map requests. To not flood the device with map requests, use a simple throttling mechanism that limits the number of inflight map requests.
Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 11 +++++++++++ drivers/scsi/ufs/ufshpb.h | 1 + 2 files changed, 12 insertions(+) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 8dbeaf948afa..c07da481ff4e 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -21,6 +21,7 @@ #define READ_TO_MS 1000 #define READ_TO_EXPIRIES 100 #define POLLING_INTERVAL_MS 200 +#define THROTTLE_MAP_REQ_DEFAULT 1 /* memory management */ static struct kmem_cache *ufshpb_mctx_cache; @@ -740,6 +741,14 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct ufshpb_req *map_req; struct bio *bio; + if (hpb->is_hcm && + hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) { + dev_info(&hpb->sdev_ufs_lu->sdev_dev, + "map_req throttle. 
inflight %d throttle %d", + hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT); + return NULL; + } + map_req = ufshpb_get_req(hpb, srgn->rgn_idx, REQ_OP_SCSI_IN); if (!map_req) return NULL; @@ -754,6 +763,7 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, map_req->rb.srgn_idx = srgn->srgn_idx; map_req->rb.mctx = srgn->mctx; + hpb->num_inflight_map_req++; return map_req; } @@ -763,6 +773,7 @@ static void ufshpb_put_map_req(struct ufshpb_lu *hpb, { bio_put(map_req->bio); ufshpb_put_req(hpb, map_req); + hpb->num_inflight_map_req--; } static int ufshpb_clear_dirty_bitmap(struct ufshpb_lu *hpb, diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index 448062a94760..cfa0abac21db 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -217,6 +217,7 @@ struct ufshpb_lu { struct ufshpb_req *pre_req; int num_inflight_pre_req; int throttle_pre_req; + int num_inflight_map_req; struct list_head lh_pre_req_free; int cur_read_id; int pre_req_min_tr_len; From patchwork Wed Mar 31 07:39:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12174593 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.6 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A03EC433E1 for ; Wed, 31 Mar 2021 07:42:31 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1B13C619D3 for ; Wed, 31 Mar 2021 07:42:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234042AbhCaHmQ (ORCPT ); Wed, 31 Mar 2021 03:42:16 -0400 Received: from esa2.hgst.iphmx.com ([68.232.143.124]:22972 "EHLO esa2.hgst.iphmx.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234138AbhCaHlj (ORCPT ); Wed, 31 Mar 2021 03:41:39 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1617176511; x=1648712511; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Kw0n9qy43sV6k7GArTQnIOeZqCNtAeuHTWn+7jmRqIE=; b=WvjxCUIwGxRI1K/5TnQam3MyM1lVHf2xtgaw1CzXjWixMc4/5iNI2Rr1 LG5EbY18jQbWygI/BbQes4fgk6GsKLuwQSIddbv69SP/PeN60WhOkE7l9 ZJuFMVTHx8SCQQj/3jbniiOpMc7aUGqR84T4fjDML/ztOobqqlZGmkpcM P49fVO91XmmXeqsJPX9GKA8cXviQbSiywV8IZYEi8mrzwITZhkGnPBZSQ EDWN5kbFjgSCWoXjE5UJkW4bZ2eRDdK5x0twcjqNT6j1pz/KNH77hrlei 9OUPTpa1kpHtgMlHDz5MIzagJrDbkyaecZ2df3XY80ydjcqnTWhDAceZ6 w==; IronPort-SDR: N2kqZbMRZpOAo1b4bwUfjNs/cD5PKhsWY67LdTvH8t+YWKyXIrfsAvSBnEVIbMTuty3W5Mh9/r QtA/ZqNhu1mYN16KVa48BKRChvD6cSHB/KSnmWqzPBqtAK6JWrQf8wrkug9oLwzS4PIb7HBQ/S mkNVeYexzTb8PrKZlovs4S/Z2ONT793XKCGQdZ9Bnq+EZZQC6DITEkMeWZJEYB4JACBrkULnlZ UgkGoK2lHH+kbtASJgZvNA60S69/GK27yJ4ePpugD0//MZWzAI9LfWfDM2XdoNgNOTKAiX7ijy tZo= X-IronPort-AV: E=Sophos;i="5.81,293,1610380800"; d="scan'208";a="267851382" Received: from uls-op-cesaip02.wdc.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 31 Mar 2021 15:41:35 +0800 IronPort-SDR: DZ6nUOQgKTEWlHZIY16t8W70nnX3Li34PKGI151R1kcnSrfjiBuAGO1grnrdfSs6Ny5g2hl72W 
From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v7 10/11] scsi: ufshpb: Add support for host control mode Date: Wed, 31 Mar 2021 10:39:51 +0300 Message-Id: <20210331073952.102162-11-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210331073952.102162-1-avri.altman@wdc.com> References: <20210331073952.102162-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org
Support devices that report they are using host control mode.
Signed-off-by: Avri Altman --- drivers/scsi/ufs/ufshpb.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index c07da481ff4e..08066bb6da65 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -2582,12 +2582,6 @@ void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) u32 max_hpb_single_cmd = HPB_MULTI_CHUNK_LOW; hpb_dev_info->control_mode = desc_buf[DEVICE_DESC_PARAM_HPB_CONTROL]; - if (hpb_dev_info->control_mode == HPB_HOST_CONTROL) { - dev_err(hba->dev, "%s: host control mode is not supported.\n", - __func__); - hpb_dev_info->hpb_disabled = true; - return; - } version = get_unaligned_be16(desc_buf + DEVICE_DESC_PARAM_HPB_VER); if ((version != HPB_SUPPORT_VERSION) &&
From patchwork Wed Mar 31 07:39:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Avri Altman X-Patchwork-Id: 12174595
From: Avri Altman To: "James E . J . Bottomley" , "Martin K . Petersen" , linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org Cc: gregkh@linuxfoundation.org, Bart Van Assche , yongmyung lee , Daejun Park , alim.akhtar@samsung.com, asutoshd@codeaurora.org, Zang Leigang , Avi Shchislowski , Bean Huo , cang@codeaurora.org, stanley.chu@mediatek.com, Avri Altman Subject: [PATCH v7 11/11] scsi: ufshpb: Make host mode parameters configurable Date: Wed, 31 Mar 2021 10:39:52 +0300 Message-Id: <20210331073952.102162-12-avri.altman@wdc.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210331073952.102162-1-avri.altman@wdc.com> References: <20210331073952.102162-1-avri.altman@wdc.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org
Use this commit to elaborate some more on the host control mode logic, explaining what role each and every variable plays. While at it, allow those parameters to be configurable.
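As a rough usage illustration once these attributes exist, a small user-space C snippet that tunes one of the knobs documented below; the scsi_device name "0:0:0:0" and the value 2000 are placeholders, and the write only succeeds when the LU is running in host control mode:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Attribute path as documented in sysfs-driver-ufs; "0:0:0:0" is a placeholder LU. */
	const char *attr =
		"/sys/class/scsi_device/0:0:0:0/device/hpb_param_sysfs/read_timeout_ms";
	const char *val = "2000";	/* must be at least 2 * timeout_polling_interval_ms */
	int fd = open(attr, O_WRONLY);

	if (fd < 0) {
		perror("open");		/* no such LU, or HPB attributes not present */
		return 1;
	}
	/* The store callback rejects the write (-EOPNOTSUPP) unless the LU is in HCM. */
	if (write(fd, val, strlen(val)) < 0)
		perror("write");
	close(fd);
	return 0;
}

The store callbacks validate the values: for example read_timeout_ms must be at least twice timeout_polling_interval_ms, and inflight_map_req must stay between 1 and the LU queue depth minus one.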
Signed-off-by: Avri Altman --- Documentation/ABI/testing/sysfs-driver-ufs | 84 +++++- drivers/scsi/ufs/ufshpb.c | 288 +++++++++++++++++++-- drivers/scsi/ufs/ufshpb.h | 20 ++ 3 files changed, 365 insertions(+), 27 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs index 419adf450b89..133af2114165 100644 --- a/Documentation/ABI/testing/sysfs-driver-ufs +++ b/Documentation/ABI/testing/sysfs-driver-ufs @@ -1323,14 +1323,76 @@ Description: This entry shows the maximum HPB data size for using single HPB The file is read only. -What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_enable -Date: March 2021 -Contact: Daejun Park -Description: This entry shows the status of HPB. - - == ============================ - 0 HPB is not enabled. - 1 HPB is enabled - == ============================ - - The file is read only. +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, reads are the major source of activation + trials. Once this threshold has been met, the region is added to the + "to-be-activated" list. Since we reset the read counter upon + write, this includes sending an rb command updating the region + ppn as well. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor +Date: February 2021 +Contact: Avri Altman +Description: In host control mode, we think of the regions as "buckets". + Those buckets are being filled with reads, and emptied on write. + We use entries_per_srgn - the number of blocks in a subregion - as + our bucket size. This applies because HPB 1.0 only concerns + single-block reads. Once the bucket size is crossed, we trigger + a normalization work - not only to avoid overflow, but mainly + because we want to keep those counters normalized, as we are + using those reads as a comparative score, to make various decisions. + The normalization divides (shifts right) the read counter by + the normalization_factor. If during consecutive normalizations + an active region has exhausted its reads - inactivate it. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter +Date: February 2021 +Contact: Avri Altman +Description: Region deactivation is often due to the fact that eviction took + place: a region becomes active at the expense of another. This + happens when the max-active-regions limit has been crossed. + In host mode, eviction is considered an extreme measure. We + want to verify that the entering region has enough reads, and + the exiting region has far fewer reads. eviction_thld_enter is + the min reads that a region must have in order to be considered + as a candidate to evict another region. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit +Date: February 2021 +Contact: Avri Altman +Description: Same as above, for the exiting region. A region is considered + a candidate for eviction only if it has fewer reads than + eviction_thld_exit. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms +Date: February 2021 +Contact: Avri Altman +Description: In order not to hang on to "cold" regions, we shall inactivate + a region that has no READ access for a predefined amount of + time - read_timeout_ms. If read_timeout_ms has expired, and the + region is dirty - it is less likely that we can make any use of + HPB-READing it. So we inactivate it. 
Still, deactivation has + its overhead, and we may still benefit from HPB-READing this + region if it is clean - see read_timeout_expiries. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries +Date: February 2021 +Contact: Avri Altman +Description: If the region read timeout has expired, but the region is clean, + just re-wind its timer for another spin. Do that as long as it + is clean and has not exhausted its read_timeout_expiries threshold. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms +Date: February 2021 +Contact: Avri Altman +Description: The frequency at which the delayed worker that checks the + read_timeouts is awakened. + +What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req +Date: February 2021 +Contact: Avri Altman +Description: In host control mode the host is the originator of map requests. + To avoid flooding the device with map requests, a simple throttling + mechanism limits the number of inflight map requests. diff --git a/drivers/scsi/ufs/ufshpb.c b/drivers/scsi/ufs/ufshpb.c index 08066bb6da65..c9fa2d9ccc2c 100644 --- a/drivers/scsi/ufs/ufshpb.c +++ b/drivers/scsi/ufs/ufshpb.c @@ -17,7 +17,6 @@ #include "../sd.h" #define ACTIVATION_THRESHOLD 8 /* 8 IOs */ -#define EVICTION_THRESHOLD (ACTIVATION_THRESHOLD << 5) /* 256 IOs */ #define READ_TO_MS 1000 #define READ_TO_EXPIRIES 100 #define POLLING_INTERVAL_MS 200 @@ -194,7 +193,7 @@ static void ufshpb_iterate_rgn(struct ufshpb_lu *hpb, int rgn_idx, int srgn_idx, } else { srgn->reads++; rgn->reads++; - if (srgn->reads == ACTIVATION_THRESHOLD) + if (srgn->reads == hpb->params.activation_thld) activate = true; } spin_unlock(&rgn->rgn_lock); @@ -742,10 +741,11 @@ static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb, struct bio *bio; if (hpb->is_hcm && - hpb->num_inflight_map_req >= THROTTLE_MAP_REQ_DEFAULT) { + hpb->num_inflight_map_req >= hpb->params.inflight_map_req) { dev_info(&hpb->sdev_ufs_lu->sdev_dev, "map_req throttle. 
inflight %d throttle %d", - hpb->num_inflight_map_req, THROTTLE_MAP_REQ_DEFAULT); + hpb->num_inflight_map_req, + hpb->params.inflight_map_req); return NULL; } @@ -1052,6 +1052,7 @@ static void ufshpb_read_to_handler(struct work_struct *work) struct victim_select_info *lru_info = &hpb->lru_info; struct ufshpb_region *rgn, *next_rgn; unsigned long flags; + unsigned int poll; LIST_HEAD(expired_list); if (test_and_set_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits)) @@ -1070,7 +1071,7 @@ static void ufshpb_read_to_handler(struct work_struct *work) list_add(&rgn->list_expired_rgn, &expired_list); else rgn->read_timeout = ktime_add_ms(ktime_get(), - READ_TO_MS); + hpb->params.read_timeout_ms); } } @@ -1088,8 +1089,9 @@ static void ufshpb_read_to_handler(struct work_struct *work) clear_bit(TIMEOUT_WORK_RUNNING, &hpb->work_data_bits); + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); } static void ufshpb_add_lru_info(struct victim_select_info *lru_info, @@ -1099,8 +1101,11 @@ static void ufshpb_add_lru_info(struct victim_select_info *lru_info, list_add_tail(&rgn->list_lru_rgn, &lru_info->lh_lru_rgn); atomic_inc(&lru_info->active_cnt); if (rgn->hpb->is_hcm) { - rgn->read_timeout = ktime_add_ms(ktime_get(), READ_TO_MS); - rgn->read_timeout_expiries = READ_TO_EXPIRIES; + rgn->read_timeout = + ktime_add_ms(ktime_get(), + rgn->hpb->params.read_timeout_ms); + rgn->read_timeout_expiries = + rgn->hpb->params.read_timeout_expiries; } } @@ -1129,7 +1134,8 @@ static struct ufshpb_region *ufshpb_victim_lru_info(struct ufshpb_lu *hpb) * in host control mode, verify that the exiting region * has less reads */ - if (hpb->is_hcm && rgn->reads > (EVICTION_THRESHOLD >> 1)) + if (hpb->is_hcm && + rgn->reads > hpb->params.eviction_thld_exit) continue; victim_rgn = rgn; @@ -1356,7 +1362,8 @@ static int ufshpb_add_region(struct ufshpb_lu *hpb, struct ufshpb_region *rgn) * in host control mode, verify that the entering * region has enough reads */ - if (hpb->is_hcm && rgn->reads < EVICTION_THRESHOLD) { + if (hpb->is_hcm && + rgn->reads < hpb->params.eviction_thld_enter) { ret = -EACCES; goto out; } @@ -1697,6 +1704,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work) struct ufshpb_lu *hpb = container_of(work, struct ufshpb_lu, ufshpb_normalization_work); int rgn_idx; + u8 factor = hpb->params.normalization_factor; for (rgn_idx = 0; rgn_idx < hpb->rgns_per_lu; rgn_idx++) { struct ufshpb_region *rgn = hpb->rgn_tbl + rgn_idx; @@ -1707,7 +1715,7 @@ static void ufshpb_normalization_work_handler(struct work_struct *work) for (srgn_idx = 0; srgn_idx < hpb->srgns_per_rgn; srgn_idx++) { struct ufshpb_subregion *srgn = rgn->srgn_tbl + srgn_idx; - srgn->reads >>= 1; + srgn->reads >>= factor; rgn->reads += srgn->reads; } spin_unlock(&rgn->rgn_lock); @@ -2031,8 +2039,247 @@ requeue_timeout_ms_store(struct device *dev, struct device_attribute *attr, } static DEVICE_ATTR_RW(requeue_timeout_ms); +ufshpb_sysfs_param_show_func(activation_thld); +static ssize_t +activation_thld_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.activation_thld = val; + + return count; 
+} +static DEVICE_ATTR_RW(activation_thld); + +ufshpb_sysfs_param_show_func(normalization_factor); +static ssize_t +normalization_factor_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > ilog2(hpb->entries_per_srgn)) + return -EINVAL; + + hpb->params.normalization_factor = val; + + return count; +} +static DEVICE_ATTR_RW(normalization_factor); + +ufshpb_sysfs_param_show_func(eviction_thld_enter); +static ssize_t +eviction_thld_enter_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.eviction_thld_exit) + return -EINVAL; + + hpb->params.eviction_thld_enter = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_enter); + +ufshpb_sysfs_param_show_func(eviction_thld_exit); +static ssize_t +eviction_thld_exit_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= hpb->params.activation_thld) + return -EINVAL; + + hpb->params.eviction_thld_exit = val; + + return count; +} +static DEVICE_ATTR_RW(eviction_thld_exit); + +ufshpb_sysfs_param_show_func(read_timeout_ms); +static ssize_t +read_timeout_ms_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* read_timeout >> timeout_polling_interval */ + if (val < hpb->params.timeout_polling_interval_ms * 2) + return -EINVAL; + + hpb->params.read_timeout_ms = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_ms); + +ufshpb_sysfs_param_show_func(read_timeout_expiries); +static ssize_t +read_timeout_expiries_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0) + return -EINVAL; + + hpb->params.read_timeout_expiries = val; + + return count; +} +static DEVICE_ATTR_RW(read_timeout_expiries); + +ufshpb_sysfs_param_show_func(timeout_polling_interval_ms); +static ssize_t +timeout_polling_interval_ms_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + /* timeout_polling_interval << 
read_timeout */ + if (val <= 0 || val > hpb->params.read_timeout_ms / 2) + return -EINVAL; + + hpb->params.timeout_polling_interval_ms = val; + + return count; +} +static DEVICE_ATTR_RW(timeout_polling_interval_ms); + +ufshpb_sysfs_param_show_func(inflight_map_req); +static ssize_t inflight_map_req_store(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct scsi_device *sdev = to_scsi_device(dev); + struct ufshpb_lu *hpb = ufshpb_get_hpb_data(sdev); + int val; + + if (!hpb) + return -ENODEV; + + if (!hpb->is_hcm) + return -EOPNOTSUPP; + + if (kstrtouint(buf, 0, &val)) + return -EINVAL; + + if (val <= 0 || val > hpb->sdev_ufs_lu->queue_depth - 1) + return -EINVAL; + + hpb->params.inflight_map_req = val; + + return count; +} +static DEVICE_ATTR_RW(inflight_map_req); + +static void ufshpb_hcm_param_init(struct ufshpb_lu *hpb) +{ + hpb->params.activation_thld = ACTIVATION_THRESHOLD; + hpb->params.normalization_factor = 1; + hpb->params.eviction_thld_enter = (ACTIVATION_THRESHOLD << 5); + hpb->params.eviction_thld_exit = (ACTIVATION_THRESHOLD << 4); + hpb->params.read_timeout_ms = READ_TO_MS; + hpb->params.read_timeout_expiries = READ_TO_EXPIRIES; + hpb->params.timeout_polling_interval_ms = POLLING_INTERVAL_MS; + hpb->params.inflight_map_req = THROTTLE_MAP_REQ_DEFAULT; +} + static struct attribute *hpb_dev_param_attrs[] = { &dev_attr_requeue_timeout_ms.attr, + &dev_attr_activation_thld.attr, + &dev_attr_normalization_factor.attr, + &dev_attr_eviction_thld_enter.attr, + &dev_attr_eviction_thld_exit.attr, + &dev_attr_read_timeout_ms.attr, + &dev_attr_read_timeout_expiries.attr, + &dev_attr_timeout_polling_interval_ms.attr, + &dev_attr_inflight_map_req.attr, NULL, }; @@ -2116,6 +2363,8 @@ static void ufshpb_stat_init(struct ufshpb_lu *hpb) static void ufshpb_param_init(struct ufshpb_lu *hpb) { hpb->params.requeue_timeout_ms = HPB_REQUEUE_TIME_MS; + if (hpb->is_hcm) + ufshpb_hcm_param_init(hpb); } static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) @@ -2170,9 +2419,13 @@ static int ufshpb_lu_hpb_init(struct ufs_hba *hba, struct ufshpb_lu *hpb) ufshpb_stat_init(hpb); ufshpb_param_init(hpb); - if (hpb->is_hcm) + if (hpb->is_hcm) { + unsigned int poll; + + poll = hpb->params.timeout_polling_interval_ms; schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + msecs_to_jiffies(poll)); + } return 0; @@ -2351,10 +2604,13 @@ void ufshpb_resume(struct ufs_hba *hba) continue; ufshpb_set_state(hpb, HPB_PRESENT); ufshpb_kick_map_work(hpb); - if (hpb->is_hcm) - schedule_delayed_work(&hpb->ufshpb_read_to_work, - msecs_to_jiffies(POLLING_INTERVAL_MS)); + if (hpb->is_hcm) { + unsigned int poll = + hpb->params.timeout_polling_interval_ms; + schedule_delayed_work(&hpb->ufshpb_read_to_work, + msecs_to_jiffies(poll)); + } } } diff --git a/drivers/scsi/ufs/ufshpb.h b/drivers/scsi/ufs/ufshpb.h index cfa0abac21db..68a5af0ff682 100644 --- a/drivers/scsi/ufs/ufshpb.h +++ b/drivers/scsi/ufs/ufshpb.h @@ -185,8 +185,28 @@ struct victim_select_info { atomic_t active_cnt; }; +/** + * ufshpb_params - ufs hpb parameters + * @requeue_timeout_ms - requeue threshold of wb command (0x2) + * @activation_thld - min reads [IOs] to activate/update a region + * @normalization_factor - shift right the region's reads + * @eviction_thld_enter - min reads [IOs] for the entering region in eviction + * @eviction_thld_exit - max reads [IOs] for the exiting region in eviction + * @read_timeout_ms - timeout [ms] from the last read IO to the 
region + * @read_timeout_expiries - amount of allowable timeout expireis + * @timeout_polling_interval_ms - frequency in which timeouts are checked + * @inflight_map_req - number of inflight map requests + */ struct ufshpb_params { unsigned int requeue_timeout_ms; + unsigned int activation_thld; + unsigned int normalization_factor; + unsigned int eviction_thld_enter; + unsigned int eviction_thld_exit; + unsigned int read_timeout_ms; + unsigned int read_timeout_expiries; + unsigned int timeout_polling_interval_ms; + unsigned int inflight_map_req; }; struct ufshpb_stats {