From patchwork Fri Jan 13 20:48:41 2023
X-Patchwork-Submitter: Asutosh Das
X-Patchwork-Id: 13101581
From: Asutosh Das
Cc: Asutosh Das, Alim Akhtar, "James E.J. Bottomley", Arthur Simchaev,
    Krzysztof Kozlowski, Jinyoung Choi, open list
Subject: [PATCH v13 04/15] ufs: core: mcq: Add support to allocate multiple queues
Date: Fri, 13 Jan 2023 12:48:41 -0800
X-Mailer: git-send-email 2.7.4
X-Mailing-List: linux-arm-msm@vger.kernel.org

Bottomley" , Arthur Simchaev , Krzysztof Kozlowski , Jinyoung Choi , open list Subject: [PATCH v13 04/15] ufs: core: mcq: Add support to allocate multiple queues Date: Fri, 13 Jan 2023 12:48:41 -0800 Message-ID: X-Mailer: git-send-email 2.7.4 In-Reply-To: References: MIME-Version: 1.0 X-Originating-IP: [10.80.80.8] X-ClientProxiedBy: nasanex01b.na.qualcomm.com (10.46.141.250) To nasanex01a.na.qualcomm.com (10.52.223.231) X-QCInternal: smtphost X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085 X-Proofpoint-GUID: x15aWoA5_4kSHsHb01nZgXibWwkJOohs X-Proofpoint-ORIG-GUID: x15aWoA5_4kSHsHb01nZgXibWwkJOohs X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.923,Hydra:6.0.562,FMLib:17.11.122.1 definitions=2023-01-13_10,2023-01-13_02,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 adultscore=0 mlxlogscore=999 spamscore=0 clxscore=1015 lowpriorityscore=0 suspectscore=0 impostorscore=0 priorityscore=1501 mlxscore=0 phishscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2212070000 definitions=main-2301130142 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org Multi-circular queue (MCQ) has been added in UFSHC v4.0 standard in addition to the Single Doorbell mode. The MCQ mode supports multiple submission and completion queues. Add support to allocate and configure the queues. Add module parameters support to configure the queues. Co-developed-by: Can Guo Signed-off-by: Can Guo Signed-off-by: Asutosh Das Reviewed-by: Bart Van Assche Reviewed-by: Manivannan Sadhasivam --- drivers/ufs/core/Makefile | 2 +- drivers/ufs/core/ufs-mcq.c | 124 +++++++++++++++++++++++++++++++++++++++++ drivers/ufs/core/ufshcd-priv.h | 1 + drivers/ufs/core/ufshcd.c | 12 ++++ include/ufs/ufshcd.h | 4 ++ 5 files changed, 142 insertions(+), 1 deletion(-) create mode 100644 drivers/ufs/core/ufs-mcq.c diff --git a/drivers/ufs/core/Makefile b/drivers/ufs/core/Makefile index 62f38c5..4d02e0f 100644 --- a/drivers/ufs/core/Makefile +++ b/drivers/ufs/core/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o -ufshcd-core-y += ufshcd.o ufs-sysfs.o +ufshcd-core-y += ufshcd.o ufs-sysfs.o ufs-mcq.o ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c new file mode 100644 index 0000000..6ed3625 --- /dev/null +++ b/drivers/ufs/core/ufs-mcq.c @@ -0,0 +1,124 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022 Qualcomm Innovation Center. All rights reserved. + * + * Authors: + * Asutosh Das + * Can Guo + */ + +#include +#include +#include +#include +#include "ufshcd-priv.h" + +#define MAX_QUEUE_SUP GENMASK(7, 0) +#define UFS_MCQ_MIN_RW_QUEUES 2 +#define UFS_MCQ_MIN_READ_QUEUES 0 +#define UFS_MCQ_NUM_DEV_CMD_QUEUES 1 +#define UFS_MCQ_MIN_POLL_QUEUES 0 + +static int rw_queue_count_set(const char *val, const struct kernel_param *kp) +{ + return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_RW_QUEUES, + num_possible_cpus()); +} + +static const struct kernel_param_ops rw_queue_count_ops = { + .set = rw_queue_count_set, + .get = param_get_uint, +}; + +static unsigned int rw_queues; +module_param_cb(rw_queues, &rw_queue_count_ops, &rw_queues, 0644); +MODULE_PARM_DESC(rw_queues, + "Number of interrupt driven I/O queues used for rw. 
 drivers/ufs/core/Makefile      |   2 +-
 drivers/ufs/core/ufs-mcq.c     | 124 +++++++++++++++++++++++++++++++++++++++++
 drivers/ufs/core/ufshcd-priv.h |   1 +
 drivers/ufs/core/ufshcd.c      |  12 ++++
 include/ufs/ufshcd.h           |   4 ++
 5 files changed, 142 insertions(+), 1 deletion(-)
 create mode 100644 drivers/ufs/core/ufs-mcq.c

diff --git a/drivers/ufs/core/Makefile b/drivers/ufs/core/Makefile
index 62f38c5..4d02e0f 100644
--- a/drivers/ufs/core/Makefile
+++ b/drivers/ufs/core/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o
-ufshcd-core-y += ufshcd.o ufs-sysfs.o
+ufshcd-core-y += ufshcd.o ufs-sysfs.o ufs-mcq.o
 ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o
 ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
 ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
new file mode 100644
index 0000000..6ed3625
--- /dev/null
+++ b/drivers/ufs/core/ufs-mcq.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022 Qualcomm Innovation Center. All rights reserved.
+ *
+ * Authors:
+ *	Asutosh Das
+ *	Can Guo
+ */
+
+#include <linux/bitfield.h>
+#include <linux/cpumask.h>
+#include <linux/module.h>
+#include <scsi/scsi_host.h>
+#include "ufshcd-priv.h"
+
+#define MAX_QUEUE_SUP GENMASK(7, 0)
+#define UFS_MCQ_MIN_RW_QUEUES 2
+#define UFS_MCQ_MIN_READ_QUEUES 0
+#define UFS_MCQ_NUM_DEV_CMD_QUEUES 1
+#define UFS_MCQ_MIN_POLL_QUEUES 0
+
+static int rw_queue_count_set(const char *val, const struct kernel_param *kp)
+{
+	return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_RW_QUEUES,
+				     num_possible_cpus());
+}
+
+static const struct kernel_param_ops rw_queue_count_ops = {
+	.set = rw_queue_count_set,
+	.get = param_get_uint,
+};
+
+static unsigned int rw_queues;
+module_param_cb(rw_queues, &rw_queue_count_ops, &rw_queues, 0644);
+MODULE_PARM_DESC(rw_queues,
+		 "Number of interrupt driven I/O queues used for rw. Default value is nr_cpus");
+
+static int read_queue_count_set(const char *val, const struct kernel_param *kp)
+{
+	return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_READ_QUEUES,
+				     num_possible_cpus());
+}
+
+static const struct kernel_param_ops read_queue_count_ops = {
+	.set = read_queue_count_set,
+	.get = param_get_uint,
+};
+
+static unsigned int read_queues;
+module_param_cb(read_queues, &read_queue_count_ops, &read_queues, 0644);
+MODULE_PARM_DESC(read_queues,
+		 "Number of interrupt driven read queues used for read. Default value is 0");
+
+static int poll_queue_count_set(const char *val, const struct kernel_param *kp)
+{
+	return param_set_uint_minmax(val, kp, UFS_MCQ_MIN_POLL_QUEUES,
+				     num_possible_cpus());
+}
+
+static const struct kernel_param_ops poll_queue_count_ops = {
+	.set = poll_queue_count_set,
+	.get = param_get_uint,
+};
+
+static unsigned int poll_queues = 1;
+module_param_cb(poll_queues, &poll_queue_count_ops, &poll_queues, 0644);
+MODULE_PARM_DESC(poll_queues,
+		 "Number of poll queues used for r/w. Default value is 1");
+
+static int ufshcd_mcq_config_nr_queues(struct ufs_hba *hba)
+{
+	int i;
+	u32 hba_maxq, rem, tot_queues;
+	struct Scsi_Host *host = hba->host;
+
+	hba_maxq = FIELD_GET(MAX_QUEUE_SUP, hba->mcq_capabilities);
+
+	tot_queues = UFS_MCQ_NUM_DEV_CMD_QUEUES + read_queues + poll_queues +
+			rw_queues;
+
+	if (hba_maxq < tot_queues) {
+		dev_err(hba->dev, "Total queues (%d) exceeds HC capacity (%d)\n",
+			tot_queues, hba_maxq);
+		return -EOPNOTSUPP;
+	}
+
+	rem = hba_maxq - UFS_MCQ_NUM_DEV_CMD_QUEUES;
+
+	if (rw_queues) {
+		hba->nr_queues[HCTX_TYPE_DEFAULT] = rw_queues;
+		rem -= hba->nr_queues[HCTX_TYPE_DEFAULT];
+	} else {
+		rw_queues = num_possible_cpus();
+	}
+
+	if (poll_queues) {
+		hba->nr_queues[HCTX_TYPE_POLL] = poll_queues;
+		rem -= hba->nr_queues[HCTX_TYPE_POLL];
+	}
+
+	if (read_queues) {
+		hba->nr_queues[HCTX_TYPE_READ] = read_queues;
+		rem -= hba->nr_queues[HCTX_TYPE_READ];
+	}
+
+	if (!hba->nr_queues[HCTX_TYPE_DEFAULT])
+		hba->nr_queues[HCTX_TYPE_DEFAULT] = min3(rem, rw_queues,
+							 num_possible_cpus());
+
+	for (i = 0; i < HCTX_MAX_TYPES; i++)
+		host->nr_hw_queues += hba->nr_queues[i];
+
+	hba->nr_hw_queues = host->nr_hw_queues + UFS_MCQ_NUM_DEV_CMD_QUEUES;
+	return 0;
+}
+
+int ufshcd_mcq_init(struct ufs_hba *hba)
+{
+	int ret;
+
+	ret = ufshcd_mcq_config_nr_queues(hba);
+
+	return ret;
+}
diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
index 8fff72e..8d93ef6 100644
--- a/drivers/ufs/core/ufshcd-priv.h
+++ b/drivers/ufs/core/ufshcd-priv.h
@@ -61,6 +61,7 @@ int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,
 int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,
 	enum flag_idn idn, u8 index, bool *flag_res);
 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit);
+int ufshcd_mcq_init(struct ufs_hba *hba);
 
 #define SD_ASCII_STD true
 #define SD_RAW false
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 2cb05a6..9712086 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -8294,6 +8294,11 @@ static int ufshcd_add_lus(struct ufs_hba *hba)
 	return ret;
 }
 
+static int ufshcd_alloc_mcq(struct ufs_hba *hba)
+{
+	return ufshcd_mcq_init(hba);
+}
+
 static int ufshcd_device_init(struct ufs_hba *hba, bool init_dev_params)
 {
 	int ret;
@@ -8333,6 +8338,13 @@ static int ufshcd_device_init(struct ufs_hba *hba, bool init_dev_params)
 	if (ret)
 		return ret;
 	if (is_mcq_supported(hba) && !hba->scsi_host_added) {
+		ret = ufshcd_alloc_mcq(hba);
+		if (ret) {
+			/* Continue with SDB mode */
+			use_mcq_mode = false;
+			dev_err(hba->dev, "MCQ mode is disabled, err=%d\n",
+				ret);
+		}
 		ret = scsi_add_host(host, hba->dev);
 		if (ret) {
 			dev_err(hba->dev, "scsi_add_host failed\n");
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index a343bd4..91ca724 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -840,6 +840,8 @@ struct ufs_hba_monitor {
  *	ee_ctrl_mask
  * @luns_avail: number of regular and well known LUNs supported by the UFS
  *	device
+ * @nr_hw_queues: number of hardware queues configured
+ * @nr_queues: number of queues per queue type
  * @complete_put: whether or not to call ufshcd_rpm_put() from inside
  *	ufshcd_resume_complete()
  * @ext_iid_sup: is EXT_IID is supported by UFSHC
@@ -994,6 +996,8 @@ struct ufs_hba {
 	u32 debugfs_ee_rate_limit_ms;
 #endif
 	u32 luns_avail;
+	unsigned int nr_hw_queues;
+	unsigned int nr_queues[HCTX_MAX_TYPES];
 	bool complete_put;
 	bool ext_iid_sup;
 	bool scsi_host_added;
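
For a concrete picture of how ufshcd_mcq_config_nr_queues() splits the
controller's queue budget, here is a standalone userspace sketch of the same
arithmetic. The capability value (hba_maxq = 32), the CPU count (8) and the
parameter values are made-up example numbers, and min3u() is a local
stand-in for the kernel's min3(); nothing below is part of the patch itself.

#include <stdio.h>

#define UFS_MCQ_NUM_DEV_CMD_QUEUES 1

static unsigned int min3u(unsigned int a, unsigned int b, unsigned int c)
{
	unsigned int m = a < b ? a : b;

	return m < c ? m : c;
}

int main(void)
{
	unsigned int hba_maxq = 32;	/* stands in for FIELD_GET(MAX_QUEUE_SUP, ...) */
	unsigned int num_cpus = 8;	/* stands in for num_possible_cpus() */
	unsigned int rw_queues = 0;	/* module parameter defaults */
	unsigned int read_queues = 0;
	unsigned int poll_queues = 1;
	unsigned int nr_default = 0, nr_read = 0, nr_poll = 0, rem;

	/* one queue is always reserved for device commands */
	rem = hba_maxq - UFS_MCQ_NUM_DEV_CMD_QUEUES;		/* 31 */

	if (rw_queues) {
		nr_default = rw_queues;
		rem -= nr_default;
	} else {
		rw_queues = num_cpus;				/* default: nr_cpus */
	}

	if (poll_queues) {
		nr_poll = poll_queues;
		rem -= nr_poll;					/* 30 */
	}

	if (read_queues) {
		nr_read = read_queues;
		rem -= nr_read;
	}

	if (!nr_default)
		nr_default = min3u(rem, rw_queues, num_cpus);	/* min(30, 8, 8) = 8 */

	printf("DEFAULT=%u READ=%u POLL=%u dev-cmd=%u total=%u\n",
	       nr_default, nr_read, nr_poll, UFS_MCQ_NUM_DEV_CMD_QUEUES,
	       nr_default + nr_read + nr_poll + UFS_MCQ_NUM_DEV_CMD_QUEUES);
	return 0;
}

With these inputs the sketch prints DEFAULT=8 READ=0 POLL=1 dev-cmd=1
total=10: the SCSI host would see 9 I/O hardware queues while
hba->nr_hw_queues, which also counts the device-command queue, would be 10.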