From patchwork Fri Mar 24 18:17:36 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Christie <michael.christie@oracle.com>
X-Patchwork-Id: 13187195
From: Mike Christie <michael.christie@oracle.com>
To: bvanassche@acm.org, hch@lst.de, martin.petersen@oracle.com,
	linux-scsi@vger.kernel.org, james.bottomley@hansenpartnership.com,
	linux-block@vger.kernel.org, dm-devel@redhat.com, snitzer@kernel.org,
	axboe@kernel.dk, linux-nvme@lists.infradead.org, chaitanyak@nvidia.com,
	kbusch@kernel.org, target-devel@vger.kernel.org
Cc: Mike Christie <michael.christie@oracle.com>, Chaitanya Kulkarni
Subject: [PATCH v5 13/18] nvme: Add pr_ops read_reservation support
Date: Fri, 24 Mar 2023 13:17:36 -0500
Message-Id: <20230324181741.13908-14-michael.christie@oracle.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230324181741.13908-1-michael.christie@oracle.com>
References: <20230324181741.13908-1-michael.christie@oracle.com>
Precedence: bulk
List-ID: <linux-block.vger.kernel.org>
X-Mailing-List: linux-block@vger.kernel.org

This patch adds support for the pr_ops read_reservation callout by
calling the NVMe Reservation Report helper. It then parses that info to
detect whether a reservation is held and, if so, converts the returned
info into a pr_ops pr_held_reservation struct.

Signed-off-by: Mike Christie <michael.christie@oracle.com>
Reviewed-by: Chaitanya Kulkarni
---
 drivers/nvme/host/pr.c | 83 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/drivers/nvme/host/pr.c b/drivers/nvme/host/pr.c
index c550c172ff0d..56d24bc3371a 100644
--- a/drivers/nvme/host/pr.c
+++ b/drivers/nvme/host/pr.c
@@ -29,6 +29,26 @@ static enum nvme_pr_type nvme_pr_type_from_blk(enum pr_type type)
 	return 0;
 }
 
+static enum pr_type block_pr_type_from_nvme(enum nvme_pr_type type)
+{
+	switch (type) {
+	case NVME_PR_WRITE_EXCLUSIVE:
+		return PR_WRITE_EXCLUSIVE;
+	case NVME_PR_EXCLUSIVE_ACCESS:
+		return PR_EXCLUSIVE_ACCESS;
+	case NVME_PR_WRITE_EXCLUSIVE_REG_ONLY:
+		return PR_WRITE_EXCLUSIVE_REG_ONLY;
+	case NVME_PR_EXCLUSIVE_ACCESS_REG_ONLY:
+		return PR_EXCLUSIVE_ACCESS_REG_ONLY;
+	case NVME_PR_WRITE_EXCLUSIVE_ALL_REGS:
+		return PR_WRITE_EXCLUSIVE_ALL_REGS;
+	case NVME_PR_EXCLUSIVE_ACCESS_ALL_REGS:
+		return PR_EXCLUSIVE_ACCESS_ALL_REGS;
+	}
+
+	return 0;
+}
+
 static int nvme_send_ns_head_pr_command(struct block_device *bdev,
 		struct nvme_command *c, void *data, unsigned int data_len)
 {
@@ -222,6 +242,68 @@ static int nvme_pr_read_keys(struct block_device *bdev,
 	return ret;
 }
 
+static int nvme_pr_read_reservation(struct block_device *bdev,
+		struct pr_held_reservation *resv)
+{
+	struct nvme_reservation_status_ext tmp_rse, *rse;
+	int ret, i, num_regs;
+	u32 rse_len;
+	bool eds;
+
+get_num_regs:
+	/*
+	 * Get the number of registrations so we know how big to allocate
+	 * the response buffer.
+	 */
+	ret = nvme_pr_resv_report(bdev, &tmp_rse, sizeof(tmp_rse), &eds);
+	if (ret)
+		return ret;
+
+	num_regs = get_unaligned_le16(&tmp_rse.regctl);
+	if (!num_regs) {
+		resv->generation = le32_to_cpu(tmp_rse.gen);
+		return 0;
+	}
+
+	rse_len = struct_size(rse, regctl_eds, num_regs);
+	rse = kzalloc(rse_len, GFP_KERNEL);
+	if (!rse)
+		return -ENOMEM;
+
+	ret = nvme_pr_resv_report(bdev, rse, rse_len, &eds);
+	if (ret)
+		goto free_rse;
+
+	if (num_regs != get_unaligned_le16(&rse->regctl)) {
+		kfree(rse);
+		goto get_num_regs;
+	}
+
+	resv->generation = le32_to_cpu(rse->gen);
+	resv->type = block_pr_type_from_nvme(rse->rtype);
+
+	for (i = 0; i < num_regs; i++) {
+		if (eds) {
+			if (rse->regctl_eds[i].rcsts) {
+				resv->key = le64_to_cpu(rse->regctl_eds[i].rkey);
+				break;
+			}
+		} else {
+			struct nvme_reservation_status *rs;
+
+			rs = (struct nvme_reservation_status *)rse;
+			if (rs->regctl_ds[i].rcsts) {
+				resv->key = le64_to_cpu(rs->regctl_ds[i].rkey);
+				break;
+			}
+		}
+	}
+
+free_rse:
+	kfree(rse);
+	return ret;
+}
+
 const struct pr_ops nvme_pr_ops = {
 	.pr_register	= nvme_pr_register,
 	.pr_reserve	= nvme_pr_reserve,
@@ -229,4 +311,5 @@ const struct pr_ops nvme_pr_ops = {
 	.pr_preempt	= nvme_pr_preempt,
 	.pr_clear	= nvme_pr_clear,
 	.pr_read_keys	= nvme_pr_read_keys,
+	.pr_read_reservation	= nvme_pr_read_reservation,
 };
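
Not part of the patch: below is a minimal sketch of how an in-kernel
consumer might exercise the new callout through a block device's
pr_ops. The wrapper name, the log messages, and the error handling are
illustrative assumptions; only the pr_ops and pr_held_reservation usage
reflects the interface this series adds.

#include <linux/blkdev.h>
#include <linux/errno.h>
#include <linux/pr.h>
#include <linux/printk.h>

/* Hypothetical helper: report the current reservation on an open bdev. */
static int example_show_reservation(struct block_device *bdev)
{
	const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
	struct pr_held_reservation resv = { };
	int ret;

	if (!ops || !ops->pr_read_reservation)
		return -EOPNOTSUPP;

	ret = ops->pr_read_reservation(bdev, &resv);
	if (ret)
		return ret;

	if (!resv.type) {
		/* No reservation held; only the generation is valid. */
		pr_info("no reservation, generation %u\n", resv.generation);
		return 0;
	}

	pr_info("reservation type %d held, key 0x%llx, generation %u\n",
		resv.type, resv.key, resv.generation);
	return 0;
}

Existing in-kernel users such as device-mapper reach pr_ops the same way
(bdev->bd_disk->fops->pr_ops), so the sketch mirrors how the callout is
dispatched in practice.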