From patchwork Wed Feb 10 19:52:50 2021
From: Minwoo Im
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: [RFC PATCH 1/3] hw/block/nvme: set NVME_DNR in a single place
Date: Thu, 11 Feb 2021 04:52:50 +0900
Message-Id:
<20210210195252.19339-2-minwoo.im.dev@gmail.com>
In-Reply-To: <20210210195252.19339-1-minwoo.im.dev@gmail.com>
References: <20210210195252.19339-1-minwoo.im.dev@gmail.com>
Cc: Kevin Wolf, "Dr. David Alan Gilbert", Max Reitz, Klaus Jensen, Minwoo Im, Keith Busch

Set NVME_DNR in the CQ entry status field in a single place, right before
writing the CQ entry in nvme_post_cqes(). We have been setting NVME_DNR in
the CQ entry status for all error cases anyway, so it can be applied once at
completion time rather than at every error return site. This is a
preparatory patch for the command retry feature.
Signed-off-by: Minwoo Im --- hw/block/nvme.c | 192 ++++++++++++++++++++++++------------------------ 1 file changed, 97 insertions(+), 95 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 93345bf3c1fc..816e0e8e5205 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -270,12 +270,12 @@ static int nvme_aor_check(NvmeNamespace *ns, uint32_t act, uint32_t opn) if (ns->params.max_active_zones != 0 && ns->nr_active_zones + act > ns->params.max_active_zones) { trace_pci_nvme_err_insuff_active_res(ns->params.max_active_zones); - return NVME_ZONE_TOO_MANY_ACTIVE | NVME_DNR; + return NVME_ZONE_TOO_MANY_ACTIVE; } if (ns->params.max_open_zones != 0 && ns->nr_open_zones + opn > ns->params.max_open_zones) { trace_pci_nvme_err_insuff_open_res(ns->params.max_open_zones); - return NVME_ZONE_TOO_MANY_OPEN | NVME_DNR; + return NVME_ZONE_TOO_MANY_OPEN; } return NVME_SUCCESS; @@ -492,7 +492,7 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov, if (cmb || pmr) { if (qsg && qsg->sg) { - return NVME_INVALID_USE_OF_CMB | NVME_DNR; + return NVME_INVALID_USE_OF_CMB; } assert(iov); @@ -509,7 +509,7 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov, } if (iov && iov->iov) { - return NVME_INVALID_USE_OF_CMB | NVME_DNR; + return NVME_INVALID_USE_OF_CMB; } assert(qsg); @@ -568,7 +568,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2, if (i == n->max_prp_ents - 1 && len > n->page_size) { if (unlikely(prp_ent & (n->page_size - 1))) { trace_pci_nvme_err_invalid_prplist_ent(prp_ent); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + return NVME_INVALID_PRP_OFFSET; } i = 0; @@ -585,7 +585,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2, if (unlikely(prp_ent & (n->page_size - 1))) { trace_pci_nvme_err_invalid_prplist_ent(prp_ent); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + return NVME_INVALID_PRP_OFFSET; } trans_len = MIN(len, n->page_size); @@ -600,7 +600,7 @@ static 
uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2, } else { if (unlikely(prp2 & (n->page_size - 1))) { trace_pci_nvme_err_invalid_prp2_align(prp2); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + return NVME_INVALID_PRP_OFFSET; } status = nvme_map_addr(n, qsg, iov, prp2, len); if (status) { @@ -637,9 +637,9 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg, break; case NVME_SGL_DESCR_TYPE_SEGMENT: case NVME_SGL_DESCR_TYPE_LAST_SEGMENT: - return NVME_INVALID_NUM_SGL_DESCRS | NVME_DNR; + return NVME_INVALID_NUM_SGL_DESCRS; default: - return NVME_SGL_DESCR_TYPE_INVALID | NVME_DNR; + return NVME_SGL_DESCR_TYPE_INVALID; } dlen = le32_to_cpu(segment[i].len); @@ -660,7 +660,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg, } trace_pci_nvme_err_invalid_sgl_excess_length(nvme_cid(req)); - return NVME_DATA_SGL_LEN_INVALID | NVME_DNR; + return NVME_DATA_SGL_LEN_INVALID; } trans_len = MIN(*len, dlen); @@ -672,7 +672,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg, addr = le64_to_cpu(segment[i].addr); if (UINT64_MAX - addr < dlen) { - return NVME_DATA_SGL_LEN_INVALID | NVME_DNR; + return NVME_DATA_SGL_LEN_INVALID; } status = nvme_map_addr(n, qsg, iov, addr, trans_len); @@ -731,7 +731,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov, case NVME_SGL_DESCR_TYPE_LAST_SEGMENT: break; default: - return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR; + return NVME_INVALID_SGL_SEG_DESCR; } seg_len = le32_to_cpu(sgld->len); @@ -739,11 +739,11 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov, /* check the length of the (Last) Segment descriptor */ if ((!seg_len || seg_len & 0xf) && (NVME_SGL_TYPE(sgld->type) != NVME_SGL_DESCR_TYPE_BIT_BUCKET)) { - return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR; + return NVME_INVALID_SGL_SEG_DESCR; } if (UINT64_MAX - addr < seg_len) { - return NVME_DATA_SGL_LEN_INVALID | NVME_DNR; + return NVME_DATA_SGL_LEN_INVALID; } nsgld = seg_len / 
sizeof(NvmeSglDescriptor); @@ -798,7 +798,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov, * current segment must not be a Last Segment. */ if (NVME_SGL_TYPE(sgld->type) == NVME_SGL_DESCR_TYPE_LAST_SEGMENT) { - status = NVME_INVALID_SGL_SEG_DESCR | NVME_DNR; + status = NVME_INVALID_SGL_SEG_DESCR; goto unmap; } @@ -818,7 +818,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov, out: /* if there is any residual left in len, the SGL was too short */ if (len) { - status = NVME_DATA_SGL_LEN_INVALID | NVME_DNR; + status = NVME_DATA_SGL_LEN_INVALID; goto unmap; } @@ -850,7 +850,7 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, size_t len, NvmeRequest *req) case NVME_PSDT_SGL_MPTR_SGL: /* SGLs shall not be used for Admin commands in NVMe over PCIe */ if (!req->sq->sqid) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } return nvme_map_sgl(n, &req->qsg, &req->iov, req->cmd.dptr.sgl, len, @@ -884,7 +884,7 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len, if (unlikely(residual)) { trace_pci_nvme_err_invalid_dma(); - status = NVME_INVALID_FIELD | NVME_DNR; + status = NVME_INVALID_FIELD; } } else { size_t bytes; @@ -897,7 +897,7 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len, if (unlikely(bytes != len)) { trace_pci_nvme_err_invalid_dma(); - status = NVME_INVALID_FIELD | NVME_DNR; + status = NVME_INVALID_FIELD; } } @@ -945,6 +945,11 @@ static void nvme_post_cqes(void *opaque) static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) { assert(cq->cqid == req->sq->cqid); + + if (req->status != NVME_SUCCESS) { + req->status |= NVME_DNR; + } + trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid, req->status); @@ -1069,7 +1074,7 @@ static inline uint16_t nvme_check_mdts(NvmeCtrl *n, size_t len) uint8_t mdts = n->params.mdts; if (mdts && len > n->page_size << mdts) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } 
return NVME_SUCCESS; @@ -1081,7 +1086,7 @@ static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba, uint64_t nsze = le64_to_cpu(ns->id_ns.nsze); if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) { - return NVME_LBA_RANGE | NVME_DNR; + return NVME_LBA_RANGE; } return NVME_SUCCESS; @@ -1791,11 +1796,11 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) if (!(n->id_ctrl.ocfs & (1 << format))) { trace_pci_nvme_err_copy_invalid_format(format); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (nr > ns->id_ns.msrc + 1) { - return NVME_CMD_SIZE_LIMIT | NVME_DNR; + return NVME_CMD_SIZE_LIMIT; } range = g_new(NvmeCopySourceRange, nr); @@ -1811,7 +1816,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) uint32_t _nlb = le16_to_cpu(range[i].nlb) + 1; if (_nlb > le16_to_cpu(ns->id_ns.mssrl)) { - return NVME_CMD_SIZE_LIMIT | NVME_DNR; + return NVME_CMD_SIZE_LIMIT; } status = nvme_check_bounds(ns, slba, _nlb); @@ -1838,7 +1843,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) } if (nlb > le32_to_cpu(ns->id_ns.mcl)) { - return NVME_CMD_SIZE_LIMIT | NVME_DNR; + return NVME_CMD_SIZE_LIMIT; } bounce = bouncep = g_malloc(nvme_l2b(ns, nlb)); @@ -2006,7 +2011,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req) invalid: block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_READ); - return status | NVME_DNR; + return status; } static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, @@ -2100,7 +2105,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, invalid: block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_WRITE); - return status | NVME_DNR; + return status; } static inline uint16_t nvme_write(NvmeCtrl *n, NvmeRequest *req) @@ -2126,14 +2131,14 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c, if (!ns->params.zoned) { trace_pci_nvme_err_invalid_opc(c->opcode); - return NVME_INVALID_OPCODE | NVME_DNR; + return 
NVME_INVALID_OPCODE; } *slba = ((uint64_t)dw11) << 32 | dw10; if (unlikely(*slba >= ns->id_ns.nsze)) { trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze); *slba = 0; - return NVME_LBA_RANGE | NVME_DNR; + return NVME_LBA_RANGE; } *zone_idx = nvme_zone_idx(ns, *slba); @@ -2364,7 +2369,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) zone = &ns->zone_array[zone_idx]; if (slba != zone->d.zslba) { trace_pci_nvme_err_unaligned_zone_cmd(action, slba, zone->d.zslba); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } switch (action) { @@ -2421,7 +2426,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) case NVME_ZONE_ACTION_SET_ZD_EXT: trace_pci_nvme_set_descriptor_extension(slba, zone_idx); if (all || !ns->params.zd_extension_size) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } zd_ext = nvme_get_zd_extension(ns, zone_idx); status = nvme_dma(n, zd_ext, ns->params.zd_extension_size, @@ -2447,9 +2452,6 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_err_invalid_zone_state_transition(action, slba, zone->d.za); } - if (status) { - status |= NVME_DNR; - } return status; } @@ -2506,19 +2508,19 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) zra = dw13 & 0xff; if (zra != NVME_ZONE_REPORT && zra != NVME_ZONE_REPORT_EXTENDED) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (zra == NVME_ZONE_REPORT_EXTENDED && !ns->params.zd_extension_size) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } zrasf = (dw13 >> 8) & 0xff; if (zrasf > NVME_ZONE_REPORT_OFFLINE) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (data_size < sizeof(NvmeZoneReportHeader)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } status = nvme_check_mdts(n, data_size); @@ -2596,17 +2598,17 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) 
req->cmd.opcode, nvme_io_opc_str(req->cmd.opcode)); if (!nvme_nsid_valid(n, nsid)) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } req->ns = nvme_ns(n, nsid); if (unlikely(!req->ns)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (!(req->ns->iocs[req->cmd.opcode] & NVME_CMD_EFF_CSUPP)) { trace_pci_nvme_err_invalid_opc(req->cmd.opcode); - return NVME_INVALID_OPCODE | NVME_DNR; + return NVME_INVALID_OPCODE; } switch (req->cmd.opcode) { @@ -2634,7 +2636,7 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) assert(false); } - return NVME_INVALID_OPCODE | NVME_DNR; + return NVME_INVALID_OPCODE; } static void nvme_free_sq(NvmeSQueue *sq, NvmeCtrl *n) @@ -2657,7 +2659,7 @@ static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req) if (unlikely(!qid || nvme_check_sqid(n, qid))) { trace_pci_nvme_err_invalid_del_sq(qid); - return NVME_INVALID_QID | NVME_DNR; + return NVME_INVALID_QID; } trace_pci_nvme_del_sq(qid); @@ -2728,24 +2730,24 @@ static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req) if (unlikely(!cqid || nvme_check_cqid(n, cqid))) { trace_pci_nvme_err_invalid_create_sq_cqid(cqid); - return NVME_INVALID_CQID | NVME_DNR; + return NVME_INVALID_CQID; } if (unlikely(!sqid || sqid > n->params.max_ioqpairs || n->sq[sqid] != NULL)) { trace_pci_nvme_err_invalid_create_sq_sqid(sqid); - return NVME_INVALID_QID | NVME_DNR; + return NVME_INVALID_QID; } if (unlikely(!qsize || qsize > NVME_CAP_MQES(n->bar.cap))) { trace_pci_nvme_err_invalid_create_sq_size(qsize); - return NVME_MAX_QSIZE_EXCEEDED | NVME_DNR; + return NVME_MAX_QSIZE_EXCEEDED; } if (unlikely(prp1 & (n->page_size - 1))) { trace_pci_nvme_err_invalid_create_sq_addr(prp1); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + return NVME_INVALID_PRP_OFFSET; } if (unlikely(!(NVME_SQ_FLAGS_PC(qflags)))) { trace_pci_nvme_err_invalid_create_sq_qflags(NVME_SQ_FLAGS_PC(qflags)); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } sq = 
g_malloc0(sizeof(*sq)); nvme_init_sq(sq, n, prp1, sqid, cqid, qsize + 1); @@ -2780,13 +2782,13 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, time_t current_ms; if (off >= sizeof(smart)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (nsid != 0xffffffff) { ns = nvme_ns(n, nsid); if (!ns) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } nvme_set_blk_stats(ns, &stats); } else { @@ -2839,7 +2841,7 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off, }; if (off >= sizeof(fw_log)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } strpadcpy((char *)&fw_log.frs1, sizeof(fw_log.frs1), "1.0", ' '); @@ -2856,7 +2858,7 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, NvmeErrorLog errlog; if (off >= sizeof(errlog)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (!rae) { @@ -2879,7 +2881,7 @@ static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len, if (off >= sizeof(log)) { trace_pci_nvme_err_invalid_log_page_offset(off, sizeof(log)); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } switch (NVME_CC_CSS(n->bar.cc)) { @@ -2937,7 +2939,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req) off = (lpou << 32ULL) | lpol; if (off & 0x3) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } trace_pci_nvme_get_log(nvme_cid(req), lid, lsp, rae, len, off); @@ -2959,7 +2961,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req) return nvme_cmd_effects(n, csi, len, off, req); default: trace_pci_nvme_err_invalid_log_page(nvme_cid(req), lid); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } } @@ -2983,7 +2985,7 @@ static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeRequest *req) if (unlikely(!qid || nvme_check_cqid(n, qid))) { trace_pci_nvme_err_invalid_del_cq_cqid(qid); - return NVME_INVALID_CQID | 
NVME_DNR; + return NVME_INVALID_CQID; } cq = n->cq[qid]; @@ -3037,27 +3039,27 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req) if (unlikely(!cqid || cqid > n->params.max_ioqpairs || n->cq[cqid] != NULL)) { trace_pci_nvme_err_invalid_create_cq_cqid(cqid); - return NVME_INVALID_QID | NVME_DNR; + return NVME_INVALID_QID; } if (unlikely(!qsize || qsize > NVME_CAP_MQES(n->bar.cap))) { trace_pci_nvme_err_invalid_create_cq_size(qsize); - return NVME_MAX_QSIZE_EXCEEDED | NVME_DNR; + return NVME_MAX_QSIZE_EXCEEDED; } if (unlikely(prp1 & (n->page_size - 1))) { trace_pci_nvme_err_invalid_create_cq_addr(prp1); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + return NVME_INVALID_PRP_OFFSET; } if (unlikely(!msix_enabled(&n->parent_obj) && vector)) { trace_pci_nvme_err_invalid_create_cq_vector(vector); - return NVME_INVALID_IRQ_VECTOR | NVME_DNR; + return NVME_INVALID_IRQ_VECTOR; } if (unlikely(vector >= n->params.msix_qsize)) { trace_pci_nvme_err_invalid_create_cq_vector(vector); - return NVME_INVALID_IRQ_VECTOR | NVME_DNR; + return NVME_INVALID_IRQ_VECTOR; } if (unlikely(!(NVME_CQ_FLAGS_PC(qflags)))) { trace_pci_nvme_err_invalid_create_cq_qflags(NVME_CQ_FLAGS_PC(qflags)); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } cq = g_malloc0(sizeof(*cq)); @@ -3115,7 +3117,7 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req) DMA_DIRECTION_FROM_DEVICE, req); } - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req) @@ -3127,7 +3129,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_identify_ns(nsid); if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } ns = nvme_ns(n, nsid); @@ -3140,7 +3142,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req) DMA_DIRECTION_FROM_DEVICE, req); } - return NVME_INVALID_CMD_SET | NVME_DNR; + 
return NVME_INVALID_CMD_SET; } static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req) @@ -3152,7 +3154,7 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_identify_ns_csi(nsid, c->csi); if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } ns = nvme_ns(n, nsid); @@ -3167,7 +3169,7 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req) DMA_DIRECTION_FROM_DEVICE, req); } - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req) @@ -3189,7 +3191,7 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req) * in the spec (NVM Express v1.3d, Section 5.15.4). */ if (min_nsid >= NVME_NSID_BROADCAST - 1) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } for (i = 1; i <= n->num_namespaces; i++) { @@ -3225,11 +3227,11 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req) * Same as in nvme_identify_nslist(), 0xffffffff/0xfffffffe are invalid. 
*/ if (min_nsid >= NVME_NSID_BROADCAST - 1) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } if (c->csi != NVME_CSI_NVM && c->csi != NVME_CSI_ZONED) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } for (i = 1; i <= n->num_namespaces; i++) { @@ -3272,12 +3274,12 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_identify_ns_descr_list(nsid); if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } ns = nvme_ns(n, nsid); if (unlikely(!ns)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } /* @@ -3340,7 +3342,7 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) return nvme_identify_cmd_set(n, req); default: trace_pci_nvme_err_invalid_identify_cns(le32_to_cpu(c->cns)); - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } } @@ -3350,7 +3352,7 @@ static uint16_t nvme_abort(NvmeCtrl *n, NvmeRequest *req) req->cqe.result = 1; if (nvme_check_sqid(n, sqid)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } return NVME_SUCCESS; @@ -3419,7 +3421,7 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_getfeat(nvme_cid(req), nsid, fid, sel, dw11); if (!nvme_feature_support[fid]) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (nvme_feature_cap[fid] & NVME_FEAT_CAP_NS) { @@ -3431,11 +3433,11 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) * features we can always return Invalid Namespace or Format as we * should do for all other features. 
*/ - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } if (!nvme_ns(n, nsid)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } } @@ -3472,15 +3474,15 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) goto out; } - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; case NVME_ERROR_RECOVERY: if (!nvme_nsid_valid(n, nsid)) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } ns = nvme_ns(n, nsid); if (unlikely(!ns)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } result = ns->features.err_rec; @@ -3530,7 +3532,7 @@ defaults: case NVME_INTERRUPT_VECTOR_CONF: iv = dw11 & 0xffff; if (iv >= n->params.max_ioqpairs + 1) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } result = iv; @@ -3582,34 +3584,34 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_setfeat(nvme_cid(req), nsid, fid, save, dw11); if (save && !(nvme_feature_cap[fid] & NVME_FEAT_CAP_SAVE)) { - return NVME_FID_NOT_SAVEABLE | NVME_DNR; + return NVME_FID_NOT_SAVEABLE; } if (!nvme_feature_support[fid]) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if (nvme_feature_cap[fid] & NVME_FEAT_CAP_NS) { if (nsid != NVME_NSID_BROADCAST) { if (!nvme_nsid_valid(n, nsid)) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } ns = nvme_ns(n, nsid); if (unlikely(!ns)) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } } } else if (nsid && nsid != NVME_NSID_BROADCAST) { if (!nvme_nsid_valid(n, nsid)) { - return NVME_INVALID_NSID | NVME_DNR; + return NVME_INVALID_NSID; } - return NVME_FEAT_NOT_NS_SPEC | NVME_DNR; + return NVME_FEAT_NOT_NS_SPEC; } if (!(nvme_feature_cap[fid] & NVME_FEAT_CAP_CHANGE)) { - return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR; + return NVME_FEAT_NOT_CHANGEABLE; } switch (fid) { @@ -3626,7 +3628,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) 
n->features.temp_thresh_low = NVME_TEMP_TMPTH(dw11); break; default: - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } if ((n->temperature >= n->features.temp_thresh_hi) || @@ -3675,7 +3677,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) case NVME_NUMBER_OF_QUEUES: if (n->qs_created) { - return NVME_CMD_SEQ_ERROR | NVME_DNR; + return NVME_CMD_SEQ_ERROR; } /* @@ -3683,7 +3685,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) * and NSQR. */ if ((dw11 & 0xffff) == 0xffff || ((dw11 >> 16) & 0xffff) == 0xffff) { - return NVME_INVALID_FIELD | NVME_DNR; + return NVME_INVALID_FIELD; } trace_pci_nvme_setfeat_numq((dw11 & 0xFFFF) + 1, @@ -3701,11 +3703,11 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) case NVME_COMMAND_SET_PROFILE: if (dw11 & 0x1ff) { trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff); - return NVME_CMD_SET_CMB_REJECTED | NVME_DNR; + return NVME_CMD_SET_CMB_REJECTED; } break; default: - return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR; + return NVME_FEAT_NOT_CHANGEABLE; } return NVME_SUCCESS; } @@ -3736,7 +3738,7 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) if (!(nvme_cse_acs[req->cmd.opcode] & NVME_CMD_EFF_CSUPP)) { trace_pci_nvme_err_invalid_admin_opc(req->cmd.opcode); - return NVME_INVALID_OPCODE | NVME_DNR; + return NVME_INVALID_OPCODE; } switch (req->cmd.opcode) { @@ -3764,7 +3766,7 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) assert(false); } - return NVME_INVALID_OPCODE | NVME_DNR; + return NVME_INVALID_OPCODE; } static void nvme_process_sq(void *opaque)

From patchwork Wed Feb 10 19:52:51 2021
From: Minwoo Im
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: [RFC PATCH 2/3] hw/block/nvme: support command retry delay
Date: Thu, 11 Feb 2021 04:52:51 +0900
Message-Id: <20210210195252.19339-3-minwoo.im.dev@gmail.com>
In-Reply-To: <20210210195252.19339-1-minwoo.im.dev@gmail.com>
References: <20210210195252.19339-1-minwoo.im.dev@gmail.com>
Cc: Kevin Wolf, "Dr. David Alan Gilbert", Max Reitz, Klaus Jensen, Minwoo Im, Keith Busch

Set CRDT1 (Command Retry Delay Time 1) in the Identify Controller data
structure from the value of the newly added 'cmd-retry-delay' parameter.
The parameter is given in milliseconds while CRDT1 is reported in units of
100 ms, so cmd-retry-delay=1000 sets CRDT1 to 10. For simplicity, this
patch only considers CRDT1, not CRDT2 and CRDT3.

This patch also introduces set/get feature command handlers for the Host
Behavior Support feature (FID 16h). Through this feature, ACRE (Advanced
Command Retry Enable) is set by the host based on the Identify Controller
data structure, especially the CRDT fields.

If 'cmd-retry-delay' is not given, it defaults to -1, meaning CRDT1 is not
configured and ACRE is not supported; in that case NVME_DNR is simply set
in the error CQ entry status, as before. If a positive value is given,
ACRE is supported by the device.
Signed-off-by: Minwoo Im
Reviewed-by: Klaus Jensen
---
 hw/block/nvme.c      | 65 ++++++++++++++++++++++++++++++++++++++++++--
 hw/block/nvme.h      |  2 ++
 include/block/nvme.h | 13 ++++++++-
 3 files changed, 77 insertions(+), 3 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 816e0e8e5205..6d3c554a0e99 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -23,7 +23,7 @@
  *              max_ioqpairs=, \
  *              aerl=, aer_max_queued=, \
  *              mdts=,zoned.append_size_limit=, \
- *              subsys= \
+ *              subsys=,cmd-retry-delay= \
  *      -device nvme-ns,drive=,bus=,nsid=,\
  *              zoned=, \
  *              subsys=
@@ -71,6 +71,14 @@
  *   data size being in effect. By setting this property to 0, users can make
  *   ZASL to be equal to MDTS. This property only affects zoned namespaces.
  *
+ * - `cmd-retry-delay`
+ *   Command Retry Delay value in unit of millisecond. This value will be
+ *   reported to the CRDT1(Command Retry Delay Time 1) in Identify Controller
+ *   data structure in 100 milliseconds unit. If this is not given, DNR(Do Not
+ *   Retry) bit field in the Status field of CQ entry. If it's given to 0,
+ *   CRD(Command Retry Delay) will be set to 0 which is for retry without
+ *   delay. Otherwise, it will set to 1 to delay for CRDT1 value.
+ *
  * nvme namespace device parameters
  * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  * - `subsys`
@@ -154,6 +162,7 @@ static const bool nvme_feature_support[NVME_FID_MAX] = {
    [NVME_WRITE_ATOMICITY]          = true,
    [NVME_ASYNCHRONOUS_EVENT_CONF]  = true,
    [NVME_TIMESTAMP]                = true,
+    [NVME_HOST_BEHAVIOR_SUPPORT]    = true,
 };
 
 static const uint32_t nvme_feature_cap[NVME_FID_MAX] = {
@@ -163,6 +172,7 @@ static const uint32_t nvme_feature_cap[NVME_FID_MAX] = {
    [NVME_NUMBER_OF_QUEUES]         = NVME_FEAT_CAP_CHANGE,
    [NVME_ASYNCHRONOUS_EVENT_CONF]  = NVME_FEAT_CAP_CHANGE,
    [NVME_TIMESTAMP]                = NVME_FEAT_CAP_CHANGE,
+    [NVME_HOST_BEHAVIOR_SUPPORT]    = NVME_FEAT_CAP_CHANGE,
 };
 
 static const uint32_t nvme_cse_acs[256] = {
@@ -904,6 +914,16 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
     return status;
 }
 
+static inline bool nvme_should_retry(NvmeRequest *req)
+{
+    switch (req->status) {
+    case NVME_COMMAND_INTERRUPTED:
+        return true;
+    default:
+        return false;
+    }
+}
+
 static void nvme_post_cqes(void *opaque)
 {
     NvmeCQueue *cq = opaque;
@@ -947,7 +967,13 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
     assert(cq->cqid == req->sq->cqid);
 
     if (req->status != NVME_SUCCESS) {
-        req->status |= NVME_DNR;
+        if (cq->ctrl->features.acre && nvme_should_retry(req)) {
+            if (cq->ctrl->params.cmd_retry_delay > 0) {
+                req->status |= NVME_CRD_CRDT1;
+            }
+        } else {
+            req->status |= NVME_DNR;
+        }
     }
 
     trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid,
@@ -3401,6 +3427,16 @@ static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
                     DMA_DIRECTION_FROM_DEVICE, req);
 }
 
+static uint16_t nvme_get_feature_host_behavior(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeFeatureHostBehavior data = {};
+
+    data.acre = n->features.acre;
+
+    return nvme_dma(n, (uint8_t *)&data, sizeof(data),
+                    DMA_DIRECTION_FROM_DEVICE, req);
+}
+
 static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeCmd *cmd = &req->cmd;
@@ -3506,6 +3542,8 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
         goto out;
     case NVME_TIMESTAMP:
         return nvme_get_feature_timestamp(n, req);
+    case NVME_HOST_BEHAVIOR_SUPPORT:
+        return nvme_get_feature_host_behavior(n, req);
     default:
         break;
     }
@@ -3569,6 +3607,22 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_set_feature_host_behavior(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeFeatureHostBehavior data;
+    int ret;
+
+    ret = nvme_dma(n, (uint8_t *)&data, sizeof(data),
+                   DMA_DIRECTION_TO_DEVICE, req);
+    if (ret) {
+        return ret;
+    }
+
+    n->features.acre = data.acre;
+
+    return NVME_SUCCESS;
+}
+
 static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeNamespace *ns = NULL;
@@ -3700,6 +3754,8 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
         break;
     case NVME_TIMESTAMP:
         return nvme_set_feature_timestamp(n, req);
+    case NVME_HOST_BEHAVIOR_SUPPORT:
+        return nvme_set_feature_host_behavior(n, req);
     case NVME_COMMAND_SET_PROFILE:
         if (dw11 & 0x1ff) {
             trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff);
@@ -4699,6 +4755,10 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->oacs = cpu_to_le16(0);
     id->cntrltype = 0x1;
 
+    if (n->params.cmd_retry_delay > 0) {
+        id->crdt1 = cpu_to_le16(DIV_ROUND_UP(n->params.cmd_retry_delay, 100));
+    }
+
     /*
      * Because the controller always completes the Abort command immediately,
      * there can never be more than one concurrently executing Abort command,
@@ -4859,6 +4919,7 @@ static Property nvme_props[] = {
     DEFINE_PROP_BOOL("legacy-cmb", NvmeCtrl, params.legacy_cmb, false),
     DEFINE_PROP_SIZE32("zoned.append_size_limit", NvmeCtrl, params.zasl_bs,
                        NVME_DEFAULT_MAX_ZA_SIZE),
+    DEFINE_PROP_INT32("cmd-retry-delay", NvmeCtrl, params.cmd_retry_delay, -1),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index cb2b5175f1a1..37940b3ac2d2 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -22,6 +22,7 @@ typedef struct NvmeParams {
     bool     use_intel_id;
     uint32_t zasl_bs;
     bool     legacy_cmb;
+    int32_t  cmd_retry_delay;
 } NvmeParams;
 
 typedef struct NvmeAsyncEvent {
@@ -124,6 +125,7 @@ typedef struct NvmeFeatureVal {
         uint16_t temp_thresh_low;
     };
     uint32_t    async_config;
+    uint8_t     acre;
 } NvmeFeatureVal;
 
 typedef struct NvmeCtrl {
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 9f8eb3988c0e..e1e2259da933 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -813,6 +813,7 @@ enum NvmeStatusCodes {
     NVME_SGL_DESCR_TYPE_INVALID = 0x0011,
     NVME_INVALID_USE_OF_CMB     = 0x0012,
     NVME_INVALID_PRP_OFFSET     = 0x0013,
+    NVME_COMMAND_INTERRUPTED    = 0x0021,
     NVME_CMD_SET_CMB_REJECTED   = 0x002b,
     NVME_INVALID_CMD_SET        = 0x002c,
     NVME_LBA_RANGE              = 0x0080,
@@ -858,6 +859,7 @@ enum NvmeStatusCodes {
     NVME_DULB                   = 0x0287,
     NVME_MORE                   = 0x2000,
     NVME_DNR                    = 0x4000,
+    NVME_CRD_CRDT1              = 0x0800,
     NVME_NO_COMPLETE            = 0xffff,
 };
 
@@ -987,7 +989,10 @@ typedef struct QEMU_PACKED NvmeIdCtrl {
     uint8_t     rsvd100[11];
     uint8_t     cntrltype;
     uint8_t     fguid[16];
-    uint8_t     rsvd128[128];
+    uint16_t    crdt1;
+    uint16_t    crdt2;
+    uint16_t    crdt3;
+    uint8_t     rsvd134[122];
     uint16_t    oacs;
     uint8_t     acl;
     uint8_t     aerl;
@@ -1131,6 +1136,7 @@ enum NvmeFeatureIds {
     NVME_WRITE_ATOMICITY            = 0xa,
     NVME_ASYNCHRONOUS_EVENT_CONF    = 0xb,
     NVME_TIMESTAMP                  = 0xe,
+    NVME_HOST_BEHAVIOR_SUPPORT      = 0x16,
     NVME_COMMAND_SET_PROFILE        = 0x19,
     NVME_SOFTWARE_PROGRESS_MARKER   = 0x80,
     NVME_FID_MAX                    = 0x100,
@@ -1162,6 +1168,11 @@ typedef enum NvmeGetFeatureSelect {
 #define NVME_SETFEAT_SAVE(dw10) \
     ((dw10 >> NVME_SETFEAT_SAVE_SHIFT) & NVME_SETFEAT_SAVE_MASK)
 
+typedef struct QEMU_PACKED NvmeFeatureHostBehavior {
+    uint8_t  acre;
+    uint8_t  rsvd1[511];
+} NvmeFeatureHostBehavior;
+
 typedef struct QEMU_PACKED NvmeRangeType {
     uint8_t     type;
     uint8_t     attributes;

From patchwork Wed Feb 10 19:52:52 2021
X-Patchwork-Submitter: Minwoo Im
X-Patchwork-Id: 12081695
From: Minwoo Im
To: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: [RFC PATCH 3/3] hw/block/nvme: add nvme_inject_state HMP command
Date: Thu, 11 Feb 2021 04:52:52 +0900
Message-Id: <20210210195252.19339-4-minwoo.im.dev@gmail.com>
In-Reply-To: <20210210195252.19339-1-minwoo.im.dev@gmail.com>
References: <20210210195252.19339-1-minwoo.im.dev@gmail.com>
Cc: Kevin Wolf, "Dr. David Alan Gilbert", Max Reitz, Klaus Jensen, Minwoo Im, Keith Busch

The nvme_inject_state command injects a given state into a controller.
Through the Human Monitor Interface (HMP), users can put the controller
into one of the following states:

  normal:           Normal state (no injection)
  cmd-interrupted:  Commands will be interrupted internally

This patch is just a start toward driving the QEMU NVMe device model
dynamically from the HMP. If the "cmd-interrupted" state is injected, the
controller will complete all CQ entries with the Command Interrupted
status code.

Usage:
  -device nvme,id=nvme0,....

  (qemu) nvme_inject_state nvme0 cmd-interrupted
  (qemu) nvme_inject_state nvme0 normal

This feature is needed to test the command retry feature of the Linux
kernel NVMe driver.
Signed-off-by: Minwoo Im
---
 hmp-commands.hx       | 13 ++++++++++++
 hw/block/nvme.c       | 49 +++++++++++++++++++++++++++++++++++++++++++
 hw/block/nvme.h       |  8 +++++++
 include/monitor/hmp.h |  1 +
 4 files changed, 71 insertions(+)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index d4001f9c5dc6..ef288c567b46 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1307,6 +1307,19 @@ SRST
   Inject PCIe AER error
 ERST
 
+    {
+        .name       = "nvme_inject_state",
+        .args_type  = "id:s,state:s",
+        .params     = "id [normal|cmd-interrupted]",
+        .help       = "inject controller/namespace state",
+        .cmd        = hmp_nvme_inject_state,
+    },
+
+SRST
+``nvme_inject_state``
+  Inject NVMe controller/namespace state
+ERST
+
     {
         .name       = "netdev_add",
         .args_type  = "netdev:O",
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 6d3c554a0e99..42cf5bd113e6 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -123,6 +123,7 @@
 #include "sysemu/sysemu.h"
 #include "qapi/error.h"
 #include "qapi/visitor.h"
+#include "qapi/qmp/qdict.h"
 #include "sysemu/hostmem.h"
 #include "sysemu/block-backend.h"
 #include "exec/memory.h"
@@ -132,6 +133,7 @@
 #include "trace.h"
 #include "nvme.h"
 #include "nvme-ns.h"
+#include "monitor/monitor.h"
 
 #define NVME_MAX_IOQPAIRS 0xffff
 #define NVME_DB_SIZE  4
@@ -966,6 +968,14 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
 {
     assert(cq->cqid == req->sq->cqid);
 
+    /*
+     * Override request status field if controller state has been injected by
+     * the QMP.
+     */
+    if (cq->ctrl->state == NVME_STATE_CMD_INTERRUPTED) {
+        req->status = NVME_COMMAND_INTERRUPTED;
+    }
+
     if (req->status != NVME_SUCCESS) {
         if (cq->ctrl->features.acre && nvme_should_retry(req)) {
             if (cq->ctrl->params.cmd_retry_delay > 0) {
@@ -5025,4 +5035,43 @@ static void nvme_register_types(void)
     type_register_static(&nvme_bus_info);
 }
 
+static void nvme_inject_state(NvmeCtrl *n, NvmeState state)
+{
+    n->state = state;
+}
+
+static const char *nvme_states[] = {
+    [NVME_STATE_NORMAL]          = "normal",
+    [NVME_STATE_CMD_INTERRUPTED] = "cmd-interrupted",
+};
+
+void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict)
+{
+    const char *id = qdict_get_str(qdict, "id");
+    const char *state = qdict_get_str(qdict, "state");
+    PCIDevice *dev;
+    NvmeCtrl *n;
+    int ret, i;
+
+    ret = pci_qdev_find_device(id, &dev);
+    if (ret < 0) {
+        monitor_printf(mon, "invalid device id %s\n", id);
+        return;
+    }
+
+    n = NVME(dev);
+
+    for (i = 0; i < ARRAY_SIZE(nvme_states); i++) {
+        if (!strcmp(nvme_states[i], state)) {
+            nvme_inject_state(n, i);
+            monitor_printf(mon,
+                           "-device nvme,id=%s: state %s injected\n",
+                           id, state);
+            return;
+        }
+    }
+
+    monitor_printf(mon, "invalid state %s\n", state);
+}
+
 type_init(nvme_register_types)
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 37940b3ac2d2..1af1e0380d9b 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -128,6 +128,11 @@ typedef struct NvmeFeatureVal {
     uint8_t     acre;
 } NvmeFeatureVal;
 
+typedef enum NvmeState {
+    NVME_STATE_NORMAL,
+    NVME_STATE_CMD_INTERRUPTED,
+} NvmeState;
+
 typedef struct NvmeCtrl {
     PCIDevice    parent_obj;
     MemoryRegion bar0;
@@ -185,6 +190,8 @@ typedef struct NvmeCtrl {
     NvmeCQueue      admin_cq;
     NvmeIdCtrl      id_ctrl;
     NvmeFeatureVal  features;
+
+    NvmeState       state;
 } NvmeCtrl;
 
 static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid)
@@ -212,4 +219,5 @@ static inline NvmeCtrl *nvme_ctrl(NvmeRequest *req)
 
 int nvme_register_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp);
 
+void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
 #endif /* HW_NVME_H */
diff --git a/include/monitor/hmp.h b/include/monitor/hmp.h
index ed2913fd18e8..668384ea2e34 100644
--- a/include/monitor/hmp.h
+++ b/include/monitor/hmp.h
@@ -79,6 +79,7 @@ void hmp_migrate(Monitor *mon, const QDict *qdict);
 void hmp_device_add(Monitor *mon, const QDict *qdict);
 void hmp_device_del(Monitor *mon, const QDict *qdict);
 void hmp_dump_guest_memory(Monitor *mon, const QDict *qdict);
+void hmp_nvme_inject_state(Monitor *mon, const QDict *qdict);
 void hmp_netdev_add(Monitor *mon, const QDict *qdict);
 void hmp_netdev_del(Monitor *mon, const QDict *qdict);
 void hmp_getfd(Monitor *mon, const QDict *qdict);