From patchwork Wed May 16 03:03:09 2018
X-Patchwork-Submitter: "jianchao.wang"
X-Patchwork-Id: 10402469
Subject: Re: [PATCH V5 0/9] nvme: pci: fix & improve timeout handling
To: Ming Lei
Cc: Jens Axboe, linux-block@vger.kernel.org, Laurence Oberman, Sagi Grimberg,
    James Smart, linux-nvme@lists.infradead.org, Keith Busch, Christoph Hellwig
From: "jianchao.wang"
Date: Wed, 16 May 2018 11:03:09 +0800
In-Reply-To: <20180515125640.GC13679@ming.t460p>

Hi Ming

On 05/15/2018 08:56 PM, Ming Lei wrote:
> Looks a nice fix on nvme_create_queue(), but seems the change on
> adapter_alloc_cq() is missed in above patch.
>
> Could you prepare a formal one so that I may integrate it to V6?
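For context on why the warning fires: once cq_vector holds a valid value, the
timeout/teardown path that runs nvme_suspend_queue() assumes the queue owns an
IRQ and releases it, even though queue_request_irq() was never reached for the
new queue. Below is a rough sketch of that behaviour (a paraphrase for
illustration only, not the exact driver source; the function name and the
pci_irq_vector()-based lookup are simplifications of mine):

/*
 * Rough paraphrase of the nvme_suspend_queue() behaviour relevant here,
 * for illustration only -- not the exact kernel source.
 */
static int nvme_suspend_queue_sketch(struct nvme_queue *nvmeq)
{
	if (nvmeq->cq_vector == -1)
		return 1;	/* queue owns no IRQ, nothing to release */

	/*
	 * cq_vector >= 0 is treated as "an IRQ was requested for this queue".
	 * If cq_vector is assigned before adapter_alloc_cq()/adapter_alloc_sq()
	 * succeed, we get here without queue_request_irq() ever having run,
	 * and the release below warns 'Trying to free already-free IRQ xxx'.
	 */
	free_irq(pci_irq_vector(to_pci_dev(nvmeq->dev->dev), nvmeq->cq_vector),
		 nvmeq);
	nvmeq->cq_vector = -1;

	return 0;
}

With cq_vector assigned only after both admin commands succeed (and reset to -1
on the error path), that teardown sees -1 and skips the bogus free.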
Please refer to the patch below.

Thanks
Jianchao

From 9bb6db79901ef303cd40c4c911604bc053b1ad95 Mon Sep 17 00:00:00 2001
From: Jianchao Wang
Date: Wed, 16 May 2018 09:45:45 +0800
Subject: [PATCH] nvme-pci: set nvmeq->cq_vector after alloc cq/sq

Currently nvmeq->cq_vector is set before the cq/sq are allocated. If the
create cq/sq command times out, nvme_suspend_queue() will invoke free_irq()
for the nvmeq because cq_vector looks valid, and this causes the warning
'Trying to free already-free IRQ xxx'.

Set nvmeq->cq_vector only after the cq/sq have been allocated successfully
to fix this.

Signed-off-by: Jianchao Wang
---
 drivers/nvme/host/pci.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index fbc71fa..c830092 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1070,7 +1070,7 @@ static int adapter_delete_queue(struct nvme_dev *dev, u8 opcode, u16 id)
 }
 
 static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid,
-		struct nvme_queue *nvmeq)
+		struct nvme_queue *nvmeq, int cq_vector)
 {
 	struct nvme_command c;
 	int flags = NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED;
@@ -1085,7 +1085,7 @@ static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid,
 	c.create_cq.cqid = cpu_to_le16(qid);
 	c.create_cq.qsize = cpu_to_le16(nvmeq->q_depth - 1);
 	c.create_cq.cq_flags = cpu_to_le16(flags);
-	c.create_cq.irq_vector = cpu_to_le16(nvmeq->cq_vector);
+	c.create_cq.irq_vector = cpu_to_le16(cq_vector);
 
 	return nvme_submit_sync_cmd(dev->ctrl.admin_q, &c, NULL, 0);
 }
@@ -1450,6 +1450,7 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
 {
 	struct nvme_dev *dev = nvmeq->dev;
 	int result;
+	int cq_vector;
 
 	if (dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
 		unsigned offset = (qid - 1) * roundup(SQ_SIZE(nvmeq->q_depth),
@@ -1462,15 +1463,21 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
 	 * A queue's vector matches the queue identifier unless the controller
 	 * has only one vector available.
 	 */
-	nvmeq->cq_vector = dev->num_vecs == 1 ? 0 : qid;
-	result = adapter_alloc_cq(dev, qid, nvmeq);
+	cq_vector = dev->num_vecs == 1 ? 0 : qid;
+	result = adapter_alloc_cq(dev, qid, nvmeq, cq_vector);
 	if (result < 0)
-		goto release_vector;
+		goto out;
 
 	result = adapter_alloc_sq(dev, qid, nvmeq);
 	if (result < 0)
 		goto release_cq;
 
+	/*
+	 * Set cq_vector after the cq/sq are allocated. Otherwise, if the
+	 * create cq/sq command times out, nvme_suspend_queue() will invoke
+	 * free_irq() for it and trigger 'Trying to free already-free IRQ xxx'.
+	 */
+	nvmeq->cq_vector = cq_vector;
 	nvme_init_queue(nvmeq, qid);
 	result = queue_request_irq(nvmeq);
 	if (result < 0)
@@ -1478,13 +1485,13 @@ static int nvme_create_queue(struct nvme_queue *nvmeq, int qid)
 
 	return result;
 
- release_sq:
+release_sq:
+	nvmeq->cq_vector = -1;
 	dev->online_queues--;
 	adapter_delete_sq(dev, qid);
- release_cq:
+release_cq:
 	adapter_delete_cq(dev, qid);
- release_vector:
-	nvmeq->cq_vector = -1;
+out:
 	return result;
 }
 
-- 
2.7.4