From patchwork Thu Aug 17 20:13:28 2017
X-Patchwork-Submitter: Bart Van Assche
X-Patchwork-Id: 9907117
From: Bart Van Assche
To: Jens Axboe
Cc: linux-block@vger.kernel.org, Christoph Hellwig, Damien Le Moal,
    Akhil Bhansali, Bart Van Assche, Hannes Reinecke, Johannes Thumshirn
Subject: [PATCH 45/55] skd: Introduce skd_process_request()
Date: Thu, 17 Aug 2017 13:13:28 -0700
Message-Id: <20170817201338.16537-46-bart.vanassche@wdc.com>
X-Mailer: git-send-email 2.14.0
In-Reply-To: <20170817201338.16537-1-bart.vanassche@wdc.com>
References: <20170817201338.16537-1-bart.vanassche@wdc.com>

The only functional change in this patch is that the skd_fitmsg_context
in which requests are accumulated is changed from a local variable into
a member of struct skd_device. This patch will make the blk-mq
conversion easier.
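To make the refactoring easier to follow, here is a stand-alone sketch of
the batching pattern this patch moves into skd_process_request(): the
partially filled message now lives on the device structure instead of in a
local variable of the request function, so each request can be handled by
its own helper and the remaining batch can be flushed separately. This is
an illustrative model only; struct dev, process_request(), send_msg() and
MAX_PER_MSG are made-up names, not part of the skd driver or the block
layer API.

/*
 * Illustrative user-space model, not driver code. The "message being
 * filled" pointer is kept on the device struct, mirroring the new
 * skdev->skmsg member, so it survives across calls to the helper.
 */
#include <stdio.h>

#define MAX_PER_MSG 4

struct msg {
	int ncmds;
	int cmds[MAX_PER_MSG];
};

struct dev {
	struct msg *cur;	/* message being filled, or NULL */
	struct msg storage;
};

static void send_msg(struct dev *d)
{
	printf("sending %d command(s)\n", d->cur->ncmds);
	d->cur = NULL;
}

/*
 * Per-request work: open a message if none is in progress, append the
 * command, and send the message as soon as it is full. This mirrors the
 * shape of skd_process_request().
 */
static void process_request(struct dev *d, int cmd)
{
	if (!d->cur) {
		d->cur = &d->storage;
		d->cur->ncmds = 0;
	}
	d->cur->cmds[d->cur->ncmds++] = cmd;
	if (d->cur->ncmds >= MAX_PER_MSG)
		send_msg(d);
}

int main(void)
{
	struct dev d = { 0 };
	int i;

	for (i = 0; i < 6; i++)
		process_request(&d, i);
	/* Flush the partial batch, as skd_request_fn() does after its loop. */
	if (d.cur)
		send_msg(&d);
	return 0;
}

With MAX_PER_MSG set to 4 and six requests, this prints one line for the
full batch and one for the final partial batch, which is the same split as
between the send inside skd_process_request() and the final send in
skd_request_fn().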
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
---
 drivers/block/skd_main.c | 237 ++++++++++++++++++++++++-----------------------
 1 file changed, 119 insertions(+), 118 deletions(-)

diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 4b92d711d2d3..1d10373b0da3 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -232,6 +232,7 @@ struct skd_device {
 	spinlock_t lock;
 	struct gendisk *disk;
 	struct request_queue *queue;
+	struct skd_fitmsg_context *skmsg;
 	struct device *class_dev;
 	int gendisk_on;
 	int sync_done;
@@ -492,23 +493,128 @@ static bool skd_fail_all(struct request_queue *q)
 	}
 }
 
-static void skd_request_fn(struct request_queue *q)
+static void skd_process_request(struct request *req)
 {
+	struct request_queue *const q = req->q;
 	struct skd_device *skdev = q->queuedata;
-	struct skd_fitmsg_context *skmsg = NULL;
-	struct fit_msg_hdr *fmh = NULL;
-	struct skd_request_context *skreq;
-	struct request *req = NULL;
+	struct skd_fitmsg_context *skmsg;
+	struct fit_msg_hdr *fmh;
+	const u32 tag = blk_mq_unique_tag(req);
+	struct skd_request_context *const skreq = &skdev->skreq_table[tag];
 	struct skd_scsi_request *scsi_req;
 	unsigned long io_flags;
 	u32 lba;
 	u32 count;
 	int data_dir;
 	__be64 be_dmaa;
-	u64 cmdctxt;
 	u32 timo_slot;
 	int flush, fua;
-	u32 tag;
+
+	WARN_ONCE(tag >= skd_max_queue_depth, "%#x > %#x (nr_requests = %lu)\n",
+		  tag, skd_max_queue_depth, q->nr_requests);
+
+	SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
+
+	flush = fua = 0;
+
+	lba = (u32)blk_rq_pos(req);
+	count = blk_rq_sectors(req);
+	data_dir = rq_data_dir(req);
+	io_flags = req->cmd_flags;
+
+	if (req_op(req) == REQ_OP_FLUSH)
+		flush++;
+
+	if (io_flags & REQ_FUA)
+		fua++;
+
+	dev_dbg(&skdev->pdev->dev,
+		"new req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n", req, lba,
+		lba, count, count, data_dir);
+
+	skreq->id = tag + SKD_ID_RW_REQUEST;
+	skreq->flush_cmd = 0;
+	skreq->n_sg = 0;
+	skreq->sg_byte_count = 0;
+
+	skreq->req = req;
+	skreq->fitmsg_id = 0;
+
+	skreq->data_dir = data_dir == READ ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+	if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
+		dev_dbg(&skdev->pdev->dev, "error Out\n");
+		skd_end_request(skdev, skreq->req, BLK_STS_RESOURCE);
+		return;
+	}
+
+	/* Either a FIT msg is in progress or we have to start one. */
+	skmsg = skdev->skmsg;
+	if (!skmsg) {
+		skmsg = &skdev->skmsg_table[tag];
+		skdev->skmsg = skmsg;
+
+		/* Initialize the FIT msg header */
+		fmh = &skmsg->msg_buf->fmh;
+		memset(fmh, 0, sizeof(*fmh));
+		fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
+		skmsg->length = sizeof(*fmh);
+	} else {
+		fmh = &skmsg->msg_buf->fmh;
+	}
+
+	skreq->fitmsg_id = skmsg->id;
+
+	scsi_req = &skmsg->msg_buf->scsi[fmh->num_protocol_cmds_coalesced];
+	memset(scsi_req, 0, sizeof(*scsi_req));
+
+	be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
+
+	scsi_req->hdr.tag = skreq->id;
+	scsi_req->hdr.sg_list_dma_address = be_dmaa;
+
+	if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
+		skd_prep_zerosize_flush_cdb(scsi_req, skreq);
+		SKD_ASSERT(skreq->flush_cmd == 1);
+	} else {
+		skd_prep_rw_cdb(scsi_req, data_dir, lba, count);
+	}
+
+	if (fua)
+		scsi_req->cdb[1] |= SKD_FUA_NV;
+
+	scsi_req->hdr.sg_list_len_bytes = cpu_to_be32(skreq->sg_byte_count);
+
+	/* Complete resource allocations. */
+	skreq->state = SKD_REQ_STATE_BUSY;
+
+	skmsg->length += sizeof(struct skd_scsi_request);
+	fmh->num_protocol_cmds_coalesced++;
+
+	/*
+	 * Update the active request counts.
+	 * Capture the timeout timestamp.
+	 */
+	skreq->timeout_stamp = atomic_read(&skdev->timeout_stamp);
+	timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
+	atomic_inc(&skdev->timeout_slot[timo_slot]);
+	atomic_inc(&skdev->in_flight);
+	dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
+		atomic_read(&skdev->in_flight));
+
+	/*
+	 * If the FIT msg buffer is full send it.
+	 */
+	if (fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
+		skd_send_fitmsg(skdev, skmsg);
+		skdev->skmsg = NULL;
+	}
+}
+
+static void skd_request_fn(struct request_queue *q)
+{
+	struct skd_device *skdev = q->queuedata;
+	struct request *req;
 
 	if (skdev->state != SKD_DRVR_STATE_ONLINE) {
 		if (skd_fail_all(q))
@@ -533,30 +639,12 @@ static void skd_request_fn(struct request_queue *q)
 	 * - There are no more FIT msg buffers
 	 */
 	for (;; ) {
-
-		flush = fua = 0;
-
 		req = blk_peek_request(q);
 
 		/* Are there any native requests to start? */
 		if (req == NULL)
 			break;
 
-		lba = (u32)blk_rq_pos(req);
-		count = blk_rq_sectors(req);
-		data_dir = rq_data_dir(req);
-		io_flags = req->cmd_flags;
-
-		if (req_op(req) == REQ_OP_FLUSH)
-			flush++;
-
-		if (io_flags & REQ_FUA)
-			fua++;
-
-		dev_dbg(&skdev->pdev->dev,
-			"new req=%p lba=%u(0x%x) count=%u(0x%x) dir=%d\n",
-			req, lba, lba, count, count, data_dir);
-
 		/* At this point we know there is a request */
 
 		/* Are too many requets already in progress? */
@@ -576,103 +664,16 @@ static void skd_request_fn(struct request_queue *q)
 		 * available but is still at the head of the free list.
 		 */
 		WARN_ON_ONCE(blk_queue_start_tag(q, req));
-
-		tag = blk_mq_unique_tag(req);
-		WARN_ONCE(tag >= skd_max_queue_depth,
-			  "%#x > %#x (nr_requests = %lu)\n", tag,
-			  skd_max_queue_depth, q->nr_requests);
-
-		skreq = &skdev->skreq_table[tag];
-		SKD_ASSERT(skreq->state == SKD_REQ_STATE_IDLE);
-		SKD_ASSERT((skreq->id & SKD_ID_INCR) == 0);
-
-		skreq->id = tag + SKD_ID_RW_REQUEST;
-		skreq->flush_cmd = 0;
-		skreq->n_sg = 0;
-		skreq->sg_byte_count = 0;
-
-		skreq->req = req;
-		skreq->fitmsg_id = 0;
-
-		skreq->data_dir = data_dir == READ ? DMA_FROM_DEVICE :
-			DMA_TO_DEVICE;
-
-		if (req->bio && !skd_preop_sg_list(skdev, skreq)) {
-			dev_dbg(&skdev->pdev->dev, "error Out\n");
-			skd_end_request(skdev, skreq->req, BLK_STS_RESOURCE);
-			continue;
-		}
-
-		/* Either a FIT msg is in progress or we have to start one. */
-		if (skmsg == NULL) {
-			skmsg = &skdev->skmsg_table[tag];
-
-			/* Initialize the FIT msg header */
-			fmh = &skmsg->msg_buf->fmh;
-			memset(fmh, 0, sizeof(*fmh));
-			fmh->protocol_id = FIT_PROTOCOL_ID_SOFIT;
-			skmsg->length = sizeof(*fmh);
-		}
-
-		skreq->fitmsg_id = skmsg->id;
-
-		scsi_req =
-			&skmsg->msg_buf->scsi[fmh->num_protocol_cmds_coalesced];
-		memset(scsi_req, 0, sizeof(*scsi_req));
-
-		be_dmaa = cpu_to_be64(skreq->sksg_dma_address);
-		cmdctxt = skreq->id + SKD_ID_INCR;
-
-		scsi_req->hdr.tag = cmdctxt;
-		scsi_req->hdr.sg_list_dma_address = be_dmaa;
-
-		if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
-			skd_prep_zerosize_flush_cdb(scsi_req, skreq);
-			SKD_ASSERT(skreq->flush_cmd == 1);
-		} else {
-			skd_prep_rw_cdb(scsi_req, data_dir, lba, count);
-		}
-
-		if (fua)
-			scsi_req->cdb[1] |= SKD_FUA_NV;
-
-		scsi_req->hdr.sg_list_len_bytes =
-			cpu_to_be32(skreq->sg_byte_count);
-
-		/* Complete resource allocations. */
-		skreq->state = SKD_REQ_STATE_BUSY;
-		skreq->id += SKD_ID_INCR;
-
-		skmsg->length += sizeof(struct skd_scsi_request);
-		fmh->num_protocol_cmds_coalesced++;
-
-		/*
-		 * Update the active request counts.
-		 * Capture the timeout timestamp.
-		 */
-		skreq->timeout_stamp = atomic_read(&skdev->timeout_stamp);
-		timo_slot = skreq->timeout_stamp & SKD_TIMEOUT_SLOT_MASK;
-		atomic_inc(&skdev->timeout_slot[timo_slot]);
-		atomic_inc(&skdev->in_flight);
-		dev_dbg(&skdev->pdev->dev, "req=0x%x busy=%d\n", skreq->id,
-			atomic_read(&skdev->in_flight));
-
-		/*
-		 * If the FIT msg buffer is full send it.
-		 */
-		if (fmh->num_protocol_cmds_coalesced >= skd_max_req_per_msg) {
-			skd_send_fitmsg(skdev, skmsg);
-			skmsg = NULL;
-			fmh = NULL;
-		}
+		skd_process_request(req);
 	}
 
 	/* If the FIT msg buffer is not empty send what we got. */
-	if (skmsg) {
+	if (skdev->skmsg) {
+		struct fit_msg_hdr *fmh = &skdev->skmsg->msg_buf->fmh;
+
 		WARN_ON_ONCE(!fmh->num_protocol_cmds_coalesced);
-		skd_send_fitmsg(skdev, skmsg);
-		skmsg = NULL;
-		fmh = NULL;
+		skd_send_fitmsg(skdev, skdev->skmsg);
+		skdev->skmsg = NULL;
 	}
 
 	/*