From patchwork Thu Feb 9 15:34:00 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 9564949
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter, Paolo Valente
Cc: Chunyan Zhang, Baolin Wang, linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig, Arnd Bergmann, Linus Walleij
Subject: [PATCH 13/16] mmc: queue: issue struct mmc_queue_req items
Date: Thu, 9 Feb 2017 16:34:00 +0100
Message-Id: <20170209153403.9730-14-linus.walleij@linaro.org>
In-Reply-To: <20170209153403.9730-1-linus.walleij@linaro.org>
References: <20170209153403.9730-1-linus.walleij@linaro.org>
X-Mailing-List: linux-block@vger.kernel.org

Instead of passing two pointers around and reassigning them back and forth, issue a struct mmc_queue_req and dereference the queue from the request where needed. The struct mmc_queue_req is the thing that has a lifecycle after all: this is what we are keeping in our queue.
Augment all users to be passed the struct mmc_queue_req as well.

Signed-off-by: Linus Walleij
---
 drivers/mmc/core/block.c | 88 ++++++++++++++++++++++++------------------------
 drivers/mmc/core/block.h |  5 ++-
 drivers/mmc/core/queue.c |  6 ++--
 3 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 4952a105780e..628a22b9bf41 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -1151,9 +1151,9 @@ int mmc_access_rpmb(struct mmc_queue *mq)
 	return false;
 }
 
-static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_discard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_DISCARD;
@@ -1163,8 +1163,8 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 		goto fail;
 	}
 
-	from = blk_rq_pos(req);
-	nr = blk_rq_sectors(req);
+	from = blk_rq_pos(mq_rq->req);
+	nr = blk_rq_sectors(mq_rq->req);
 
 	if (mmc_can_discard(card))
 		arg = MMC_DISCARD_ARG;
@@ -1188,13 +1188,12 @@ static void mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req)
 	if (!err)
 		mmc_blk_reset_success(md, type);
 fail:
-	blk_end_request(req, err, blk_rq_bytes(req));
+	blk_end_request(mq_rq->req, err, blk_rq_bytes(mq_rq->req));
 }
 
-static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
-					struct request *req)
+static void mmc_blk_issue_secdiscard_rq(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	unsigned int from, nr, arg;
 	int err = 0, type = MMC_BLK_SECDISCARD;
@@ -1204,8 +1203,8 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
 		goto out;
 	}
 
-	from = blk_rq_pos(req);
-	nr = blk_rq_sectors(req);
+	from = blk_rq_pos(mq_rq->req);
+	nr = blk_rq_sectors(mq_rq->req);
 
 	if (mmc_can_trim(card) && !mmc_erase_group_aligned(card, from, nr))
 		arg = MMC_SECURE_TRIM1_ARG;
@@ -1253,12 +1252,12 @@ static void mmc_blk_issue_secdiscard_rq(struct mmc_queue *mq,
 	if (!err)
 		mmc_blk_reset_success(md, type);
 out:
-	blk_end_request(req, err, blk_rq_bytes(req));
+	blk_end_request(mq_rq->req, err, blk_rq_bytes(mq_rq->req));
 }
 
-static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
+static void mmc_blk_issue_flush(struct mmc_queue_req *mq_rq)
 {
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 	int ret = 0;
 
@@ -1266,7 +1265,7 @@ static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
 	if (ret)
 		ret = -EIO;
 
-	blk_end_request_all(req, ret);
+	blk_end_request_all(mq_rq->req, ret);
 }
 
 /*
@@ -1614,11 +1613,13 @@ static void mmc_blk_rw_cmd_abort(struct mmc_card *card, struct request *req)
  * @mq: the queue with the card and host to restart
  * @req: a new request that want to be started after the current one
  */
-static void mmc_blk_rw_try_restart(struct mmc_queue *mq)
+static void mmc_blk_rw_try_restart(struct mmc_queue_req *mq_rq)
 {
+	struct mmc_queue *mq = mq_rq->mq;
+
 	/* Proceed and try to restart the current async request */
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, mq->card, 0, mq);
-	mmc_restart_areq(mq->card->host, &mq->mqrq_cur->areq);
+	mmc_blk_rw_rq_prep(mq_rq, mq->card, 0, mq);
+	mmc_restart_areq(mq->card->host, &mq_rq->areq);
 }
 
 void mmc_blk_rw_done(struct mmc_async_req *areq,
@@ -1676,11 +1677,11 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		req_pending = mmc_blk_rw_cmd_err(md, card, brq, old_req, req_pending);
 		if (mmc_blk_reset(md, host, type)) {
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
@@ -1693,7 +1694,7 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		if (!mmc_blk_reset(md, host, type))
 			break;
 		mmc_blk_rw_cmd_abort(card, old_req);
-		mmc_blk_rw_try_restart(mq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	case MMC_BLK_DATA_ERR: {
 		int err;
@@ -1702,7 +1703,7 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 			break;
 		if (err == -ENODEV) {
 			mmc_blk_rw_cmd_abort(card, old_req);
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		/* Fall through */
@@ -1723,19 +1724,19 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		req_pending = blk_end_request(old_req, -EIO,
 					      brq->data.blksz);
 		if (!req_pending) {
-			mmc_blk_rw_try_restart(mq);
+			mmc_blk_rw_try_restart(mq_rq);
 			return;
 		}
 		break;
 	case MMC_BLK_NOMEDIUM:
 		mmc_blk_rw_cmd_abort(card, old_req);
-		mmc_blk_rw_try_restart(mq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	default:
 		pr_err("%s: Unhandled return value (%d)",
 		       old_req->rq_disk->disk_name, status);
 		mmc_blk_rw_cmd_abort(card, old_req);
-		mmc_blk_rw_try_restart(mq);
+		mmc_blk_rw_try_restart(mq_rq);
 		return;
 	}
 
@@ -1747,15 +1748,16 @@ void mmc_blk_rw_done(struct mmc_async_req *areq,
 		mmc_blk_rw_rq_prep(mq_rq, card, disable_multi, mq);
 		mq_rq->brq.retune_retry_done = retune_retry_done;
-		mmc_restart_areq(host, &mq->mqrq_cur->areq);
+		mmc_restart_areq(host, &mq_rq->areq);
 	}
 }
 
-static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
+static void mmc_blk_issue_rw_rq(struct mmc_queue_req *mq_rq)
 {
+	struct mmc_queue *mq = mq_rq->mq;
 	struct mmc_card *card = mq->card;
 
-	if (!new_req) {
+	if (!mq_rq->req) {
 		pr_err("%s: NULL request!\n", __func__);
 		return;
 	}
@@ -1765,54 +1767,52 @@ static void mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *new_req)
 	 * multiple read or write is allowed
 	 */
 	if (mmc_large_sector(card) &&
-	    !IS_ALIGNED(blk_rq_sectors(new_req), 8)) {
+	    !IS_ALIGNED(blk_rq_sectors(mq_rq->req), 8)) {
 		pr_err("%s: Transfer size is not 4KB sector size aligned\n",
-		       new_req->rq_disk->disk_name);
-		mmc_blk_rw_cmd_abort(card, new_req);
+		       mq_rq->req->rq_disk->disk_name);
+		mmc_blk_rw_cmd_abort(card, mq_rq->req);
 		return;
 	}
 
-	mmc_blk_rw_rq_prep(mq->mqrq_cur, card, 0, mq);
-	mmc_start_areq(card->host, &mq->mqrq_cur->areq);
+	mmc_blk_rw_rq_prep(mq_rq, card, 0, mq);
+	mmc_start_areq(card->host, &mq_rq->areq);
 }
 
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq)
 {
 	int ret;
-	struct mmc_blk_data *md = mq->blkdata;
+	struct mmc_blk_data *md = mq_rq->mq->blkdata;
 	struct mmc_card *card = md->queue.card;
 
 	ret = mmc_blk_part_switch(card, md);
 	if (ret) {
-		if (req) {
-			blk_end_request_all(req, -EIO);
-		}
+		blk_end_request_all(mq_rq->req, -EIO);
 		return;
 	}
 
-	if (req && req_op(req) == REQ_OP_DISCARD) {
+	if (req_op(mq_rq->req) == REQ_OP_DISCARD) {
 		/* complete ongoing async transfer before issuing discard */
 		if (card->host->areq) {
 			wait_for_completion(&card->host->areq->complete);
 			card->host->areq = NULL;
 		}
-		mmc_blk_issue_discard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
+		mmc_blk_issue_discard_rq(mq_rq);
+	} else if (req_op(mq_rq->req) == REQ_OP_SECURE_ERASE) {
 		/* complete ongoing async transfer before issuing secure erase*/
 		if (card->host->areq) {
 			wait_for_completion(&card->host->areq->complete);
 			card->host->areq = NULL;
 		}
-		mmc_blk_issue_secdiscard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_FLUSH) {
+		mmc_blk_issue_secdiscard_rq(mq_rq);
+	} else if (req_op(mq_rq->req) == REQ_OP_FLUSH) {
 		/* complete ongoing async transfer before issuing flush */
 		if (card->host->areq) {
 			wait_for_completion(&card->host->areq->complete);
 			card->host->areq = NULL;
 		}
-		mmc_blk_issue_flush(mq, req);
+		mmc_blk_issue_flush(mq_rq);
 	} else {
-		mmc_blk_issue_rw_rq(mq, req);
+		mmc_blk_issue_rw_rq(mq_rq);
 	}
 }
 
diff --git a/drivers/mmc/core/block.h b/drivers/mmc/core/block.h
index b4b489911599..0326fa5d8217 100644
--- a/drivers/mmc/core/block.h
+++ b/drivers/mmc/core/block.h
@@ -3,10 +3,9 @@
 
 struct mmc_async_req;
 enum mmc_blk_status;
-struct mmc_queue;
-struct request;
+struct mmc_queue_req;
 
 void mmc_blk_rw_done(struct mmc_async_req *areq,
 		     enum mmc_blk_status status);
-void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req);
+void mmc_blk_issue_rq(struct mmc_queue_req *mq_rq);
 
 #endif
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index c9f28de7b0f4..c4e1ced55796 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -54,6 +54,7 @@ static int mmc_queue_thread(void *d)
 	struct mmc_queue *mq = d;
 	struct request_queue *q = mq->queue;
 	bool claimed_host = false;
+	struct mmc_queue_req *mq_rq;
 
 	current->flags |= PF_MEMALLOC;
 
@@ -65,7 +66,8 @@ static int mmc_queue_thread(void *d)
 		set_current_state(TASK_INTERRUPTIBLE);
 		req = blk_fetch_request(q);
 		mq->asleep = false;
-		mq->mqrq_cur->req = req;
+		mq_rq = mq->mqrq_cur;
+		mq_rq->req = req;
 		spin_unlock_irq(q->queue_lock);
 
 		if (req) {
@@ -74,7 +76,7 @@ static int mmc_queue_thread(void *d)
 			if (!claimed_host)
 				mmc_get_card(mq->card);
 			set_current_state(TASK_RUNNING);
-			mmc_blk_issue_rq(mq, req);
+			mmc_blk_issue_rq(mq_rq);
 			cond_resched();
 			/*
 			 * Current request becomes previous request