From patchwork Thu May 18 09:29:34 2017
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 9733003
From: Linus Walleij
To: linux-mmc@vger.kernel.org, Ulf Hansson, Adrian Hunter
Cc: linux-block@vger.kernel.org, Jens Axboe, Christoph Hellwig,
    Arnd Bergmann, Bartlomiej Zolnierkiewicz, Paolo Valente,
    Linus Walleij
Subject: [PATCH 4/6 v2] mmc: block: move single ioctl() commands to block requests
Date: Thu, 18 May 2017 11:29:34 +0200
Message-Id: <20170518092936.9277-4-linus.walleij@linaro.org>
In-Reply-To: <20170518092936.9277-1-linus.walleij@linaro.org>
References: <20170518092936.9277-1-linus.walleij@linaro.org>
This wraps single ioctl() commands into block requests using the custom
block layer request types REQ_OP_DRV_IN and REQ_OP_DRV_OUT.

By doing this we loosen the grip on the big host lock, since two calls
to mmc_get_card()/mmc_put_card() are removed.

We store the ioctl() in/out argument as a pointer in the per-request
struct mmc_queue_req container. Since the block layer now allocates this
per-request data for us in blk_get_request(), we can immediately
dereference it and use it to pass the argument down with the request.

We refactor the if/else/if/else ladder in mmc_blk_issue_rq() as part of
the job, paying some extra attention to the case where a NULL req is
passed into this function, and making that pipeline flush more explicit.

Tested on the ux500 with the userspace command:

  mmc extcsd read /dev/mmcblk3

resulting in a successful EXTCSD info dump back to the console.

This commit fixes a starvation issue in the MMC/SD stack that can easily
be provoked by issuing the following commands in sequence:

  > dd if=/dev/mmcblk3 of=/dev/null bs=1M &
  > mmc extcsd read /dev/mmcblk3

Before this patch, the extcsd read command would hang (starve) while
waiting for the dd command to finish, since the block layer was holding
the card/host lock.

After this patch, the extcsd ioctl() command is nicely interspersed with
the rest of the block commands, and we can issue a bunch of ioctl()s from
userspace while busy block I/O is going on, without any problems.
Conversely, userspace ioctl()s can no longer starve the block layer by
holding the card/host lock.

Signed-off-by: Linus Walleij
---
ChangeLog v1->v2:
- Replace the if/else/if/else nest in mmc_blk_issue_rq() with a
  switch() clause, at Ulf's request.
- Update to the API change for req_to_mmc_queue_req().
---
 drivers/mmc/core/block.c | 111 ++++++++++++++++++++++++++++++++++++-----------
 drivers/mmc/core/queue.h |   3 ++
 2 files changed, 88 insertions(+), 26 deletions(-)

diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index f4dab1dfd2ab..9fb2bd529156 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -564,8 +564,10 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 {
 	struct mmc_blk_ioc_data *idata;
 	struct mmc_blk_data *md;
+	struct mmc_queue *mq;
 	struct mmc_card *card;
 	int err = 0, ioc_err = 0;
+	struct request *req;
 
 	/*
 	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
@@ -591,17 +593,18 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 		goto cmd_done;
 	}
 
-	mmc_get_card(card);
-
-	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
-
-	/* Always switch back to main area after RPMB access */
-	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
-		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
-
-	mmc_put_card(card);
-
+	/*
+	 * Dispatch the ioctl() into the block request queue.
+	 */
+	mq = &md->queue;
+	req = blk_get_request(mq->queue,
+		idata->ic.write_flag ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN,
+		__GFP_RECLAIM);
+	req_to_mmc_queue_req(req)->idata = idata;
+	blk_execute_rq(mq->queue, NULL, req, 0);
+	ioc_err = req_to_mmc_queue_req(req)->ioc_result;
 	err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata);
+	blk_put_request(req);
 
 cmd_done:
 	mmc_blk_put(md);
@@ -611,6 +614,31 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
 	return ioc_err ? ioc_err : err;
 }
 
+/*
+ * The ioctl commands come back from the block layer after they have been
+ * queued and processed together with all other requests, and then they
+ * get issued in this function.
+ */
+static void mmc_blk_ioctl_cmd_issue(struct mmc_queue *mq, struct request *req)
+{
+	struct mmc_queue_req *mq_rq;
+	struct mmc_blk_ioc_data *idata;
+	struct mmc_card *card = mq->card;
+	struct mmc_blk_data *md = mq->blkdata;
+	int ioc_err;
+
+	mq_rq = req_to_mmc_queue_req(req);
+	idata = mq_rq->idata;
+	ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
+	mq_rq->ioc_result = ioc_err;
+
+	/* Always switch back to main area after RPMB access */
+	if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
+		mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
+
+	blk_end_request_all(req, ioc_err);
+}
+
 static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
 				   struct mmc_ioc_multi_cmd __user *user)
 {
@@ -1854,23 +1882,54 @@ void mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 		goto out;
 	}
 
-	if (req && req_op(req) == REQ_OP_DISCARD) {
-		/* complete ongoing async transfer before issuing discard */
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		mmc_blk_issue_discard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_SECURE_ERASE) {
-		/* complete ongoing async transfer before issuing secure erase*/
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		mmc_blk_issue_secdiscard_rq(mq, req);
-	} else if (req && req_op(req) == REQ_OP_FLUSH) {
-		/* complete ongoing async transfer before issuing flush */
-		if (mq->qcnt)
-			mmc_blk_issue_rw_rq(mq, NULL);
-		mmc_blk_issue_flush(mq, req);
+	if (req) {
+		switch (req_op(req)) {
+		case REQ_OP_DRV_IN:
+		case REQ_OP_DRV_OUT:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * ioctl()s.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_ioctl_cmd_issue(mq, req);
+			break;
+		case REQ_OP_DISCARD:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * discard.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_issue_discard_rq(mq, req);
+			break;
+		case REQ_OP_SECURE_ERASE:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * secure erase.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_issue_secdiscard_rq(mq, req);
+			break;
+		case REQ_OP_FLUSH:
+			/*
+			 * Complete ongoing async transfer before issuing
+			 * flush.
+			 */
+			if (mq->qcnt)
+				mmc_blk_issue_rw_rq(mq, NULL);
+			mmc_blk_issue_flush(mq, req);
+			break;
+		default:
+			/* Normal request, just issue it */
+			mmc_blk_issue_rw_rq(mq, req);
+			card->host->context_info.is_waiting_last_req = false;
+			break;
+		}
 	} else {
-		mmc_blk_issue_rw_rq(mq, req);
+		/* No request, flushing the pipeline with NULL */
+		mmc_blk_issue_rw_rq(mq, NULL);
 		card->host->context_info.is_waiting_last_req = false;
 	}
 
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index dae31bc0c2d3..005ece9ac7cb 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -22,6 +22,7 @@ static inline bool mmc_req_is_special(struct request *req)
 
 struct task_struct;
 struct mmc_blk_data;
+struct mmc_blk_ioc_data;
 
 struct mmc_blk_request {
 	struct mmc_request	mrq;
@@ -40,6 +41,8 @@ struct mmc_queue_req {
 	struct scatterlist	*bounce_sg;
 	unsigned int		bounce_sg_len;
 	struct mmc_async_req	areq;
+	int			ioc_result;
+	struct mmc_blk_ioc_data	*idata;
 };
 
 struct mmc_queue {
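
[Editor's note] For context, the userspace side of the path that this patch
reroutes is the MMC_IOC_CMD ioctl() from <linux/mmc/ioctl.h>: "mmc extcsd
read" boils down to one such call, which mmc_blk_ioctl_cmd() above now
dispatches as a REQ_OP_DRV_IN block request. Below is a minimal,
illustrative sketch of such a caller — it is not part of this patch, and
the MMC_SEND_EXT_CSD opcode and response-flag values are mirrored as local
defines because they are kernel-internal rather than exported uapi.

/*
 * Illustrative only: read the 512-byte EXT_CSD register via MMC_IOC_CMD,
 * roughly what "mmc extcsd read /dev/mmcblk3" does under the hood.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/mmc/ioctl.h>

#define MMC_SEND_EXT_CSD	8	/* CMD8 on eMMC (SEND_EXT_CSD) */
/* Response/command-type flags mirroring include/linux/mmc/core.h */
#define MMC_RSP_PRESENT		(1 << 0)
#define MMC_RSP_CRC		(1 << 2)
#define MMC_RSP_OPCODE		(1 << 4)
#define MMC_RSP_R1		(MMC_RSP_PRESENT | MMC_RSP_CRC | MMC_RSP_OPCODE)
#define MMC_CMD_ADTC		(1 << 5)	/* data transfer command */

int main(int argc, char **argv)
{
	struct mmc_ioc_cmd idata;
	__u8 ext_csd[512];
	int fd, ret;

	fd = open(argc > 1 ? argv[1] : "/dev/mmcblk3", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&idata, 0, sizeof(idata));
	memset(ext_csd, 0, sizeof(ext_csd));
	idata.write_flag = 0;		/* a read: dispatched as REQ_OP_DRV_IN */
	idata.opcode = MMC_SEND_EXT_CSD;
	idata.arg = 0;
	idata.flags = MMC_RSP_R1 | MMC_CMD_ADTC;
	idata.blksz = 512;
	idata.blocks = 1;
	mmc_ioc_cmd_set_data(idata, ext_csd);	/* helper macro from the uapi header */

	ret = ioctl(fd, MMC_IOC_CMD, &idata);
	if (ret < 0)
		perror("MMC_IOC_CMD");
	else
		printf("EXT_CSD revision: %u\n", ext_csd[192]);

	close(fd);
	return ret < 0 ? 1 : 0;
}

Run it as root against the whole device node (the ioctl handler requires
CAP_SYS_RAWIO). With this patch applied, the command is serialized against
ongoing block I/O by the block layer itself rather than by the card/host
lock, which is why the dd + extcsd sequence above no longer starves.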