From patchwork Thu Aug 31 11:56:53 2017
X-Patchwork-Submitter: Adrian Hunter
X-Patchwork-Id: 9932039
From: Adrian Hunter <adrian.hunter@intel.com>
To: Ulf Hansson
Cc: linux-mmc, linux-block, Bough Chen, Alex Lemberg, Mateusz Nowak,
    Yuliy Izrailov, Jaehoon Chung, Dong Aisheng, Das Asutosh,
    Zhangfei Gao, Sahitya Tummala, Harjani Ritesh, Venu Byravarasu,
    Linus Walleij, Shawn Lin
Subject: [PATCH V7 07/10] mmc: block: Factor out mmc_setup_queue()
Date: Thu, 31 Aug 2017 14:56:53 +0300
Message-Id: <1504180616-14514-8-git-send-email-adrian.hunter@intel.com>
In-Reply-To: <1504180616-14514-1-git-send-email-adrian.hunter@intel.com>
References: <1504180616-14514-1-git-send-email-adrian.hunter@intel.com>

Factor out some common code that will also be used with blk-mq.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 drivers/mmc/core/queue.c | 53 ++++++++++++++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 22 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index affa7370ba82..14e9de9c783c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -223,6 +223,36 @@ static void mmc_exit_request(struct request_queue *q, struct request *req)
 	mq_rq->sg = NULL;
 }
 
+static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
+{
+	struct mmc_host *host = card->host;
+	u64 limit = BLK_BOUNCE_HIGH;
+
+	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
+		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
+
+	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
+	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
+	if (mmc_can_erase(card))
+		mmc_queue_setup_discard(mq->queue, card);
+
+	card->bouncesz = mmc_queue_calc_bouncesz(host);
+	if (card->bouncesz) {
+		blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
+		blk_queue_max_segments(mq->queue, card->bouncesz / 512);
+		blk_queue_max_segment_size(mq->queue, card->bouncesz);
+	} else {
+		blk_queue_bounce_limit(mq->queue, limit);
+		blk_queue_max_hw_sectors(mq->queue,
+			min(host->max_blk_count, host->max_req_size / 512));
+		blk_queue_max_segments(mq->queue, host->max_segs);
+		blk_queue_max_segment_size(mq->queue, host->max_seg_size);
+	}
+
+	/* Initialize thread_sem even if it is not used */
+	sema_init(&mq->thread_sem, 1);
+}
+
 /**
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
@@ -236,12 +266,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 		   spinlock_t *lock, const char *subname)
 {
 	struct mmc_host *host = card->host;
-	u64 limit = BLK_BOUNCE_HIGH;
 	int ret = -ENOMEM;
 
-	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
-		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
-
 	mq->card = card;
 	mq->queue = blk_alloc_queue(GFP_KERNEL);
 	if (!mq->queue)
@@ -260,25 +286,8 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
 	}
 
 	blk_queue_prep_rq(mq->queue, mmc_prep_request);
-	queue_flag_set_unlocked(QUEUE_FLAG_NONROT, mq->queue);
-	queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, mq->queue);
-	if (mmc_can_erase(card))
-		mmc_queue_setup_discard(mq->queue, card);
-
-	card->bouncesz = mmc_queue_calc_bouncesz(host);
-	if (card->bouncesz) {
-		blk_queue_max_hw_sectors(mq->queue, card->bouncesz / 512);
-		blk_queue_max_segments(mq->queue, card->bouncesz / 512);
-		blk_queue_max_segment_size(mq->queue, card->bouncesz);
-	} else {
-		blk_queue_bounce_limit(mq->queue, limit);
-		blk_queue_max_hw_sectors(mq->queue,
-			min(host->max_blk_count, host->max_req_size / 512));
-		blk_queue_max_segments(mq->queue, host->max_segs);
-		blk_queue_max_segment_size(mq->queue, host->max_seg_size);
-	}
-
-	sema_init(&mq->thread_sem, 1);
+	mmc_setup_queue(mq, card);
 
 	mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s",
 				 host->index, subname ? subname : "");
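
For reference, the one non-obvious calculation the helper carries over is the
bounce limit: dma_max_pfn() reports the highest page frame number the device's
DMA mask can reach, and shifting that left by PAGE_SHIFT converts it back into
a byte address for blk_queue_bounce_limit(). A minimal standalone sketch of
the arithmetic (not kernel code; the 32-bit DMA mask and the 4 KiB page size
are assumed values for illustration):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12				/* assumed 4 KiB pages */

int main(void)
{
	uint64_t dma_mask = 0xffffffffULL;	/* assumed 32-bit-capable device */
	uint64_t max_pfn = dma_mask >> PAGE_SHIFT;	/* roughly what dma_max_pfn() yields */
	uint64_t limit = max_pfn << PAGE_SHIFT;	/* byte limit handed to the queue */

	/* Prints 0xfffff000: pages up to just under 4 GiB are DMA-able */
	printf("bounce limit: 0x%llx\n", (unsigned long long)limit);
	return 0;
}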
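
The payoff of the factoring is that mmc_setup_queue() depends only on the mq
and card pointers, not on how mq->queue was allocated, so a blk-mq path can
share it. A hypothetical sketch of such reuse (mmc_mq_init_queue() and the
tag_set field are placeholders, not part of this series; the tag set is
assumed to be initialized elsewhere):

static int mmc_mq_init_queue(struct mmc_queue *mq, struct mmc_card *card)
{
	mq->card = card;

	/* blk-mq allocates the queue from a tag set ... */
	mq->queue = blk_mq_init_queue(&mq->tag_set);
	if (IS_ERR(mq->queue))
		return PTR_ERR(mq->queue);
	mq->queue->queuedata = mq;

	/* ... and the shared limits/flags setup applies unchanged */
	mmc_setup_queue(mq, card);
	return 0;
}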