From patchwork Fri May 18 17:18:47 2018
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 10411673
From: Christoph Hellwig
To: linux-mmc@vger.kernel.org
Subject: [PATCH 7/7] mmc: stop using block layer bounce buffers
Date: Fri, 18 May 2018 19:18:47 +0200
Message-Id: <20180518171847.16419-8-hch@lst.de>
In-Reply-To: <20180518171847.16419-1-hch@lst.de>
References: <20180518171847.16419-1-hch@lst.de>

If a driver uses the DMA API (as indicated by a device with a DMA mask),
we can rely on the DMA mapping API to do any required bounce buffering.
All drivers using bounce buffering or PIO now either use the proper
highmem-aware accessors or depend on !HIGHMEM.
Signed-off-by: Christoph Hellwig
---
 drivers/mmc/core/queue.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 56e9a803db21..a18541930c01 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -351,17 +351,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
 static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 {
 	struct mmc_host *host = card->host;
-	u64 limit = BLK_BOUNCE_HIGH;
-
-	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
-		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
 	if (mmc_can_erase(card))
 		mmc_queue_setup_discard(mq->queue, card);
-	blk_queue_bounce_limit(mq->queue, limit);
 	blk_queue_max_hw_sectors(mq->queue,
 		min(host->max_blk_count, host->max_req_size / 512));
 	blk_queue_max_segments(mq->queue, host->max_segs);