From patchwork Wed Aug 28 12:35:40 2019
X-Patchwork-Submitter: Yoshihiro Shimoda
X-Patchwork-Id: 11118899
From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
To: ulf.hansson@linaro.org, hch@lst.de, m.szyprowski@samsung.com,
    robin.murphy@arm.com, joro@8bytes.org, axboe@kernel.dk
Cc: wsa+renesas@sang-engineering.com, linux-mmc@vger.kernel.org,
    iommu@lists.linux-foundation.org, linux-block@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, Yoshihiro Shimoda
Subject: [PATCH v10 1/4] dma: Introduce dma_get_merge_boundary()
Date: Wed, 28 Aug 2019 21:35:40 +0900
Message-Id: <1566995743-5614-2-git-send-email-yoshihiro.shimoda.uh@renesas.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>
References: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>

This patch adds a new DMA API, dma_get_merge_boundary(). The function
returns the DMA merge boundary if the DMA layer can merge the segments.
This patch also adds the implementation for a new dma_map_ops pointer.

Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Christoph Hellwig
Reviewed-by: Simon Horman
---
 Documentation/DMA-API.txt   |  8 ++++++++
 include/linux/dma-mapping.h |  6 ++++++
 kernel/dma/mapping.c        | 11 +++++++++++
 3 files changed, 25 insertions(+)

diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index e47c63b..9c4dd3d 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -204,6 +204,14 @@ Returns the maximum size of a mapping for the device. The size parameter
 of the mapping functions like dma_map_single(), dma_map_page() and others
 should not be larger than the returned value.
 
+::
+
+	unsigned long
+	dma_get_merge_boundary(struct device *dev);
+
+Returns the DMA merge boundary. If the device cannot merge any DMA address
+segments, the function returns 0.
+
 Part Id - Streaming DMA mappings
 --------------------------------
 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 14702e2..7072b78 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -131,6 +131,7 @@ struct dma_map_ops {
 	int (*dma_supported)(struct device *dev, u64 mask);
 	u64 (*get_required_mask)(struct device *dev);
 	size_t (*max_mapping_size)(struct device *dev);
+	unsigned long (*get_merge_boundary)(struct device *dev);
 };
 
 #define DMA_MAPPING_ERROR	(~(dma_addr_t)0)
@@ -462,6 +463,7 @@ int dma_set_mask(struct device *dev, u64 mask);
 int dma_set_coherent_mask(struct device *dev, u64 mask);
 u64 dma_get_required_mask(struct device *dev);
 size_t dma_max_mapping_size(struct device *dev);
+unsigned long dma_get_merge_boundary(struct device *dev);
 #else /* CONFIG_HAS_DMA */
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
 		struct page *page, size_t offset, size_t size,
@@ -567,6 +569,10 @@ static inline size_t dma_max_mapping_size(struct device *dev)
 {
 	return 0;
 }
+static inline unsigned long dma_get_merge_boundary(struct device *dev)
+{
+	return 0;
+}
 #endif /* CONFIG_HAS_DMA */
 
 static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index b0038ca..b3077b5 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -405,3 +405,14 @@ size_t dma_max_mapping_size(struct device *dev)
 	return size;
 }
 EXPORT_SYMBOL_GPL(dma_max_mapping_size);
+
+unsigned long dma_get_merge_boundary(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	if (!ops || !ops->get_merge_boundary)
+		return 0;	/* can't merge */
+
+	return ops->get_merge_boundary(dev);
+}
+EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
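A usage sketch for the new API, assuming a driver that already holds a
valid struct device *dev (the log messages are illustrative, not taken
from the patch):

	unsigned long boundary = dma_get_merge_boundary(dev);

	if (boundary)
		/* e.g. 0xfff when the smallest IOMMU page size is 4 KiB */
		dev_info(dev, "DMA map can merge segments up to boundary %#lx\n",
			 boundary);
	else
		dev_info(dev, "DMA map cannot merge segments\n");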
From patchwork Wed Aug 28 12:35:41 2019
X-Patchwork-Submitter: Yoshihiro Shimoda
X-Patchwork-Id: 11118905
From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
To: ulf.hansson@linaro.org, hch@lst.de, m.szyprowski@samsung.com,
    robin.murphy@arm.com, joro@8bytes.org, axboe@kernel.dk
Cc: wsa+renesas@sang-engineering.com, linux-mmc@vger.kernel.org,
    iommu@lists.linux-foundation.org, linux-block@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, Yoshihiro Shimoda
Subject: [PATCH v10 2/4] iommu/dma: Add a new dma_map_ops of get_merge_boundary()
Date: Wed, 28 Aug 2019 21:35:41 +0900
Message-Id: <1566995743-5614-3-git-send-email-yoshihiro.shimoda.uh@renesas.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>
References: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>

This patch adds a new dma_map_ops callback, get_merge_boundary(), to
expose the DMA merge boundary when the domain type is IOMMU_DOMAIN_DMA.

Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Simon Horman
Acked-by: Joerg Roedel
---
 drivers/iommu/dma-iommu.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index de68b4a..ad861bd 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1091,6 +1091,13 @@ static int iommu_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
 	return ret;
 }
 
+static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+
+	return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
+}
+
 static const struct dma_map_ops iommu_dma_ops = {
 	.alloc			= iommu_dma_alloc,
 	.free			= iommu_dma_free,
@@ -1106,6 +1113,7 @@ static const struct dma_map_ops iommu_dma_ops = {
 	.sync_sg_for_device	= iommu_dma_sync_sg_for_device,
 	.map_resource		= iommu_dma_map_resource,
 	.unmap_resource		= iommu_dma_unmap_resource,
+	.get_merge_boundary	= iommu_dma_get_merge_boundary,
 };
 
 /*
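A worked example of the boundary computation above, assuming a domain
whose pgsize_bitmap advertises 4 KiB, 2 MiB and 1 GiB pages (the SZ_*
constants come from <linux/sizes.h>; the values are illustrative):

	/* pgsize_bitmap has one bit set per supported IOMMU page size */
	unsigned long pgsize_bitmap = SZ_4K | SZ_2M | SZ_1G;

	/* __ffs() finds the smallest page size: log2(SZ_4K) == 12 */
	unsigned long boundary = (1UL << __ffs(pgsize_bitmap)) - 1;

	/* boundary == 0xfff: adjacent segments merge within 4 KiB granules */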
From patchwork Wed Aug 28 12:35:42 2019
X-Patchwork-Submitter: Yoshihiro Shimoda
X-Patchwork-Id: 11118917
From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
To: ulf.hansson@linaro.org, hch@lst.de, m.szyprowski@samsung.com,
    robin.murphy@arm.com, joro@8bytes.org, axboe@kernel.dk
Cc: wsa+renesas@sang-engineering.com, linux-mmc@vger.kernel.org,
    iommu@lists.linux-foundation.org, linux-block@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, Yoshihiro Shimoda
Subject: [PATCH v10 3/4] block: add a helper function to merge the segments
Date: Wed, 28 Aug 2019 21:35:42 +0900
Message-Id: <1566995743-5614-4-git-send-email-yoshihiro.shimoda.uh@renesas.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>
References: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>

This patch adds a helper function that tells whether the segments of a
queue can be merged by the DMA map layer (e.g. via an IOMMU).

Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Christoph Hellwig
Reviewed-by: Simon Horman
---
 block/blk-settings.c   | 23 +++++++++++++++++++++++
 include/linux/blkdev.h |  2 ++
 2 files changed, 25 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 2c18312..c3632fc 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -12,6 +12,7 @@
 #include <linux/lcm.h>
 #include <linux/jiffies.h>
 #include <linux/gfp.h>
+#include <linux/dma-mapping.h>
 
 #include "blk.h"
 #include "blk-wbt.h"
@@ -832,6 +833,28 @@ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
 }
 EXPORT_SYMBOL_GPL(blk_queue_write_cache);
 
+/**
+ * blk_queue_can_use_dma_map_merging - configure queue for merging segments
+ * @q:		the request queue for the device
+ * @dev:	the device pointer for dma
+ *
+ * Tell the block layer that the DMA map layer can merge the segments of @q.
+ */
+bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
+				       struct device *dev)
+{
+	unsigned long boundary = dma_get_merge_boundary(dev);
+
+	if (!boundary)
+		return false;
+
+	/* No need to update max_segment_size; see blk_queue_virt_boundary() */
+	blk_queue_virt_boundary(q, boundary);
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
+
 static int __init blk_settings_init(void)
 {
 	blk_max_low_pfn = max_low_pfn - 1;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1ac7901..d62d6e2 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1086,6 +1086,8 @@ extern void blk_queue_dma_alignment(struct request_queue *, int);
 extern void blk_queue_update_dma_alignment(struct request_queue *, int);
 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
 extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
+extern bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
+					      struct device *dev);
 
 /*
  * Number of physical segments as sent to the device.
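A sketch of how a block driver could consume this helper, assuming a
request queue q, its DMA device dev, and a hardware segment limit
hw_max_segs (all hypothetical names, not from the patch):

	if (blk_queue_can_use_dma_map_merging(q, dev))
		/* the DMA map layer merges, so exceed the HW segment limit */
		blk_queue_max_segments(q, 512);
	else
		blk_queue_max_segments(q, hw_max_segs);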
From patchwork Wed Aug 28 12:35:43 2019
X-Patchwork-Submitter: Yoshihiro Shimoda
X-Patchwork-Id: 11118909
From: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
To: ulf.hansson@linaro.org, hch@lst.de, m.szyprowski@samsung.com,
    robin.murphy@arm.com, joro@8bytes.org, axboe@kernel.dk
Cc: wsa+renesas@sang-engineering.com, linux-mmc@vger.kernel.org,
    iommu@lists.linux-foundation.org, linux-block@vger.kernel.org,
    linux-renesas-soc@vger.kernel.org, Yoshihiro Shimoda
Subject: [PATCH v10 4/4] mmc: queue: Use bigger segments if DMA MAP layer can merge the segments
Date: Wed, 28 Aug 2019 21:35:43 +0900
Message-Id: <1566995743-5614-5-git-send-email-yoshihiro.shimoda.uh@renesas.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>
References: <1566995743-5614-1-git-send-email-yoshihiro.shimoda.uh@renesas.com>

When the max_segs of an mmc host is smaller than 512, the mmc
subsystem tries to use 512 segments if the DMA map layer can merge
the segments, and then exposes this capability to the block layer by
using blk_queue_can_use_dma_map_merging().

Signed-off-by: Yoshihiro Shimoda
Reviewed-by: Christoph Hellwig
Reviewed-by: Ulf Hansson
Reviewed-by: Simon Horman
---
 drivers/mmc/core/queue.c | 35 ++++++++++++++++++++++++++++++++---
 include/linux/mmc/host.h |  1 +
 2 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 7102e2e..1e29b30 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -21,6 +21,8 @@
 #include "card.h"
 #include "host.h"
 
+#define MMC_DMA_MAP_MERGE_SEGMENTS	512
+
 static inline bool mmc_cqe_dcmd_busy(struct mmc_queue *mq)
 {
 	/* Allow only 1 DCMD at a time */
@@ -193,6 +195,12 @@ static void mmc_queue_setup_discard(struct request_queue *q,
 		blk_queue_flag_set(QUEUE_FLAG_SECERASE, q);
 }
 
+static unsigned int mmc_get_max_segments(struct mmc_host *host)
+{
+	return host->can_dma_map_merge ? MMC_DMA_MAP_MERGE_SEGMENTS :
+					 host->max_segs;
+}
+
 /**
  * mmc_init_request() - initialize the MMC-specific per-request data
  * @q: the request queue
@@ -206,7 +214,7 @@ static int __mmc_init_request(struct mmc_queue *mq, struct request *req,
 	struct mmc_card *card = mq->card;
 	struct mmc_host *host = card->host;
 
-	mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
+	mq_rq->sg = mmc_alloc_sg(mmc_get_max_segments(host), gfp);
 	if (!mq_rq->sg)
 		return -ENOMEM;
 
@@ -362,13 +370,23 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 		blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
 	blk_queue_max_hw_sectors(mq->queue,
 		min(host->max_blk_count, host->max_req_size / 512));
-	blk_queue_max_segments(mq->queue, host->max_segs);
+	if (host->can_dma_map_merge)
+		WARN(!blk_queue_can_use_dma_map_merging(mq->queue,
+							mmc_dev(host)),
+		     "merging was advertised but not possible");
+	blk_queue_max_segments(mq->queue, mmc_get_max_segments(host));
 
 	if (mmc_card_mmc(card))
 		block_size = card->ext_csd.data_sector_size;
 
 	blk_queue_logical_block_size(mq->queue, block_size);
-	blk_queue_max_segment_size(mq->queue,
+	/*
+	 * If blk_queue_can_use_dma_map_merging() succeeded, it has called
+	 * blk_queue_virt_boundary() already, so the mmc layer should not
+	 * also call blk_queue_max_segment_size().
+	 */
+	if (!host->can_dma_map_merge)
+		blk_queue_max_segment_size(mq->queue,
 			round_down(host->max_seg_size, block_size));
 
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
@@ -418,6 +436,17 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
 	mq->tag_set.cmd_size = sizeof(struct mmc_queue_req);
 	mq->tag_set.driver_data = mq;
 
+	/*
+	 * Since blk_mq_alloc_tag_set() calls .init_request() of mmc_mq_ops,
+	 * host->can_dma_map_merge must be set before mmc_get_max_segments()
+	 * is called to get max_segs.
+	 */
+	if (host->max_segs < MMC_DMA_MAP_MERGE_SEGMENTS &&
+	    dma_get_merge_boundary(mmc_dev(host)))
+		host->can_dma_map_merge = 1;
+	else
+		host->can_dma_map_merge = 0;
+
 	ret = blk_mq_alloc_tag_set(&mq->tag_set);
 	if (ret)
 		return ret;
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 4a351cb..c5662b3 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -396,6 +396,7 @@ struct mmc_host {
 	unsigned int		retune_paused:1; /* re-tuning is temporarily disabled */
 	unsigned int		use_blk_mq:1;	/* use blk-mq */
 	unsigned int		retune_crc_disable:1; /* don't trigger retune upon crc */
+	unsigned int		can_dma_map_merge:1; /* merging can be used */
 
 	int			rescan_disable;	/* disable card detection */
 	int			rescan_entered;	/* used with nonremovable devices */
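Read together, the series reduces to roughly the following flow for an
MMC host behind an IOMMU (an illustrative condensation, not literal
code from the patches):

	/* patches 1+2: the IOMMU backend reports its merge boundary */
	unsigned long boundary = dma_get_merge_boundary(mmc_dev(host));

	/* patch 4: opt in only when the host has few segments and can merge */
	host->can_dma_map_merge =
		(host->max_segs < MMC_DMA_MAP_MERGE_SEGMENTS) && boundary;

	/* patch 3: set the queue's virt boundary so the segments get merged */
	if (host->can_dma_map_merge)
		blk_queue_can_use_dma_map_merging(mq->queue, mmc_dev(host));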