From patchwork Fri Oct  6 22:35:54 2017
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 9990655
Subject: [PATCH v7 07/12] dma-mapping: introduce dma_has_iommu()
From: Dan Williams
To: linux-nvdimm@lists.01.org
Cc: Jan Kara, Ashok Raj, "Darrick J. Wong", linux-rdma@vger.kernel.org,
    Greg Kroah-Hartman, Joerg Roedel, Dave Chinner, linux-xfs@vger.kernel.org,
    linux-mm@kvack.org, Jeff Moyer, linux-api@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, Ross Zwisler, David Woodhouse,
    Robin Murphy, Christoph Hellwig, Marek Szyprowski
Date: Fri, 06 Oct 2017 15:35:54 -0700
Message-ID: <150732935473.22363.1853399637339625023.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <150732931273.22363.8436792888326501071.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <150732931273.22363.8436792888326501071.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
X-Mailing-List: linux-rdma@vger.kernel.org

Add a helper to determine if the dma mappings set up for a given device
are backed by an iommu. In particular, this lets code paths know that a
dma_unmap operation will revoke access to memory if the device cannot
otherwise be quiesced. The need for this knowledge is driven by the
requirement to make RDMA transfers to DAX mappings safe: if the DAX
file's block map changes, we need to be able to reliably stop accesses
to blocks that have been freed or re-assigned to a new file.

Since PMEM+DAX is currently only enabled for x86, we only update the
x86 iommu drivers.

Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Greg Kroah-Hartman
Cc: Joerg Roedel
Cc: David Woodhouse
Cc: Ashok Raj
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Christoph Hellwig
Cc: Dave Chinner
Cc: "Darrick J. Wong"
Wong" Cc: Ross Zwisler Signed-off-by: Dan Williams --- drivers/base/dma-mapping.c | 10 ++++++++++ drivers/iommu/amd_iommu.c | 6 ++++++ drivers/iommu/intel-iommu.c | 6 ++++++ include/linux/dma-mapping.h | 3 +++ 4 files changed, 25 insertions(+) -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c index e584eddef0a7..e1b5f103d90e 100644 --- a/drivers/base/dma-mapping.c +++ b/drivers/base/dma-mapping.c @@ -369,3 +369,13 @@ void dma_deconfigure(struct device *dev) of_dma_deconfigure(dev); acpi_dma_deconfigure(dev); } + +bool dma_has_iommu(struct device *dev) +{ + const struct dma_map_ops *ops = get_dma_ops(dev); + + if (ops && ops->has_iommu) + return ops->has_iommu(dev); + return false; +} +EXPORT_SYMBOL(dma_has_iommu); diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c index 51f8215877f5..873f899fcf57 100644 --- a/drivers/iommu/amd_iommu.c +++ b/drivers/iommu/amd_iommu.c @@ -2271,6 +2271,11 @@ static struct protection_domain *get_domain(struct device *dev) return domain; } +static bool amd_dma_has_iommu(struct device *dev) +{ + return !IS_ERR(get_domain(dev)); +} + static void update_device_table(struct protection_domain *domain) { struct iommu_dev_data *dev_data; @@ -2689,6 +2694,7 @@ static const struct dma_map_ops amd_iommu_dma_ops = { .unmap_sg = unmap_sg, .dma_supported = amd_iommu_dma_supported, .mapping_error = amd_iommu_mapping_error, + .has_iommu = amd_dma_has_iommu, }; static int init_reserved_iova_ranges(void) diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c index 6784a05dd6b2..243ef42fdad4 100644 --- a/drivers/iommu/intel-iommu.c +++ b/drivers/iommu/intel-iommu.c @@ -3578,6 +3578,11 @@ static int iommu_no_mapping(struct device *dev) return 0; } +static bool intel_dma_has_iommu(struct device *dev) +{ + return !iommu_no_mapping(dev); +} + static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr, size_t size, int dir, u64 dma_mask) { @@ -3872,6 +3877,7 @@ const struct dma_map_ops intel_dma_ops = { .map_page = intel_map_page, .unmap_page = intel_unmap_page, .mapping_error = intel_mapping_error, + .has_iommu = intel_dma_has_iommu, #ifdef CONFIG_X86 .dma_supported = x86_dma_supported, #endif diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 29ce9815da87..659f122c18f5 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -128,6 +128,7 @@ struct dma_map_ops { enum dma_data_direction dir); int (*mapping_error)(struct device *dev, dma_addr_t dma_addr); int (*dma_supported)(struct device *dev, u64 mask); + bool (*has_iommu)(struct device *dev); #ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK u64 (*get_required_mask)(struct device *dev); #endif @@ -221,6 +222,8 @@ static inline const struct dma_map_ops *get_dma_ops(struct device *dev) } #endif +extern bool dma_has_iommu(struct device *dev); + static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr, size_t size, enum dma_data_direction dir,