From patchwork Tue Nov 25 17:27:27 2014
X-Patchwork-Id: 5381701
From: Robin Murphy <robin.murphy@arm.com>
To: iommu@lists.linux-foundation.org, dwmw2@infradead.org, sakari.ailus@linux.intel.com
Cc: catalin.marinas@arm.com, joro@8bytes.org, will.deacon@arm.com, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH 3/4] iommu: make IOVA domain low limit flexible
Date: Tue, 25 Nov 2014 17:27:27 +0000
Message-Id: <99c474e58e8ba1041ee927893d87327b939594a3.1416931258.git.robin.murphy@arm.com>

To share the IOVA allocator with other architectures, it needs to
accommodate more general aperture restrictions; move the lower limit
from a compile-time constant to a runtime domain property to allow
IOVA domains with different requirements to co-exist.

Also reword the slightly unclear description of alloc_iova since we're
touching it anyway.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/intel-iommu.c | 9 ++++++---
 drivers/iommu/iova.c        | 10 ++++++----
 include/linux/iova.h        | 7 +++----
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 9258860..daba0c2 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -71,6 +71,9 @@
 				__DOMAIN_MAX_PFN(gaw), (unsigned long)-1))
 #define DOMAIN_MAX_ADDR(gaw)	(((uint64_t)__DOMAIN_MAX_PFN(gaw)) << VTD_PAGE_SHIFT)
 
+/* IO virtual address start page frame number */
+#define IOVA_START_PFN		(1)
+
 #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
 #define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))
 #define DMA_64BIT_PFN		IOVA_PFN(DMA_BIT_MASK(64))
@@ -1629,7 +1632,7 @@ static int dmar_init_reserved_ranges(void)
 	struct iova *iova;
 	int i;
 
-	init_iova_domain(&reserved_iova_list, DMA_32BIT_PFN);
+	init_iova_domain(&reserved_iova_list, IOVA_START_PFN, DMA_32BIT_PFN);
 
 	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
 		&reserved_rbtree_key);
@@ -1687,7 +1690,7 @@ static int domain_init(struct dmar_domain *domain, int guest_width)
 	int adjust_width, agaw;
 	unsigned long sagaw;
 
-	init_iova_domain(&domain->iovad, DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, IOVA_START_PFN, DMA_32BIT_PFN);
 	domain_reserve_special_ranges(domain);
 
 	/* calculate AGAW */
@@ -4160,7 +4163,7 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
 {
 	int adjust_width;
 
-	init_iova_domain(&domain->iovad, DMA_32BIT_PFN);
+	init_iova_domain(&domain->iovad, IOVA_START_PFN, DMA_32BIT_PFN);
 	domain_reserve_special_ranges(domain);
 
 	/* calculate AGAW */
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 520b8c8..a3dbba8 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -55,11 +55,13 @@ void free_iova_mem(struct iova *iova)
 }
 
 void
-init_iova_domain(struct iova_domain *iovad, unsigned long pfn_32bit)
+init_iova_domain(struct iova_domain *iovad, unsigned long start_pfn,
+	unsigned long pfn_32bit)
 {
 	spin_lock_init(&iovad->iova_rbtree_lock);
 	iovad->rbroot = RB_ROOT;
 	iovad->cached32_node = NULL;
+	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = pfn_32bit;
 }
 
@@ -162,7 +164,7 @@ move_left:
 	if (!curr) {
 		if (size_aligned)
 			pad_size = iova_get_pad_size(size, limit_pfn);
-		if ((IOVA_START_PFN + size + pad_size) > limit_pfn) {
+		if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
 			spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 			return -ENOMEM;
 		}
@@ -237,8 +239,8 @@ iova_insert_rbtree(struct rb_root *root, struct iova *iova)
  * @size: - size of page frames to allocate
  * @limit_pfn: - max limit address
  * @size_aligned: - set if size_aligned address range is required
- * This function allocates an iova in the range limit_pfn to IOVA_START_PFN
- * looking from limit_pfn instead from IOVA_START_PFN. If the size_aligned
+ * This function allocates an iova in the range iovad->start_pfn to limit_pfn,
+ * searching top-down from limit_pfn to iovad->start_pfn. If the size_aligned
  * flag is set then the allocated address iova->pfn_lo will be naturally
  * aligned on roundup_power_of_two(size).
  */
diff --git a/include/linux/iova.h b/include/linux/iova.h
index ad0507c..591b196 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -16,9 +16,6 @@
 #include <linux/rbtree.h>
 #include <linux/dma-mapping.h>
 
-/* IO virtual address start page frame number */
-#define IOVA_START_PFN		(1)
-
 /* iova structure */
 struct iova {
 	struct rb_node	node;
@@ -31,6 +28,7 @@ struct iova_domain {
 	spinlock_t	iova_rbtree_lock; /* Lock to protect update of rbtree */
 	struct rb_root	rbroot;		/* iova domain rbtree root */
 	struct rb_node	*cached32_node; /* Save last alloced node */
+	unsigned long	start_pfn;	/* Lower limit for this domain */
 	unsigned long	dma_32bit_pfn;
 };
 
@@ -52,7 +50,8 @@ struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
 struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
-void init_iova_domain(struct iova_domain *iovad, unsigned long pfn_32bit);
+void init_iova_domain(struct iova_domain *iovad, unsigned long start_pfn,
+	unsigned long pfn_32bit);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
 void put_iova_domain(struct iova_domain *iovad);
 struct iova *split_and_remove_iova(struct iova_domain *iovad,
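
For illustration only (not part of the patch): a minimal sketch of how a
caller would use the reworked interface once the lower limit is a per-domain
property. The domain, function name and start_pfn value below are made up for
the example; only the init_iova_domain() and alloc_iova() signatures come
from this series.

/* Hypothetical caller of the new interface; names and values are invented. */
#include <linux/iova.h>
#include <linux/dma-mapping.h>

static struct iova_domain example_iovad;

static void example_iova_init(void)
{
	/* Keep PFNs 0..0xff (the first 1MB with 4K pages) out of the allocator. */
	unsigned long start_pfn = 0x100;
	unsigned long dma_32bit_pfn = DMA_BIT_MASK(32) >> PAGE_SHIFT;
	struct iova *iova;

	/* The new second argument replaces the old fixed IOVA_START_PFN. */
	init_iova_domain(&example_iovad, start_pfn, dma_32bit_pfn);

	/* Allocate 8 pages of IOVA space below 4GB, searched top-down. */
	iova = alloc_iova(&example_iovad, 8, dma_32bit_pfn, true);
	if (iova)
		__free_iova(&example_iovad, iova);
}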