From: Jiang Liu
To: Joerg Roedel, David Woodhouse, Dan Williams, Vinod Koul, Ashok Raj,
	Yijing Wang, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Jiang Liu, Tony Luck, Yinghai Lu, linux-pci@vger.kernel.org,
	dmaengine@vger.kernel.org
Subject: [Patch Part1 V2 01/20] iommu/vt-d: use dedicated bitmap to track remapping entry allocation status
Date: Fri, 6 Dec 2013 11:21:04 +0800
Message-Id: <1386300083-6882-2-git-send-email-jiang.liu@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1386300083-6882-1-git-send-email-jiang.liu@linux.intel.com>
References: <1386300083-6882-1-git-send-email-jiang.liu@linux.intel.com>

Currently the Intel interrupt remapping driver uses the "present" flag bit
in a remapping entry to track whether that entry is allocated. It works as
follows:
1) allocate a remapping entry and set its "present" flag bit to 1
2) compose the other fields of the entry
3) update the remapping entry with the composed value

The remapping hardware may access the entry between step 1 and step 3 and
then observe an entry with the "present" flag set but random values in all
other fields.

This patch introduces a dedicated bitmap to track remapping entry
allocation status instead of sharing the "present" flag with hardware,
thus eliminating the race window. It also simplifies the implementation.
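To make the race and the fix concrete, here is a minimal userspace sketch
(illustration only, not the driver code) of the scheme described above: the
allocation state lives in a software-only bitmap, so in this sketch the
hardware-visible table is only ever written with fully composed values. The
names fake_irte, table_alloc and table_free are invented for the sketch.

#include <stdio.h>
#include <string.h>

#define TABLE_ENTRIES 32

struct fake_irte {                            /* stand-in for a hardware-visible entry */
	unsigned long low;
	unsigned long high;
};

static struct fake_irte table[TABLE_ENTRIES]; /* what "hardware" would read */
static unsigned long alloc_map;               /* software-only allocation bitmap */

/* Reserve one entry by flipping a bit in the software bitmap only. */
static int table_alloc(void)
{
	for (int i = 0; i < TABLE_ENTRIES; i++) {
		if (!(alloc_map & (1UL << i))) {
			alloc_map |= 1UL << i;
			return i;             /* entry i is reserved but still all zero */
		}
	}
	return -1;                            /* table full */
}

/* Free one entry: clear the hardware-visible entry, then drop the bitmap bit. */
static void table_free(int i)
{
	memset(&table[i], 0, sizeof(table[i]));
	alloc_map &= ~(1UL << i);
}

int main(void)
{
	int idx = table_alloc();

	if (idx < 0)
		return 1;

	/* Only now is a fully composed entry made visible to "hardware". */
	table[idx].low  = 0x1;
	table[idx].high = 0x2;
	printf("allocated entry %d\n", idx);

	table_free(idx);
	return 0;
}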
Tested-and-reviewed-by: Yijing Wang
Signed-off-by: Jiang Liu
---
 drivers/iommu/intel_irq_remapping.c | 51 +++++++++++++++++------------------
 include/linux/intel-iommu.h         |  1 +
 2 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index bab10b1..282d392 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -72,7 +72,6 @@ static int alloc_irte(struct intel_iommu *iommu, int irq, u16 count)
 	u16 index, start_index;
 	unsigned int mask = 0;
 	unsigned long flags;
-	int i;
 
 	if (!count || !irq_iommu)
 		return -1;
@@ -96,32 +95,17 @@ static int alloc_irte(struct intel_iommu *iommu, int irq, u16 count)
 	}
 
 	raw_spin_lock_irqsave(&irq_2_ir_lock, flags);
-	do {
-		for (i = index; i < index + count; i++)
-			if (table->base[i].present)
-				break;
-		/* empty index found */
-		if (i == index + count)
-			break;
-
-		index = (index + count) % INTR_REMAP_TABLE_ENTRIES;
-
-		if (index == start_index) {
-			raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags);
-			printk(KERN_ERR "can't allocate an IRTE\n");
-			return -1;
-		}
-	} while (1);
-
-	for (i = index; i < index + count; i++)
-		table->base[i].present = 1;
-
-	cfg->remapped = 1;
-	irq_iommu->iommu = iommu;
-	irq_iommu->irte_index = index;
-	irq_iommu->sub_handle = 0;
-	irq_iommu->irte_mask = mask;
-
+	index = bitmap_find_free_region(table->bitmap,
+					INTR_REMAP_TABLE_ENTRIES, mask);
+	if (index < 0) {
+		printk(KERN_ERR "can't allocate an IRTE\n");
+	} else {
+		cfg->remapped = 1;
+		irq_iommu->iommu = iommu;
+		irq_iommu->irte_index = index;
+		irq_iommu->sub_handle = 0;
+		irq_iommu->irte_mask = mask;
+	}
 	raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags);
 
 	return index;
@@ -254,6 +238,8 @@ static int clear_entries(struct irq_2_iommu *irq_iommu)
 		set_64bit(&entry->low, 0);
 		set_64bit(&entry->high, 0);
 	}
+	bitmap_release_region(iommu->ir_table->bitmap, index,
+			      irq_iommu->irte_mask);
 
 	return qi_flush_iec(iommu, index, irq_iommu->irte_mask);
 }
@@ -453,6 +439,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
 {
 	struct ir_table *ir_table;
 	struct page *pages;
+	unsigned long *bitmap;
 
 	ir_table = iommu->ir_table = kzalloc(sizeof(struct ir_table),
 					     GFP_ATOMIC);
@@ -470,7 +457,17 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu, int mode)
 		return -ENOMEM;
 	}
 
+	bitmap = kcalloc(BITS_TO_LONGS(INTR_REMAP_TABLE_ENTRIES),
+			 sizeof(long), GFP_ATOMIC);
+	if (bitmap == NULL) {
+		printk(KERN_ERR "failed to allocate bitmap\n");
+		__free_pages(pages, INTR_REMAP_PAGE_ORDER);
+		kfree(ir_table);
+		return -ENOMEM;
+	}
+
 	ir_table->base = page_address(pages);
+	ir_table->bitmap = bitmap;
 
 	iommu_set_irq_remapping(iommu, mode);
 	return 0;
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index d380c5e..de1e5e9 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -288,6 +288,7 @@ struct q_inval {
 
 struct ir_table {
 	struct irte *base;
+	unsigned long *bitmap;
 };
 #endif
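For reference, the bitmap_find_free_region()/bitmap_release_region() pair
used above hands out power-of-two sized, naturally aligned regions: the last
argument is an order, the return value is the index of the region's first
bit, and a negative value means no free region was available (hence the
"if (index < 0)" check in alloc_irte(), which passes mask as the order). The
snippet below is a simplified userspace model of that contract, my own toy
code rather than the kernel implementation; NBITS, find_free_region and
release_region are names chosen just for illustration.

#include <stdio.h>

#define NBITS 32UL

static unsigned long map;	/* one word of "IRTE slots" for this toy model */

/* Find 2^order free bits, naturally aligned; mark them used and return the start index. */
static int find_free_region(int order)
{
	unsigned long len = 1UL << order;
	unsigned long region = (len == NBITS) ? ~0UL : ((1UL << len) - 1);

	/* Stepping in units of the region size keeps every region naturally aligned. */
	for (unsigned long pos = 0; pos + len <= NBITS; pos += len) {
		if (!(map & (region << pos))) {
			map |= region << pos;
			return (int)pos;
		}
	}
	return -1;	/* no free region of that size */
}

/* Clear the 2^order bits starting at pos. */
static void release_region(int pos, int order)
{
	unsigned long len = 1UL << order;
	unsigned long region = (len == NBITS) ? ~0UL : ((1UL << len) - 1);

	map &= ~(region << pos);
}

int main(void)
{
	int a = find_free_region(2);	/* reserve 4 contiguous slots */
	int b = find_free_region(0);	/* reserve 1 slot */

	printf("region of 4 at %d, region of 1 at %d\n", a, b);

	release_region(a, 2);
	release_region(b, 0);
	return 0;
}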