From: Keith Busch <keith.busch@intel.com>
To: LKML, x86@kernel.org, linux-pci@vger.kernel.org
Cc: Jiang Liu, Thomas Gleixner, Dan Williams, Bjorn Helgaas, Bryan Veal,
 Ingo Molnar, "H. Peter Anvin", Martin Mares, Jon Derrick, Keith Busch
Subject: [RFC PATCHv3 2/4] x86-pci: allow pci domain specific dma ops
Date: Tue, 27 Oct 2015 11:34:05 -0600
Message-Id: <1445967247-24310-3-git-send-email-keith.busch@intel.com>
In-Reply-To: <1445967247-24310-1-git-send-email-keith.busch@intel.com>
References: <1445967247-24310-1-git-send-email-keith.busch@intel.com>

New x86 PCI hardware will require DMA operations specific to its domain.
This patch allows those domains to register their operations, and sets
devices to use them as they are discovered in that domain.
Signed-off-by: Keith Busch <keith.busch@intel.com>
---
 arch/x86/include/asm/device.h | 10 ++++++++++
 arch/x86/pci/common.c         | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/arch/x86/include/asm/device.h b/arch/x86/include/asm/device.h
index 03dd729..3b23897 100644
--- a/arch/x86/include/asm/device.h
+++ b/arch/x86/include/asm/device.h
@@ -10,6 +10,16 @@ struct dev_archdata {
 #endif
 };
 
+#if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
+struct dma_domain {
+	struct list_head node;
+	struct dma_map_ops *dma_ops;
+	int domain_nr;
+};
+extern void add_dma_domain(struct dma_domain *domain);
+extern void del_dma_domain(struct dma_domain *domain);
+#endif
+
 struct pdev_archdata {
 };
 
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index dc78a4a..524183d 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -641,6 +641,43 @@ unsigned int pcibios_assign_all_busses(void)
 	return (pci_probe & PCI_ASSIGN_ALL_BUSSES) ? 1 : 0;
 }
 
+#if defined(CONFIG_X86_DEV_DMA_OPS) && defined(CONFIG_PCI_DOMAINS)
+LIST_HEAD(dma_domain_list);
+DEFINE_SPINLOCK(dma_domain_list_lock);
+
+void add_dma_domain(struct dma_domain *domain)
+{
+	spin_lock(&dma_domain_list_lock);
+	list_add(&domain->node, &dma_domain_list);
+	spin_unlock(&dma_domain_list_lock);
+}
+EXPORT_SYMBOL_GPL(add_dma_domain);
+
+void del_dma_domain(struct dma_domain *domain)
+{
+	spin_lock(&dma_domain_list_lock);
+	list_del(&domain->node);
+	spin_unlock(&dma_domain_list_lock);
+}
+EXPORT_SYMBOL_GPL(del_dma_domain);
+
+static void set_dma_domain_ops(struct pci_dev *pdev)
+{
+	struct dma_domain *domain;
+
+	spin_lock(&dma_domain_list_lock);
+	list_for_each_entry(domain, &dma_domain_list, node) {
+		if (pci_domain_nr(pdev->bus) == domain->domain_nr) {
+			pdev->dev.archdata.dma_ops = domain->dma_ops;
+			break;
+		}
+	}
+	spin_unlock(&dma_domain_list_lock);
+}
+#else
+static void set_dma_domain_ops(struct pci_dev *pdev) {}
+#endif
+
 int pcibios_add_device(struct pci_dev *dev)
 {
 	struct setup_data *data;
@@ -670,6 +707,7 @@ int pcibios_add_device(struct pci_dev *dev)
 		pa_data = data->next;
 		iounmap(data);
 	}
+	set_dma_domain_ops(dev);
 	return 0;
 }
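
For illustration, a minimal sketch (not part of the patch) of how a domain
driver might use this interface. The mydomain_* names and the segment_nr
parameter are hypothetical; only struct dma_domain, add_dma_domain() and
del_dma_domain() come from the patch above.

	#include <linux/dma-mapping.h>
	#include <asm/device.h>

	/* Hypothetical per-domain DMA callbacks; left empty in this sketch. */
	static struct dma_map_ops mydomain_dma_ops = {
		/* .map_page = ..., .unmap_page = ..., etc. */
	};

	static struct dma_domain mydomain_dma_domain = {
		.dma_ops = &mydomain_dma_ops,
	};

	/* Register once the driver knows which PCI segment it owns. */
	static void mydomain_enable_dma(int segment_nr)
	{
		mydomain_dma_domain.domain_nr = segment_nr;
		add_dma_domain(&mydomain_dma_domain);
	}

	/* Unregister on teardown so new devices fall back to default ops. */
	static void mydomain_disable_dma(void)
	{
		del_dma_domain(&mydomain_dma_domain);
	}

Any device enumerated in that segment after registration would then pick
up mydomain_dma_ops via set_dma_domain_ops() in pcibios_add_device().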