From patchwork Sat Jul 26 03:08:44 2014
X-Patchwork-Submitter: Yijing Wang
X-Patchwork-Id: 4626541
From: Yijing Wang
Subject: [RFC PATCH 07/11] PCI/MSI: Mask MSI-X entry in msix_setup_entries()
Date: Sat, 26 Jul 2014 11:08:44 +0800
Message-ID: <1406344128-27055-8-git-send-email-wangyijing@huawei.com>
In-Reply-To: <1406344128-27055-1-git-send-email-wangyijing@huawei.com>
References: <1406344128-27055-1-git-send-email-wangyijing@huawei.com>
Cc: linux-arch@vger.kernel.org, Russell King, Paul.Mundt@huawei.com,
 Marc Zyngier, linux-pci@vger.kernel.org, "James E.J. Bottomley",
 virtualization@lists.linux-foundation.org, Xinwei Hu, Yijing Wang,
 Hanjun Guo, Bjorn Helgaas, Wuyun, arnab.basu@freescale.com,
 linux-arm-kernel@lists.infradead.org

Save each MSI-X entry's initial mask status in msix_setup_entries() and
mask the entry there, rather than later in msix_program_entries().  This
is preparation for generic MSI support.
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
---
 drivers/pci/msi.c |   21 +++++++++++----------
 1 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index f96dd38..41c33da 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -672,7 +672,7 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 		struct msix_entry *entries, int nvec)
 {
 	struct msi_desc *entry;
-	int i;
+	int i, offset;
 
 	for (i = 0; i < nvec; i++) {
 		entry = alloc_msi_entry(dev);
@@ -691,6 +691,15 @@ static int msix_setup_entries(struct pci_dev *dev, void __iomem *base,
 		entry->msi_attrib.default_irq	= dev->irq;
 		entry->mask_base		= base;
 
+		msix_clear_and_set_ctrl(dev, 0,
+				PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
+		offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
+				PCI_MSIX_ENTRY_VECTOR_CTRL;
+		entry->masked = readl(entry->mask_base + offset);
+		msix_mask_irq(entry, 1);
+		msix_clear_and_set_ctrl(dev,
+				PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
+
 		list_add_tail(&entry->list, &dev->msi_list);
 	}
 
@@ -704,13 +713,8 @@ static void msix_program_entries(struct pci_dev *dev,
 	int i = 0;
 
 	list_for_each_entry(entry, &dev->msi_list, list) {
-		int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
-				PCI_MSIX_ENTRY_VECTOR_CTRL;
-
 		entries[i].vector = entry->irq;
 		irq_set_msi_desc(entry->irq, entry);
-		entry->masked = readl(entry->mask_base + offset);
-		msix_mask_irq(entry, 1);
 		i++;
 	}
 }
@@ -746,16 +750,13 @@ static int msix_capability_init(struct pci_dev *dev, void __iomem *base,
 	 * MSI-X registers.  We need to mask all the vectors to prevent
 	 * interrupts coming in before they're fully set up.
 	 */
-	msix_clear_and_set_ctrl(dev, 0,
-				PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
-
 	msix_program_entries(dev, entries);
 
 	/* Set MSI-X enabled bits and unmask the function */
 	pci_intx_for_msi(dev, 0);
 	dev->msix_enabled = 1;
 
-	msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
+	msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_ENABLE);
 
 	return 0;
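
For context: the per-entry masking that this change moves into
msix_setup_entries() boils down to reading the entry's Vector Control
dword from the MSI-X table and setting its mask bit.  A minimal sketch,
assuming the standard MSI-X table layout; msix_entry_mask_sketch() is a
hypothetical illustration, not the kernel's msix_mask_irq():

	/*
	 * Simplified illustration only.  Each MSI-X table entry is
	 * PCI_MSIX_ENTRY_SIZE (16) bytes; the Vector Control dword lives
	 * at offset PCI_MSIX_ENTRY_VECTOR_CTRL (12), and bit 0
	 * (PCI_MSIX_ENTRY_CTRL_MASKBIT) masks the vector.
	 */
	static u32 msix_entry_mask_sketch(void __iomem *base, unsigned int entry_nr)
	{
		void __iomem *vec_ctrl = base + entry_nr * PCI_MSIX_ENTRY_SIZE +
					 PCI_MSIX_ENTRY_VECTOR_CTRL;
		u32 ctrl = readl(vec_ctrl);	/* initial state, saved in entry->masked */

		/* set the per-vector mask bit so no interrupt fires during setup */
		writel(ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT, vec_ctrl);
		return ctrl;
	}

The patch brackets this per-entry access with
msix_clear_and_set_ctrl(dev, 0, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE)
and the matching clear, so the function stays masked as a whole while the
individual table entries are read and masked.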