From patchwork Wed Jun  6 02:06:33 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xie Yisheng
X-Patchwork-Id: 10449541
X-Patchwork-Delegate: bhelgaas@google.com
From: Yisheng Xie
Subject: [PATCH v4] PCI ACPI: Avoid panic when PCI IO resource's size is not page aligned
Date: Wed, 6 Jun 2018 10:06:33 +0800
Message-ID: <1528250793-57034-1-git-send-email-xieyisheng1@huawei.com>
X-Mailer: git-send-email 1.7.12.4
X-Mailing-List: linux-pci@vger.kernel.org

Zhou reported a bug on the Hisilicon arm64 D06 platform with 64KB page size:

[ 2.470908] kernel BUG at lib/ioremap.c:72!
[ 2.475079] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 2.480551] Modules linked in:
[ 2.483594] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.16.0-rc7-00062-g0b41260-dirty #23
[ 2.491756] Hardware name: Huawei D06/D06, BIOS Hisilicon D06 UEFI Nemo 2.0 RC0 - B120 03/23/2018
[ 2.500614] pstate: 80c00009 (Nzcv daif +PAN +UAO)
[ 2.505395] pc : ioremap_page_range+0x268/0x36c
[ 2.509912] lr : pci_remap_iospace+0xe4/0x100
[...]
[ 2.603733] Call trace:
[ 2.606168]  ioremap_page_range+0x268/0x36c
[ 2.610337]  pci_remap_iospace+0xe4/0x100
[ 2.614334]  acpi_pci_probe_root_resources+0x1d4/0x214
[ 2.619460]  pci_acpi_root_prepare_resources+0x18/0xa8
[ 2.624585]  acpi_pci_root_create+0x98/0x214
[ 2.628843]  pci_acpi_scan_root+0x124/0x20c
[ 2.633013]  acpi_pci_root_add+0x224/0x494
[ 2.637096]  acpi_bus_attach+0xf8/0x200
[ 2.640918]  acpi_bus_attach+0x98/0x200
[ 2.644740]  acpi_bus_attach+0x98/0x200
[ 2.648562]  acpi_bus_scan+0x48/0x9c
[ 2.652125]  acpi_scan_init+0x104/0x268
[ 2.655948]  acpi_init+0x308/0x374
[ 2.659337]  do_one_initcall+0x48/0x14c
[ 2.663160]  kernel_init_freeable+0x19c/0x250
[ 2.667504]  kernel_init+0x10/0x100
[ 2.670979]  ret_from_fork+0x10/0x18

The cause is that the PCI IO resource is 32KB in size, which is 4KB aligned
but not 64KB aligned. However, ioremap_page_range() requires the range to be
page aligned, otherwise it triggers the BUG_ON() in ioremap_pte_range(),
which it calls: ioremap_pte_range() advances addr by PAGE_SIZE, so if the
incoming end is not page aligned, addr never becomes equal to end and the
BUG_ON() fires. The call chain in more detail is:

  ioremap_page_range
    -> ioremap_p4d_range
      -> ioremap_pud_range
        -> ioremap_pmd_range
          -> ioremap_pte_range

This patch avoids the panic by aligning vaddr and phys_addr down to a page
boundary and rounding the end of the mapping up with PAGE_ALIGN() in
pci_remap_iospace().

Reported-by: Zhou Wang
Tested-by: Xiaojun Tan
Signed-off-by: Yisheng Xie
---
v4:
 - align vaddr and phys_addr - per Bjorn
v3:
 - let pci_remap_iospace() sanitize its arguments instead - per Rafael
v2:
 - let the caller of ioremap_page_range() align the request by PAGE_SIZE - per Toshi

 drivers/pci/pci.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index dbfe7c4..652f7d6 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -3537,6 +3537,7 @@ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
 {
 #if defined(PCI_IOBASE) && defined(CONFIG_MMU)
 	unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;
+	unsigned long last_vaddr;
 
 	if (!(res->flags & IORESOURCE_IO))
 		return -EINVAL;
@@ -3544,7 +3545,16 @@ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
 	if (res->end > IO_SPACE_LIMIT)
 		return -EINVAL;
 
-	return ioremap_page_range(vaddr, vaddr + resource_size(res), phys_addr,
+	/* It will be a mess if vaddr's offset is not equal to phys_addr's */
+	if ((vaddr & ~PAGE_MASK) != (phys_addr & ~PAGE_MASK))
+		return -EINVAL;
+
+	/* Mappings have to be page-aligned */
+	last_vaddr = PAGE_ALIGN(vaddr + resource_size(res));
+	phys_addr &= PAGE_MASK;
+	vaddr &= PAGE_MASK;
+
+	return ioremap_page_range(vaddr, last_vaddr, phys_addr,
 				  pgprot_device(PAGE_KERNEL));
 #else
 	/* this architecture does not have memory mapped I/O space,
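
For reference, below is a minimal userspace sketch (not kernel code) of the
alignment arithmetic the patch performs, assuming 64KB pages as on the D06.
The PCI_IOBASE and phys_addr values, the 32KB resource size, and the local
PAGE_* macros are hypothetical stand-ins chosen only to reproduce the
unaligned-end case; the kernel's own definitions differ per architecture.

/* align_sketch.c - hypothetical userspace model of the alignment fixups */
#include <stdio.h>

#define PAGE_SHIFT	16			/* 64KB pages, as on the D06 */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	unsigned long pci_iobase = 0xffff7dfffee00000UL;	/* hypothetical */
	unsigned long res_start  = 0x0;
	unsigned long res_size   = 0x8000;			/* 32KB IO window */
	unsigned long phys_addr  = 0xefff0000UL;		/* hypothetical */

	unsigned long vaddr = pci_iobase + res_start;
	unsigned long end   = vaddr + res_size;
	unsigned long last_vaddr;

	/* An unaligned end is what used to trip the BUG_ON() in ioremap_pte_range() */
	printf("end offset into page = %#lx (non-zero -> would have hit BUG_ON)\n",
	       end & ~PAGE_MASK);

	/* The checks and fixups pci_remap_iospace() now applies */
	if ((vaddr & ~PAGE_MASK) != (phys_addr & ~PAGE_MASK)) {
		printf("page offsets differ -> -EINVAL\n");
		return 1;
	}
	last_vaddr = PAGE_ALIGN(end);
	phys_addr &= PAGE_MASK;
	vaddr &= PAGE_MASK;

	printf("map [%#lx, %#lx) -> phys %#lx (all page aligned)\n",
	       vaddr, last_vaddr, phys_addr);
	return 0;
}

Built with, for example, gcc -std=gnu99 align_sketch.c, it first prints the
non-zero offset of the mapping end within a 64KB page, then the page-aligned
range that ioremap_page_range() now receives after the fixups.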