From patchwork Thu May 15 14:19:22 2014
X-Patchwork-Submitter: Mark Salter
X-Patchwork-Id: 4182731
From: Mark Salter
To: Catalin Marinas, Will Deacon
Cc: Steve Capper, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, Mark Salter
Subject: [PATCH] arm64: fix pud_huge() for 2-level pagetables
Date: Thu, 15 May 2014 10:19:22 -0400
Message-Id: <1400163562-7481-1-git-send-email-msalter@redhat.com>

The following happens when trying to run a kvm guest on a kernel
configured for 64k pages. This doesn't happen with 4k pages:

BUG: failure at include/linux/mm.h:297/put_page_testzero()!
Kernel panic - not syncing: BUG!
CPU: 2 PID: 4228 Comm: qemu-system-aar Tainted: GF 3.13.0-0.rc7.31.sa2.k32v1.aarch64.debug #1
Call trace:
[] dump_backtrace+0x0/0x16c
[] show_stack+0x14/0x1c
[] dump_stack+0x84/0xb0
[] panic+0xf4/0x220
[] free_reserved_area+0x0/0x110
[] free_pages+0x50/0x88
[] kvm_free_stage2_pgd+0x30/0x40
[] kvm_arch_destroy_vm+0x18/0x44
[] kvm_put_kvm+0xf0/0x184
[] kvm_vm_release+0x10/0x1c
[] __fput+0xb0/0x288
[] ____fput+0xc/0x14
[] task_work_run+0xa8/0x11c
[] do_notify_resume+0x54/0x58

In arch/arm/kvm/mmu.c:unmap_range(), we end up doing an extra put_page()
on the stage2 pgd, which leads to the BUG in put_page_testzero(). This
happens because a pud_huge() test in unmap_range() returns true when it
should always be false with the 2-level page tables used by 64k pages.
This patch removes support for huge puds if 2-level pagetables are
being used.

Signed-off-by: Mark Salter
---
 arch/arm64/mm/hugetlbpage.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 5e9aec3..9bed38f 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -51,7 +51,11 @@ int pmd_huge(pmd_t pmd)
 
 int pud_huge(pud_t pud)
 {
+#ifndef __PAGETABLE_PMD_FOLDED
 	return !(pud_val(pud) & PUD_TABLE_BIT);
+#else
+	return 0;
+#endif
 }
 
 int pmd_huge_support(void)
@@ -64,8 +68,10 @@ static __init int setup_hugepagesz(char *opt)
 	unsigned long ps = memparse(opt, &opt);
 	if (ps == PMD_SIZE) {
 		hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
+#ifndef __PAGETABLE_PMD_FOLDED
 	} else if (ps == PUD_SIZE) {
 		hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
+#endif
 	} else {
 		pr_err("hugepagesz: Unsupported page size %lu M\n", ps >> 20);
 		return 0;
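
For context, the test that misfires here depends on the ARMv8 descriptor
encoding: in a valid entry at a non-leaf level, bit 1 (the TABLE_BIT) is
set for a pointer to a next-level table and clear for a block, i.e.
huge, mapping. The following is a minimal userspace sketch of that
check, not the kernel's code; the descriptor values are made up for
illustration:

  #include <stdio.h>
  #include <stdint.h>

  /* Mirrors arm64's PUD_TABLE_BIT: bit 1 of a valid descriptor is set
   * for a next-level table pointer, clear for a block (huge) mapping. */
  #define PUD_TABLE_BIT (UINT64_C(1) << 1)

  /* The same test the unguarded pud_huge() performs on the raw value. */
  static int pud_huge_like(uint64_t pud)
  {
  	return !(pud & PUD_TABLE_BIT);
  }

  int main(void)
  {
  	uint64_t table_desc = 0x40000003; /* hypothetical table descriptor, bits[1:0] = 0b11 */
  	uint64_t block_desc = 0x40000001; /* hypothetical block descriptor, bits[1:0] = 0b01 */

  	printf("table entry -> huge? %d\n", pud_huge_like(table_desc)); /* prints 0 */
  	printf("block entry -> huge? %d\n", pud_huge_like(block_desc)); /* prints 1 */
  	return 0;
  }

With 2-level tables the pmd (and pud) levels are folded away, so there
is no level at which a block descriptor of pud size can legitimately
appear; hard-wiring pud_huge() to 0 under __PAGETABLE_PMD_FOLDED keeps
unmap_range() off the huge-pud path entirely.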
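
The setup_hugepagesz() hunk follows the same logic: with the pmd folded
there is no PUD_SIZE huge page to register, leaving only the section
size. As a rough sanity check of the sizes involved, here is a sketch
that assumes the 64k-granule geometry and computes the section size by
hand rather than taking it from the kernel headers:

  #include <stdio.h>

  int main(void)
  {
  	/* 64k granule: a 16-bit page offset, and each 64k table holds
  	 * 64k / 8 = 8192 descriptors, i.e. 13 bits per level. */
  	unsigned int page_shift = 16;
  	unsigned int bits_per_level = page_shift - 3;
  	unsigned long pmd_size = 1UL << (page_shift + bits_per_level);

  	/* 1 << 29 bytes = 512 MB */
  	printf("section size = %lu MB\n", pmd_size >> 20);
  	return 0;
  }

So on a 2-level 64k-page configuration, hugepagesz=512M remains the only
value setup_hugepagesz() accepts after this patch; anything else falls
through to the existing pr_err().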