From patchwork Wed May 14 06:03:59 2014
X-Patchwork-Submitter: Wang Nan
X-Patchwork-Id: 4172551
From: Wang Nan
To: Russell King
Subject: [PATCH] arm: mm: fix lowmem virtual address range check
Date: Wed, 14 May 2014 14:03:59 +0800
Message-ID: <1400047439-23961-1-git-send-email-wangnan0@huawei.com>
Cc: Wang Nan, Will Deacon, Geng Hui, linux-arm-kernel@lists.infradead.org

This patch makes sure the argument of __phys_to_virt() is a valid physical
address when clearing the lowmem memory maps.

The last few lines of prepare_page_table() clear the page mappings in the gap
between the end of the largest low physical memory bank and the upper bound of
lowmem. The code uses __phys_to_virt(end) to calculate the virtual address at
which the clearing starts. However, if the platform uses a private, non-linear
__phys_to_virt(), then 'end' (which is one byte past the last valid lowmem
address) may translate into another mapping region. This patch uses
__phys_to_virt(end - 1) + 1 instead, which is guaranteed to stay within the
lowmem mapping.
Signed-off-by: Wang Nan
Cc: Geng Hui
Cc: Will Deacon
---
 arch/arm/mm/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index b68c6b2..87340ee 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1217,7 +1217,7 @@ static inline void prepare_page_table(void)
 	 * Clear out all the kernel space mappings, except for the first
 	 * memory bank, up to the vmalloc region.
 	 */
-	for (addr = __phys_to_virt(end);
+	for (addr = __phys_to_virt(end - 1) + 1;
	     addr < VMALLOC_START; addr += PMD_SIZE)
		pmd_clear(pmd_off_k(addr));
 }