From patchwork Wed Oct 9 01:31:31 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 3006211
From: Laura Abbott
To: linux-arm-kernel@lists.infradead.org
Cc: Larry Bassel, Laura Abbott, Kees Cook
Subject: [RFC PATCH 4/5] arm: mm: restrict kernel memory permissions if CONFIG_STRICT_MEMORY_RWX set
Date: Tue, 8 Oct 2013 18:31:31 -0700
Message-Id: <1381282292-25251-5-git-send-email-lauraa@codeaurora.org>
In-Reply-To: <1381282292-25251-1-git-send-email-lauraa@codeaurora.org>
References: <1381282292-25251-1-git-send-email-lauraa@codeaurora.org>

If CONFIG_STRICT_MEMORY_RWX is set, make kernel text RX, kernel
data/stack RW, and rodata RO, so that writing to kernel text, executing
kernel data or the stack, and writing to or executing read-only data
are all prohibited.
Signed-off-by: Larry Bassel
Signed-off-by: Laura Abbott
---
 arch/arm/mm/mmu.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 56 insertions(+), 1 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index d846334..91db2a0 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1311,6 +1311,60 @@ static void __init kmap_init(void)
 #endif
 }
 
+struct custom_map {
+	unsigned long start;
+	unsigned long end;
+	unsigned int type;
+};
+
+struct custom_map __initdata custom_maps[] = {
+	{
+		.start = _stext,
+		.end = __start_rodata,
+		.type = MT_MEMORY_RX,
+	},
+	{
+		.start = __start_rodata,
+		.end = __init_begin,
+		.type = MT_MEMORY_R
+	},
+	{
+		.start = __init_begin,
+		.end = __arch_info_begin,
+		.type = MT_MEMORY_RX,
+	}
+};
+
+static void __init map_custom_regions(void)
+{
+#ifdef CONFIG_STRICT_MEMORY_RWX
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(custom_maps); i++) {
+		struct map_desc map;
+		unsigned long addr;
+
+		if (!IS_ALIGNED(custom_maps[i].start, PMD_SIZE) ||
+		    !IS_ALIGNED(custom_maps[i].end, PMD_SIZE)) {
+			pr_err("BUG: section %x-%x not aligned to %x\n",
+				custom_maps[i].start, custom_maps[i].end,
+				PMD_SIZE);
+			continue;
+		}
+
+		for (addr = custom_maps[i].start;
+		     addr < custom_maps[i].end; addr += PMD_SIZE)
+			pmd_clear(pmd_off_k(addr));
+
+		map.virtual = custom_maps[i].start;
+		map.pfn = __phys_to_pfn(__virt_to_phys(custom_maps[i].start));
+		map.length = custom_maps[i].end - custom_maps[i].start;
+		map.type = custom_maps[i].type;
+		create_mapping(&map);
+	}
+#endif
+}
+
 static void __init map_lowmem(void)
 {
 	struct memblock_region *reg;
@@ -1329,10 +1383,11 @@ static void __init map_lowmem(void)
 		map.pfn = __phys_to_pfn(start);
 		map.virtual = __phys_to_virt(start);
 		map.length = end - start;
-		map.type = MT_MEMORY;
+		map.type = MT_MEMORY_RW;
 		create_mapping(&map);
 	}
+	map_custom_regions();
 }
 
 /*