From: Laura Abbott
To: Will Deacon, Steve Capper, Mark Rutland
Cc: Catalin Marinas, Laura Abbott, Kees Cook, linux-arm-kernel@lists.infradead.org
Subject: [PATCHv6 6/8] arm64: use fixmap for text patching when text is RO
Date: Fri, 21 Nov 2014 13:50:43 -0800
Message-Id: <1416606645-25633-7-git-send-email-lauraa@codeaurora.org>
In-Reply-To: <1416606645-25633-1-git-send-email-lauraa@codeaurora.org>
References: <1416606645-25633-1-git-send-email-lauraa@codeaurora.org>

When kernel text is marked as read-only, it cannot be modified directly.
Modify the text through a temporary fixmap mapping instead, in a similar
manner to x86 and arm.
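To make the mechanism concrete before the diff: this is the same text-poke
pattern x86 and arm use. Resolve the struct page backing the target address,
map that page writable at a reserved fixmap slot, write the new instruction
through the alias, then tear the mapping down. A minimal sketch of the
core-kernel-text case (illustration only, not part of the patch; the real
code below also handles module addresses via vmalloc_to_page(), serializes
pokes with a spinlock, and falls back to a direct write when the text is
still writable):

#include <linux/mm.h>
#include <linux/uaccess.h>
#include <asm/fixmap.h>

/* Sketch: patch one 32-bit A64 instruction in read-only kernel text. */
static int poke_insn_sketch(void *addr, u32 insn)
{
	unsigned long offset = (uintptr_t)addr & ~PAGE_MASK;
	struct page *page = virt_to_page(addr);	/* core kernel text only */
	void *waddr;
	int ret;

	/* Create a writable alias of the target page. */
	set_fixmap(FIX_TEXT_POKE0, page_to_phys(page));
	waddr = (void *)(__fix_to_virt(FIX_TEXT_POKE0) + offset);

	/* Write through the alias; the RO mapping is never touched. */
	ret = probe_kernel_write(waddr, &insn, sizeof(insn));

	clear_fixmap(FIX_TEXT_POKE0);
	return ret;
}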
Reviewed-by: Kees Cook
Tested-by: Kees Cook
Signed-off-by: Laura Abbott
---
 arch/arm64/include/asm/fixmap.h |  1 +
 arch/arm64/include/asm/insn.h   |  2 ++
 arch/arm64/kernel/insn.c        | 72 +++++++++++++++++++++++++++++++++++++++--
 arch/arm64/kernel/jump_label.c  |  2 +-
 4 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index db26a2f2..2cd4b0d 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -48,6 +48,7 @@ enum fixed_addresses {
 
 	FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
 	FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,
+	FIX_TEXT_POKE0,
 	__end_of_fixed_addresses
 };
 
diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 56a9e63..f66853b 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -282,6 +282,7 @@ bool aarch64_insn_is_nop(u32 insn);
 
 int aarch64_insn_read(void *addr, u32 *insnp);
 int aarch64_insn_write(void *addr, u32 insn);
+int aarch64_insn_write_early(void *addr, u32 insn);
 enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
 u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 				  u32 insn, u64 imm);
@@ -352,6 +353,7 @@ u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst,
 bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
 
 int aarch64_insn_patch_text_nosync(void *addr, u32 insn);
+int __aarch64_insn_patch_text_nosync(void *addr, u32 insn, bool early);
 int aarch64_insn_patch_text_sync(void *addrs[], u32 insns[], int cnt);
 int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);
 #endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 8cd27fe..b2cad38 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -19,12 +19,15 @@
 #include <linux/bitops.h>
 #include <linux/compiler.h>
 #include <linux/kernel.h>
+#include <linux/mm.h>
 #include <linux/smp.h>
+#include <linux/spinlock.h>
 #include <linux/stop_machine.h>
 #include <linux/uaccess.h>
 
 #include <asm/cacheflush.h>
 #include <asm/debug-monitors.h>
+#include <asm/fixmap.h>
 #include <asm/insn.h>
 
 #define AARCH64_INSN_SF_BIT	BIT(31)
@@ -72,6 +75,36 @@ bool __kprobes aarch64_insn_is_nop(u32 insn)
 	}
 }
 
+static DEFINE_SPINLOCK(patch_lock);
+
+static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
+{
+	unsigned long uintaddr = (uintptr_t) addr;
+	bool module = !core_kernel_text(uintaddr);
+	struct page *page;
+
+	if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
+		page = vmalloc_to_page(addr);
+	else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
+		page = virt_to_page(addr);
+	else
+		return addr;
+
+	if (flags)
+		spin_lock_irqsave(&patch_lock, *flags);
+
+	set_fixmap(fixmap, page_to_phys(page));
+
+	return (void *) (__fix_to_virt(fixmap) + (uintaddr & ~PAGE_MASK));
+}
+
+static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+{
+	clear_fixmap(fixmap);
+
+	if (flags)
+		spin_unlock_irqrestore(&patch_lock, *flags);
+}
 /*
  * In ARMv8-A, A64 instructions have a fixed length of 32 bits and are always
  * little-endian.
@@ -88,10 +121,34 @@ int __kprobes aarch64_insn_read(void *addr, u32 *insnp)
 	return ret;
 }
 
+static int __kprobes __aarch64_insn_write(void *addr, u32 insn, bool patch)
+{
+	void *waddr = addr;
+	unsigned long flags;
+	int ret;
+
+	if (patch)
+		waddr = patch_map(addr, FIX_TEXT_POKE0, &flags);
+
+	ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE);
+
+	if (waddr != addr)
+		patch_unmap(FIX_TEXT_POKE0, &flags);
+
+	return ret;
+}
+
 int __kprobes aarch64_insn_write(void *addr, u32 insn)
 {
 	insn = cpu_to_le32(insn);
-	return probe_kernel_write(addr, &insn, AARCH64_INSN_SIZE);
+	return __aarch64_insn_write(addr, insn, true);
+}
+
+int __kprobes aarch64_insn_write_early(void *addr, u32 insn)
+{
+	insn = cpu_to_le32(insn);
+	return __aarch64_insn_write(addr, insn, false);
+
 }
 
 static bool __kprobes __aarch64_insn_hotpatch_safe(u32 insn)
@@ -124,7 +181,7 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
 	       __aarch64_insn_hotpatch_safe(new_insn);
 }
 
-int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
+int __kprobes __aarch64_insn_patch_text_nosync(void *addr, u32 insn, bool early)
 {
 	u32 *tp = addr;
 	int ret;
@@ -133,7 +190,11 @@ int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
 	if ((uintptr_t)tp & 0x3)
 		return -EINVAL;
 
-	ret = aarch64_insn_write(tp, insn);
+	if (early)
+		ret = aarch64_insn_write_early(tp, insn);
+	else
+		ret = aarch64_insn_write(tp, insn);
+
 	if (ret == 0)
 		flush_icache_range((uintptr_t)tp,
 				   (uintptr_t)tp + AARCH64_INSN_SIZE);
@@ -141,6 +202,11 @@
 	return ret;
 }
 
+int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
+{
+	return __aarch64_insn_patch_text_nosync(addr, insn, false);
+}
+
 struct aarch64_insn_patch {
 	void **text_addrs;
 	u32 *new_insns;
diff --git a/arch/arm64/kernel/jump_label.c b/arch/arm64/kernel/jump_label.c
index 263a166..9ac30bb 100644
--- a/arch/arm64/kernel/jump_label.c
+++ b/arch/arm64/kernel/jump_label.c
@@ -38,7 +38,7 @@ static void __arch_jump_label_transform(struct jump_entry *entry,
 	}
 
 	if (is_static)
-		aarch64_insn_patch_text_nosync(addr, insn);
+		__aarch64_insn_patch_text_nosync(addr, insn, true);
 	else
 		aarch64_insn_patch_text(&addr, &insn, 1);
 }
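For reference, a hypothetical caller (not part of the patch) showing how the
split API is intended to be used. Before the kernel text is marked read-only,
callers such as jump_label's init-time pass take the early path and write
directly; afterwards, writes go through the fixmap alias:

/* Hypothetical caller: overwrite the instruction at 'addr' with a NOP. */
static int patch_nop(void *addr, bool early)
{
	u32 nop = aarch64_insn_gen_nop();

	/*
	 * early == true:  direct write (text not yet read-only);
	 * early == false: write via the FIX_TEXT_POKE0 alias, under
	 *                 patch_lock.
	 */
	return __aarch64_insn_patch_text_nosync(addr, nop, early);
}

Either way the patched address is I-cache flushed on success, so the caller
does not need to call flush_icache_range() itself.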