From patchwork Fri Sep 19 12:56:57 2014
X-Patchwork-Id: 4937991
From: Daniel Borkmann <dborkman@redhat.com>
To: davem@davemloft.net
Cc: Russell King, netdev@vger.kernel.org, Will Deacon, Mircea Gherzan,
    Catalin Marinas, linux-arm-kernel@lists.infradead.org,
    Alexei Starovoitov
Subject: [PATCH net-next v2] net: bpf: arm: make hole-faulting more robust
Date: Fri, 19 Sep 2014 14:56:57 +0200
Message-Id: <1411131417-23667-1-git-send-email-dborkman@redhat.com>

Will Deacon pointed out that the currently used opcode for filling holes,
that is, 0xe7ffffff, does not seem robust enough ...

  $ echo 0xffffffe7 | xxd -r > test.bin
  $ arm-linux-gnueabihf-objdump -m arm -D -b binary test.bin
  ...
  0:   e7ffffff   udf     #65535   ; 0xffff

... while for Thumb, it ends up as ...

  0:   ffff e7ff  vqshl.u64       q15, <illegal reg q15.5>, #63

... which is a bit fragile.
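The difference comes from how the two states fetch code: ARM decodes the
word as a single 32-bit opcode, while Thumb decodes a stream of little-endian
16-bit halfwords, so the very same bytes are seen as 0xffff followed by
0xe7ff. A minimal user-space sketch of that split (illustration only, not
part of the patch; it assumes a little-endian host, matching the images
above):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          uint32_t fill = 0xe7ffffff;     /* old hole-filling opcode */
          uint16_t hw[2];

          /* Thumb fetches two consecutive little-endian halfwords. */
          memcpy(hw, &fill, sizeof(hw));

          printf("ARM sees:   %08x\n", fill);
          printf("Thumb sees: %04x %04x\n", hw[0], hw[1]);
          return 0;
  }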
The ARM specification defines some *permanently* guaranteed undefined
instruction (UDF) space, for example for ARM in ARMv7-AR, section A5.4,
and for Thumb in ARMv7-M, section A5.2.6. Similarly, ptrace, kprobes,
kgdb, BUG() and uprobes make use of such instructions to trap. Given the
mentioned sections of the specification, we end up with the following
encoding space (where 'x' denotes 'don't care'):

  ARM:    xxxx 0111 1111 xxxx xxxx xxxx 1111 xxxx
  Thumb:  1101 1110 xxxx xxxx

We should therefore use a more robust opcode that fits both. Russell King
suggested that we can even reuse a single 32-bit word, that is, 0xe7fddef1,
which will fault if executed in ARM *or* Thumb mode, as done in
f928d4f2a86f ("ARM: poison the vectors page"). That still meets our
requirements:

  $ echo 0xf1defde7 | xxd -r > test.bin
  $ arm-unknown-linux-gnueabi-objdump -m arm -D -b binary test.bin
  ...
  0:   e7fddef1   udf     #56801   ; 0xdde1

  $ echo 0xf1defde7f1defde7f1defde7 | xxd -r > test.bin
  $ arm-unknown-linux-gnueabi-objdump -marm -Mforce-thumb -D -b binary test.bin
  ...
  0:   def1       udf     #241     ; 0xf1
  2:   e7fd       b.n     0x0
  4:   def1       udf     #241     ; 0xf1
  6:   e7fd       b.n     0x4
  8:   def1       udf     #241     ; 0xf1
  a:   e7fd       b.n     0x8

So on ARM, 0xe7fddef1 conforms to the above UDF pattern, and its low 16
bits likewise correspond to a UDF in the Thumb case. The 0xe7fd part is an
unconditional branch back to the UDF instruction.

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mircea Gherzan
Cc: Alexei Starovoitov
---
 v1->v2:
  - Use single word version instead of separately handling ARM and Thumb

 arch/arm/net/bpf_jit_32.c |  6 +++---
 arch/arm/net/bpf_jit_32.h | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6b45f64..e1268f9 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <asm/opcodes.h>
 #include
 #include

@@ -175,11 +176,10 @@ static inline bool is_load_to_a(u16 inst)
 
 static void jit_fill_hole(void *area, unsigned int size)
 {
-       /* Insert illegal UND instructions. */
-       u32 *ptr, fill_ins = 0xe7ffffff;
+       u32 *ptr;
        /* We are guaranteed to have aligned memory. */
        for (ptr = area; size >= sizeof(u32); size -= sizeof(u32))
-               *ptr++ = fill_ins;
+               *ptr++ = __opcode_to_mem_arm(ARM_INST_UDF);
 }
 
 static void build_prologue(struct jit_ctx *ctx)
diff --git a/arch/arm/net/bpf_jit_32.h b/arch/arm/net/bpf_jit_32.h
index afb8462..b2d7d92 100644
--- a/arch/arm/net/bpf_jit_32.h
+++ b/arch/arm/net/bpf_jit_32.h
@@ -114,6 +114,20 @@
 
 #define ARM_INST_UMULL          0x00800090
 
+/*
+ * Use a suitable undefined instruction to use for ARM/Thumb2 faulting.
+ * We need to be careful not to conflict with those used by other modules
+ * (BUG, kprobes, etc) and the register_undef_hook() system.
+ *
+ * The ARM architecture reference manual guarantees that the following
+ * instruction space will produce an undefined instruction exception on
+ * all CPUs:
+ *
+ * ARM:   xxxx 0111 1111 xxxx xxxx xxxx 1111 xxxx   ARMv7-AR, section A5.4
+ * Thumb: 1101 1110 xxxx xxxx                       ARMv7-M, section A5.2.6
+ */
+#define ARM_INST_UDF            0xe7fddef1
+
 /* register */
 #define _AL3_R(op, rd, rn, rm)  ((op ## _R) | (rd) << 12 | (rn) << 16 | (rm))
 /* immediate */
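Not part of the patch, but the chosen constant can be sanity-checked in
user space against the encodings quoted in the commit message: mask out
the 'x' bits and compare. The macro names, helpers and the little-endian
halfword split below are my own illustration, derived from the ARMv7-AR
A5.4 and ARMv7-M A5.2.6 patterns above:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* ARM:   xxxx 0111 1111 xxxx xxxx xxxx 1111 xxxx */
  #define ARM_UDF_MASK    0x0ff000f0u
  #define ARM_UDF_VAL     0x07f000f0u
  /* Thumb: 1101 1110 xxxx xxxx */
  #define THUMB_UDF_MASK  0xff00u
  #define THUMB_UDF_VAL   0xde00u

  static bool is_arm_udf(uint32_t insn)
  {
          return (insn & ARM_UDF_MASK) == ARM_UDF_VAL;
  }

  static bool is_thumb_udf(uint16_t insn)
  {
          return (insn & THUMB_UDF_MASK) == THUMB_UDF_VAL;
  }

  static void check(const char *name, uint32_t word)
  {
          /* On a little-endian image, Thumb fetches the low halfword first. */
          uint16_t lo = word & 0xffff, hi = word >> 16;

          printf("%s 0x%08x: ARM UDF=%d, Thumb halfwords UDF=%d,%d\n",
                 name, word, is_arm_udf(word), is_thumb_udf(lo), is_thumb_udf(hi));
  }

  int main(void)
  {
          check("old fill", 0xe7ffffff); /* ARM UDF, but neither Thumb halfword is */
          check("new fill", 0xe7fddef1); /* ARM UDF, and the first Thumb halfword is */
          return 0;
  }

The old fill word passes only the ARM check, while 0xe7fddef1 passes the
ARM check and its first Thumb halfword (0xdef1) passes the Thumb check,
with the remaining 0xe7fd decoding as the branch shown in the Thumb
disassembly above.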