From patchwork Fri Sep 12 07:11:37 2014
From: Daniel Borkmann <dborkman@redhat.com>
To: will.deacon@arm.com
Cc: catalin.marinas@arm.com, linux-kernel@vger.kernel.org, dborkman@redhat.com,
    zlim.lnx@gmail.com, davem@davemloft.net, linux-arm-kernel@lists.infradead.org,
    ast@plumgrid.com
Subject: [PATCH arm64-next] net: bpf: arm64: address randomize and write protect JIT code
Date: Fri, 12 Sep 2014 09:11:37 +0200
Message-Id: <1410505897-20122-1-git-send-email-dborkman@redhat.com>

This is the ARM64 variant of 314beb9bcab ("x86: bpf_jit_comp: secure bpf
jit against spraying attacks"). Thanks to commit 11d91a770f1f ("arm64:
Add CONFIG_DEBUG_SET_MODULE_RONX support"), which added the necessary
infrastructure, we can now mark the pages of the eBPF-generated JIT image
read-only and randomize the start offset of the JIT code, so that it no
longer resides directly on a page boundary. Likewise, the holes around
the image are filled with illegal instructions.
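
For those not following net-next: the start-offset randomization and the
hole-filling themselves live in the generic bpf_jit_binary_alloc() helper
(commit 738cbe72adc5, also listed in the dependency section below), which
the JIT now calls instead of module_alloc(). Condensed, the helper works
roughly like this (a simplified sketch, not the verbatim net-next code):

	struct bpf_binary_header *
	bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
			     unsigned int alignment,
			     bpf_jit_fill_hole_t bpf_fill_ill_insns)
	{
		struct bpf_binary_header *hdr;
		unsigned int size, hole, start;

		/* Reserve room for the header plus extra bytes for the
		 * random leading hole, rounded up to page granularity.
		 */
		size = round_up(proglen + sizeof(*hdr) + 128, PAGE_SIZE);
		hdr = module_alloc(size);
		if (hdr == NULL)
			return NULL;

		/* Pre-fill the entire area with illegal instructions so
		 * that unused holes contain nothing useful to jump into.
		 */
		bpf_fill_ill_insns(hdr, size);

		hdr->pages = size / PAGE_SIZE;
		hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
			     PAGE_SIZE - sizeof(*hdr));

		/* Start the image at a random, suitably aligned offset so
		 * it no longer begins exactly on a page boundary.
		 */
		start = (prandom_u32() % hole) & ~(alignment - 1);
		*image_ptr = &hdr->image[start];

		return hdr;
	}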
This is basically the ARM64 variant of what we already have in ARM via
commit 55309dd3d4cd ("net: bpf: arm: address randomize and write protect
JIT code"). Moreover, this commit also presents a merge resolution due to
conflicts with commit 60a3b2253c41 ("net: bpf: make eBPF interpreter
images read-only"): we no longer use kfree() in bpf_jit_free() to release
the locked bpf_prog structure, but instead bpf_prog_unlock_free(), through
a different allocator.

JIT tested on aarch64 with the BPF test suite.

Reference: http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html

Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Reviewed-by: Zi Shen Lim <zlim.lnx@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>
---
README: Will, Catalin, Dave, this is more or less a heads-up: when the
net-next and arm64-next trees both get merged into Linus' tree, we will
run into a 'silent' merge conflict until someone actually runs the eBPF
JIT on ARM64 and notices (I presume) an oops when the JIT frees a
bpf_prog. I'd assume nobody actually _runs_ linux-next, but I'm not sure
about that. The reason is that the net-next tree carries some BPF-wide
(incl. JIT) changes regarding the allocator which are currently _not_
present here, while the eBPF JIT itself is currently _only_ available
here. Zi offered, as an alternative, to only have this one-liner
replacement and to rebase this one on top of it. Either is fine by me; I
think this one would be important to have as well, since we have already
migrated all other archs that support DEBUG_SET_MODULE_RONX to this
(ARM32, with Mircea's consent, pending in net-next). This patch is on top
of the module memory leak patch from yesterday, but requires some of the
dependencies from net-next below. If you want to look them up, off the
top of my head, the relevant commits from Dave's net-next tree are:

  60a3b2253c413cf601783b070507d7dd6620c954
  738cbe72adc5c8f2016c4c68aa5162631d4f27e1
  55309dd3d4cd7420376a3de0526d6ed24ff8fa76
  b954d83421d51d822c42e5ab7b65069b25ad3005

How do we handle this? Would I need to resend this patch when the time
comes, or would you ARM64 guys take care of it automagically? ;)

Thanks a lot!
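
One remark on the bpf_jit_free() hunk below, in case the masking looks
odd at first glance: bpf_jit_binary_alloc() hands out a page-aligned
allocation with struct bpf_binary_header at its very start, and the image
pointer lands somewhere inside the first page. Rounding the image address
down with PAGE_MASK therefore always recovers the header, whatever start
offset was picked. As a sketch (example_jit_image_free() is just an
illustrative name for what the hunk does):

	static void example_jit_image_free(struct bpf_prog *prog)
	{
		/* Mask the randomized image address down to the page
		 * boundary; the binary header lives right there.
		 */
		unsigned long addr = (unsigned long)prog->bpf_func & PAGE_MASK;
		struct bpf_binary_header *header = (void *)addr;

		/* The pages were set read-only after JITing, so make
		 * them writable again before handing them back.
		 */
		set_memory_rw(addr, header->pages);
		bpf_jit_binary_free(header);
	}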
 arch/arm64/net/bpf_jit_comp.c | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 7ae3354..4b71779 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -19,7 +19,6 @@
 #define pr_fmt(fmt) "bpf_jit: " fmt
 
 #include <linux/filter.h>
-#include <linux/moduleloader.h>
 #include <linux/printk.h>
 #include <linux/skbuff.h>
 #include <linux/slab.h>
@@ -119,6 +118,15 @@ static inline int bpf2a64_offset(int bpf_to, int bpf_from,
 	return to - from;
 }
 
+static void jit_fill_hole(void *area, unsigned int size)
+{
+	/* Insert illegal UND instructions. */
+	u32 *ptr, fill_ins = 0xe7ffffff;
+	/* We are guaranteed to have aligned memory. */
+	for (ptr = area; size >= sizeof(u32); size -= sizeof(u32))
+		*ptr++ = fill_ins;
+}
+
 static inline int epilogue_offset(const struct jit_ctx *ctx)
 {
 	int to = ctx->offset[ctx->prog->len - 1];
@@ -613,8 +621,10 @@ void bpf_jit_compile(struct bpf_prog *prog)
 
 void bpf_int_jit_compile(struct bpf_prog *prog)
 {
+	struct bpf_binary_header *header;
 	struct jit_ctx ctx;
 	int image_size;
+	u8 *image_ptr;
 
 	if (!bpf_jit_enable)
 		return;
@@ -636,23 +646,25 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
 		goto out;
 
 	build_prologue(&ctx);
-
 	build_epilogue(&ctx);
 
 	/* Now we know the actual image size. */
 	image_size = sizeof(u32) * ctx.idx;
-	ctx.image = module_alloc(image_size);
-	if (unlikely(ctx.image == NULL))
+	header = bpf_jit_binary_alloc(image_size, &image_ptr,
+				      sizeof(u32), jit_fill_hole);
+	if (header == NULL)
 		goto out;
 
 	/* 2. Now, the actual pass. */
 
+	ctx.image = (u32 *)image_ptr;
 	ctx.idx = 0;
+
 	build_prologue(&ctx);
 	ctx.body_offset = ctx.idx;
 	if (build_body(&ctx)) {
-		module_free(NULL, ctx.image);
+		bpf_jit_binary_free(header);
 		goto out;
 	}
 
	build_epilogue(&ctx);
@@ -663,17 +675,25 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
 		bpf_jit_dump(prog->len, image_size, 2, ctx.image);
 
 	bpf_flush_icache(ctx.image, ctx.image + ctx.idx);
+
+	set_memory_ro((unsigned long)header, header->pages);
 	prog->bpf_func = (void *)ctx.image;
 	prog->jited = 1;
-
 out:
 	kfree(ctx.offset);
 }
 
 void bpf_jit_free(struct bpf_prog *prog)
 {
-	if (prog->jited)
-		module_free(NULL, prog->bpf_func);
+	unsigned long addr = (unsigned long)prog->bpf_func & PAGE_MASK;
+	struct bpf_binary_header *header = (void *)addr;
+
+	if (!prog->jited)
+		goto free_filter;
+
+	set_memory_rw(addr, header->pages);
+	bpf_jit_binary_free(header);
 
-	kfree(prog);
+free_filter:
+	bpf_prog_unlock_free(prog);
 }
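
P.S.: apart from the BPF test suite (the test_bpf module), a quick way to
exercise the JIT from user space is to attach a trivial classic BPF filter
with bpf_jit_enable set to 1 (or 2 to also get a hex dump of the generated
image); classic filters are migrated to eBPF internally and end up in
bpf_int_jit_compile(). A minimal, illustrative example:

	#include <linux/filter.h>
	#include <sys/socket.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		struct sock_filter insns[] = {
			BPF_STMT(BPF_LD  | BPF_W | BPF_LEN, 0),	/* A = skb->len  */
			BPF_STMT(BPF_RET | BPF_A, 0),		/* accept packet */
		};
		struct sock_fprog fprog = {
			.len    = sizeof(insns) / sizeof(insns[0]),
			.filter = insns,
		};
		int fd = socket(AF_INET, SOCK_DGRAM, 0);

		/* Attaching the filter triggers the JIT when enabled. */
		if (fd < 0 || setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
					 &fprog, sizeof(fprog)) < 0) {
			perror("filter attach");
			return 1;
		}
		close(fd);
		return 0;
	}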