From patchwork Fri Jun 7 15:40:10 2019
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 10982305
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Nadav Amit, Rick Edgecombe,
    "Peter Zijlstra (Intel)", akpm@linux-foundation.org,
    ard.biesheuvel@linaro.org, deneen.t.dock@intel.com,
    kernel-hardening@lists.openwall.com, kristen@linux.intel.com,
    linux_dti@icloud.com, will.deacon@arm.com, Andy Lutomirski,
    Borislav Petkov, Dave Hansen, "H. Peter Anvin", Linus Torvalds,
    Rik van Riel, Thomas Gleixner, Ingo Molnar, Sasha Levin
Subject: [PATCH 5.1 85/85] x86/kprobes: Set instruction page as executable
Date: Fri, 7 Jun 2019 17:40:10 +0200
Message-Id: <20190607153858.186248816@linuxfoundation.org>
In-Reply-To: <20190607153849.101321647@linuxfoundation.org>
References: <20190607153849.101321647@linuxfoundation.org>

[ Upstream commit 7298e24f904224fa79eb8fd7e0fbd78950ccf2db ]

Set the page as executable after allocation.

This patch is a preparatory patch for a following patch that makes
module allocated pages non-executable.

While at it, do some small cleanup of what appears to be unnecessary
masking.

Signed-off-by: Nadav Amit
Signed-off-by: Rick Edgecombe
Signed-off-by: Peter Zijlstra (Intel)
Cc: <akpm@linux-foundation.org>
Cc: <ard.biesheuvel@linaro.org>
Cc: <deneen.t.dock@intel.com>
Cc: <kernel-hardening@lists.openwall.com>
Cc: <kristen@linux.intel.com>
Cc: <linux_dti@icloud.com>
Cc: <will.deacon@arm.com>
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Rik van Riel
Cc: Thomas Gleixner
Link: https://lkml.kernel.org/r/20190426001143.4983-11-namit@vmware.com
Signed-off-by: Ingo Molnar
Signed-off-by: Sasha Levin
---
 arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index fed46ddb1eef..06058c44ab57 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;
 
 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (!page)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and only then make it executable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);
 
 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and only then make it writable to
+	 * prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
 
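
The ordering above generalizes beyond kprobes: whenever code is staged in
dynamically allocated memory, permissions should be flipped so that the
mapping is never writable and executable at the same time. A minimal
userspace sketch of the same discipline, using mprotect() in place of the
kernel's set_memory_*() helpers (alloc_exec_page/free_exec_page are
hypothetical names for illustration, not part of this patch):

	#include <stdlib.h>
	#include <sys/mman.h>

	/*
	 * Userspace analogue of alloc_insn_page(): the page is first made
	 * read-only and only then executable, so it is never W+X in between.
	 */
	void *alloc_exec_page(size_t page_size)
	{
		void *page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (page == MAP_FAILED)
			return NULL;

		/* Drop write first, then add execute: no W+X window. */
		if (mprotect(page, page_size, PROT_READ) ||
		    mprotect(page, page_size, PROT_READ | PROT_EXEC)) {
			munmap(page, page_size);
			return NULL;
		}

		return page;
	}

	/*
	 * Userspace analogue of free_insn_page(): drop execute before
	 * restoring write, again avoiding a W+X window.
	 */
	void free_exec_page(void *page, size_t page_size)
	{
		mprotect(page, page_size, PROT_READ);
		mprotect(page, page_size, PROT_READ | PROT_WRITE);
		munmap(page, page_size);
	}

As in alloc_insn_page(), the writable mapping exists only before any
execute permission is granted, and the free path reverses the two steps
in the opposite order.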
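
On the cleanup side: the "unnecessary masking" the changelog mentions is
the "& PAGE_MASK" dropped on the -/+ lines above. module_alloc()
allocates through vmalloc and therefore returns page-aligned memory, so
"(unsigned long)page & PAGE_MASK" was already equal to
"(unsigned long)page". A hypothetical assertion (not in the patch) that
would state this assumption explicitly inside alloc_insn_page():

	/*
	 * Illustrative only: module_alloc() returns page-aligned memory,
	 * so no low bits are set and PAGE_MASK cannot change the address.
	 */
	BUG_ON((unsigned long)page & ~PAGE_MASK);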