From patchwork Wed Aug 31 20:52:22 2016
X-Patchwork-Submitter: David Long
X-Patchwork-Id: 9307843
From: David Long
To: Masami Hiramatsu, Ananth N Mavinakayanahalli, Anil S Keshavamurthy,
	"David S. Miller", Will Deacon, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, catalin.marinas@arm.com,
	Sandeepa Prabhu, William Cohen, Pratyush Anand
Cc: Mark Brown
Subject: [PATCH] arm64: Improve kprobes test for atomic sequence
Date: Wed, 31 Aug 2016 16:52:22 -0400
Message-Id: <1472676742-2250-1-git-send-email-dave.long@linaro.org>
List-Id: linux-arm-kernel.lists.infradead.org

From: "David A. Long"

Kprobes searches backwards over a finite number of instructions to
determine whether there is an attempt to probe a load/store exclusive
sequence. It stops when it hits the maximum number of instructions or a
load or store exclusive. However, this means the search can run past the
beginning of the function and start examining literal constants. This
has been shown to cause a false positive that blocks insertion of the
probe.

To fix this, add a test for the typical "stp x29, x30, [sp, #n]!"
instruction that begins a function, and end the search when it is hit.
This also improves efficiency by not testing code that is not part of
the function.

There is some possibility that a function will not begin with this
instruction, in which case the fixed code behaves no worse than before.
It is also possible that the stp instruction appears later in the body
of the function, which could theoretically allow probing of an atomic
sequence. The likelihood of this seems low, and this would not be the
only aspect of kprobes where the user needs to be careful to avoid
problems.
Signed-off-by: David A. Long
---
 arch/arm64/kernel/probes/decode-insn.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
index 37e47a9..248e820 100644
--- a/arch/arm64/kernel/probes/decode-insn.c
+++ b/arch/arm64/kernel/probes/decode-insn.c
@@ -122,16 +122,28 @@ arm_probe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi)
 static bool __kprobes
 is_probed_address_atomic(kprobe_opcode_t *scan_start, kprobe_opcode_t *scan_end)
 {
+	const u32 stp_x29_x30_sp_pre = 0xa9807bfd;
+	const u32 stp_ignore_index_mask = 0xffc07fff;
+	u32 instruction = le32_to_cpu(*scan_start);
+
 	while (scan_start > scan_end) {
 		/*
-		 * atomic region starts from exclusive load and ends with
-		 * exclusive store.
+		 * Atomic region starts from exclusive load and ends with
+		 * exclusive store. If we hit a "stp x29, x30, [sp, #n]!"
+		 * assume it is the beginning of the function and end the
+		 * search. This helps avoid false positives from literal
+		 * constants that look like a load-exclusive, in addition
+		 * to being more efficient.
 		 */
-		if (aarch64_insn_is_store_ex(le32_to_cpu(*scan_start)))
+		if ((instruction & stp_ignore_index_mask) == stp_x29_x30_sp_pre)
 			return false;
-		else if (aarch64_insn_is_load_ex(le32_to_cpu(*scan_start)))
-			return true;
+		scan_start--;
+		instruction = le32_to_cpu(*scan_start);
+		if (aarch64_insn_is_store_ex(instruction))
+			return false;
+		else if (aarch64_insn_is_load_ex(instruction))
+			return true;
 	}
 
 	return false;
@@ -142,7 +154,6 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 {
 	enum kprobe_insn decoded;
 	kprobe_opcode_t insn = le32_to_cpu(*addr);
-	kprobe_opcode_t *scan_start = addr - 1;
 	kprobe_opcode_t *scan_end = addr - MAX_ATOMIC_CONTEXT_SIZE;
 #if defined(CONFIG_MODULES) && defined(MODULES_VADDR)
 	struct module *mod;
@@ -167,7 +178,7 @@ arm_kprobe_decode_insn(kprobe_opcode_t *addr, struct arch_specific_insn *asi)
 	decoded = arm_probe_decode_insn(insn, asi);
 
 	if (decoded == INSN_REJECTED ||
-	    is_probed_address_atomic(scan_start, scan_end))
+	    is_probed_address_atomic(addr, scan_end))
 		return INSN_REJECTED;
 
 	return decoded;