From patchwork Wed Dec 30 18:56:41 2015
X-Patchwork-Id: 7944711
Message-Id: <18380d9d19d5165822d12532127de2fb7a8b8cc7.1451869360.git.tony.luck@intel.com>
From: Tony Luck
Date: Wed, 30 Dec 2015 10:56:41 -0800
Subject: [PATCH v6 2/4] x86: Cleanup and add a new exception class
To: Ingo Molnar
Cc: linux-nvdimm@ml01.01.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Borislav Petkov, Andy Lutomirski, Andrew Morton

Make per-class functions for exception table fixup. Add #defines and
general prettiness to make it clear how all the parts tie together.
Add a new class that fills %rax with the fault number of the exception.

Signed-off-by: Tony Luck
---
 arch/x86/include/asm/asm.h     | 24 ++++++++++-----
 arch/x86/include/asm/uaccess.h | 17 ++++++++---
 arch/x86/kernel/kprobes/core.c |  2 +-
 arch/x86/kernel/traps.c        |  6 ++--
 arch/x86/mm/extable.c          | 67 +++++++++++++++++++++++++++---------------
 arch/x86/mm/fault.c            |  2 +-
 6 files changed, 79 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index b64121ffb2da..1888278d0559 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -44,6 +44,7 @@
 
 /* Exception table entry */
 
+#define _EXTABLE_BIAS	0x20000000
 /*
  * An exception table entry is 64 bits. The first 32 bits are the offset
  * from that entry to the potentially faulting instruction. sortextable
@@ -54,26 +55,35 @@
  * address. All of these are generated by relocations, so we can only
  * rely on addition. We therefore emit:
  *
- * (target - here) + (class) + 0x20000000
+ * (target - here) + (class) + _EXTABLE_BIAS
  *
  * This has the property that the two high bits are the class and the
  * rest is easy to decode.
  */
 
-/* There are two bits of extable entry class, added to a signed offset. */
-#define _EXTABLE_CLASS_DEFAULT	0		/* standard uaccess fixup */
-#define _EXTABLE_CLASS_EX	0x80000000	/* uaccess + set uaccess_err */
+/*
+ * There are two bits of extable entry class giving four classes
+ */
+#define EXTABLE_CLASS_DEFAULT	0	/* standard uaccess fixup */
+#define EXTABLE_CLASS_FAULT	1	/* provide fault number as well as fixup */
+#define EXTABLE_CLASS_EX	2	/* uaccess + set uaccess_err */
+#define EXTABLE_CLASS_UNUSED	3	/* available for something else */
 
 /*
- * The biases are the class constants + 0x20000000, as signed integers.
+ * The biases are the class constants + _EXTABLE_BIAS, as signed integers.
  * This can't use ordinary arithmetic -- the assembler isn't that smart.
  */
-#define _EXTABLE_BIAS_DEFAULT	0x20000000
-#define _EXTABLE_BIAS_EX	0x20000000 - 0x80000000
+#define _EXTABLE_BIAS_DEFAULT	_EXTABLE_BIAS
+#define _EXTABLE_BIAS_FAULT	_EXTABLE_BIAS + 0x40000000
+#define _EXTABLE_BIAS_EX	_EXTABLE_BIAS - 0x80000000
+#define _EXTABLE_BIAS_UNUSED	_EXTABLE_BIAS - 0x40000000
 
 #define _ASM_EXTABLE(from,to)					\
 	_ASM_EXTABLE_CLASS(from, to, _EXTABLE_BIAS_DEFAULT)
 
+#define _ASM_EXTABLE_FAULT(from,to)				\
+	_ASM_EXTABLE_CLASS(from, to, _EXTABLE_BIAS_FAULT)
+
 #define _ASM_EXTABLE_EX(from,to)				\
 	_ASM_EXTABLE_CLASS(from, to, _EXTABLE_BIAS_EX)
 
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index a8df874f3e88..b023300cd6f0 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -93,9 +93,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
  * The exception table consists of pairs of addresses relative to the
  * exception table enty itself: the first is the address of an
  * instruction that is allowed to fault, and the second is the address
- * at which the program should continue. No registers are modified,
- * so it is entirely up to the continuation code to figure out what to
- * do.
+ * at which the program should continue. The exception table is linked
+ * soon after the fixup section, so we don't need a full 32-bit offset
+ * for the fixup. We steal the two upper bits so we can tag exception
+ * table entries with different classes. In the default class no registers
+ * are modified, so it is entirely up to the continuation code to figure
+ * out what to do.
  *
  * All the routines below use bits of fixup code that are out of line
  * with the main instruction path. This means when everything is well,
@@ -110,7 +113,13 @@ struct exception_table_entry {
 #define ARCH_HAS_SORT_EXTABLE
 #define ARCH_HAS_SEARCH_EXTABLE
 
-extern int fixup_exception(struct pt_regs *regs);
+static inline unsigned int
+ex_class(const struct exception_table_entry *x)
+{
+	return (u32)x->fixup >> 30;
+}
+
+extern int fixup_exception(struct pt_regs *regs, int trapnr);
 extern int early_fixup_exception(unsigned long *ip);
 
 /*
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 1deffe6cc873..0f05deeff5ce 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -988,7 +988,7 @@ int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 		 * In case the user-specified fault handler returned
 		 * zero, try to fix up.
 		 */
-		if (fixup_exception(regs))
+		if (fixup_exception(regs, trapnr))
 			return 1;
 
 		/*
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 346eec73f7db..df25081e5970 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -199,7 +199,7 @@ do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str,
 	}
 
 	if (!user_mode(regs)) {
-		if (!fixup_exception(regs)) {
+		if (!fixup_exception(regs, trapnr)) {
 			tsk->thread.error_code = error_code;
 			tsk->thread.trap_nr = trapnr;
 			die(str, regs, error_code);
@@ -453,7 +453,7 @@ do_general_protection(struct pt_regs *regs, long error_code)
 	tsk = current;
 
 	if (!user_mode(regs)) {
-		if (fixup_exception(regs))
+		if (fixup_exception(regs, X86_TRAP_GP))
 			return;
 
 		tsk->thread.error_code = error_code;
@@ -699,7 +699,7 @@ static void math_error(struct pt_regs *regs, int error_code, int trapnr)
 	conditional_sti(regs);
 
 	if (!user_mode(regs)) {
-		if (!fixup_exception(regs)) {
+		if (!fixup_exception(regs, X86_TRAP_DE)) {
 			task->thread.error_code = error_code;
 			task->thread.trap_nr = trapnr;
 			die(str, regs, error_code);
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 95e2ede71206..7a592ec193d5 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -8,24 +8,54 @@ ex_insn_addr(const struct exception_table_entry *x)
 {
 	return (unsigned long)&x->insn + x->insn;
 }
-static inline unsigned int
-ex_class(const struct exception_table_entry *x)
-{
-	return (unsigned int)x->fixup & 0xC0000000;
-}
-
 static inline unsigned long
 ex_fixup_addr(const struct exception_table_entry *x)
 {
-	long offset = (long)((u32)x->fixup & 0x3fffffff) - (long)0x20000000;
+	long offset = (long)((u32)x->fixup & 0x3fffffff) - (long)_EXTABLE_BIAS;
 	return (unsigned long)&x->fixup + offset;
 }
 
-int fixup_exception(struct pt_regs *regs)
+/* Fixup functions for each exception class */
+static int fix_class_default(const struct exception_table_entry *fixup,
+			     struct pt_regs *regs, int trapnr)
+{
+	regs->ip = ex_fixup_addr(fixup);
+	return 1;
+}
+static int fix_class_fault(const struct exception_table_entry *fixup,
+			   struct pt_regs *regs, int trapnr)
+{
+	regs->ip = ex_fixup_addr(fixup);
+	regs->ax = trapnr;
+	return 1;
+}
+static int fix_class_ex(const struct exception_table_entry *fixup,
+			struct pt_regs *regs, int trapnr)
+{
+	/* Special hack for uaccess_err */
+	current_thread_info()->uaccess_err = 1;
+	regs->ip = ex_fixup_addr(fixup);
+	return 1;
+}
+static int fix_class_unused(const struct exception_table_entry *fixup,
+			    struct pt_regs *regs, int trapnr)
+{
+	/* can't happen unless exception table was corrupted */
+	BUG_ON(1);
+	return 0;
+}
+
+static int (*allclasses[])(const struct exception_table_entry *,
+			   struct pt_regs *, int) = {
+	[EXTABLE_CLASS_DEFAULT] = fix_class_default,
+	[EXTABLE_CLASS_FAULT] = fix_class_fault,
+	[EXTABLE_CLASS_EX] = fix_class_ex,
+	[EXTABLE_CLASS_UNUSED] = fix_class_unused
+};
+
+int fixup_exception(struct pt_regs *regs, int trapnr)
 {
 	const struct exception_table_entry *fixup;
-	unsigned long new_ip;
-	unsigned int class;
 
 #ifdef CONFIG_PNPBIOS
 	if (unlikely(SEGMENT_IS_PNP_CODE(regs->cs))) {
@@ -42,17 +72,8 @@ int fixup_exception(struct pt_regs *regs)
 #endif
 
 	fixup = search_exception_tables(regs->ip);
-	if (fixup) {
-		class = ex_class(fixup);
-		new_ip = ex_fixup_addr(fixup);
-
-		if (class == _EXTABLE_CLASS_EX) {
-			/* Special hack for uaccess_err */
-			current_thread_info()->uaccess_err = 1;
-		}
-		regs->ip = new_ip;
-		return 1;
-	}
+	if (fixup)
+		return allclasses[ex_class(fixup)](fixup, regs, trapnr);
 
 	return 0;
 }
@@ -64,8 +85,8 @@ int __init early_fixup_exception(unsigned long *ip)
 
 	fixup = search_exception_tables(*ip);
 	if (fixup) {
-		if (ex_class(fixup) == _EXTABLE_CLASS_EX) {
-			/* uaccess handling not supported during early boot */
+		if (ex_class(fixup)) {
+			/* special handling not supported during early boot */
 			return 0;
 		}
 
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index eef44d9a3f77..495946c3f9dd 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -656,7 +656,7 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 	int sig;
 
 	/* Are we prepared to handle this kernel fault? */
-	if (fixup_exception(regs)) {
+	if (fixup_exception(regs, X86_TRAP_PF)) {
 		/*
 		 * Any interrupt that takes a fault gets the fixup. This makes
 		 * the below recursive fault logic only apply to a faults from
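
---

For illustration, here is one way the new FAULT class could be consumed from
C code. On a fault, fix_class_fault() resumes execution at the fixup address
with the trap number in %rax, so an instruction tagged with
_ASM_EXTABLE_FAULT() can learn which exception it took. The probe_read()
helper below is only a sketch, not part of this series, and it assumes
_ASM_EXTABLE_FAULT() can be used from inline asm the same way _ASM_EXTABLE()
is used today:

/*
 * Needs <asm/asm.h> for _ASM_EXTABLE_FAULT() and <asm/traps.h> for X86_TRAP_*.
 *
 * Read one quadword from a kernel address. Returns ~0UL if the read
 * succeeded, otherwise the trap number taken (e.g. X86_TRAP_PF).
 */
static unsigned long probe_read(const unsigned long *addr, unsigned long *val)
{
	unsigned long trap = ~0UL;	/* sentinel: no fault taken */

	asm volatile("1:	movq (%[addr]), %[out]\n"
		     "2:\n"
		     _ASM_EXTABLE_FAULT(1b, 2b)
		     : [out] "=r" (*val), "+a" (trap) /* fixup leaves trap number in %rax */
		     : [addr] "r" (addr));

	return trap;	/* *val is only meaningful when this is ~0UL */
}

A caller would check the return value before trusting *val, and could then
treat X86_TRAP_PF differently from other fault numbers.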