[v8,31/33] x86/fred: BUG() when ERETU with %rsp not equal to that when the ring 3 event was just delivered

Message ID 20230410081438.1750-32-xin3.li@intel.com (mailing list archive)
State New, archived
Series x86: enable FRED for x86-64

Commit Message

Li, Xin3 April 10, 2023, 8:14 a.m. UTC
A FRED stack frame generated by a ring 3 event should never be messed up, and
the first thing we must ensure is that, at the time an ERETU instruction is
executed, %rsp holds the same address it had when the user-level event was
just delivered.

However, we don't want to burden the normal ERETU code path, because it is on
the hottest code path; a good choice is to do this check only when ERETU
faults.

Suggested-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Tested-by: Shan Kang <shan.kang@intel.com>
Signed-off-by: Xin Li <xin3.li@intel.com>
---
 arch/x86/mm/extable.c | 8 ++++++++
 1 file changed, 8 insertions(+)
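
For illustration, the invariant asserted by the new BUG_ON() in the patch at
the bottom of this page is that the pt_regs frame the ERETU fixup derives from
the faulting %rsp is the very pointer recorded for the task when the ring 3
event was delivered. A minimal sketch of that relationship follows; the
recording helper and its call site are assumptions made here for illustration,
while the thread_info.user_pt_regs field and the BUG_ON() itself come from the
patch:

/*
 * Sketch only: the entry-side hook is an assumption; only the
 * thread_info.user_pt_regs field and the BUG_ON() below are from the patch.
 */

/* On delivery of a ring 3 event, remember the frame FRED pushed: */
static __always_inline void fred_record_user_frame(struct pt_regs *regs)
{
	current->thread_info.user_pt_regs = regs;	/* hypothetical recording site */
}

/* When ERETU later faults, the extable fixup must still see that frame: */
static void fred_check_user_frame(struct pt_regs *uregs)
{
	BUG_ON(uregs != current->thread_info.user_pt_regs);
}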

Comments

Thomas Gleixner June 5, 2023, 2:15 p.m. UTC | #1
On Mon, Apr 10 2023 at 01:14, Xin Li wrote:
> A FRED stack frame generated by a ring 3 event should never be messed up, and
> the first thing we must ensure is that, at the time an ERETU instruction is
> executed, %rsp holds the same address it had when the user-level event was
> just delivered.
>
> However, we don't want to burden the normal ERETU code path, because it is on
> the hottest code path; a good choice is to do this check only when ERETU
> faults.

Which might not catch bugs where the wrong frame does not make ERETU
fault.

We have CONFIG_DEBUG_ENTRY for catching this at the proper place.

Thanks,

        tglx
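
A CONFIG_DEBUG_ENTRY-style check on the non-faulting return path, as suggested
above, could look roughly like the sketch below; the helper name, the hook
point, and the use of WARN_ON_ONCE() are assumptions for illustration and not
code from this series:

/*
 * Debug-only consistency check, compiled out unless CONFIG_DEBUG_ENTRY is
 * enabled.  Where exactly this would be called on the return-to-user path
 * is an assumption here.
 */
static __always_inline void fred_debug_check_user_frame(struct pt_regs *regs)
{
	if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
		WARN_ON_ONCE(regs != current->thread_info.user_pt_regs);
}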
H. Peter Anvin June 5, 2023, 4:42 p.m. UTC | #2
On 6/5/23 07:15, Thomas Gleixner wrote:
> On Mon, Apr 10 2023 at 01:14, Xin Li wrote:
>> A FRED stack frame generated by a ring 3 event should never be messed up, and
>> the first thing we must ensure is that, at the time an ERETU instruction is
>> executed, %rsp holds the same address it had when the user-level event was
>> just delivered.
>>
>> However, we don't want to burden the normal ERETU code path, because it is on
>> the hottest code path; a good choice is to do this check only when ERETU
>> faults.
> 
> Which might not catch bugs where the wrong frame does not make ERETU
> fault.
> 
> We have CONFIG_DEBUG_ENTRY for catching this at the proper place.
> 

This is true, but this BUG() is a cheap test on a slow path, and thus 
can be included in production code.

	-hpa
Thomas Gleixner June 5, 2023, 5:16 p.m. UTC | #3
On Mon, Jun 05 2023 at 09:42, H. Peter Anvin wrote:
> On 6/5/23 07:15, Thomas Gleixner wrote:
>> On Mon, Apr 10 2023 at 01:14, Xin Li wrote:
>>> A FRED stack frame generated by a ring 3 event should never be messed up, and
>>> the first thing we must ensure is that, at the time an ERETU instruction is
>>> executed, %rsp holds the same address it had when the user-level event was
>>> just delivered.
>>>
>>> However, we don't want to burden the normal ERETU code path, because it is on
>>> the hottest code path; a good choice is to do this check only when ERETU
>>> faults.
>> 
>> Which might not catch bugs where the wrong frame does not make ERETU
>> fault.
>> 
>> We have CONFIG_DEBUG_ENTRY for catching this at the proper place.
>> 
>
> This is true, but this BUG() is a cheap test on a slow path, and thus 
> can be included in production code.

No objection.

Patch

diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 9d82193adf3c..be297d4b137b 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -204,6 +204,14 @@  static bool ex_handler_eretu(const struct exception_table_entry *fixup,
 	unsigned short ss = uregs->ss;
 	unsigned short cs = uregs->cs;
 
+	/*
+	 * A FRED stack frame generated by a ring 3 event should never be
+	 * messed up, and the first thing we must make sure is that at the
+	 * time an ERETU instruction is executed, %rsp must have the same
+	 * address as that when the user level event was just delivered.
+	 */
+	BUG_ON(uregs != current->thread_info.user_pt_regs);
+
 	/*
 	 * Move the NMI bit from the invalid stack frame, which caused ERETU
 	 * to fault, to the fault handler's stack frame, thus to unblock NMI