[kvm-unit-tests] x86: always inline functions called after set_exception_return

Message ID 1449520601-31507-1-git-send-email-dmatlack@google.com (mailing list archive)
State New, archived
Headers show

Commit Message

David Matlack Dec. 7, 2015, 8:36 p.m. UTC
set_exception_return forces exception handlers to return to a specific
address instead of returning to the instruction address pushed by the
CPU at the time of the exception. The unit tests apic.c and vmx.c use
this functionality to recover from expected exceptions.

When using set_exception_return we have to be careful not to modify the
stack (for example, by making a function call), because triggering the
exception will likely jump past the instructions that undo the stack
manipulation (such as a ret). To accomplish this, declare all functions
called after set_exception_return as __always_inline, so that the
compiler always inlines them.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 lib/libcflat.h      | 4 ++++
 lib/x86/processor.h | 2 +-
 x86/vmx.c           | 4 ++--
 3 files changed, 7 insertions(+), 3 deletions(-)

Comments

Paolo Bonzini Dec. 9, 2015, 3:02 p.m. UTC | #1
On 07/12/2015 21:36, David Matlack wrote:
> set_exception_return forces exception handlers to return to a specific
> address instead of returning to the instruction address pushed by the
> CPU at the time of the exception. The unit tests apic.c and vmx.c use
> this functionality to recover from expected exceptions.
> 
> When using set_exception_return we have to be careful not to modify the
> stack (such as by doing a function call) as triggering the exception will
> likely jump us past the instructions which undo the stack manipulation
> (such as a ret). To accomplish this, declare all functions called after
> set_exception_return as __always_inline, so that the compiler always
> inlines them.

set_exception_return is generally not a great idea IMHO---thanks for
looking at it.

A couple years ago we discussed adding setjmp/longjmp to libcflat
(http://www.spinics.net/lists/kvm/msg94159.html which is however missing
a 32-bit version).  Making the exceptions do a longjmp would be a much
safer option.

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
David Matlack Dec. 11, 2015, 6:05 p.m. UTC | #2
On Wed, Dec 9, 2015 at 7:02 AM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 07/12/2015 21:36, David Matlack wrote:
>> set_exception_return forces exception handlers to return to a specific
>> address instead of returning to the instruction address pushed by the
>> CPU at the time of the exception. The unit tests apic.c and vmx.c use
>> this functionality to recover from expected exceptions.
>>
>> When using set_exception_return we have to be careful not to modify the
>> stack (such as by doing a function call) as triggering the exception will
>> likely jump us past the instructions which undo the stack manipulation
>> (such as a ret). To accomplish this, declare all functions called after
>> set_exception_return as __always_inline, so that the compiler always
>> inlines them.
>
> set_exception_return is generally not a great idea IMHO---thanks for
> looking at it.

Yup. This is a band-aid just to fix the current implementation.

>
> A couple years ago we discussed adding setjmp/longjmp to libcflat
> (http://www.spinics.net/lists/kvm/msg94159.html which is however missing
> a 32-bit version).  Making the exceptions do a longjmp would be a much
> safer option.

Good idea! I might give this a try, but don't hold your breath :)

>
> Paolo
Patch

diff --git a/lib/libcflat.h b/lib/libcflat.h
index 9747ccd..9ffb5db 100644
--- a/lib/libcflat.h
+++ b/lib/libcflat.h
@@ -27,6 +27,10 @@ 
 
 #define __unused __attribute__((__unused__))
 
+#ifndef __always_inline
+# define __always_inline inline __attribute__((always_inline))
+#endif
+
 #define xstr(s) xxstr(s)
 #define xxstr(s) #s
 
diff --git a/lib/x86/processor.h b/lib/x86/processor.h
index 95cea1a..c4bc64f 100644
--- a/lib/x86/processor.h
+++ b/lib/x86/processor.h
@@ -149,7 +149,7 @@  static inline u64 rdmsr(u32 index)
     return a | ((u64)d << 32);
 }
 
-static inline void wrmsr(u32 index, u64 val)
+static __always_inline void wrmsr(u32 index, u64 val)
 {
     u32 a = val, d = val >> 32;
     asm volatile ("wrmsr" : : "a"(a), "d"(d), "c"(index) : "memory");
diff --git a/x86/vmx.c b/x86/vmx.c
index f05cd33..28cd349 100644
--- a/x86/vmx.c
+++ b/x86/vmx.c
@@ -117,7 +117,7 @@  static void __attribute__((__used__)) syscall_handler(u64 syscall_no)
 		current->syscall_handler(syscall_no);
 }
 
-static inline int vmx_on()
+static __always_inline int vmx_on()
 {
 	bool ret;
 	u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;
@@ -126,7 +126,7 @@  static inline int vmx_on()
 	return ret;
 }
 
-static inline int vmx_off()
+static __always_inline int vmx_off()
 {
 	bool ret;
 	u64 rflags = read_rflags() | X86_EFLAGS_CF | X86_EFLAGS_ZF;