Message ID: 20181213213135.12913-6-sean.j.christopherson@intel.com (mailing list archive)
State: New, archived
Series: x86: Add vDSO exception fixup for SGX
On 2018-12-14 03:01, Sean Christopherson wrote:
> +struct sgx_enclave_regs {
> +	__u64 rdi;
> +	__u64 rsi;
> +	__u64 rdx;
> +	__u64 r8;
> +	__u64 r9;
> +	__u64 r10;
> +};

This is fine, but why not just cover all 13 normal registers that are
not used by SGX?

Minor comments below.

> +/**
> + * struct sgx_enclave_exception - structure to pass register in/out of enclave

Typo in struct name.

> + *				   by way of __vdso_sgx_enter_enclave
> + *
> + * @rdi:	value of %rdi, loaded/saved on enter/exit
> + * @rsi:	value of %rsi, loaded/saved on enter/exit
> + * @rdx:	value of %rdx, loaded/saved on enter/exit
> + * @r8:	value of %r8, loaded/saved on enter/exit
> + * @r9:	value of %r9, loaded/saved on enter/exit
> + * @r10:	value of %r10, loaded/saved on enter/exit
> + */

> +	/* load leaf, TCS and AEP for ENCLU */
> +	mov	%edi, %eax
> +	mov	%rsi, %rbx
> +	lea	1f(%rip), %rcx

If you move this below the jump, you can use %rcx for @regs

> +
> +	/* optionally copy @regs to registers */
> +	test	%rdx, %rdx
> +	je	1f
> +
> +	mov	%rdx, %r11
> +	mov	RDI(%r11), %rdi
> +	mov	RSI(%r11), %rsi
> +	mov	RDX(%r11), %rdx
> +	mov	R8(%r11), %r8
> +	mov	R9(%r11), %r9
> +	mov	R10(%r11), %r10
> +
> +1:	enclu
> +
> +	/* ret = 0 */
> +	xor	%eax, %eax
> +
> +	/* optionally copy registers to @regs */
> +	mov	-0x8(%rsp), %r11
> +	test	%r11, %r11
> +	je	2f
> +
> +	mov	%rdi, RDI(%r11)
> +	mov	%rsi, RSI(%r11)
> +	mov	%rdx, RDX(%r11)
> +	mov	%r8, R8(%r11)
> +	mov	%r9, R9(%r11)
> +	mov	%r10, R10(%r11)

Here you can use %rax for @regs and clear it at the end.

> +2:	pop	%rbx
> +	pop	%r12
> +	pop	%r13
> +	pop	%r14
> +	pop	%r15
> +	pop	%rbp
> +	ret

x86-64 ABI requires that you call CLD here (enclave may set it).

--
Jethro Beekman | Fortanix
On Fri, Dec 14, 2018 at 09:55:49AM +0000, Jethro Beekman wrote:
> On 2018-12-14 03:01, Sean Christopherson wrote:
> >+struct sgx_enclave_regs {
> >+	__u64 rdi;
> >+	__u64 rsi;
> >+	__u64 rdx;
> >+	__u64 r8;
> >+	__u64 r9;
> >+	__u64 r10;
> >+};
>
> This is fine, but why not just cover all 13 normal registers that are not
> used by SGX?

Trying to balance flexibility/usability with unnecessary overhead.  And
I think this ABI meshes well with the idea of requiring the enclave to
be compliant with the x86-64 ABI (see below).

> Minor comments below.
>
> >+/**
> >+ * struct sgx_enclave_exception - structure to pass register in/out of enclave
>
> Typo in struct name.

Doh, thanks.

> >+ *				    by way of __vdso_sgx_enter_enclave
> >+ *
> >+ * @rdi:	value of %rdi, loaded/saved on enter/exit
> >+ * @rsi:	value of %rsi, loaded/saved on enter/exit
> >+ * @rdx:	value of %rdx, loaded/saved on enter/exit
> >+ * @r8:	value of %r8, loaded/saved on enter/exit
> >+ * @r9:	value of %r9, loaded/saved on enter/exit
> >+ * @r10:	value of %r10, loaded/saved on enter/exit
> >+ */
>
> >+	/* load leaf, TCS and AEP for ENCLU */
> >+	mov	%edi, %eax
> >+	mov	%rsi, %rbx
> >+	lea	1f(%rip), %rcx
>
> If you move this below the jump, you can use %rcx for @regs

EDI needs to be moved to EAX before it is potentially overwritten below
and I wanted the loading of the three registers used by hardware grouped
together.  And IMO using the same register for accessing the structs in
all flows improves readability.

> >+
> >+	/* optionally copy @regs to registers */
> >+	test	%rdx, %rdx
> >+	je	1f
> >+
> >+	mov	%rdx, %r11
> >+	mov	RDI(%r11), %rdi
> >+	mov	RSI(%r11), %rsi
> >+	mov	RDX(%r11), %rdx
> >+	mov	R8(%r11), %r8
> >+	mov	R9(%r11), %r9
> >+	mov	R10(%r11), %r10
> >+
> >+1:	enclu
> >+
> >+	/* ret = 0 */
> >+	xor	%eax, %eax
> >+
> >+	/* optionally copy registers to @regs */
> >+	mov	-0x8(%rsp), %r11
> >+	test	%r11, %r11
> >+	je	2f
> >+
> >+	mov	%rdi, RDI(%r11)
> >+	mov	%rsi, RSI(%r11)
> >+	mov	%rdx, RDX(%r11)
> >+	mov	%r8, R8(%r11)
> >+	mov	%r9, R9(%r11)
> >+	mov	%r10, R10(%r11)
>
> Here you can use %rax for @regs and clear it at the end.

Clearing RAX early avoids the use of another label, though obviously
that's not exactly critical.  The comment about using the same register
for accessing structs applies here as well.

> >+2:	pop	%rbx
> >+	pop	%r12
> >+	pop	%r13
> >+	pop	%r14
> >+	pop	%r15
> >+	pop	%rbp
> >+	ret
>
> x86-64 ABI requires that you call CLD here (enclave may set it).

Ugh.  Technically MXCSR and the x87 CW also need to be preserved.

What if rather than treating the enclave as hostile we require it to be
compliant with the x86-64 ABI like any other function?  That would solve
the EFLAGS.DF, MXCSR and x87 issues without adding unnecessary overhead.
And we wouldn't have to save/restore R12-R15.  It'd mean we couldn't use
the stack's red zone to hold @regs and @e, but that's poor form anyways.

>
> --
> Jethro Beekman | Fortanix
>
On Fri, Dec 14, 2018 at 07:12:04AM -0800, Sean Christopherson wrote:
> On Fri, Dec 14, 2018 at 09:55:49AM +0000, Jethro Beekman wrote:
> > On 2018-12-14 03:01, Sean Christopherson wrote:
> > >+2:	pop	%rbx
> > >+	pop	%r12
> > >+	pop	%r13
> > >+	pop	%r14
> > >+	pop	%r15
> > >+	pop	%rbp
> > >+	ret
> >
> > x86-64 ABI requires that you call CLD here (enclave may set it).
>
> Ugh.  Technically MXCSR and the x87 CW also need to be preserved.
>
> What if rather than treating the enclave as hostile we require it to be
> compliant with the x86-64 ABI like any other function?  That would solve
> the EFLAGS.DF, MXCSR and x87 issues without adding unnecessary overhead.
> And we wouldn't have to save/restore R12-R15.  It'd mean we couldn't use
> the stack's red zone to hold @regs and @e, but that's poor form anyways.

Grr, except the processor crushes R12-R15, FCW and MXCSR on asynchronous
exits.  But not EFLAGS.DF, that's real helpful.
On Fri, Dec 14, 2018 at 07:38:30AM -0800, Sean Christopherson wrote:
> On Fri, Dec 14, 2018 at 07:12:04AM -0800, Sean Christopherson wrote:
> > On Fri, Dec 14, 2018 at 09:55:49AM +0000, Jethro Beekman wrote:
> > > On 2018-12-14 03:01, Sean Christopherson wrote:
> > > >+2:	pop	%rbx
> > > >+	pop	%r12
> > > >+	pop	%r13
> > > >+	pop	%r14
> > > >+	pop	%r15
> > > >+	pop	%rbp
> > > >+	ret
> > >
> > > x86-64 ABI requires that you call CLD here (enclave may set it).
> >
> > Ugh.  Technically MXCSR and the x87 CW also need to be preserved.
> >
> > What if rather than treating the enclave as hostile we require it to be
> > compliant with the x86-64 ABI like any other function?  That would solve
> > the EFLAGS.DF, MXCSR and x87 issues without adding unnecessary overhead.
> > And we wouldn't have to save/restore R12-R15.  It'd mean we couldn't use
> > the stack's red zone to hold @regs and @e, but that's poor form anyways.
>
> Grr, except the processor crushes R12-R15, FCW and MXCSR on asynchronous
> exits.  But not EFLAGS.DF, that's real helpful.

I can think of three options that are at least somewhat reasonable:

1) Save/restore MXCSR and FCW

   + 100% compliant with the x86-64 ABI
   + Callable from any code
   + Minimal documentation required
   - Restoring MXCSR/FCW is likely unnecessary 99% of the time
   - Slow

2) Clear EFLAGS.DF but not save/restore MXCSR and FCW

   + Mostly compliant with the x86-64 ABI
   + Callable from any code that doesn't use SIMD registers
   - Need to document deviations from x86-64 ABI

3) Require the caller to save/restore everything.

   + Fast
   + Userspace can pass all GPRs to the enclave (minus EAX, RBX and RCX)
   - Completely custom ABI
   - For all intents and purposes must be called from an assembly wrapper

Option (3) actually isn't all that awful.  RCX can be used to pass an
optional pointer to a 'struct sgx_enclave_exception' and we can still
return standard error codes, e.g. -EFAULT.

E.g.:

/**
 * __vdso_sgx_enter_enclave() - Enter an SGX enclave
 *
 * %eax:	ENCLU leaf, must be EENTER or ERESUME
 * %rbx:	TCS, must be non-NULL
 * %rcx:	Optional pointer to 'struct sgx_enclave_exception'
 *
 * Return:
 *   0 on a clean entry/exit to/from the enclave
 *   -EINVAL if ENCLU leaf is not allowed or if TCS is NULL
 *   -EFAULT if ENCLU or the enclave faults
 */
ENTRY(__vdso_sgx_enter_enclave)
	/* EENTER <= leaf <= ERESUME */
	cmp	$0x2, %eax
	jb	bad_input

	cmp	$0x3, %eax
	ja	bad_input

	/* TCS must be non-NULL */
	test	%rbx, %rbx
	je	bad_input

	/* save @exception pointer */
	push	%rcx

	/* load leaf, TCS and AEP for ENCLU */
	lea	1f(%rip), %rcx
1:	enclu

	add	$0x8, %rsp
	xor	%eax, %eax
	ret

bad_input:
	mov	$(-EINVAL), %rax
	ret

.pushsection .fixup, "ax"
2:	pop	%rcx
	test	%rcx, %rcx
	je	3f

	mov	%eax, EX_LEAF(%rcx)
	mov	%di, EX_TRAPNR(%rcx)
	mov	%si, EX_ERROR_CODE(%rcx)
	mov	%rdx, EX_ADDRESS(%rcx)
3:	mov	$(-EFAULT), %rax
	ret
.popsection

_ASM_VDSO_EXTABLE_HANDLE(1b, 2b)

ENDPROC(__vdso_sgx_enter_enclave)
On Fri, Dec 14, 2018 at 09:03:11AM -0800, Sean Christopherson wrote:
> On Fri, Dec 14, 2018 at 07:38:30AM -0800, Sean Christopherson wrote:
> > On Fri, Dec 14, 2018 at 07:12:04AM -0800, Sean Christopherson wrote:
> > > On Fri, Dec 14, 2018 at 09:55:49AM +0000, Jethro Beekman wrote:
> > > > On 2018-12-14 03:01, Sean Christopherson wrote:
> > > > >+2:	pop	%rbx
> > > > >+	pop	%r12
> > > > >+	pop	%r13
> > > > >+	pop	%r14
> > > > >+	pop	%r15
> > > > >+	pop	%rbp
> > > > >+	ret
> > > >
> > > > x86-64 ABI requires that you call CLD here (enclave may set it).
> > >
> > > Ugh.  Technically MXCSR and the x87 CW also need to be preserved.
> > >
> > > What if rather than treating the enclave as hostile we require it to be
> > > compliant with the x86-64 ABI like any other function?  That would solve
> > > the EFLAGS.DF, MXCSR and x87 issues without adding unnecessary overhead.
> > > And we wouldn't have to save/restore R12-R15.  It'd mean we couldn't use
> > > the stack's red zone to hold @regs and @e, but that's poor form anyways.
> >
> > Grr, except the processor crushes R12-R15, FCW and MXCSR on asynchronous
> > exits.  But not EFLAGS.DF, that's real helpful.
>
> I can think of three options that are at least somewhat reasonable:
>
> 1) Save/restore MXCSR and FCW
>
>    + 100% compliant with the x86-64 ABI
>    + Callable from any code
>    + Minimal documentation required
>    - Restoring MXCSR/FCW is likely unnecessary 99% of the time
>    - Slow
>
> 2) Clear EFLAGS.DF but not save/restore MXCSR and FCW
>
>    + Mostly compliant with the x86-64 ABI
>    + Callable from any code that doesn't use SIMD registers
>    - Need to document deviations from x86-64 ABI
>
> 3) Require the caller to save/restore everything.
>
>    + Fast
>    + Userspace can pass all GPRs to the enclave (minus EAX, RBX and RCX)
>    - Completely custom ABI
>    - For all intents and purposes must be called from an assembly wrapper
>
> Option (3) actually isn't all that awful.  RCX can be used to pass an
> optional pointer to a 'struct sgx_enclave_exception' and we can still
> return standard error codes, e.g. -EFAULT.

Entering and exiting a syscall requires an assembly wrapper, and that
doesn't seem completely unreasonable. It's an easy bit of inline
assembly.
On Fri, Dec 14, 2018 at 10:20:39AM -0800, Josh Triplett wrote:
> On Fri, Dec 14, 2018 at 09:03:11AM -0800, Sean Christopherson wrote:
> > On Fri, Dec 14, 2018 at 07:38:30AM -0800, Sean Christopherson wrote:
> > > On Fri, Dec 14, 2018 at 07:12:04AM -0800, Sean Christopherson wrote:
> > > > On Fri, Dec 14, 2018 at 09:55:49AM +0000, Jethro Beekman wrote:
> > > > > On 2018-12-14 03:01, Sean Christopherson wrote:
> > > > > >+2:	pop	%rbx
> > > > > >+	pop	%r12
> > > > > >+	pop	%r13
> > > > > >+	pop	%r14
> > > > > >+	pop	%r15
> > > > > >+	pop	%rbp
> > > > > >+	ret
> > > > >
> > > > > x86-64 ABI requires that you call CLD here (enclave may set it).
> > > >
> > > > Ugh.  Technically MXCSR and the x87 CW also need to be preserved.
> > > >
> > > > What if rather than treating the enclave as hostile we require it to be
> > > > compliant with the x86-64 ABI like any other function?  That would solve
> > > > the EFLAGS.DF, MXCSR and x87 issues without adding unnecessary overhead.
> > > > And we wouldn't have to save/restore R12-R15.  It'd mean we couldn't use
> > > > the stack's red zone to hold @regs and @e, but that's poor form anyways.
> > >
> > > Grr, except the processor crushes R12-R15, FCW and MXCSR on asynchronous
> > > exits.  But not EFLAGS.DF, that's real helpful.
> >
> > I can think of three options that are at least somewhat reasonable:
> >
> > 1) Save/restore MXCSR and FCW
> >
> >    + 100% compliant with the x86-64 ABI
> >    + Callable from any code
> >    + Minimal documentation required
> >    - Restoring MXCSR/FCW is likely unnecessary 99% of the time
> >    - Slow
> >
> > 2) Clear EFLAGS.DF but not save/restore MXCSR and FCW
> >
> >    + Mostly compliant with the x86-64 ABI
> >    + Callable from any code that doesn't use SIMD registers
> >    - Need to document deviations from x86-64 ABI
> >
> > 3) Require the caller to save/restore everything.
> >
> >    + Fast
> >    + Userspace can pass all GPRs to the enclave (minus EAX, RBX and RCX)
> >    - Completely custom ABI
> >    - For all intents and purposes must be called from an assembly wrapper
> >
> > Option (3) actually isn't all that awful.  RCX can be used to pass an
> > optional pointer to a 'struct sgx_enclave_exception' and we can still
> > return standard error codes, e.g. -EFAULT.
>
> Entering and exiting a syscall requires an assembly wrapper, and that
> doesn't seem completely unreasonable. It's an easy bit of inline
> assembly.

The code I posted had a few typos (stupid AT&T syntax), but with those
fixed the idea checks out.  My initial reaction to a barebones ABI was
that it would be a "documentation nightmare", but it's not too bad if it
returns actual error codes and fills in a struct on exceptions instead
of stuffing registers.  And with the MXCSR/FCW issues it might actually
be less documentation in the long run since we can simply say that all
state is the caller's responsibility.

I *really* like that we basically eliminate bikeshedding on which GPRs
to pass to/from the enclave.
> On Dec 14, 2018, at 9:03 AM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
>
>> On Fri, Dec 14, 2018 at 07:38:30AM -0800, Sean Christopherson wrote:
>>> On Fri, Dec 14, 2018 at 07:12:04AM -0800, Sean Christopherson wrote:
>>>> On Fri, Dec 14, 2018 at 09:55:49AM +0000, Jethro Beekman wrote:
>>>>> On 2018-12-14 03:01, Sean Christopherson wrote:
>>>>> +2:	pop	%rbx
>>>>> +	pop	%r12
>>>>> +	pop	%r13
>>>>> +	pop	%r14
>>>>> +	pop	%r15
>>>>> +	pop	%rbp
>>>>> +	ret
>>>>
>>>> x86-64 ABI requires that you call CLD here (enclave may set it).
>>>
>>> Ugh.  Technically MXCSR and the x87 CW also need to be preserved.
>>>
>>> What if rather than treating the enclave as hostile we require it to be
>>> compliant with the x86-64 ABI like any other function?  That would solve
>>> the EFLAGS.DF, MXCSR and x87 issues without adding unnecessary overhead.
>>> And we wouldn't have to save/restore R12-R15.  It'd mean we couldn't use
>>> the stack's red zone to hold @regs and @e, but that's poor form anyways.
>>
>> Grr, except the processor crushes R12-R15, FCW and MXCSR on asynchronous
>> exits.  But not EFLAGS.DF, that's real helpful.
>
> I can think of three options that are at least somewhat reasonable:
>
> 1) Save/restore MXCSR and FCW
>
>    + 100% compliant with the x86-64 ABI
>    + Callable from any code
>    + Minimal documentation required
>    - Restoring MXCSR/FCW is likely unnecessary 99% of the time
>    - Slow
>
> 2) Clear EFLAGS.DF but not save/restore MXCSR and FCW
>
>    + Mostly compliant with the x86-64 ABI
>    + Callable from any code that doesn't use SIMD registers
>    - Need to document deviations from x86-64 ABI
>
> 3) Require the caller to save/restore everything.
>
>    + Fast
>    + Userspace can pass all GPRs to the enclave (minus EAX, RBX and RCX)
>    - Completely custom ABI
>    - For all intents and purposes must be called from an assembly wrapper
>
> Option (3) actually isn't all that awful.  RCX can be used to pass an
> optional pointer to a 'struct sgx_enclave_exception' and we can still
> return standard error codes, e.g. -EFAULT.

I like 3, but:

> E.g.:
>
> /**
>  * __vdso_sgx_enter_enclave() - Enter an SGX enclave
>  *
>  * %eax:	ENCLU leaf, must be EENTER or ERESUME
>  * %rbx:	TCS, must be non-NULL
>  * %rcx:	Optional pointer to 'struct sgx_enclave_exception'
>  *
>  * Return:
>  *   0 on a clean entry/exit to/from the enclave
>  *   -EINVAL if ENCLU leaf is not allowed or if TCS is NULL
>  *   -EFAULT if ENCLU or the enclave faults
>  */
> ENTRY(__vdso_sgx_enter_enclave)
>	/* EENTER <= leaf <= ERESUME */
>	cmp	$0x2, %eax
>	jb	bad_input
>
>	cmp	$0x3, %eax
>	ja	bad_input
>
>	/* TCS must be non-NULL */
>	test	%rbx, %rbx
>	je	bad_input
>
>	/* save @exception pointer */
>	push	%rcx
>
>	/* load leaf, TCS and AEP for ENCLU */
>	lea	1f(%rip), %rcx
> 1:	enclu
>
>	add	$0x8, %rsp
>	xor	%eax, %eax
>	ret
>
> bad_input:
>	mov	$(-EINVAL), %rax
>	ret
>
> .pushsection .fixup, "ax"
> 2:	pop	%rcx
>	test	%rcx, %rcx
>	je	3f
>
>	mov	%eax, EX_LEAF(%rcx)
>	mov	%di, EX_TRAPNR(%rcx)
>	mov	%si, EX_ERROR_CODE(%rcx)
>	mov	%rdx, EX_ADDRESS(%rcx)
> 3:	mov	$(-EFAULT), %rax
>	ret

I’m not totally sold on -EFAULT as the error code.  That usually
indicates a bad pointer.  I’m not sure I have a better suggestion.

> .popsection
>
> _ASM_VDSO_EXTABLE_HANDLE(1b, 2b)
>
> ENDPROC(__vdso_sgx_enter_enclave)
On Fri, Dec 14, 2018 at 10:44:10AM -0800, Andy Lutomirski wrote:
>
> > On Dec 14, 2018, at 9:03 AM, Sean Christopherson <sean.j.christopherson@intel.com> wrote:
> >
> > .pushsection .fixup, "ax"
> > 2:	pop	%rcx
> >	test	%rcx, %rcx
> >	je	3f
> >
> >	mov	%eax, EX_LEAF(%rcx)
> >	mov	%di, EX_TRAPNR(%rcx)
> >	mov	%si, EX_ERROR_CODE(%rcx)
> >	mov	%rdx, EX_ADDRESS(%rcx)
> > 3:	mov	$(-EFAULT), %rax
> >	ret
>
> I’m not totally sold on -EFAULT as the error code.  That usually
> indicates a bad pointer.  I’m not sure I have a better suggestion.

Hmm, one idea would be to return positive signal numbers, e.g. SIGILL
for #UD.  I don't like that approach though as it adds a fair amount of
code to the fixup handler for dubious value, e.g. userspace would still
need to check the exception error code to determine if the EPC is lost.
And we'd have to update the vDSO if a new exception and/or signal was
added, e.g. #CP for CET.

Encapsulating "you faulted" in a single error code seems cleaner for
both kernel and userspace code, and -EFAULT makes that pretty obvious
even though we're bastardizing its meaning a bit.

In general, I'd prefer to return only 0 or negative values so that
userspace can easily merge in their own (positive value) error codes
from the enclave, e.g. in the vDSO wrapper:

	/* Enclave's return value is in RDI, overwrite RAX on success */
	test	%rax, %rax
	cmove	%rdi, %rax
	ret
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index b8f7c301b88f..5e28f838d8aa 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -18,6 +18,7 @@ VDSO32-$(CONFIG_IA32_EMULATION)	:= y
 
 # files to link into the vdso
 vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-$(VDSO64-y) += vsgx_enter_enclave.o
 
 # files to link into kernel
 obj-y				+= vma.o extable.o
@@ -85,6 +86,7 @@ CFLAGS_REMOVE_vdso-note.o = -pg
 CFLAGS_REMOVE_vclock_gettime.o = -pg
 CFLAGS_REMOVE_vgetcpu.o = -pg
 CFLAGS_REMOVE_vvar.o = -pg
+CFLAGS_REMOVE_vsgx_enter_enclave.o = -pg
 
 #
 # X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index d3a2dce4cfa9..50952a995a6c 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -25,6 +25,7 @@ VERSION {
 		__vdso_getcpu;
 		time;
 		__vdso_time;
+		__vdso_sgx_enter_enclave;
 	local: *;
 	};
 }
diff --git a/arch/x86/entry/vdso/vsgx_enter_enclave.S b/arch/x86/entry/vdso/vsgx_enter_enclave.S
new file mode 100644
index 000000000000..0e4cd8a9549a
--- /dev/null
+++ b/arch/x86/entry/vdso/vsgx_enter_enclave.S
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/export.h>
+#include <asm/errno.h>
+
+#include "extable.h"
+
+#define RDI		0*8
+#define RSI		1*8
+#define RDX		2*8
+#define R8		3*8
+#define R9		4*8
+#define R10		5*8
+
+#define EX_LEAF		0*8
+#define EX_TRAPNR	0*8+4
+#define EX_ERROR_CODE	0*8+6
+#define EX_ADDRESS	1*8
+
+.code64
+.section .text, "ax"
+
+/*
+ * long __vdso_sgx_enter_enclave(__u32 leaf, void *tcs,
+ *				 struct sgx_enclave_regs *regs,
+ *				 struct sgx_enclave_exception *e)
+ * {
+ *	if (leaf != SGX_EENTER && leaf != SGX_ERESUME)
+ *		return -EINVAL;
+ *
+ *	if (!tcs)
+ *		return -EINVAL;
+ *
+ *	if (regs)
+ *		copy_regs_to_cpu(regs);
+ *
+ *	try {
+ *		ENCLU[leaf];
+ *	} catch (exception) {
+ *		if (e)
+ *			*e = exception;
+ *		return -EFAULT;
+ *	}
+ *
+ *	if (regs)
+ *		copy_cpu_to_regs(regs);
+ *	return 0;
+ * }
+ */
+ENTRY(__vdso_sgx_enter_enclave)
+	/* EENTER <= leaf <= ERESUME */
+	lea	-0x2(%edi), %eax
+	cmp	$0x1, %eax
+	ja	bad_input
+
+	/* TCS must be non-NULL */
+	test	%rsi, %rsi
+	je	bad_input
+
+	/* save non-volatile registers */
+	push	%rbp
+	mov	%rsp, %rbp
+	push	%r15
+	push	%r14
+	push	%r13
+	push	%r12
+	push	%rbx
+
+	/* save @regs and @e to the red zone */
+	mov	%rdx, -0x8(%rsp)
+	mov	%rcx, -0x10(%rsp)
+
+	/* load leaf, TCS and AEP for ENCLU */
+	mov	%edi, %eax
+	mov	%rsi, %rbx
+	lea	1f(%rip), %rcx
+
+	/* optionally copy @regs to registers */
+	test	%rdx, %rdx
+	je	1f
+
+	mov	%rdx, %r11
+	mov	RDI(%r11), %rdi
+	mov	RSI(%r11), %rsi
+	mov	RDX(%r11), %rdx
+	mov	R8(%r11), %r8
+	mov	R9(%r11), %r9
+	mov	R10(%r11), %r10
+
+1:	enclu
+
+	/* ret = 0 */
+	xor	%eax, %eax
+
+	/* optionally copy registers to @regs */
+	mov	-0x8(%rsp), %r11
+	test	%r11, %r11
+	je	2f
+
+	mov	%rdi, RDI(%r11)
+	mov	%rsi, RSI(%r11)
+	mov	%rdx, RDX(%r11)
+	mov	%r8, R8(%r11)
+	mov	%r9, R9(%r11)
+	mov	%r10, R10(%r11)
+
+	/* restore non-volatile registers and return */
+2:	pop	%rbx
+	pop	%r12
+	pop	%r13
+	pop	%r14
+	pop	%r15
+	pop	%rbp
+	ret
+
+bad_input:
+	mov	$(-EINVAL), %rax
+	ret
+
+.pushsection .fixup, "ax"
+3:	mov	-0x10(%rsp), %r11
+	test	%r11, %r11
+	je	4f
+
+	mov	%eax, EX_LEAF(%r11)
+	mov	%di, EX_TRAPNR(%r11)
+	mov	%si, EX_ERROR_CODE(%r11)
+	mov	%rdx, EX_ADDRESS(%r11)
+4:	mov	$(-EFAULT), %rax
+	jmp	2b
+.popsection
+
+_ASM_VDSO_EXTABLE_HANDLE(1b, 3b)
+
+ENDPROC(__vdso_sgx_enter_enclave)
diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 266b813eefa1..4f840b334369 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -96,4 +96,48 @@ struct sgx_enclave_modify_pages {
 	__u8 op;
 } __attribute__((__packed__));
 
+/**
+ * struct sgx_enclave_exception - structure to pass register in/out of enclave
+ *				  by way of __vdso_sgx_enter_enclave
+ *
+ * @rdi:	value of %rdi, loaded/saved on enter/exit
+ * @rsi:	value of %rsi, loaded/saved on enter/exit
+ * @rdx:	value of %rdx, loaded/saved on enter/exit
+ * @r8:	value of %r8, loaded/saved on enter/exit
+ * @r9:	value of %r9, loaded/saved on enter/exit
+ * @r10:	value of %r10, loaded/saved on enter/exit
+ */
+struct sgx_enclave_regs {
+	__u64 rdi;
+	__u64 rsi;
+	__u64 rdx;
+	__u64 r8;
+	__u64 r9;
+	__u64 r10;
+};
+
+/**
+ * struct sgx_enclave_exception - structure to report exceptions encountered in
+ *				  __vdso_sgx_enter_enclave
+ *
+ * @leaf:	ENCLU leaf from %rax at time of exception
+ * @trapnr:	exception trap number, a.k.a. fault vector
+ * @error_code:	exception error code
+ * @address:	exception address, e.g. CR2 on a #PF
+ */
+struct sgx_enclave_exception {
+	__u32 leaf;
+	__u16 trapnr;
+	__u16 error_code;
+	__u64 address;
+};
+
+/**
+ * typedef __vsgx_enter_enclave_t - Function pointer prototype for
+ *				    __vdso_sgx_enter_enclave
+ */
+typedef long (*__vsgx_enter_enclave_t)(__u32 leaf, void *tcs,
+				       struct sgx_enclave_regs *regs,
+				       struct sgx_enclave_exception *e);
+
 #endif /* _UAPI_ASM_X86_SGX_H */
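The RDI..R10 and EX_* offsets hardcoded in vsgx_enter_enclave.S must stay in lockstep with the C layout of the two uapi structs above. A quick host-side check (mirroring the structs with <stdint.h> types, since <asm/sgx.h> is not assumed to be installed) might look like:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors of the proposed uapi structs, using stdint stand-ins
 * for __u16/__u32/__u64. */
struct sgx_enclave_regs {
	uint64_t rdi;
	uint64_t rsi;
	uint64_t rdx;
	uint64_t r8;
	uint64_t r9;
	uint64_t r10;
};

struct sgx_enclave_exception {
	uint32_t leaf;
	uint16_t trapnr;
	uint16_t error_code;
	uint64_t address;
};

/* Returns nonzero iff the C layouts match the asm #defines
 * (RDI..R10 at 0*8..5*8, EX_LEAF/EX_TRAPNR/EX_ERROR_CODE/EX_ADDRESS
 * at 0, 4, 6 and 8). */
static int sgx_layout_ok(void)
{
	return offsetof(struct sgx_enclave_regs, rdi) == 0 * 8 &&
	       offsetof(struct sgx_enclave_regs, rsi) == 1 * 8 &&
	       offsetof(struct sgx_enclave_regs, rdx) == 2 * 8 &&
	       offsetof(struct sgx_enclave_regs, r8)  == 3 * 8 &&
	       offsetof(struct sgx_enclave_regs, r9)  == 4 * 8 &&
	       offsetof(struct sgx_enclave_regs, r10) == 5 * 8 &&
	       offsetof(struct sgx_enclave_exception, leaf)       == 0 &&
	       offsetof(struct sgx_enclave_exception, trapnr)     == 4 &&
	       offsetof(struct sgx_enclave_exception, error_code) == 6 &&
	       offsetof(struct sgx_enclave_exception, address)    == 8;
}
```

In-tree, the same invariant would more naturally be enforced with BUILD_BUG_ON() or asm-offsets generation rather than a runtime check.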
Intel Software Guard Extensions (SGX) introduces a new CPL3-only enclave
mode that runs as a sort of black box shared object that is hosted by an
untrusted normal CPL3 process.

Enclave transitions have semantics that are a lovely blend of SYSCALL,
SYSRET and VM-Exit.  In a non-faulting scenario, entering and exiting an
enclave can only be done through SGX-specific instructions, EENTER and
EEXIT respectively.  EENTER+EEXIT is analogous to SYSCALL+SYSRET, e.g.
EENTER/SYSCALL load RCX with the next RIP and EEXIT/SYSRET load RIP from
R{B,C}X.

But in a faulting/interrupting scenario, enclave transitions act more
like VM-Exit and VMRESUME.  Maintaining the black box nature of the
enclave means that hardware must automatically switch CPU context when
an Asynchronous Exiting Event (AEE) occurs, an AEE being any interrupt
or exception (exceptions are AEEs because asynchronous in this context
is relative to the enclave and not CPU execution, e.g. the enclave
doesn't get an opportunity to save/fuzz CPU state).

Like VM-Exits, all AEEs jump to a common location, referred to as the
Asynchronous Exiting Point (AEP).  The AEP is specified at enclave entry
via a register passed to EENTER/ERESUME, similar to how the hypervisor
specifies the VM-Exit point (via VMCS.HOST_RIP at VMLAUNCH/VMRESUME).
Resuming the enclave/VM after the exiting event is handled is done via
ERESUME/VMRESUME respectively.  In SGX, for AEEs that are handled by the
kernel, e.g. INTR, NMI and most page faults, IRET will journey back to
the AEP, which then ERESUMEs the enclave.

Enclaves also behave a bit like VMs in the sense that they can generate
exceptions as part of their normal operation that for all intents and
purposes need to be handled in the enclave/VM.  However, unlike VMX, SGX
doesn't allow the host to modify its guest's, a.k.a. enclave's, state,
as doing so would circumvent the enclave's security.  So to handle an
exception, the enclave must first be re-entered through the normal
EENTER flow (SYSCALL/SYSRET behavior), and then resumed via ERESUME
(VMRESUME behavior) after the source of the exception is resolved.

All of the above is just the tip of the iceberg when it comes to running
an enclave.  But, SGX was designed in such a way that the host process
can utilize a library to build, launch and run an enclave.  This is
roughly analogous to how e.g. libc implementations are used by most
applications so that the application can focus on its business logic.

The big gotcha is that because enclaves can generate *and* handle
exceptions, any SGX library must be prepared to handle nearly any
exception at any time (well, any time a thread is executing in an
enclave).  In Linux, this means the SGX library must register a signal
handler in order to intercept relevant exceptions and forward them to
the enclave (or in some cases, take action on behalf of the enclave).
Unfortunately, Linux's signal mechanism doesn't mesh well with
libraries, e.g. signal handlers are process wide, are difficult to
chain, etc...  This becomes particularly nasty when using multiple
levels of libraries that register signal handlers, e.g. running an
enclave via cgo inside of the Go runtime.

In comes vDSO to save the day.  Now that vDSO can fixup exceptions, add
a function, __vdso_sgx_enter_enclave(), to wrap enclave transitions and
intercept any exceptions that occur when running the enclave.

__vdso_sgx_enter_enclave() accepts four parameters:

  - The ENCLU leaf to execute (must be EENTER or ERESUME).

  - A pointer to a Thread Control Structure (TCS).  A TCS is a page
    within the enclave that defines/tracks the context of an enclave
    thread.

  - An optional 'struct sgx_enclave_regs' pointer.  If defined, the
    corresponding registers are loaded prior to entering the enclave
    and saved after (cleanly) exiting the enclave.  The effective
    enclave register ABI follows the kernel x86-64 ABI.  The x86-64
    userspace ABI is not used due to RCX being usurped by hardware to
    pass the return RIP to the enclave.

  - An optional 'struct sgx_enclave_exception' pointer.  If provided,
    the struct is filled with the faulting ENCLU leaf, trapnr, error
    code and address if an unhandled exception occurs on ENCLU or in
    the enclave.  An unhandled exception is an exception that would
    normally be delivered to userspace via a signal, e.g. SIGSEGV.
    Note that this means that not all enclave exits are reported to
    the caller, e.g. interrupts and faults that are handled by the
    kernel do not trigger fixup and IRET back to ENCLU[ERESUME], i.e.
    unconditionally resume the enclave.

Suggested-by: Andy Lutomirski <luto@amacapital.net>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Haitao Huang <haitao.huang@linux.intel.com>
Cc: Jethro Beekman <jethro@fortanix.com>
Cc: Dr. Greg Wettstein <greg@enjellic.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/entry/vdso/Makefile             |   2 +
 arch/x86/entry/vdso/vdso.lds.S           |   1 +
 arch/x86/entry/vdso/vsgx_enter_enclave.S | 136 +++++++++++++++++++++++
 arch/x86/include/uapi/asm/sgx.h          |  44 ++++++++
 4 files changed, 183 insertions(+)
 create mode 100644 arch/x86/entry/vdso/vsgx_enter_enclave.S
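For a userspace consumer of __vdso_sgx_enter_enclave(), discovery starts with the standard vDSO flow: the kernel maps the vDSO into every process and advertises its base through the auxiliary vector. The sketch below shows only that first step; the ELF symbol-table walk needed to actually resolve the function (e.g. with a helper along the lines of the kernel selftests' parse_vdso.c) is deliberately elided:

```c
#include <assert.h>
#include <sys/auxv.h>

/*
 * Returns the base address of the vDSO ELF image mapped into this
 * process, as reported by the AT_SYSINFO_EHDR auxv entry (0 if the
 * kernel did not provide one).  A real consumer would next parse the
 * vDSO's dynamic symbol table at this address to look up the
 * "__vdso_sgx_enter_enclave" symbol; that parsing is not shown here.
 */
static unsigned long get_vdso_base(void)
{
	return getauxval(AT_SYSINFO_EHDR);
}
```

Resolving vDSO symbols this way, rather than via a glibc wrapper, is exactly what lets a library intercept enclave exceptions without registering any signal handlers.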