From patchwork Fri Dec 14 21:57:29 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 10731721
From: Sean Christopherson
To: Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, Dave Hansen, Peter Zijlstra, Jarkko Sakkinen
Cc: "H. Peter Anvin", linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
    Andy Lutomirski, Josh Triplett, Haitao Huang, Jethro Beekman,
    "Dr. Greg Wettstein"
Subject: [RFC PATCH v5 5/5] x86/vdso: Add __vdso_sgx_enter_enclave() to wrap
 SGX enclave transitions
Date: Fri, 14 Dec 2018 13:57:29 -0800
Message-Id: <20181214215729.4221-6-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181214215729.4221-1-sean.j.christopherson@intel.com>
References: <20181214215729.4221-1-sean.j.christopherson@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Intel Software Guard Extensions (SGX) introduces a new CPL3-only enclave
mode that runs as a sort of black box shared object that is hosted by an
untrusted normal CPL3 process.

Enclave transitions have semantics that are a lovely blend of SYSCALL,
SYSRET and VM-Exit.  In a non-faulting scenario, entering and exiting an
enclave can only be done through SGX-specific instructions, EENTER and
EEXIT respectively.  EENTER+EEXIT is analogous to SYSCALL+SYSRET, e.g.
EENTER/SYSCALL load RCX with the next RIP and EEXIT/SYSRET load RIP from
R{B,C}X.  But in a faulting/interrupting scenario, enclave transitions
act more like VM-Exit and VMRESUME.
Maintaining the black box nature of the enclave means that hardware must
automatically switch CPU context when an Asynchronous Exiting Event (AEE)
occurs, an AEE being any interrupt or exception (exceptions are AEEs
because "asynchronous" in this context is relative to the enclave, not to
CPU execution, e.g. the enclave doesn't get an opportunity to save/fuzz
CPU state).

Like VM-Exits, all AEEs jump to a common location, referred to as the
Asynchronous Exiting Point (AEP).  The AEP is specified at enclave entry
via a register passed to EENTER/ERESUME, similar to how the hypervisor
specifies the VM-Exit point (via VMCS.HOST_RIP at VMLAUNCH/VMRESUME).
Resuming the enclave/VM after the exiting event is handled is done via
ERESUME/VMRESUME respectively.  In SGX, for AEEs that are handled by the
kernel, e.g. INTR, NMI and most page faults, IRET journeys back to the
AEP, which then ERESUMEs the enclave.

Enclaves also behave a bit like VMs in the sense that they can generate
exceptions as part of their normal operation that for all intents and
purposes need to be handled in the enclave/VM.  However, unlike VMX, SGX
doesn't allow the host to modify its guest's, a.k.a. enclave's, state, as
doing so would circumvent the enclave's security.  So to handle an
exception, the enclave must first be re-entered through the normal EENTER
flow (SYSCALL/SYSRET behavior), and then resumed via ERESUME (VMRESUME
behavior) after the source of the exception is resolved.

All of the above is just the tip of the iceberg when it comes to running
an enclave.  But, SGX was designed in such a way that the host process
can utilize a library to build, launch and run an enclave.  This is
roughly analogous to how e.g. libc implementations are used by most
applications so that the application can focus on its business logic.

The big gotcha is that, because enclaves can generate *and* handle
exceptions, any SGX library must be prepared to handle nearly any
exception at any time (well, any time a thread is executing in an
enclave).  In Linux, this means the SGX library must register a signal
handler in order to intercept relevant exceptions and forward them to
the enclave (or in some cases, take action on behalf of the enclave).
Unfortunately, Linux's signal mechanism doesn't mesh well with libraries,
e.g. signal handlers are process wide, are difficult to chain, etc...
This becomes particularly nasty when using multiple levels of libraries
that register signal handlers, e.g. running an enclave via cgo inside of
the Go runtime.

In comes vDSO to save the day.  Now that vDSO can fixup exceptions, add
a function, __vdso_sgx_enter_enclave(), to wrap enclave transitions and
intercept any exceptions that occur when running the enclave.

__vdso_sgx_enter_enclave() does NOT adhere to the x86-64 ABI and instead
uses a custom calling convention.  The primary motivation is to avoid
issues that arise due to asynchronous enclave exits.  The x86-64 ABI
requires that EFLAGS.DF, MXCSR and FCW be preserved by the callee, and
unfortunately for the vDSO, the aforementioned registers/bits are not
restored after an asynchronous exit, e.g. EFLAGS.DF is in an unknown
state while MXCSR and FCW are reset to their init values.  So the vDSO
cannot simply pass the buck by requiring enclaves to adhere to the
x86-64 ABI.
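As an illustrative aside, not part of this patch: a runtime that wants to
use the new function first has to dig it out of the vDSO.  The sketch
below leans on the glibc-specific behavior that the already-mapped vDSO
is registered in the link map as "linux-vdso.so.1" (other libcs would
need to walk the ELF image at getauxval(AT_SYSINFO_EHDR) instead); the
function and pointer names are hypothetical.

	#define _GNU_SOURCE
	#include <dlfcn.h>
	#include <stddef.h>

	/*
	 * Custom ABI: must be invoked from an assembly wrapper, never
	 * called directly as a C function.  The typedef exists purely
	 * to have a place to store the pointer.
	 */
	typedef long (*vdso_sgx_enter_enclave_t)(void);

	vdso_sgx_enter_enclave_t __vdso_sgx_enter_enclave_ptr;

	int sgx_resolve_vdso(void)
	{
		/* RTLD_NOLOAD: succeed only if the vDSO is already mapped. */
		void *vdso = dlopen("linux-vdso.so.1", RTLD_LAZY | RTLD_NOLOAD);

		if (!vdso)
			return -1;

		__vdso_sgx_enter_enclave_ptr = (vdso_sgx_enter_enclave_t)
			dlsym(vdso, "__vdso_sgx_enter_enclave");

		return __vdso_sgx_enter_enclave_ptr ? 0 : -1;
	}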
Not being able to rely on the x86-64 ABI leaves three somewhat
reasonable options:

 1) Save/restore non-volatile GPRs, MXCSR and FCW, and clear EFLAGS.DF

    + 100% compliant with the x86-64 ABI
    + Callable from any code
    + Minimal documentation required
    - Restoring MXCSR/FCW is likely unnecessary 99% of the time
    - Slow

 2) Save/restore non-volatile GPRs and clear EFLAGS.DF

    + Mostly compliant with the x86-64 ABI
    + Callable from any code that doesn't use SIMD registers
    - Need to document deviations from x86-64 ABI, i.e. MXCSR and FCW

 3) Require the caller to save/restore everything.

    + Fast
    + Userspace can pass all GPRs to the enclave (minus EAX, RBX and RCX)
    - Custom ABI
    - For all intents and purposes must be called from an assembly wrapper

__vdso_sgx_enter_enclave() implements option (3).  The custom ABI is
mostly a documentation issue, and even that is offset by the fact that
being more similar to hardware's ENCLU[EENTER/ERESUME] ABI reduces the
amount of documentation needed for the vDSO, e.g. options (1) and (2)
would need to document which registers are marshalled to/from enclaves.
Requiring an assembly wrapper imparts minimal pain on userspace as SGX
libraries and/or applications need a healthy chunk of assembly, e.g. in
the enclave, regardless of the vDSO's implementation (an illustrative
wrapper sketch follows the patch below).

Suggested-by: Andy Lutomirski
Cc: Andy Lutomirski
Cc: Jarkko Sakkinen
Cc: Dave Hansen
Cc: Josh Triplett
Cc: Haitao Huang
Cc: Jethro Beekman
Cc: Dr. Greg Wettstein
Signed-off-by: Sean Christopherson
---
 arch/x86/entry/vdso/Makefile             |  2 +
 arch/x86/entry/vdso/vdso.lds.S           |  1 +
 arch/x86/entry/vdso/vsgx_enter_enclave.S | 97 ++++++++++++++++++++++++
 arch/x86/include/uapi/asm/sgx.h          | 18 +++++
 4 files changed, 118 insertions(+)
 create mode 100644 arch/x86/entry/vdso/vsgx_enter_enclave.S

diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index b8f7c301b88f..5e28f838d8aa 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -18,6 +18,7 @@ VDSO32-$(CONFIG_IA32_EMULATION) := y
 
 # files to link into the vdso
 vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-$(VDSO64-y) += vsgx_enter_enclave.o
 
 # files to link into kernel
 obj-y += vma.o extable.o
@@ -85,6 +86,7 @@ CFLAGS_REMOVE_vdso-note.o = -pg
 CFLAGS_REMOVE_vclock_gettime.o = -pg
 CFLAGS_REMOVE_vgetcpu.o = -pg
 CFLAGS_REMOVE_vvar.o = -pg
+CFLAGS_REMOVE_vsgx_enter_enclave.o = -pg
 
 #
 # X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index d3a2dce4cfa9..50952a995a6c 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -25,6 +25,7 @@ VERSION {
 		__vdso_getcpu;
 		time;
 		__vdso_time;
+		__vdso_sgx_enter_enclave;
 	local: *;
 	};
 }
diff --git a/arch/x86/entry/vdso/vsgx_enter_enclave.S b/arch/x86/entry/vdso/vsgx_enter_enclave.S
new file mode 100644
index 000000000000..af572adcd8ed
--- /dev/null
+++ b/arch/x86/entry/vdso/vsgx_enter_enclave.S
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/export.h>
+#include <asm/errno.h>
+
+#include "extable.h"
+
+#define EX_LEAF		0*8
+#define EX_TRAPNR	0*8+4
+#define EX_ERROR_CODE	0*8+6
+#define EX_ADDRESS	1*8
+
+.code64
+.section .text, "ax"
+
+/**
+ * __vdso_sgx_enter_enclave() - Enter an SGX enclave
+ *
+ * %eax:	ENCLU leaf, must be EENTER or ERESUME
+ * %rbx:	TCS, must be non-NULL
+ * %rcx:	Optional pointer to 'struct sgx_enclave_exception'
+ *
+ * Return:
+ *  0 on a clean entry/exit to/from the enclave
+ *  -EINVAL if ENCLU leaf is not allowed or if TCS is NULL
+ *  -EFAULT if ENCLU or the enclave faults
+ *
+ * Note that __vdso_sgx_enter_enclave() is not compliant with the x86-64 ABI.
+ * All registers except RSP must be treated as volatile from the caller's
+ * perspective, including but not limited to GPRs, EFLAGS.DF, MXCSR, FCW,
+ * etc...  Conversely, the enclave being run must preserve the untrusted
+ * RSP and stack.
+ *
+ * __vdso_sgx_enter_enclave(u32 leaf, void *tcs,
+ *			     struct sgx_enclave_exception *exception_info)
+ * {
+ *	if (leaf != SGX_EENTER && leaf != SGX_ERESUME)
+ *		return -EINVAL;
+ *
+ *	if (!tcs)
+ *		return -EINVAL;
+ *
+ *	try {
+ *		ENCLU[leaf];
+ *	} catch (exception) {
+ *		if (exception_info)
+ *			*exception_info = exception;
+ *		return -EFAULT;
+ *	}
+ *
+ *	return 0;
+ * }
+ */
+ENTRY(__vdso_sgx_enter_enclave)
+	/* EENTER <= leaf <= ERESUME */
+	cmp	$0x2, %eax
+	jb	bad_input
+
+	cmp	$0x3, %eax
+	ja	bad_input
+
+	/* TCS must be non-NULL */
+	test	%rbx, %rbx
+	je	bad_input
+
+	/* Save @exception_info */
+	push	%rcx
+
+	/* Load AEP for ENCLU */
+	lea	1f(%rip), %rcx
+1:	enclu
+
+	add	$0x8, %rsp
+	xor	%eax, %eax
+	ret
+
+bad_input:
+	mov	$(-EINVAL), %rax
+	ret
+
+.pushsection .fixup, "ax"
+	/* Re-load @exception_info and fill it (if it's non-NULL) */
+2:	pop	%rcx
+	test	%rcx, %rcx
+	je	3f
+
+	mov	%eax, EX_LEAF(%rcx)
+	mov	%di,  EX_TRAPNR(%rcx)
+	mov	%si,  EX_ERROR_CODE(%rcx)
+	mov	%rdx, EX_ADDRESS(%rcx)
+3:	mov	$(-EFAULT), %rax
+	ret
+.popsection
+
+_ASM_VDSO_EXTABLE_HANDLE(1b, 2b)
+
+ENDPROC(__vdso_sgx_enter_enclave)
diff --git a/arch/x86/include/uapi/asm/sgx.h b/arch/x86/include/uapi/asm/sgx.h
index 266b813eefa1..099cf8025dd8 100644
--- a/arch/x86/include/uapi/asm/sgx.h
+++ b/arch/x86/include/uapi/asm/sgx.h
@@ -96,4 +96,22 @@ struct sgx_enclave_modify_pages {
 	__u8 op;
 } __attribute__((__packed__));
 
+/**
+ * struct sgx_enclave_exception - structure to report exceptions encountered in
+ *				  __vdso_sgx_enter_enclave
+ *
+ * @leaf:	ENCLU leaf from %rax at time of exception
+ * @trapnr:	exception trap number, a.k.a. fault vector
+ * @error_code:	exception error code
+ * @address:	exception address, e.g. CR2 on a #PF
+ * @reserved:	reserved for future use
+ */
+struct sgx_enclave_exception {
+	__u32 leaf;
+	__u16 trapnr;
+	__u16 error_code;
+	__u64 address;
+	__u64 reserved[2];
+};
+
 #endif /* _UAPI_ASM_X86_SGX_H */
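
As a postscript for reviewers, not part of the patch: option (3) means
userspace needs a small assembly trampoline to call the vDSO function
from C code.  A minimal, hedged sketch is below; the symbol names are
hypothetical, and __vdso_sgx_enter_enclave_ptr is assumed to have been
resolved at startup (e.g. via the dlopen() sketch earlier).  Because the
vDSO treats all registers except RSP as volatile, the trampoline saves
every callee-saved GPR, not just RBX.

	/*
	 * long sgx_enter_enclave(uint32_t leaf, void *tcs,
	 *			  struct sgx_enclave_exception *e);
	 *
	 * Bridges the x86-64 ABI to the vDSO's custom convention: SysV
	 * args arrive in %rdi/%rsi/%rdx, the vDSO wants the leaf in
	 * %eax, the TCS in %rbx and the exception struct in %rcx.
	 */
		.text
		.globl	sgx_enter_enclave
	sgx_enter_enclave:
		push	%rbx		/* callee-saved GPRs, all of which */
		push	%rbp		/* the enclave may clobber on an   */
		push	%r12		/* asynchronous exit		   */
		push	%r13
		push	%r14
		push	%r15
		mov	%edi, %eax	/* leaf: SGX_EENTER or SGX_ERESUME */
		mov	%rsi, %rbx	/* TCS */
		mov	%rdx, %rcx	/* optional exception info pointer */
		call	*__vdso_sgx_enter_enclave_ptr(%rip)
		cld			/* ABI wants DF clear; it's in an
					   unknown state after an AEX	   */
		pop	%r15
		pop	%r14
		pop	%r13
		pop	%r12
		pop	%rbp
		pop	%rbx
		ret

A caller that relies on MXCSR or FCW would additionally need to save
them before the call and restore them here (stmxcsr/ldmxcsr and
fnstcw/fldcw), per the trade-off described in option (1) above.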
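
And for completeness, a sketch of what consuming 'struct
sgx_enclave_exception' might look like from C; run_enclave() is
hypothetical, and the SGX_EENTER/SGX_ERESUME values mirror the ENCLU
leaf numbers (2 and 3) that the vDSO range-checks:

	#include <errno.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <asm/sgx.h>	/* struct sgx_enclave_exception */

	#define SGX_EENTER	2	/* ENCLU[EENTER]  */
	#define SGX_ERESUME	3	/* ENCLU[ERESUME] */

	/* The hypothetical assembly trampoline from the previous sketch. */
	extern long sgx_enter_enclave(uint32_t leaf, void *tcs,
				      struct sgx_enclave_exception *e);

	long run_enclave(void *tcs)
	{
		struct sgx_enclave_exception e = { 0 };
		long ret = sgx_enter_enclave(SGX_EENTER, tcs, &e);

		/*
		 * -EFAULT means ENCLU or the enclave faulted; the vDSO
		 * fixup filled @e instead of letting a signal fly.
		 * trapnr 14 is #PF, in which case @e.address holds CR2.
		 */
		if (ret == -EFAULT)
			fprintf(stderr,
				"enclave fault: leaf=%u trapnr=%u ec=%u addr=0x%llx\n",
				e.leaf, e.trapnr, e.error_code,
				(unsigned long long)e.address);

		return ret;
	}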