From patchwork Thu Dec 8 12:38:24 2016
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 9466331
From: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
To: intel-sgx-kernel-dev@lists.01.org
Date: Thu, 8 Dec 2016 14:38:24 +0200
Message-Id: <20161208123828.21834-7-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20161208123828.21834-1-jarkko.sakkinen@linux.intel.com>
References: <20161208123828.21834-1-jarkko.sakkinen@linux.intel.com>
Subject: [intel-sgx-kernel-dev] [PATCH v8 06/10] intel_sgx: kill the enclave when any of its VMAs are closed

ELRANGE protects against non-enclave access, but closing any of the enclave's
VMAs still leaves the enclave and the remaining VMAs in a state that no
legitimate application using enclaves wants. Therefore it is advisable in this
situation to kill the whole enclave. One could also consider hardening the
vma_open callback so that it also kills the enclave if any new VMA is created
after the first EADD, or in the corner case after the first EINIT (if the
enclave only has the SECS page). This would make sense because it would
enforce that EPCM entries and PTEs stay in sync, which is a good property.
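To make the intent concrete, here is a condensed sketch of the new close path
(an illustration only, not the literal driver code: the sketch folds the
invalidation down to just setting SGX_ENCL_DEAD, whereas the real code goes
through sgx_invalidate(), which also zaps the enclave's PTEs):

static void example_vma_close(struct vm_area_struct *vma)
{
	struct sgx_encl *encl = vma->vm_private_data;

	/* vm_private_data was already NULLed by an earlier fork. */
	if (!encl)
		return;

	mutex_lock(&encl->lock);
	/* Kill the whole enclave: every later ioctl, fault and debug
	 * access checks SGX_ENCL_DEAD and bails out.
	 */
	encl->flags |= SGX_ENCL_DEAD;
	mutex_unlock(&encl->lock);

	kref_put(&encl->refcount, sgx_encl_release);
}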
As for the vma_open hardening, most of the time you want to implement a
relocator for the user space code that runs inside the enclave, because the
ELRANGE position varies depending on which address mmap() returns. Since the
enclave code is signed and measured, the relocation can only happen after
EINIT. That is why such enforcement is not done here. The ramifications are
not too bad: in the worst case you end up crashing your user space process in
a safe way (a #GP due to an EPCM/PTE mismatch). A condensed sketch of this
fail-safe guard follows the diff below.

Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 drivers/platform/x86/intel_sgx.h       |  2 +-
 drivers/platform/x86/intel_sgx_ioctl.c | 10 +++++---
 drivers/platform/x86/intel_sgx_util.c  | 13 ++++------
 drivers/platform/x86/intel_sgx_vma.c   | 45 +++++++++++++++-------------------
 4 files changed, 32 insertions(+), 38 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx.h b/drivers/platform/x86/intel_sgx.h
index 464763d..018cd03 100644
--- a/drivers/platform/x86/intel_sgx.h
+++ b/drivers/platform/x86/intel_sgx.h
@@ -130,12 +130,12 @@ enum sgx_encl_flags {
 	SGX_ENCL_DEBUG = BIT(1),
 	SGX_ENCL_SECS_EVICTED = BIT(2),
 	SGX_ENCL_SUSPEND = BIT(3),
+	SGX_ENCL_DEAD = BIT(4),
 };
 
 struct sgx_encl {
 	unsigned int flags;
 	unsigned int secs_child_cnt;
-	unsigned int vma_cnt;
 	struct mutex lock;
 	struct task_struct *owner;
 	struct mm_struct *mm;
diff --git a/drivers/platform/x86/intel_sgx_ioctl.c b/drivers/platform/x86/intel_sgx_ioctl.c
index b377200..d0113b0 100644
--- a/drivers/platform/x86/intel_sgx_ioctl.c
+++ b/drivers/platform/x86/intel_sgx_ioctl.c
@@ -255,7 +255,10 @@ static bool sgx_process_add_page_req(struct sgx_add_page_req *req)
 
 	mutex_lock(&encl->lock);
 
-	if (!encl->vma_cnt || sgx_find_encl(encl->mm, encl_page->addr, &vma))
+	if (encl->flags & SGX_ENCL_DEAD)
+		goto out;
+
+	if (sgx_find_encl(encl->mm, encl_page->addr, &vma))
 		goto out;
 
 	backing = sgx_get_backing(encl, encl_page);
@@ -317,7 +320,7 @@ static void sgx_add_page_worker(struct work_struct *work)
 	do {
 		schedule();
 
-		if (encl->flags & SGX_ENCL_SUSPEND)
+		if (encl->flags & SGX_ENCL_DEAD)
 			skip_rest = true;
 
 		mutex_lock(&encl->lock);
@@ -578,7 +581,6 @@ static long sgx_ioc_enclave_create(struct file *filep, unsigned int cmd,
 		up_read(&current->mm->mmap_sem);
 		goto out;
 	}
-	encl->vma_cnt++;
 	vma->vm_private_data = encl;
 	up_read(&current->mm->mmap_sem);
 
@@ -682,7 +684,7 @@ static int __encl_add_page(struct sgx_encl *encl,
 
 	mutex_lock(&encl->lock);
 
-	if (encl->flags & SGX_ENCL_INITIALIZED) {
+	if (encl->flags & (SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD)) {
 		ret = -EINVAL;
 		goto out;
 	}
diff --git a/drivers/platform/x86/intel_sgx_util.c b/drivers/platform/x86/intel_sgx_util.c
index 3878d9a..d5238c2 100644
--- a/drivers/platform/x86/intel_sgx_util.c
+++ b/drivers/platform/x86/intel_sgx_util.c
@@ -135,21 +135,18 @@ void sgx_zap_tcs_ptes(struct sgx_encl *encl, struct vm_area_struct *vma)
 
 bool sgx_pin_mm(struct sgx_encl *encl)
 {
-	if (encl->flags & SGX_ENCL_SUSPEND)
-		return false;
-
 	mutex_lock(&encl->lock);
-	if (encl->vma_cnt) {
-		atomic_inc(&encl->mm->mm_count);
-	} else {
+	if (encl->flags & SGX_ENCL_DEAD) {
 		mutex_unlock(&encl->lock);
 		return false;
 	}
+
+	atomic_inc(&encl->mm->mm_count);
 	mutex_unlock(&encl->lock);
 
 	down_read(&encl->mm->mmap_sem);
 
-	if (!encl->vma_cnt) {
+	if (encl->flags & SGX_ENCL_DEAD) {
 		sgx_unpin_mm(encl);
 		return false;
 	}
@@ -177,7 +174,7 @@ void sgx_invalidate(struct sgx_encl *encl)
 			break;
 	}
 
-	encl->vma_cnt = 0;
+	encl->flags |= SGX_ENCL_DEAD;
 }
 
 int sgx_find_encl(struct mm_struct *mm, unsigned long addr,
diff --git a/drivers/platform/x86/intel_sgx_vma.c b/drivers/platform/x86/intel_sgx_vma.c
index 0649978..f273282 100644
--- a/drivers/platform/x86/intel_sgx_vma.c
+++ b/drivers/platform/x86/intel_sgx_vma.c
@@ -70,48 +70,43 @@
 
 static void sgx_vma_open(struct vm_area_struct *vma)
 {
-	struct sgx_encl *encl;
+	struct sgx_encl *encl = vma->vm_private_data;
 
-	/* Was vm_private_data nullified as a result of the previous fork? */
-	encl = vma->vm_private_data;
-	if (!encl)
-		goto out_fork;
+	/* When forking for the second time vm_private_data is already set to
+	 * NULL.
+	 */
+	if (!encl) {
+		zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
+		return;
+	}
 
-	/* Was the process forked? mm_struct changes when the process is
-	 * forked.
+	/* Invalidate enclave when the process has been forked for the first
+	 * time or pages have been added because PTEs must be in sync with
+	 * the EPCM entries.
 	 */
 	mutex_lock(&encl->lock);
 	if (encl->mm != vma->vm_mm) {
-		mutex_unlock(&encl->lock);
-		goto out_fork;
+		sgx_invalidate(encl);
+		vma->vm_private_data = NULL;
 	}
-	encl->vma_cnt++;
 	mutex_unlock(&encl->lock);
 
-	kref_get(&encl->refcount);
-	return;
-out_fork:
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-	vma->vm_private_data = NULL;
+	if (vma->vm_private_data)
+		kref_get(&encl->refcount);
 }
 
 static void sgx_vma_close(struct vm_area_struct *vma)
 {
 	struct sgx_encl *encl = vma->vm_private_data;
 
-	/* If process was forked, VMA is still there but
-	 * vm_private_data is set to NULL.
+	/* When forking for the second time vm_private_data is already set to
+	 * NULL.
 	 */
 	if (!encl)
 		return;
 
 	mutex_lock(&encl->lock);
-	encl->vma_cnt--;
-	vma->vm_private_data = NULL;
-
-	sgx_zap_tcs_ptes(encl, vma);
-	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
-
+	sgx_invalidate(encl);
 	mutex_unlock(&encl->lock);
 
 	kref_put(&encl->refcount, sgx_encl_release);
@@ -187,7 +182,7 @@ static struct sgx_encl_page *sgx_vma_do_fault(struct vm_area_struct *vma,
 
 	mutex_lock(&encl->lock);
 
-	if (!encl->vma_cnt) {
+	if (encl->flags & SGX_ENCL_DEAD) {
 		entry = ERR_PTR(-EFAULT);
 		goto out;
 	}
@@ -370,7 +365,7 @@ static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
 
 	if (!(encl->flags & SGX_ENCL_DEBUG) ||
 	    !(encl->flags & SGX_ENCL_INITIALIZED) ||
-	    (encl->flags & SGX_ENCL_SUSPEND))
+	    (encl->flags & SGX_ENCL_DEAD))
 		return -EFAULT;
 
 	sgx_dbg(encl, "%s addr=0x%lx, len=%d\n", op_str, addr, len);
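
Once SGX_ENCL_DEAD is set, every entry point refuses to touch the EPC instead
of risking a stale mapping. Below is a condensed sketch of the guard pattern
that the hunks above apply to the fault, add-page and debug-access paths (an
illustration only, not the driver code; example_do_fault() and
example_load_page() are placeholder names):

static struct sgx_encl_page *example_do_fault(struct sgx_encl *encl,
					      unsigned long addr)
{
	struct sgx_encl_page *entry;

	mutex_lock(&encl->lock);
	if (encl->flags & SGX_ENCL_DEAD) {
		/* A closed VMA killed the enclave: fail the fault rather
		 * than map a page whose EPCM entry may no longer match
		 * the PTE.
		 */
		entry = ERR_PTR(-EFAULT);
		goto out;
	}
	/* Placeholder for the normal EPC page load path. */
	entry = example_load_page(encl, addr);
out:
	mutex_unlock(&encl->lock);
	return entry;
}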