From patchwork Wed Dec 21 13:45:54 2016
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 9483013
From: Jarkko Sakkinen
To: intel-sgx-kernel-dev@lists.01.org
Date: Wed, 21 Dec 2016 15:45:54 +0200
Message-Id: <1482327954-13747-1-git-send-email-jarkko.sakkinen@linux.intel.com>
Subject: [intel-sgx-kernel-dev] [PATCH] intel_sgx: simplify sgx_write_pages()

Now that the sgx_ewb() flow has sane error recovery, we can simplify
sgx_write_pages() significantly by moving the pinning of the backing page
into sgx_ewb(). This was not possible before, as in some situations pinning
could legally fail.
Signed-off-by: Jarkko Sakkinen
Reviewed-by: Sean Christopherson
Tested-by: Sean Christopherson
---
 drivers/platform/x86/intel_sgx_page_cache.c | 63 ++++++++++++-----------------
 1 file changed, 25 insertions(+), 38 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx_page_cache.c b/drivers/platform/x86/intel_sgx_page_cache.c
index 36d4d54..d073057 100644
--- a/drivers/platform/x86/intel_sgx_page_cache.c
+++ b/drivers/platform/x86/intel_sgx_page_cache.c
@@ -233,48 +233,57 @@ static void sgx_etrack(struct sgx_epc_page *epc_page)
 }
 
 static int __sgx_ewb(struct sgx_encl *encl,
-		     struct sgx_encl_page *encl_page,
-		     struct page *backing)
+		     struct sgx_encl_page *encl_page)
 {
 	struct sgx_page_info pginfo;
+	struct page *backing;
 	void *epc;
 	void *va;
 	int ret;
 
-	pginfo.srcpge = (unsigned long)kmap_atomic(backing);
+	backing = sgx_get_backing(encl, encl_page);
+	if (IS_ERR(backing)) {
+		ret = PTR_ERR(backing);
+		sgx_warn(encl, "pinning the backing page for EWB failed with %d\n",
+			 ret);
+		return ret;
+	}
+
 	epc = sgx_get_epc_page(encl_page->epc_page);
 	va = sgx_get_epc_page(encl_page->va_page->epc_page);
 
+	pginfo.srcpge = (unsigned long)kmap_atomic(backing);
 	pginfo.pcmd = (unsigned long)&encl_page->pcmd;
 	pginfo.linaddr = 0;
 	pginfo.secs = 0;
 	ret = __ewb(&pginfo, epc,
 		    (void *)((unsigned long)va + encl_page->va_offset));
+	kunmap_atomic((void *)(unsigned long)pginfo.srcpge);
 
 	sgx_put_epc_page(va);
 	sgx_put_epc_page(epc);
-	kunmap_atomic((void *)(unsigned long)pginfo.srcpge);
+	sgx_put_backing(backing, true);
 
 	return ret;
 }
 
 static bool sgx_ewb(struct sgx_encl *encl,
-		    struct sgx_encl_page *entry,
-		    struct page *backing)
+		    struct sgx_encl_page *entry)
 {
-	int ret = __sgx_ewb(encl, entry, backing);
+	int ret = __sgx_ewb(encl, entry);
 
 	if (ret == SGX_NOT_TRACKED) {
 		/* slow path, IPI needed */
 		smp_call_function(sgx_ipi_cb, NULL, 1);
-		ret = __sgx_ewb(encl, entry, backing);
+		ret = __sgx_ewb(encl, entry);
 	}
 
 	if (ret) {
 		/* make enclave inaccessible */
 		sgx_invalidate(encl);
 		smp_call_function(sgx_ipi_cb, NULL, 1);
-		sgx_err(encl, "EWB returned %d, enclave killed\n", ret);
+		if (ret > 0)
+			sgx_err(encl, "EWB returned %d, enclave killed\n", ret);
 		return false;
 	}
 
@@ -294,11 +303,8 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 {
 	struct sgx_encl_page *entry;
 	struct sgx_encl_page *tmp;
-	struct page *pages[SGX_NR_SWAP_CLUSTER_MAX + 1];
 	struct vm_area_struct *evma;
 	unsigned int free_flags;
-	int cnt = 0;
-	int i = 0;
 
 	if (list_empty(src))
 		return;
@@ -316,25 +322,14 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 			continue;
 		}
 
-		pages[cnt] = sgx_get_backing(encl, entry);
-		if (IS_ERR(pages[cnt])) {
-			list_del(&entry->load_list);
-			list_add_tail(&entry->load_list, &encl->load_list);
-			entry->flags &= ~SGX_ENCL_PAGE_RESERVED;
-			continue;
-		}
-
 		zap_vma_ptes(evma, entry->addr, PAGE_SIZE);
 		sgx_eblock(entry->epc_page);
-		cnt++;
 	}
 
 	/* ETRACK */
 	sgx_etrack(encl->secs_page.epc_page);
 
 	/* EWB */
-	i = 0;
-
 	while (!list_empty(src)) {
 		entry = list_first_entry(src, struct sgx_encl_page,
 					 load_list);
@@ -344,29 +339,21 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 
 		evma = sgx_find_vma(encl, entry->addr);
 		if (evma) {
-			if (sgx_ewb(encl, entry, pages[i]))
+			if (sgx_ewb(encl, entry))
 				free_flags = SGX_FREE_SKIP_EREMOVE;
 			encl->secs_child_cnt--;
 		}
 
 		sgx_free_encl_page(entry, encl, free_flags);
-		sgx_put_backing(pages[i++], evma);
 	}
 
-	/* Allow SECS page eviction only when the encl is initialized.
-	 */
-	if (!encl->secs_child_cnt &&
-	    (encl->flags & SGX_ENCL_INITIALIZED)) {
-		pages[cnt] = sgx_get_backing(encl, &encl->secs_page);
-		if (!IS_ERR(pages[cnt])) {
-			free_flags = 0;
-			if (sgx_ewb(encl, &encl->secs_page, pages[cnt]))
-				free_flags = SGX_FREE_SKIP_EREMOVE;
-
-			encl->flags |= SGX_ENCL_SECS_EVICTED;
+	if (!encl->secs_child_cnt && (encl->flags & SGX_ENCL_INITIALIZED)) {
+		free_flags = 0;
+		if (sgx_ewb(encl, &encl->secs_page))
+			free_flags = SGX_FREE_SKIP_EREMOVE;
 
-			sgx_free_encl_page(&encl->secs_page, encl, free_flags);
-			sgx_put_backing(pages[cnt], true);
-		}
+		encl->flags |= SGX_ENCL_SECS_EVICTED;
+		sgx_free_encl_page(&encl->secs_page, encl, free_flags);
 	}
 
 	mutex_unlock(&encl->lock);