From patchwork Wed Dec  7 13:00:44 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 9464443
From: Jarkko Sakkinen
To: intel-sgx-kernel-dev@lists.01.org
Date: Wed,  7 Dec 2016 15:00:44 +0200
Message-Id: <20161207130045.22615-9-jarkko.sakkinen@linux.intel.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20161207130045.22615-1-jarkko.sakkinen@linux.intel.com>
References: <20161207130045.22615-1-jarkko.sakkinen@linux.intel.com>
Subject: [intel-sgx-kernel-dev] [PATCH v7 8/9] intel_sgx: add LRU algorithm
 to page swapping

From: Sean Christopherson

Test and clear the A bit of an EPC page when isolating pages during EPC
page swap. Move accessed pages to the end of the load list instead of
the eviction list, i.e. isolate only those pages that have not been
accessed since the last time the swapping flow was run. This basic LRU
algorithm yields a significant improvement in throughput when the
system is under heavy EPC pressure.

This patch is based on code originally written by Serge Ayoun for the
out-of-tree driver, with a bug fix: EPC pages are now always moved to
the end of the load list if they have been recently accessed.
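As a rough illustration of the policy described above (editorial sketch,
not part of the patch), the standalone C program below models the
isolate step with mock types. The names mock_page, test_and_clear_young()
and isolate_pages() are hypothetical stand-ins for sgx_encl_page,
sgx_test_and_clear_young() and sgx_isolate_pages(), and a flat array
stands in for the load list, so the "rotate accessed pages to the tail"
detail is only noted in comments.

/*
 * Illustrative sketch only: a second-chance / LRU style isolation pass.
 * Pages whose accessed flag was set since the last scan get another
 * round on the load list; pages that are neither accessed nor reserved
 * are picked for eviction.  All types and helpers are mock stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 8

struct mock_page {
	int id;
	bool accessed;   /* stands in for the PTE Accessed (A) bit   */
	bool reserved;   /* stands in for SGX_ENCL_PAGE_RESERVED     */
	bool isolated;   /* set when the page is picked for eviction */
};

/* Stand-in for sgx_test_and_clear_young(): read and clear the A bit. */
static bool test_and_clear_young(struct mock_page *page)
{
	bool young = page->accessed;

	page->accessed = false;
	return young;
}

/*
 * Stand-in for sgx_isolate_pages(): pick up to nr_to_scan pages for
 * eviction, skipping pages that were accessed since the last scan.
 * In the real driver skipped pages are rotated to the tail of the
 * enclave's load list so they are revisited last next time.
 */
static void isolate_pages(struct mock_page *pages, int nr_to_scan)
{
	int i;

	for (i = 0; i < nr_to_scan && i < NR_PAGES; i++) {
		struct mock_page *page = &pages[i];

		if (!test_and_clear_young(page) && !page->reserved)
			page->isolated = true;   /* would move to the eviction list */
	}
}

int main(void)
{
	struct mock_page pages[NR_PAGES];
	int i;

	for (i = 0; i < NR_PAGES; i++) {
		pages[i].id = i;
		pages[i].accessed = (i % 2 == 0);   /* pretend even pages were touched */
		pages[i].reserved = false;
		pages[i].isolated = false;
	}

	isolate_pages(pages, NR_PAGES);

	for (i = 0; i < NR_PAGES; i++)
		printf("page %d: %s\n", pages[i].id,
		       pages[i].isolated ? "isolated for eviction"
					 : "kept on load list (recently used)");

	return 0;
}

In the real driver the accessed information comes from the PTE rather
than a struct field, and skipped pages simply stay on the enclave's
load list for a later scan.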
For an unknown reason, a version of the out-of-tree driver containing
this regression slipped into GitHub.

Signed-off-by: Sean Christopherson
Reviewed-by: Jarkko Sakkinen
Tested-by: Jarkko Sakkinen
---
 drivers/platform/x86/intel_sgx.h            |  2 +-
 drivers/platform/x86/intel_sgx_ioctl.c      |  1 +
 drivers/platform/x86/intel_sgx_page_cache.c | 59 +++++++++++++++++++++--------
 drivers/platform/x86/intel_sgx_vma.c        |  1 +
 4 files changed, 46 insertions(+), 17 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx.h b/drivers/platform/x86/intel_sgx.h
index add3565..b659b71 100644
--- a/drivers/platform/x86/intel_sgx.h
+++ b/drivers/platform/x86/intel_sgx.h
@@ -193,7 +193,7 @@ long sgx_compat_ioctl(struct file *filep, unsigned int cmd, unsigned long arg);
 #endif
 
 /* Utility functions */
-
+int sgx_test_and_clear_young(struct sgx_encl_page *page, struct sgx_encl *encl);
 void *sgx_get_epc_page(struct sgx_epc_page *entry);
 void sgx_put_epc_page(void *epc_page_vaddr);
 struct page *sgx_get_backing(struct sgx_encl *encl,
diff --git a/drivers/platform/x86/intel_sgx_ioctl.c b/drivers/platform/x86/intel_sgx_ioctl.c
index ab0a4a3..53fa510 100644
--- a/drivers/platform/x86/intel_sgx_ioctl.c
+++ b/drivers/platform/x86/intel_sgx_ioctl.c
@@ -296,6 +296,7 @@ static bool sgx_process_add_page_req(struct sgx_add_page_req *req)
 	}
 
 	encl_page->epc_page = epc_page;
+	sgx_test_and_clear_young(encl_page, encl);
 	list_add_tail(&encl_page->load_list, &encl->load_list);
 
 	mutex_unlock(&encl->lock);
diff --git a/drivers/platform/x86/intel_sgx_page_cache.c b/drivers/platform/x86/intel_sgx_page_cache.c
index 49dd664..69868fd 100644
--- a/drivers/platform/x86/intel_sgx_page_cache.c
+++ b/drivers/platform/x86/intel_sgx_page_cache.c
@@ -77,6 +77,42 @@ static unsigned int sgx_nr_high_pages;
 struct task_struct *ksgxswapd_tsk;
 static DECLARE_WAIT_QUEUE_HEAD(ksgxswapd_waitq);
+
+static int sgx_test_and_clear_young_cb(pte_t *ptep, pgtable_t token,
+				       unsigned long addr, void *data)
+{
+	pte_t pte;
+	int ret;
+
+	ret = pte_young(*ptep);
+	if (ret) {
+		pte = pte_mkold(*ptep);
+		set_pte_at((struct mm_struct *)data, addr, ptep, pte);
+	}
+
+	return ret;
+}
+
+/**
+ * sgx_test_and_clear_young() - Test and reset the accessed bit
+ * @page:	enclave EPC page to be tested for recent access
+ * @encl:	enclave which owns @page
+ *
+ * Checks the Access (A) bit from the PTE corresponding to the
+ * enclave page and clears it. Returns 1 if the page has been
+ * recently accessed and 0 if not.
+ */
+int sgx_test_and_clear_young(struct sgx_encl_page *page, struct sgx_encl *encl)
+{
+	struct vm_area_struct *vma = sgx_find_vma(encl, page->addr);
+
+	if (!vma)
+		return 0;
+
+	return apply_to_page_range(vma->vm_mm, page->addr, PAGE_SIZE,
+				   sgx_test_and_clear_young_cb, vma->vm_mm);
+}
+
 
 static struct sgx_tgid_ctx *sgx_isolate_tgid_ctx(unsigned long nr_to_scan)
 {
 	struct sgx_tgid_ctx *ctx = NULL;
@@ -166,7 +202,8 @@ static void sgx_isolate_pages(struct sgx_encl *encl,
 					 struct sgx_encl_page,
 					 load_list);
 
-		if (!(entry->flags & SGX_ENCL_PAGE_RESERVED)) {
+		if (!sgx_test_and_clear_young(entry, encl) &&
+		    !(entry->flags & SGX_ENCL_PAGE_RESERVED)) {
 			entry->flags |= SGX_ENCL_PAGE_RESERVED;
 			list_move_tail(&entry->load_list, dst);
 		} else {
@@ -268,19 +305,6 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 	entry = list_first_entry(src, struct sgx_encl_page,
 				 load_list);
 
-	if (!sgx_pin_mm(encl)) {
-		while (!list_empty(src)) {
-			entry = list_first_entry(src, struct sgx_encl_page,
-						 load_list);
-			list_del(&entry->load_list);
-			mutex_lock(&encl->lock);
-			sgx_free_encl_page(entry, encl, 0);
-			mutex_unlock(&encl->lock);
-		}
-
-		return;
-	}
-
 	mutex_lock(&encl->lock);
 
 	/* EBLOCK */
@@ -346,8 +370,6 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 	}
 
 	mutex_unlock(&encl->lock);
-
-	sgx_unpin_mm(encl);
 }
 
 static void sgx_swap_pages(unsigned long nr_to_scan)
@@ -364,9 +386,14 @@ static void sgx_swap_pages(unsigned long nr_to_scan)
 	if (!encl)
 		goto out;
 
+	if (!sgx_pin_mm(encl))
+		goto out_enclave;
+
 	sgx_isolate_pages(encl, &cluster, nr_to_scan);
 	sgx_write_pages(encl, &cluster);
 
+	sgx_unpin_mm(encl);
+out_enclave:
 	kref_put(&encl->refcount, sgx_encl_release);
 out:
 	kref_put(&ctx->refcount, sgx_tgid_ctx_release);
diff --git a/drivers/platform/x86/intel_sgx_vma.c b/drivers/platform/x86/intel_sgx_vma.c
index 54690de..d9f8b4e 100644
--- a/drivers/platform/x86/intel_sgx_vma.c
+++ b/drivers/platform/x86/intel_sgx_vma.c
@@ -260,6 +260,7 @@ static struct sgx_encl_page *sgx_vma_do_fault(struct vm_area_struct *vma,
 
 	/* Do not free */
 	epc_page = NULL;
+	sgx_test_and_clear_young(entry, encl);
 	list_add_tail(&entry->load_list, &encl->load_list);
 out:
 	mutex_unlock(&encl->lock);