From patchwork Thu Aug 4 20:14:56 2022
X-Patchwork-Submitter: "Dhanraj, Vijay" <vijay.dhanraj@intel.com>
X-Patchwork-Id: 12936681
From: vijay.dhanraj@intel.com
To: linux-sgx@vger.kernel.org, jarkko@kernel.org, reinette.chatre@intel.com,
    dave.hansen@linux.intel.com
Cc: haitao.huang@intel.com
Subject: [PATCH] Add SGX selftest `augment_via_eaccept_long`
Date: Thu, 4 Aug 2022 13:14:56 -0700
Message-Id: <20220804201456.33418-1-vijay.dhanraj@intel.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-sgx@vger.kernel.org

From: Vijay Dhanraj <vijay.dhanraj@intel.com>

Add a new test case, `augment_via_eaccept_long`, which is the same as
`augment_via_eaccept` but adds a much larger number of EPC pages in
order to stress test `EAUG` via `EACCEPT`.

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Signed-off-by: Haitao Huang <haitao.huang@intel.com>
---
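For reference, the EAUG path this test stresses is driven entirely from
inside the enclave: the test mmap()s a large range over the enclave's
address space and the enclave EACCEPTs each page; the first EACCEPT of a
page faults, the kernel EAUGs a page, and the retried EACCEPT succeeds.
The real enclave-side implementation is the ENCL_OP_EACCEPT handler in
test_encl.c; the snippet below is only an illustrative sketch of the
ENCLU[EACCEPT] leaf call (the helper name and types are made up for the
sketch, not taken from the selftest):

  #include <stdint.h>

  #define EACCEPT 0x05	/* ENCLU leaf function number */

  /* 64-byte aligned SECINFO, as EACCEPT requires. */
  struct sgx_secinfo {
  	uint64_t flags;
  	uint8_t reserved[56];
  } __attribute__((aligned(64)));

  /*
   * Issue ENCLU[EACCEPT] for one page from inside the enclave:
   * EAX = leaf, RBX = &SECINFO, RCX = page address. EAX returns 0 on
   * success or an SGX error code.
   */
  static inline uint64_t sgx_eaccept(uint64_t flags, uint64_t epc_addr)
  {
  	struct sgx_secinfo secinfo __attribute__((aligned(64))) = {
  		.flags = flags,
  	};
  	uint64_t rax = EACCEPT;

  	asm volatile(".byte 0x0f, 0x01, 0xd7"	/* ENCLU */
  		     : "+a" (rax)
  		     : "b" (&secinfo), "c" (epc_addr)
  		     : "memory");

  	return rax;
  }
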
 tools/testing/selftests/sgx/load.c      |   5 +-
 tools/testing/selftests/sgx/main.c      | 120 +++++++++++++++++++++++-
 tools/testing/selftests/sgx/main.h      |   3 +-
 tools/testing/selftests/sgx/sigstruct.c |   2 +-
 4 files changed, 125 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/sgx/load.c b/tools/testing/selftests/sgx/load.c
index 94bdeac1cf04..7de1b15c90b1 100644
--- a/tools/testing/selftests/sgx/load.c
+++ b/tools/testing/selftests/sgx/load.c
@@ -171,7 +171,8 @@ uint64_t encl_get_entry(struct encl *encl, const char *symbol)
 	return 0;
 }
 
-bool encl_load(const char *path, struct encl *encl, unsigned long heap_size)
+bool encl_load(const char *path, struct encl *encl, unsigned long heap_size,
+	       unsigned long edmm_size)
 {
 	const char device_path[] = "/dev/sgx_enclave";
 	struct encl_segment *seg;
@@ -300,7 +301,7 @@ bool encl_load(const char *path, struct encl *encl, unsigned long heap_size)
 	encl->src_size = encl->segment_tbl[j].offset +
 			 encl->segment_tbl[j].size;
 
-	for (encl->encl_size = 4096; encl->encl_size < encl->src_size; )
+	for (encl->encl_size = 4096; encl->encl_size < encl->src_size + edmm_size;)
 		encl->encl_size <<= 1;
 
 	return true;
diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index 9820b3809c69..65e79682f75e 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -25,6 +25,8 @@ static const uint64_t MAGIC = 0x1122334455667788ULL;
 static const uint64_t MAGIC2 = 0x8877665544332211ULL;
 vdso_sgx_enter_enclave_t vdso_sgx_enter_enclave;
 
+static const unsigned long edmm_size = 8589934592; /* 8GB */
+
 /*
  * Security Information (SECINFO) data structure needed by a few SGX
  * instructions (eg. ENCLU[EACCEPT] and ENCLU[EMODPE]) holds meta-data
@@ -183,7 +185,7 @@ static bool setup_test_encl(unsigned long heap_size, struct encl *encl,
 	unsigned int i;
 	void *addr;
 
-	if (!encl_load("test_encl.elf", encl, heap_size)) {
+	if (!encl_load("test_encl.elf", encl, heap_size, edmm_size)) {
 		encl_delete(encl);
 		TH_LOG("Failed to load the test enclave.");
 		return false;
@@ -1210,6 +1212,122 @@ TEST_F(enclave, augment_via_eaccept)
 	munmap(addr, PAGE_SIZE);
 }
 
+/*
+ * Test the addition of a large number of pages to an initialized enclave
+ * via a pre-emptive run of EACCEPT on each page to be added.
+ */
+#define TIMEOUT_LONG 900 /* seconds */
+TEST_F_TIMEOUT(enclave, augment_via_eaccept_long, TIMEOUT_LONG)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	size_t total_size = 0;
+	void *addr;
+	unsigned long i;
+
+	if (!sgx2_supported())
+		SKIP(return, "SGX2 not supported");
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+		TH_LOG("test enclave: total_size = %ld, seg->size = %ld", total_size, seg->size);
+	}
+
+	/*
+	 * The actual enclave size is expected to be larger than the loaded
+	 * test enclave since the enclave size must be a power of 2 in bytes
+	 * while test_encl does not consume it all.
+	 */
+	EXPECT_LT(total_size + edmm_size, self->encl.encl_size);
+
+	/*
+	 * mmap() memory at the end of the existing enclave to be used for
+	 * dynamic EPC pages.
+	 *
+	 * The kernel will allow a new mapping with any permissions if it
+	 * falls into the enclave's address range but is not backed by
+	 * existing enclave pages.
+	 */
+	TH_LOG("mmapping pages at end of enclave...");
+	addr = mmap((void *)self->encl.encl_base + total_size, edmm_size,
+		    PROT_READ | PROT_WRITE | PROT_EXEC, MAP_SHARED | MAP_FIXED,
+		    self->encl.fd, 0);
+	EXPECT_NE(addr, MAP_FAILED);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Run EACCEPT on each new page to trigger the #PF->EAUG->EACCEPT (again
+	 * without a #PF) flow. All should be transparent to userspace.
+	 */
+	TH_LOG("Entering enclave to run EACCEPT for each page of %lu bytes; this may take a while ...",
+	       edmm_size);
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	for (i = 0; i < edmm_size; i += 4096) {
+		eaccept_op.epc_addr = (uint64_t)(addr + i);
+
+		EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+		if (self->run.exception_vector == 14 &&
+		    self->run.exception_error_code == 4 &&
+		    self->run.exception_addr == self->encl.encl_base) {
+			munmap(addr, edmm_size);
+			SKIP(return, "Kernel does not support adding pages to initialized enclave");
+		}
+
+		EXPECT_EQ(self->run.exception_vector, 0);
+		EXPECT_EQ(self->run.exception_error_code, 0);
+		EXPECT_EQ(self->run.exception_addr, 0);
+		ASSERT_EQ(eaccept_op.ret, 0);
+		ASSERT_EQ(self->run.function, EEXIT);
+	}
+
+	/*
+	 * The new pages should be accessible from within the enclave -
+	 * attempt to write to one.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = (unsigned long)addr;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory from the newly added page that was just written to,
+	 * confirming that the data previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = (unsigned long)addr;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	munmap(addr, edmm_size);
+}
+
 /*
  * SGX2 page type modification test in two phases:
  * Phase 1:
diff --git a/tools/testing/selftests/sgx/main.h b/tools/testing/selftests/sgx/main.h
index fc585be97e2f..fe5d39ac0e1e 100644
--- a/tools/testing/selftests/sgx/main.h
+++ b/tools/testing/selftests/sgx/main.h
@@ -35,7 +35,8 @@ extern unsigned char sign_key[];
 extern unsigned char sign_key_end[];
 
 void encl_delete(struct encl *ctx);
-bool encl_load(const char *path, struct encl *encl, unsigned long heap_size);
+bool encl_load(const char *path, struct encl *encl, unsigned long heap_size,
+	       unsigned long edmm_size);
 bool encl_measure(struct encl *encl);
 bool encl_build(struct encl *encl);
 uint64_t encl_get_entry(struct encl *encl, const char *symbol);
diff --git a/tools/testing/selftests/sgx/sigstruct.c b/tools/testing/selftests/sgx/sigstruct.c
index 50c5ab1aa6fa..6000cf0e4975 100644
--- a/tools/testing/selftests/sgx/sigstruct.c
+++ b/tools/testing/selftests/sgx/sigstruct.c
@@ -343,7 +343,7 @@ bool encl_measure(struct encl *encl)
 	if (!ctx)
 		goto err;
 
-	if (!mrenclave_ecreate(ctx, encl->src_size))
+	if (!mrenclave_ecreate(ctx, encl->encl_size))
 		goto err;
 
 	for (i = 0; i < encl->nr_segments; i++) {
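
A note on the sigstruct.c hunk: mrenclave_ecreate() folds the enclave
SIZE into the measurement, and that value has to match the SECS.SIZE the
kernel uses at ECREATE time. encl_load() now rounds encl_size up to a
power of two that also covers edmm_size, so the measurement must be
derived from encl->encl_size rather than encl->src_size; otherwise EINIT
would fail with a measurement mismatch. A rough sketch of the rounding
both sides must agree on (the helper name is illustrative, not taken
from the selftest):

  #include <stdint.h>

  /*
   * Round a byte count up to the power-of-two enclave size that ECREATE
   * requires; encl_load() does the equivalent with its shift loop.
   * Passing encl->encl_size (already a power of two covering
   * src_size + edmm_size) to mrenclave_ecreate() keeps the measured SIZE
   * identical to the one the kernel uses.
   */
  static uint64_t encl_size_round(uint64_t bytes)
  {
  	uint64_t size = 4096;

  	while (size < bytes)
  		size <<= 1;

  	return size;
  }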