From patchwork Wed Jun 8 03:26:52 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12872887
From: Zhiquan Li
To: linux-sgx@vger.kernel.org, tony.luck@intel.com, jarkko@kernel.org, dave.hansen@linux.intel.com
Cc: seanjc@google.com, kai.huang@intel.com, fan.du@intel.com, cathy.zhang@intel.com, zhiquan1.li@intel.com
Subject: [PATCH v4 1/3] x86/sgx: Repurpose the owner field as the virtual address of virtual EPC page
Date: Wed, 8 Jun 2022 11:26:52 +0800
Message-Id: <20220608032654.1764936-2-zhiquan1.li@intel.com>
In-Reply-To: <20220608032654.1764936-1-zhiquan1.li@intel.com>
References: <20220608032654.1764936-1-zhiquan1.li@intel.com>

When an EPC page triggers a machine check, only the physical address of
the page is reported. But in order to inject #MC into the hypervisor,
the virtual address is required.

Repurpose the "owner" field as the virtual address of the virtual EPC
page so that arch_memory_failure() can easily retrieve it. Add a new
EPC page flag - SGX_EPC_PAGE_KVM_GUEST - to indicate how the field
should be interpreted.

Signed-off-by: Zhiquan Li
Acked-by: Kai Huang
---
Changes since V3:
- Moved the definition of the EPC page flag SGX_EPC_PAGE_KVM_GUEST from
  the third patch of Cathy's SGX rebootless recovery series, discarding
  the irrelevant portions, since that series may need more time to be
  reworked and the two are separate features.
  Link: https://lore.kernel.org/linux-sgx/41704e5d4c03b49fcda12e695595211d950cfb08.camel@kernel.org/T/#m9782d23496cacecb7da07a67daa79f4b322ae170

Changes since V2:
- Reworked the patch as suggested by Jarkko.
- Removed struct sgx_vepc_page and the related code.
- Removed the new EPC page flag SGX_EPC_PAGE_IS_VEPC definition as it
  duplicated SGX_EPC_PAGE_KVM_GUEST.
  Link: https://lore.kernel.org/linux-sgx/eb95b32ecf3d44a695610cf7f2816785@intel.com/T/#u

Changes since V1:
- Added documentation suggested by Jarkko.
---
 arch/x86/kernel/cpu/sgx/sgx.h  | 2 ++
 arch/x86/kernel/cpu/sgx/virt.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 0f17def9fe6f..b43582da1bcf 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,6 +28,8 @@
 
 /* Pages on free list */
 #define SGX_EPC_PAGE_IS_FREE		BIT(1)
+/* Pages allocated for KVM guest */
+#define SGX_EPC_PAGE_KVM_GUEST		BIT(2)
 
 struct sgx_epc_page {
 	unsigned int section;
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 6a77a14eee38..776ae5c1c032 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -46,10 +46,12 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
 	if (epc_page)
 		return 0;
 
-	epc_page = sgx_alloc_epc_page(vepc, false);
+	epc_page = sgx_alloc_epc_page((void *)addr, false);
 	if (IS_ERR(epc_page))
 		return PTR_ERR(epc_page);
 
+	epc_page->flags |= SGX_EPC_PAGE_KVM_GUEST;
+
 	ret = xa_err(xa_store(&vepc->page_array, index, epc_page, GFP_KERNEL));
 	if (ret)
 		goto err_free;
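Reading aid, not part of the series: the aliasing introduced above is only
consumed by patches 2 and 3. The minimal sketch below restates, in one place,
how a consumer such as arch_memory_failure() is expected to interpret the
repurposed field; the helper name sgx_epc_page_vaddr() and its standalone form
are hypothetical, and it assumes the struct sgx_epc_page / struct sgx_encl_page
layout from sgx.h.

/*
 * Hypothetical helper, for illustration only. With SGX_EPC_PAGE_KVM_GUEST
 * set, "owner" already holds the host virtual address of the virtual EPC
 * page; otherwise it still points to the host enclave's struct
 * sgx_encl_page, whose "desc" carries the enclave page address.
 */
static unsigned long sgx_epc_page_vaddr(struct sgx_epc_page *page)
{
	if (!page->owner)
		return 0;

	if (page->flags & SGX_EPC_PAGE_KVM_GUEST)
		/* Virtual EPC: owner is the host virtual address itself. */
		return (unsigned long)page->owner & PAGE_MASK;

	/* Host enclave: owner points to struct sgx_encl_page. */
	return page->owner->desc & PAGE_MASK;
}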
From patchwork Wed Jun 8 03:26:53 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12872890
From: Zhiquan Li
To: linux-sgx@vger.kernel.org, tony.luck@intel.com, jarkko@kernel.org, dave.hansen@linux.intel.com
Cc: seanjc@google.com, kai.huang@intel.com, fan.du@intel.com, cathy.zhang@intel.com, zhiquan1.li@intel.com
Subject: [PATCH v4 2/3] x86/sgx: Fine grained SGX MCA behavior for virtualization
Date: Wed, 8 Jun 2022 11:26:53 +0800
Message-Id: <20220608032654.1764936-3-zhiquan1.li@intel.com>
In-Reply-To: <20220608032654.1764936-1-zhiquan1.li@intel.com>
References: <20220608032654.1764936-1-zhiquan1.li@intel.com>

When a VM guest accesses an SGX EPC page that has a memory failure, the
current behavior kills the whole guest, whereas only the SGX application
inside it should be killed. To fix this, send SIGBUS with code
BUS_MCEERR_AR and extra information so that the hypervisor can inject
#MC information into the guest, which is helpful in the SGX case. The
rest is handled on the guest side; hypervisors such as QEMU already
have mature facilities to convert an HVA to a GPA and inject #MC into
the guest OS.

Unlike host enclaves, a virtual EPC instance cannot be shared by
multiple VMs, because how enclaves are created is entirely up to the
guest. Sharing a virtual EPC instance would very likely break enclaves
in all of the VMs unexpectedly.

The SGX virtual EPC driver does not explicitly prevent a virtual EPC
instance from being shared by multiple VMs via fork(). However, KVM
does not support running a VM across multiple mm structures, and the
de facto userspace hypervisor (QEMU) does not use fork() to create a
new VM, so in practice this should not happen.

Signed-off-by: Zhiquan Li
Acked-by: Kai Huang
Link: https://lore.kernel.org/linux-sgx/443cb425-009c-2784-56f4-5e707122de76@intel.com/T/#m1d1f4098f4fad78034e8706a60e4d79c119db407
---
No changes since V3.

Changes since V2:
- Retrieve the virtual address from the "owner" field of struct
  sgx_epc_page instead of struct sgx_vepc_page.
- Replaced the EPC page flag SGX_EPC_PAGE_IS_VEPC with
  SGX_EPC_PAGE_KVM_GUEST as they were duplicates.

Changes since V1:
- Added Acked-by from Kai Huang.
- Added Kai's explanation of why we need not consider the case of one
  virtual EPC instance being shared by two guests.
---
 arch/x86/kernel/cpu/sgx/main.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index ab4ec54bbdd9..faca7f73b06d 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -715,6 +715,8 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
 	struct sgx_epc_section *section;
 	struct sgx_numa_node *node;
+	int ret = 0;
+	unsigned long vaddr;
 
 	/*
 	 * mm/memory-failure.c calls this routine for all errors
@@ -731,8 +733,26 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	 * error. The signal may help the task understand why the
 	 * enclave is broken.
 	 */
-	if (flags & MF_ACTION_REQUIRED)
-		force_sig(SIGBUS);
+	if (flags & MF_ACTION_REQUIRED) {
+		/*
+		 * Provide extra info to the task so that it can make further
+		 * decision but not simply kill it. This is quite useful for
+		 * virtualization case.
+		 */
+		if (page->flags & SGX_EPC_PAGE_KVM_GUEST) {
+			/*
+			 * The "owner" field is repurposed as the virtual address
+			 * of virtual EPC page.
+			 */
+			vaddr = (unsigned long)page->owner & PAGE_MASK;
+			ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)vaddr,
+					PAGE_SHIFT);
+			if (ret < 0)
+				pr_err("Memory failure: Error sending signal to %s:%d: %d\n",
+					current->comm, current->pid, ret);
+		} else
+			force_sig(SIGBUS);
+	}
 
 	section = &sgx_epc_sections[page->section];
 	node = section->node;

From patchwork Wed Jun 8 03:26:54 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12872888
From: Zhiquan Li
To: linux-sgx@vger.kernel.org, tony.luck@intel.com, jarkko@kernel.org, dave.hansen@linux.intel.com
Cc: seanjc@google.com, kai.huang@intel.com, fan.du@intel.com, cathy.zhang@intel.com, zhiquan1.li@intel.com
Subject: [PATCH v4 3/3] x86/sgx: Fine grained SGX MCA behavior for normal case
Date: Wed, 8 Jun 2022 11:26:54 +0800
Message-Id: <20220608032654.1764936-4-zhiquan1.li@intel.com>
In-Reply-To: <20220608032654.1764936-1-zhiquan1.li@intel.com>
References: <20220608032654.1764936-1-zhiquan1.li@intel.com>

When an application accesses an SGX EPC page that has a memory failure,
the task receives a SIGBUS signal without any extra information, unless
the EPC page has the SGX_EPC_PAGE_KVM_GUEST flag.
However, in some cases we use SGX only in a sub-task, and we do not
expect the entire task group to be killed because an SGX EPC page used
by a sub-task has a memory failure. To fix this, extend the solution to
the normal case: a regular SGX EPC page with a memory failure triggers
a SIGBUS signal with code BUS_MCEERR_AR and additional information, so
that the user has the opportunity to make a further decision.

Suppose an enclave is shared by multiple processes. When an enclave
page triggers a machine check, the enclave is disabled so that it
cannot be entered again. Killing the other processes with the same
enclave mapped would perhaps be overkill, but they are going to find
that the enclave is "dead" the next time they try to use it. Thanks to
Jarkko for the heads-up and to Tony for the clarification on this
point.

Our intention is to provide additional information so that the
application has more choices. The current behavior is gentle enough,
and we do not want to change it.

Signed-off-by: Zhiquan Li
---
No changes since V3.

Changes since V2:
- Adapted the code since struct sgx_vepc_page was discarded.
- Replaced the EPC page flag SGX_EPC_PAGE_IS_VEPC with
  SGX_EPC_PAGE_KVM_GUEST as they were duplicates.

Changes since V1:
- Added valuable information from Jarkko and Tony into the commit
  message.
---
 arch/x86/kernel/cpu/sgx/main.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index faca7f73b06d..69a2a29c8957 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -739,12 +739,15 @@ int arch_memory_failure(unsigned long pfn, int flags)
 		 * decision but not simply kill it. This is quite useful for
 		 * virtualization case.
 		 */
-		if (page->flags & SGX_EPC_PAGE_KVM_GUEST) {
+		if (page->owner) {
 			/*
 			 * The "owner" field is repurposed as the virtual address
 			 * of virtual EPC page.
 			 */
-			vaddr = (unsigned long)page->owner & PAGE_MASK;
+			if (page->flags & SGX_EPC_PAGE_KVM_GUEST)
+				vaddr = (unsigned long)page->owner & PAGE_MASK;
+			else
+				vaddr = (unsigned long)page->owner->desc & PAGE_MASK;
 			ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)vaddr,
 					PAGE_SHIFT);
 			if (ret < 0)
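For completeness, a minimal userspace sketch, not part of the series, of how a
task - an enclave host application or a VMM such as QEMU - might consume the
BUS_MCEERR_AR SIGBUS produced by patches 2 and 3. The handler below only logs
and exits; a real VMM would translate the reported HVA to a GPA and inject #MC
into the guest, and an enclave host could tear down and rebuild the enclave
instead of dying. All names are illustrative.

#define _GNU_SOURCE
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void sigbus_handler(int sig, siginfo_t *info, void *ucontext)
{
	(void)sig;
	(void)ucontext;

	if (info->si_code == BUS_MCEERR_AR) {
		/*
		 * info->si_addr is the page-aligned virtual address of the
		 * poisoned EPC page; info->si_addr_lsb is its granularity
		 * (PAGE_SHIFT). This is where a VMM would do HVA -> GPA
		 * translation and #MC injection.
		 */
		static const char msg[] = "SIGBUS: machine check on EPC page\n";

		write(STDERR_FILENO, msg, sizeof(msg) - 1);
	}
	_exit(1);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = sigbus_handler;
	sa.sa_flags = SA_SIGINFO;
	sigemptyset(&sa.sa_mask);
	sigaction(SIGBUS, &sa, NULL);

	/* ... map the enclave / run the guest workload here ... */
	pause();
	return 0;
}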