From patchwork Tue May 10 03:17:25 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12844454
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com
Cc: jarkko@kernel.org, dave.hansen@linux.intel.com, seanjc@google.com, fan.du@intel.com, zhiquan1.li@intel.com
Subject: [PATCH 1/4] x86/sgx: Move struct sgx_vepc definition to sgx.h
Date: Tue, 10 May 2022 11:17:25 +0800
Message-Id: <20220510031725.3181361-1-zhiquan1.li@intel.com>

Move struct sgx_vepc definition to sgx.h so that it can be used outside of virt.c.

Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
---
 arch/x86/kernel/cpu/sgx/sgx.h  | 5 +++++
 arch/x86/kernel/cpu/sgx/virt.c | 5 -----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 0f17def9fe6f..83ff8c3e81cf 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -101,4 +101,9 @@ static inline int __init sgx_vepc_init(void)
 
 void sgx_update_lepubkeyhash(u64 *lepubkeyhash);
 
+struct sgx_vepc {
+	struct xarray page_array;
+	struct mutex lock;
+};
+
 #endif /* _X86_SGX_H */
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 6a77a14eee38..c9c8638b5dc4 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -18,11 +18,6 @@
 #include "encls.h"
 #include "sgx.h"
 
-struct sgx_vepc {
-	struct xarray page_array;
-	struct mutex lock;
-};
-
 /*
  * Temporary SECS pages that cannot be EREMOVE'd due to having child in other
  * virtual EPC instances, and the lock to protect it.
From patchwork Tue May 10 03:17:37 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12844455
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com
Cc: jarkko@kernel.org, dave.hansen@linux.intel.com, seanjc@google.com, fan.du@intel.com, zhiquan1.li@intel.com
Subject: [PATCH 2/4] x86/sgx: Add struct sgx_vepc_page to manage EPC pages for vepc
Date: Tue, 10 May 2022 11:17:37 +0800
Message-Id: <20220510031737.3181410-1-zhiquan1.li@intel.com>

The current SGX data structures are insufficient to track the EPC pages used by vepc. For example, to retrieve the virtual address of an EPC page allocated to an enclave on the host, we can look at its owner, specifically the 'desc' field of struct sgx_encl_page. However, if the EPC page is allocated to a KVM guest, this information is not available, because the owner of such pages is the shared vepc instance.

Introduce struct sgx_vepc_page, which can act as the owner of vepc EPC pages and, like struct sgx_encl_page, record their useful information.

The canonical memory-failure path collects victim tasks by iterating over all tasks one by one and uses reverse mapping to obtain each victim task's virtual address. This is unnecessary for SGX, since an EPC page can be mapped to exactly ONE enclave. This enforced 1:1 mapping allows us to find the task's virtual address directly from the physical address.
Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
---
 arch/x86/kernel/cpu/sgx/sgx.h  |  7 +++++++
 arch/x86/kernel/cpu/sgx/virt.c | 24 +++++++++++++++++++-----
 2 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 83ff8c3e81cf..cc01d992453a 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -28,6 +28,8 @@
 /* Pages on free list */
 #define SGX_EPC_PAGE_IS_FREE		BIT(1)
+/* Pages is used by VM guest */
+#define SGX_EPC_PAGE_IS_VEPC		BIT(2)
 
 struct sgx_epc_page {
 	unsigned int section;
@@ -106,4 +108,9 @@ struct sgx_vepc {
 	struct mutex lock;
 };
 
+struct sgx_vepc_page {
+	unsigned long vaddr;
+	struct sgx_vepc *vepc;
+};
+
 #endif /* _X86_SGX_H */
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index c9c8638b5dc4..d7945a47ced8 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -29,6 +29,7 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
 			    struct vm_area_struct *vma, unsigned long addr)
 {
 	struct sgx_epc_page *epc_page;
+	struct sgx_vepc_page *owner;
 	unsigned long index, pfn;
 	int ret;
 
@@ -41,13 +42,22 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
 	if (epc_page)
 		return 0;
 
-	epc_page = sgx_alloc_epc_page(vepc, false);
-	if (IS_ERR(epc_page))
-		return PTR_ERR(epc_page);
+	owner = kzalloc(sizeof(*owner), GFP_KERNEL);
+	if (!owner)
+		return -ENOMEM;
+	owner->vepc = vepc;
+	owner->vaddr = addr & PAGE_MASK;
+
+	epc_page = sgx_alloc_epc_page(owner, false);
+	if (IS_ERR(epc_page)) {
+		ret = PTR_ERR(epc_page);
+		goto err_free_owner;
+	}
+	epc_page->flags = SGX_EPC_PAGE_IS_VEPC;
 
 	ret = xa_err(xa_store(&vepc->page_array, index, epc_page, GFP_KERNEL));
 	if (ret)
-		goto err_free;
+		goto err_free_page;
 
 	pfn = PFN_DOWN(sgx_get_epc_phys_addr(epc_page));
 
@@ -61,8 +71,10 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
 
 err_delete:
 	xa_erase(&vepc->page_array, index);
-err_free:
+err_free_page:
 	sgx_free_epc_page(epc_page);
+err_free_owner:
+	kfree(owner);
 	return ret;
 }
 
@@ -122,6 +134,7 @@ static int sgx_vepc_remove_page(struct sgx_epc_page *epc_page)
 
 static int sgx_vepc_free_page(struct sgx_epc_page *epc_page)
 {
+	struct sgx_vepc_page *owner = (struct sgx_vepc_page *)epc_page->owner;
 	int ret = sgx_vepc_remove_page(epc_page);
 	if (ret) {
 		/*
@@ -141,6 +154,7 @@ static int sgx_vepc_free_page(struct sgx_epc_page *epc_page)
 		return ret;
 	}
 
+	kfree(owner);
 	sgx_free_epc_page(epc_page);
 	return 0;
 }

From patchwork Tue May 10 03:17:48 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12844456
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com
Cc: jarkko@kernel.org, dave.hansen@linux.intel.com, seanjc@google.com, fan.du@intel.com, zhiquan1.li@intel.com
Subject: [PATCH 3/4] x86/sgx: Fine grained SGX MCA behavior for virtualization
Date: Tue, 10 May 2022 11:17:48 +0800
Message-Id: <20220510031748.3181459-1-zhiquan1.li@intel.com>

When a VM guest accesses an SGX EPC page that has suffered a memory failure, the current behavior kills the whole guest, while the expectation is to kill only the SGX application inside it. Fix this by sending SIGBUS with code BUS_MCEERR_AR and extra information, so that the hypervisor can inject the #MC information into the guest, which is particularly helpful in the SGX case.
Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 8e4bc6453d26..81801ab0009e 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -710,6 +710,8 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
 	struct sgx_epc_section *section;
 	struct sgx_numa_node *node;
+	struct sgx_vepc_page *owner;
+	int ret = 0;
 
 	/*
 	 * mm/memory-failure.c calls this routine for all errors
@@ -726,8 +728,22 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	 * error. The signal may help the task understand why the
 	 * enclave is broken.
 	 */
-	if (flags & MF_ACTION_REQUIRED)
-		force_sig(SIGBUS);
+	if (flags & MF_ACTION_REQUIRED) {
+		/*
+		 * In case the error memory is accessed by VM guest, provide
+		 * extra info for hypervisor to make further decision but not
+		 * simply kill it.
+		 */
+		if (page->flags & SGX_EPC_PAGE_IS_VEPC) {
+			owner = (struct sgx_vepc_page *)page->owner;
+			ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)owner->vaddr,
+					       PAGE_SHIFT);
+			if (ret < 0)
+				pr_err("Memory failure: Error sending signal to %s:%d: %d\n",
+				       current->comm, current->pid, ret);
+		} else
+			force_sig(SIGBUS);
+	}
 
 	section = &sgx_epc_sections[page->section];
 	node = section->node;

From patchwork Tue May 10 03:17:58 2022
X-Patchwork-Submitter: Zhiquan Li
X-Patchwork-Id: 12844457
From: Zhiquan Li <zhiquan1.li@intel.com>
To: linux-sgx@vger.kernel.org, tony.luck@intel.com
Cc: jarkko@kernel.org, dave.hansen@linux.intel.com, seanjc@google.com, fan.du@intel.com, zhiquan1.li@intel.com
Subject: [PATCH 4/4] x86/sgx: Fine grained SGX MCA behavior for normal case
Date: Tue, 10 May 2022 11:17:58 +0800
Message-Id: <20220510031758.3181525-1-zhiquan1.li@intel.com>

When an application accesses an SGX EPC page that has suffered a memory failure, the task receives a SIGBUS signal without any extra information, unless the EPC page has the SGX_EPC_PAGE_IS_VEPC flag. However, in some cases SGX is used only in a sub-task, and we do not expect the entire task group to be killed because an EPC page belonging to a sub-task has a memory failure. Fix this by extending the solution to the normal case: a memory failure on a regular SGX EPC page also triggers a SIGBUS with code BUS_MCEERR_AR and additional information, so that the user has the opportunity to make a further decision.
Signed-off-by: Zhiquan Li <zhiquan1.li@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 81801ab0009e..b43fb374b5cd 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -710,7 +710,8 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
 	struct sgx_epc_section *section;
 	struct sgx_numa_node *node;
-	struct sgx_vepc_page *owner;
+	struct sgx_encl_page *owner;
+	unsigned long vaddr;
 	int ret = 0;
 
 	/*
@@ -729,14 +730,17 @@ int arch_memory_failure(unsigned long pfn, int flags)
 	 * enclave is broken.
 	 */
 	if (flags & MF_ACTION_REQUIRED) {
-		/*
-		 * In case the error memory is accessed by VM guest, provide
-		 * extra info for hypervisor to make further decision but not
-		 * simply kill it.
-		 */
-		if (page->flags & SGX_EPC_PAGE_IS_VEPC) {
-			owner = (struct sgx_vepc_page *)page->owner;
-			ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)owner->vaddr,
+		owner = page->owner;
+		if (owner) {
+			/*
+			 * Provide extra info to the task so that it can make further
+			 * decision but not simply kill it.
+			 */
+			if (page->flags & SGX_EPC_PAGE_IS_VEPC)
+				vaddr = ((struct sgx_vepc_page *)owner)->vaddr;
+			else
+				vaddr = owner->desc & PAGE_MASK;
+			ret = force_sig_mceerr(BUS_MCEERR_AR, (void __user *)vaddr,
 					       PAGE_SHIFT);
 			if (ret < 0)
 				pr_err("Memory failure: Error sending signal to %s:%d: %d\n",