From patchwork Thu May 18 23:14:31 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13247512
From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
	Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
	Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
	Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
	David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
	tim.gardner@canonical.com, khalid.elmously@canonical.com,
	philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov", Dave Hansen
Subject: [PATCHv12 6/9] efi/unaccepted: Avoid load_unaligned_zeropad() stepping into unaccepted memory
Date: Fri, 19 May 2023 02:14:31 +0300
Message-Id: <20230518231434.26080-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230518231434.26080-1-kirill.shutemov@linux.intel.com>
References: <20230518231434.26080-1-kirill.shutemov@linux.intel.com>
load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
The unwanted loads are typically harmless. But, they might be made to
totally unrelated or even unmapped memory. load_unaligned_zeropad() relies
on exception fixup (#PF, #GP and now #VE) to recover from these unwanted
loads.

But, this approach does not work for unaccepted memory. For TDX, a load
from unaccepted memory will not lead to a recoverable exception within the
guest. The guest will exit to the VMM where the only recourse is to
terminate the guest.

There are two parts to fix this issue and comprehensively avoid access to
unaccepted memory. Together these ensure that an extra "guard" page is
accepted in addition to the memory that needs to be used:

1. Implicitly extend the range_contains_unaccepted_memory(start, end)
   checks up to end+unit_size if 'end' is aligned on a unit_size boundary.

2. Implicitly extend accept_memory(start, end) to end+unit_size if 'end'
   is aligned on a unit_size boundary.

Side note: This leads to something strange. Pages which were accepted at
boot, marked by the firmware as accepted and will never _need_ to be
accepted might be on the unaccepted_pages list. This is a cue to ensure
that the next page is accepted before 'page' can be used.

This is an actual, real-world problem which was discovered during TDX
testing.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Dave Hansen
Reviewed-by: Ard Biesheuvel
Reviewed-by: Tom Lendacky
---
 drivers/firmware/efi/unaccepted_memory.c | 35 ++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index bb91c41f76fb..3d1ca60916dd 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -37,6 +37,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * load_unaligned_zeropad() can lead to unwanted loads across page
+	 * boundaries. The unwanted loads are typically harmless. But, they
+	 * might be made to totally unrelated or even unmapped memory.
+	 * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
+	 * #VE) to recover from these unwanted loads.
+	 *
+	 * But, this approach does not work for unaccepted memory. For TDX, a
+	 * load from unaccepted memory will not lead to a recoverable exception
+	 * within the guest. The guest will exit to the VMM where the only
+	 * recourse is to terminate the guest.
+	 *
+	 * There are two parts to fix this issue and comprehensively avoid
+	 * access to unaccepted memory. Together these ensure that an extra
+	 * "guard" page is accepted in addition to the memory that needs to be
+	 * used:
+	 *
+	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
+	 *    checks up to end+unit_size if 'end' is aligned on a unit_size
+	 *    boundary.
+	 *
+	 * 2. Implicitly extend accept_memory(start, end) to end+unit_size if
+	 *    'end' is aligned on a unit_size boundary. (immediately following
+	 *    this comment)
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
@@ -84,6 +112,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * Also consider the unaccepted state of the *next* page. See fix #1 in
+	 * the comment on load_unaligned_zeropad() in accept_memory().
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
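For readers outside the kernel tree, the guard-extension rule added by this patch can be exercised as a standalone sketch. The `unit_size` and `bitmap_limit` values below are illustrative stand-ins (the driver derives them from the EFI unaccepted-memory table), and `guarded_end()` is a hypothetical helper name, not a function in the patch:

```c
/*
 * Standalone sketch (not kernel code) of the rule used by this patch:
 * when 'end' falls exactly on a unit_size boundary, widen the range by
 * one unit so the page just past the range is also checked/accepted,
 * keeping load_unaligned_zeropad() from crossing into unaccepted memory.
 */
typedef unsigned long long phys_addr_t;

/* Illustrative unit size: one 4K page per bitmap bit. */
static const phys_addr_t unit_size = 4096;

/* Illustrative bitmap capacity, standing in for
 * unaccepted->size * unit_size * BITS_PER_BYTE. */
static const phys_addr_t bitmap_limit = 64ULL * 4096;

/* Apply the patch's rule: extend an aligned 'end' by one unit,
 * then clamp so we never overrun the bitmap. */
static phys_addr_t guarded_end(phys_addr_t end)
{
	if (!(end % unit_size))
		end += unit_size;

	if (end > bitmap_limit)
		end = bitmap_limit;

	return end;
}
```

Note that an unaligned 'end' needs no extension: the bitmap unit containing it already covers the bytes past it, so the guard page comes for free in that case.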