From patchwork Sat May 13 22:04:15 2023
X-Patchwork-Submitter: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
X-Patchwork-Id: 13240348
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
    Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
    Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
    Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
    David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
    tim.gardner@canonical.com, khalid.elmously@canonical.com,
    philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com,
    x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
    linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov", Dave Hansen
Subject: [PATCHv11 6/9] efi/unaccepted: Avoid load_unaligned_zeropad() stepping into unaccepted memory
Date: Sun, 14 May 2023 01:04:15 +0300
Message-Id: <20230513220418.19357-7-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20230513220418.19357-1-kirill.shutemov@linux.intel.com>
References: <20230513220418.19357-1-kirill.shutemov@linux.intel.com>

load_unaligned_zeropad() can lead to unwanted loads across page
boundaries. The unwanted loads are typically harmless. But, they might
be made to totally unrelated or even unmapped memory.
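(As an aside, here is a minimal userspace sketch of that hazard; the
two-page mmap() layout, the PAGE_SIZE value, and the raw 8-byte memcpy()
standing in for load_unaligned_zeropad() are assumptions for the demo,
not anything from this patch:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096UL

int main(void)
{
	/* Two anonymous pages; the second is then made inaccessible to
	 * stand in for memory the guest must not touch. */
	char *map = mmap(NULL, 2 * PAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;

	/* A short string ending exactly at the page boundary. */
	char *s = map + PAGE_SIZE - 3;
	memcpy(s, "ab", 3);

	if (mprotect(map + PAGE_SIZE, PAGE_SIZE, PROT_NONE))
		return 1;

	/* A word-at-a-time reader fetches 8 bytes starting at 's';
	 * bytes 3..7 of that load fall into the second page, so this
	 * copy faults (SIGSEGV here).  In the kernel, exception fixup
	 * zero-pads the missing bytes; a TDX guest touching unaccepted
	 * memory gets no recoverable exception at all. */
	uint64_t word;
	memcpy(&word, s, sizeof(word));
	printf("0x%016llx\n", (unsigned long long)word);
	return 0;
}

End of aside.)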
load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
#VE) to recover from these unwanted loads.

But, this approach does not work for unaccepted memory. For TDX, a load
from unaccepted memory will not lead to a recoverable exception within
the guest. The guest will exit to the VMM where the only recourse is to
terminate the guest.

There are two parts to fix this issue and comprehensively avoid access
to unaccepted memory. Together these ensure that an extra "guard" page
is accepted in addition to the memory that needs to be used:

1. Implicitly extend the range_contains_unaccepted_memory(start, end)
   checks up to end+unit_size if 'end' is aligned on a unit_size
   boundary.

2. Implicitly extend accept_memory(start, end) to end+unit_size if
   'end' is aligned on a unit_size boundary.

Side note: This leads to something strange. Pages which were accepted
at boot, marked by the firmware as accepted, and will never _need_ to
be accepted might be on the unaccepted_pages list. This is a cue to
ensure that the next page is accepted before 'page' can be used.

This is an actual, real-world problem which was discovered during TDX
testing.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Dave Hansen
Reviewed-by: Tom Lendacky
---
 drivers/firmware/efi/unaccepted_memory.c | 35 ++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index bb91c41f76fb..3d1ca60916dd 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -37,6 +37,34 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * load_unaligned_zeropad() can lead to unwanted loads across page
+	 * boundaries. The unwanted loads are typically harmless. But, they
+	 * might be made to totally unrelated or even unmapped memory.
+	 * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
+	 * #VE) to recover from these unwanted loads.
+	 *
+	 * But, this approach does not work for unaccepted memory. For TDX, a
+	 * load from unaccepted memory will not lead to a recoverable exception
+	 * within the guest. The guest will exit to the VMM where the only
+	 * recourse is to terminate the guest.
+	 *
+	 * There are two parts to fix this issue and comprehensively avoid
+	 * access to unaccepted memory. Together these ensure that an extra
+	 * "guard" page is accepted in addition to the memory that needs to be
+	 * used:
+	 *
+	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
+	 *    checks up to end+unit_size if 'end' is aligned on a unit_size
+	 *    boundary.
+	 *
+	 * 2. Implicitly extend accept_memory(start, end) to end+unit_size if
+	 *    'end' is aligned on a unit_size boundary. (immediately following
+	 *    this comment)
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
@@ -84,6 +112,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
 	start -= unaccepted->phys_base;
 	end -= unaccepted->phys_base;
 
+	/*
+	 * Also consider the unaccepted state of the *next* page. See fix #1 in
+	 * the comment on load_unaligned_zeropad() in accept_memory().
+	 */
+	if (!(end % unit_size))
+		end += unit_size;
+
 	/* Make sure not to overrun the bitmap */
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
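
(To see the effect of the implicit extension in isolation, here is a
small standalone model; the 16-unit bitmap, the UNIT_SIZE value, and the
demo_range_contains_unaccepted() helper are invented for the demo and
are not the kernel's unaccepted-memory API:

#include <stdbool.h>
#include <stdio.h>

#define UNIT_SIZE   4096UL   /* acceptance granularity, e.g. one page */
#define BITMAP_BITS 16UL     /* demo bitmap tracks 16 units */

/* One bit per unit; a set bit means "still unaccepted". */
static unsigned long bitmap = 1UL << 15;  /* only the last unit is unaccepted */

static bool demo_range_contains_unaccepted(unsigned long start,
					    unsigned long end)
{
	/*
	 * The guard rule from the patch: if 'end' is unit_size-aligned,
	 * also check the unit immediately after the range, so that a
	 * load_unaligned_zeropad() overrun stays in accepted memory.
	 */
	if (!(end % UNIT_SIZE))
		end += UNIT_SIZE;

	/* Make sure not to overrun the bitmap. */
	if (end > BITMAP_BITS * UNIT_SIZE)
		end = BITMAP_BITS * UNIT_SIZE;

	for (unsigned long i = start / UNIT_SIZE; i < end / UNIT_SIZE; i++)
		if (bitmap & (1UL << i))
			return true;
	return false;
}

int main(void)
{
	/*
	 * [14*UNIT, 15*UNIT) covers only unit 14, which is accepted, but
	 * the implicit guard pulls in unit 15 (unaccepted): prints 1.
	 */
	printf("%d\n", demo_range_contains_unaccepted(14 * UNIT_SIZE,
						      15 * UNIT_SIZE));
	/*
	 * [13*UNIT, 14*UNIT) guard-extends to unit 14, also accepted,
	 * so the guard changes nothing here: prints 0.
	 */
	printf("%d\n", demo_range_contains_unaccepted(13 * UNIT_SIZE,
						      14 * UNIT_SIZE));
	return 0;
}

Without the "if (!(end % UNIT_SIZE))" extension the first call would
return 0, and a word-at-a-time read of the last bytes of unit 14 could
step into the still-unaccepted unit 15.)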