From patchwork Wed Dec 7 01:49:29 2022
X-Patchwork-Submitter: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
X-Patchwork-Id: 13066534
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
	Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes, Vlastimil Babka,
	Tom Lendacky, Thomas Gleixner, Peter Zijlstra, Paolo Bonzini,
	Ingo Molnar, Dario Faggioli, Dave Hansen, Mike Rapoport,
	David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
	tim.gardner@canonical.com, khalid.elmously@canonical.com,
	philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Kirill A. Shutemov", Dave Hansen
Subject: [PATCHv8 10/14] x86/mm: Avoid load_unaligned_zeropad() stepping into unaccepted memory
Date: Wed, 7 Dec 2022 04:49:29 +0300
Message-Id: <20221207014933.8435-11-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.38.0
In-Reply-To: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>
References: <20221207014933.8435-1-kirill.shutemov@linux.intel.com>
MIME-Version: 1.0

load_unaligned_zeropad() can lead to unwanted loads across page boundaries.
The unwanted loads are typically harmless. But, they might be made to
totally unrelated or even unmapped memory. load_unaligned_zeropad() relies
on exception fixup (#PF, #GP and now #VE) to recover from these unwanted
loads.

But, this approach does not work for unaccepted memory. For TDX, a load
from unaccepted memory will not lead to a recoverable exception within the
guest. The guest will exit to the VMM where the only recourse is to
terminate the guest.

There are three parts to fix this issue and comprehensively avoid access
to unaccepted memory. Together these ensure that an extra "guard" page is
accepted in addition to the memory that needs to be used:

 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
    checks up to end+2M if 'end' is aligned on a 2M boundary. It may
    require checking a 2M chunk beyond the end of RAM. The bitmap
    allocation is modified to accommodate this.

 2. Implicitly extend accept_memory(start, end) to end+2M if 'end' is
    aligned on a 2M boundary.

 3. Set PageUnaccepted() on both memory that itself needs to be accepted
    *and* memory where the next page needs to be accepted. Essentially,
    make PageUnaccepted(page) a marker for whether work needs to be done
    to make 'page' usable. That work might include accepting pages in
    addition to 'page' itself.

Side note: This leads to something strange. Pages which were accepted at
boot, marked by the firmware as accepted and will never _need_ to be
accepted might have PageUnaccepted() set on them. PageUnaccepted(page) is
a cue to ensure that the next page is accepted before 'page' can be used.

This is an actual, real-world problem which was discovered during TDX
testing.
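To make the failure mode and the guard arithmetic concrete, here is a
minimal, purely illustrative userspace sketch (not part of the patch):
PAGE_SIZE_DEMO and PMD_SIZE_DEMO are stand-ins for the kernel's PAGE_SIZE
and PMD_SIZE, and the buffer layout is an assumed example.

/*
 * Illustrative only: shows how a word-sized load that starts in one 4k
 * page can spill into the first bytes of the next page, and how rounding
 * a 2M-aligned 'end' up by one more 2M chunk covers that next page.
 */
#include <stdio.h>

#define PAGE_SIZE_DEMO	4096UL		/* stand-in for PAGE_SIZE */
#define PMD_SIZE_DEMO	(2UL << 20)	/* stand-in for PMD_SIZE (2M) */

int main(void)
{
	/*
	 * A short object ending 3 bytes before a page boundary: an 8-byte
	 * word-at-a-time read from its start crosses into the next page,
	 * which may still be unaccepted.
	 */
	unsigned long obj_start = 5 * PAGE_SIZE_DEMO - 7;
	unsigned long load_end  = obj_start + 8;

	printf("load starts in page %lu, ends in page %lu\n",
	       obj_start / PAGE_SIZE_DEMO, (load_end - 1) / PAGE_SIZE_DEMO);

	/*
	 * The guard extension from this patch: if the range being checked
	 * or accepted ends exactly on a 2M boundary, also cover the next
	 * 2M chunk so a straddling load cannot hit unaccepted memory.
	 */
	unsigned long end = 4 * PMD_SIZE_DEMO;
	if (!(end % PMD_SIZE_DEMO))
		end += PMD_SIZE_DEMO;

	printf("range now covers 2M chunks up to chunk %lu\n",
	       end / PMD_SIZE_DEMO);
	return 0;
}

With this example layout the load touches pages 4 and 5, which is why a
range that ends on a 2M boundary must also check and accept the chunk
that follows it; the hunks below apply the same arithmetic via PMD_SIZE.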
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Dave Hansen
---
 arch/x86/mm/unaccepted_memory.c         | 39 +++++++++++++++++++++++++
 drivers/firmware/efi/libstub/x86-stub.c |  7 +++++
 2 files changed, 46 insertions(+)

diff --git a/arch/x86/mm/unaccepted_memory.c b/arch/x86/mm/unaccepted_memory.c
index 1df918b21469..a0a58486eb74 100644
--- a/arch/x86/mm/unaccepted_memory.c
+++ b/arch/x86/mm/unaccepted_memory.c
@@ -23,6 +23,38 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	bitmap = __va(boot_params.unaccepted_memory);
 	range_start = start / PMD_SIZE;
 
+	/*
+	 * load_unaligned_zeropad() can lead to unwanted loads across page
+	 * boundaries. The unwanted loads are typically harmless. But, they
+	 * might be made to totally unrelated or even unmapped memory.
+	 * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
+	 * #VE) to recover from these unwanted loads.
+	 *
+	 * But, this approach does not work for unaccepted memory. For TDX, a
+	 * load from unaccepted memory will not lead to a recoverable exception
+	 * within the guest. The guest will exit to the VMM where the only
+	 * recourse is to terminate the guest.
+	 *
+	 * There are three parts to fix this issue and comprehensively avoid
+	 * access to unaccepted memory. Together these ensure that an extra
+	 * "guard" page is accepted in addition to the memory that needs to be
+	 * used:
+	 *
+	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, end)
+	 *    checks up to end+2M if 'end' is aligned on a 2M boundary.
+	 *
+	 * 2. Implicitly extend accept_memory(start, end) to end+2M if 'end' is
+	 *    aligned on a 2M boundary. (immediately following this comment)
+	 *
+	 * 3. Set PageUnaccepted() on both memory that itself needs to be
+	 *    accepted *and* memory where the next page needs to be accepted.
+	 *    Essentially, make PageUnaccepted(page) a marker for whether work
+	 *    needs to be done to make 'page' usable. That work might include
+	 *    accepting pages in addition to 'page' itself.
+	 */
+	if (!(end % PMD_SIZE))
+		end += PMD_SIZE;
+
 	spin_lock_irqsave(&unaccepted_memory_lock, flags);
 	for_each_set_bitrange_from(range_start, range_end, bitmap,
 				   DIV_ROUND_UP(end, PMD_SIZE)) {
@@ -46,6 +78,13 @@ bool range_contains_unaccepted_memory(phys_addr_t start, phys_addr_t end)
 
 	bitmap = __va(boot_params.unaccepted_memory);
 
+	/*
+	 * Also consider the unaccepted state of the *next* page. See fix #1 in
+	 * the comment on load_unaligned_zeropad() in accept_memory().
+	 */
+	if (!(end % PMD_SIZE))
+		end += PMD_SIZE;
+
 	spin_lock_irqsave(&unaccepted_memory_lock, flags);
 	while (start < end) {
 		if (test_bit(start / PMD_SIZE, bitmap)) {
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 27b9eed5883b..f375ab784c78 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -715,6 +715,13 @@ static efi_status_t allocate_unaccepted_bitmap(struct boot_params *params,
 		return EFI_SUCCESS;
 	}
 
+	/*
+	 * range_contains_unaccepted_memory() may need to check one 2M chunk
+	 * beyond the end of RAM to deal with load_unaligned_zeropad(). Make
+	 * sure that the bitmap is large enough to handle it.
+	 */
+	max_addr += PMD_SIZE;
+
 	/*
 	 * If unaccepted memory is present, allocate a bitmap to track what
 	 * memory has to be accepted before access.