From patchwork Tue Oct 17 20:24:57 2023
X-Patchwork-Submitter: Rick Edgecombe <rick.p.edgecombe@intel.com>
X-Patchwork-Id: 13426022
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 dave.hansen@linux.intel.com, hpa@zytor.com, luto@kernel.org,
 peterz@infradead.org, kirill.shutemov@linux.intel.com,
 elena.reshetova@intel.com, isaku.yamahata@intel.com, seanjc@google.com,
 Michael Kelley, thomas.lendacky@amd.com, decui@microsoft.com,
 sathyanarayanan.kuppuswamy@linux.intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org
Cc: rick.p.edgecombe@intel.com
Subject: [PATCH 02/10] x86/mm/cpa: Reject incorrect encryption change requests
Date: Tue, 17 Oct 2023 13:24:57 -0700
Message-Id: <20231017202505.340906-3-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231017202505.340906-1-rick.p.edgecombe@intel.com>
References: <20231017202505.340906-1-rick.p.edgecombe@intel.com>
MIME-Version: 1.0

Kernel memory is "encrypted" by default. Some callers may "decrypt" it
in order to share it with things outside the kernel, like a device or an
untrusted VMM. There is nothing to stop set_memory_encrypted() from being
passed memory that is already "encrypted" (aka. "private" on TDX). In
fact, some callers do this because ... $REASONS.

Unfortunately, part of the TDX decrypted=>encrypted transition is truly
one way*. It can't handle being asked to encrypt an already encrypted
page.

Allow __set_memory_enc_pgtable() to detect already-encrypted memory
before it hits the TDX code.
* The one way part is "page acceptance"

[commit log written by Dave Hansen]
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 arch/x86/mm/pat/set_memory.c | 41 +++++++++++++++++++++++++++++++++++-
 1 file changed, 40 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index bda9f129835e..1238b0db3e33 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2122,6 +2122,21 @@ int set_memory_global(unsigned long addr, int numpages)
 				    __pgprot(_PAGE_GLOBAL), 0);
 }
 
+static bool kernel_vaddr_encrypted(unsigned long addr, bool enc)
+{
+	unsigned int level;
+	pte_t *pte;
+
+	pte = lookup_address(addr, &level);
+	if (!pte)
+		return false;
+
+	if (enc)
+		return pte_val(*pte) == cc_mkenc(pte_val(*pte));
+
+	return pte_val(*pte) == cc_mkdec(pte_val(*pte));
+}
+
 /*
  * __set_memory_enc_pgtable() is used for the hypervisors that get
  * informed about "encryption" status via page tables.
@@ -2130,7 +2145,7 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 {
 	pgprot_t empty = __pgprot(0);
 	struct cpa_data cpa;
-	int ret;
+	int ret, numpages_in_state = 0;
 
 	/* Should not be working on unaligned addresses */
 	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
@@ -2143,6 +2158,30 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
 	cpa.mask_clr = enc ? pgprot_decrypted(empty) : pgprot_encrypted(empty);
 	cpa.pgd = init_mm.pgd;
 
+	/*
+	 * If only some of the pages are already in the desired state, bail
+	 * with an error because the code doesn't handle that. It likely
+	 * means something has gone wrong and isn't worth optimizing for.
+	 *
+	 * If all of the pages are already in the desired state, return
+	 * success.
+	 *
+	 * kernel_vaddr_encrypted() does not synchronize against huge page
+	 * splits, so take pgd_lock. A caller doing strange things could
+	 * get a new PMD mid-level page table confused with a huge PMD
+	 * entry. Just lock to tie up loose ends.
+	 */
+	spin_lock(&pgd_lock);
+	for (int i = 0; i < numpages; i++) {
+		if (kernel_vaddr_encrypted(addr + (PAGE_SIZE * i), enc))
+			numpages_in_state++;
+	}
+	spin_unlock(&pgd_lock);
+	if (numpages_in_state == numpages)
+		return 0;
+	else if (numpages_in_state)
+		return 1;
+
 	/* Must avoid aliasing mappings in the highmem code */
 	kmap_flush_unused();
 	vm_unmap_aliases();
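
For context, below is a minimal sketch (not from this series) of the kind of
caller pattern the check is meant to guard against: a driver that shares a
page with an untrusted VMM, converts it once with set_memory_decrypted(), and
could end up repeating a conversion that is already in effect if its state
tracking goes wrong. The struct and function names (example_shared_buf,
example_share_with_vmm(), etc.) are made up for illustration; only
set_memory_decrypted()/set_memory_encrypted() are the real kernel APIs. With
this patch, a request for pages already in the target state returns early
(0 if every page already matches, non-zero if only some do) instead of
reaching the one-way TDX acceptance path.

#include <linux/gfp.h>
#include <linux/set_memory.h>

/* Hypothetical example buffer shared with the VMM. */
struct example_shared_buf {
	void *va;
	bool shared;	/* tracks the current encryption state */
};

static int example_share_with_vmm(struct example_shared_buf *buf)
{
	int ret;

	if (!buf->va) {
		buf->va = (void *)__get_free_page(GFP_KERNEL);
		if (!buf->va)
			return -ENOMEM;
	}

	/* Redundant if buf->shared is already true */
	ret = set_memory_decrypted((unsigned long)buf->va, 1);
	if (ret)
		return ret;

	buf->shared = true;
	return 0;
}

static int example_unshare_from_vmm(struct example_shared_buf *buf)
{
	int ret;

	/* Convert back to "private"/encrypted before freeing */
	ret = set_memory_encrypted((unsigned long)buf->va, 1);
	if (ret)
		return ret;	/* must not free a page stuck in shared state */

	buf->shared = false;
	free_page((unsigned long)buf->va);
	buf->va = NULL;
	return 0;
}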