From patchwork Fri Jul 7 13:44:41 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Tom Lendacky
X-Patchwork-Id: 9830405
From: Tom Lendacky
Subject: [PATCH v9 35/38] x86/mm: Add support to encrypt the kernel in-place
To: linux-arch@vger.kernel.org, linux-efi@vger.kernel.org, kvm@vger.kernel.org,
 linux-doc@vger.kernel.org, x86@kernel.org, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
 xen-devel@lists.xen.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Brijesh Singh, Toshimitsu Kani, Radim Krčmář, Matt Fleming,
 Alexander Potapenko, "H. Peter Anvin", Larry Woodman, Jonathan Corbet,
 Joerg Roedel, "Michael S. Tsirkin", Ingo Molnar, Andrey Ryabinin,
 Dave Young, Rik van Riel, Arnd Bergmann, Konrad Rzeszutek Wilk,
 Borislav Petkov, Andy Lutomirski, Boris Ostrovsky, Dmitry Vyukov,
 Juergen Gross, Thomas Gleixner, Paolo Bonzini
Date: Fri, 07 Jul 2017 08:44:41 -0500
Message-ID: <20170707134441.29711.59525.stgit@tlendack-t1.amdoffice.net>
In-Reply-To: <20170707133804.29711.1616.stgit@tlendack-t1.amdoffice.net>
References: <20170707133804.29711.1616.stgit@tlendack-t1.amdoffice.net>
User-Agent: StGit/0.17.1-dirty

Add support to encrypt the kernel in-place. This is done by creating
new page mappings for the kernel - a decrypted write-protected mapping
and an encrypted mapping. The kernel is encrypted by copying it
through a temporary buffer.

Signed-off-by: Tom Lendacky
---
 arch/x86/include/asm/mem_encrypt.h |    6 +
 arch/x86/mm/Makefile               |    1 
 arch/x86/mm/mem_encrypt.c          |  310 ++++++++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt_boot.S     |  149 +++++++++++++++++
 4 files changed, 466 insertions(+)
 create mode 100644 arch/x86/mm/mem_encrypt_boot.S

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 70e55f6..7122c36 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,6 +21,12 @@
 
 extern unsigned long sme_me_mask;
 
+void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
+			 unsigned long decrypted_kernel_vaddr,
+			 unsigned long kernel_len,
+			 unsigned long encryption_wa,
+			 unsigned long encryption_pgd);
+
 void __init sme_early_encrypt(resource_size_t paddr,
 			      unsigned long size);
 void __init sme_early_decrypt(resource_size_t paddr,
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index a94a7b6..72bf8c0 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -40,3 +40,4 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)	+= kaslr.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt.o
+obj-$(CONFIG_AMD_MEM_ENCRYPT)	+= mem_encrypt_boot.o
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index a7400ec..e5d5439 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -21,6 +21,8 @@
 #include
 #include
 #include
+#include
+#include
 
 /*
  * Since SME related variables are set early in the boot process they must
@@ -199,8 +201,316 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size)
 	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);
 }
 
+static void __init sme_clear_pgd(pgd_t *pgd_base, unsigned long start,
+				 unsigned long end)
+{
+	unsigned long pgd_start, pgd_end, pgd_size;
+	pgd_t *pgd_p;
+
+	pgd_start = start & PGDIR_MASK;
+	pgd_end = end & PGDIR_MASK;
+
+	pgd_size = (((pgd_end - pgd_start) / PGDIR_SIZE) + 1);
+	pgd_size *= sizeof(pgd_t);
+
+	pgd_p = pgd_base + pgd_index(start);
+
+	memset(pgd_p, 0, pgd_size);
+}
+
+#define PGD_FLAGS	_KERNPG_TABLE_NOENC
+#define P4D_FLAGS	_KERNPG_TABLE_NOENC
+#define PUD_FLAGS	_KERNPG_TABLE_NOENC
+#define PMD_FLAGS	(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL)
+
+static void __init *sme_populate_pgd(pgd_t *pgd_base, void *pgtable_area,
+				     unsigned long vaddr, pmdval_t pmd_val)
+{
+	pgd_t *pgd_p;
+	p4d_t *p4d_p;
+	pud_t *pud_p;
+	pmd_t *pmd_p;
+
+	pgd_p = pgd_base + pgd_index(vaddr);
+	if (native_pgd_val(*pgd_p)) {
+		if (IS_ENABLED(CONFIG_X86_5LEVEL))
+			p4d_p = (p4d_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
+		else
+			pud_p = (pud_t *)(native_pgd_val(*pgd_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pgd_t pgd;
+
+		if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+			p4d_p = pgtable_area;
+			memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D);
+			pgtable_area += sizeof(*p4d_p) * PTRS_PER_P4D;
+
+			pgd = native_make_pgd((pgdval_t)p4d_p + PGD_FLAGS);
+		} else {
+			pud_p = pgtable_area;
+			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
+			pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
+
+			pgd = native_make_pgd((pgdval_t)pud_p + PGD_FLAGS);
+		}
+		native_set_pgd(pgd_p, pgd);
+	}
+
+	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+		p4d_p += p4d_index(vaddr);
+		if (native_p4d_val(*p4d_p)) {
+			pud_p = (pud_t *)(native_p4d_val(*p4d_p) & ~PTE_FLAGS_MASK);
+		} else {
+			p4d_t p4d;
+
+			pud_p = pgtable_area;
+			memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
+			pgtable_area += sizeof(*pud_p) * PTRS_PER_PUD;
+
+			p4d = native_make_p4d((pudval_t)pud_p + P4D_FLAGS);
+			native_set_p4d(p4d_p, p4d);
+		}
+	}
+
+	pud_p += pud_index(vaddr);
+	if (native_pud_val(*pud_p)) {
+		if (native_pud_val(*pud_p) & _PAGE_PSE)
+			goto out;
+
+		pmd_p = (pmd_t *)(native_pud_val(*pud_p) & ~PTE_FLAGS_MASK);
+	} else {
+		pud_t pud;
+
+		pmd_p = pgtable_area;
+		memset(pmd_p, 0, sizeof(*pmd_p) * PTRS_PER_PMD);
+		pgtable_area += sizeof(*pmd_p) * PTRS_PER_PMD;
+
+		pud = native_make_pud((pmdval_t)pmd_p + PUD_FLAGS);
+		native_set_pud(pud_p, pud);
+	}
+
+	pmd_p += pmd_index(vaddr);
+	if (!native_pmd_val(*pmd_p) || !(native_pmd_val(*pmd_p) & _PAGE_PSE))
+		native_set_pmd(pmd_p, native_make_pmd(pmd_val));
+
+out:
+	return pgtable_area;
+}
+
+static unsigned long __init sme_pgtable_calc(unsigned long len)
+{
+	unsigned long p4d_size, pud_size, pmd_size;
+	unsigned long total;
+
+	/*
+	 * Perform a relatively simplistic calculation of the pagetable
+	 * entries that are needed. The mappings will be covered by 2MB
+	 * PMD entries so we can conservatively calculate the required
+	 * number of P4D, PUD and PMD structures needed to perform the
+	 * mappings. Incrementing the count for each covers the case where
+	 * the addresses cross entries.
+	 */
+	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+		p4d_size = (ALIGN(len, PGDIR_SIZE) / PGDIR_SIZE) + 1;
+		p4d_size *= sizeof(p4d_t) * PTRS_PER_P4D;
+		pud_size = (ALIGN(len, P4D_SIZE) / P4D_SIZE) + 1;
+		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
+	} else {
+		p4d_size = 0;
+		pud_size = (ALIGN(len, PGDIR_SIZE) / PGDIR_SIZE) + 1;
+		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
+	}
+	pmd_size = (ALIGN(len, PUD_SIZE) / PUD_SIZE) + 1;
+	pmd_size *= sizeof(pmd_t) * PTRS_PER_PMD;
+
+	total = p4d_size + pud_size + pmd_size;
+
+	/*
+	 * Now calculate the added pagetable structures needed to populate
+	 * the new pagetables.
+	 */
+	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+		p4d_size = ALIGN(total, PGDIR_SIZE) / PGDIR_SIZE;
+		p4d_size *= sizeof(p4d_t) * PTRS_PER_P4D;
+		pud_size = ALIGN(total, P4D_SIZE) / P4D_SIZE;
+		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
+	} else {
+		p4d_size = 0;
+		pud_size = ALIGN(total, PGDIR_SIZE) / PGDIR_SIZE;
+		pud_size *= sizeof(pud_t) * PTRS_PER_PUD;
+	}
+	pmd_size = ALIGN(total, PUD_SIZE) / PUD_SIZE;
+	pmd_size *= sizeof(pmd_t) * PTRS_PER_PMD;
+
+	total += p4d_size + pud_size + pmd_size;
+
+	return total;
+}
+
 void __init sme_encrypt_kernel(void)
 {
+	unsigned long workarea_start, workarea_end, workarea_len;
+	unsigned long execute_start, execute_end, execute_len;
+	unsigned long kernel_start, kernel_end, kernel_len;
+	unsigned long pgtable_area_len;
+	unsigned long paddr, pmd_flags;
+	unsigned long decrypted_base;
+	void *pgtable_area;
+	pgd_t *pgd;
+
+	if (!sme_active())
+		return;
+
+	/*
+	 * Prepare for encrypting the kernel by building new pagetables with
+	 * the necessary attributes needed to encrypt the kernel in place.
+	 *
+	 * One range of virtual addresses will map the memory occupied
+	 * by the kernel as encrypted.
+	 *
+	 * Another range of virtual addresses will map the memory occupied
+	 * by the kernel as decrypted and write-protected.
+	 *
+	 * The use of the write-protect attribute will prevent any of the
+	 * memory from being cached.
+	 */
+
+	/* Physical addresses give us the identity mapped virtual addresses */
+	kernel_start = __pa_symbol(_text);
+	kernel_end = ALIGN(__pa_symbol(_end), PMD_PAGE_SIZE);
+	kernel_len = kernel_end - kernel_start;
+
+	/* Set the encryption workarea to be immediately after the kernel */
+	workarea_start = kernel_end;
+
+	/*
+	 * Calculate required number of workarea bytes needed:
+	 *   executable encryption area size:
+	 *     stack page (PAGE_SIZE)
+	 *     encryption routine page (PAGE_SIZE)
+	 *     intermediate copy buffer (PMD_PAGE_SIZE)
+	 *   pagetable structures for the encryption of the kernel
+	 *   pagetable structures for workarea (in case not currently mapped)
+	 */
+	execute_start = workarea_start;
+	execute_end = execute_start + (PAGE_SIZE * 2) + PMD_PAGE_SIZE;
+	execute_len = execute_end - execute_start;
+
+	/*
+	 * One PGD for both encrypted and decrypted mappings and a set of
+	 * PUDs and PMDs for each of the encrypted and decrypted mappings.
+	 */
+	pgtable_area_len = sizeof(pgd_t) * PTRS_PER_PGD;
+	pgtable_area_len += sme_pgtable_calc(execute_end - kernel_start) * 2;
+
+	/* PUDs and PMDs needed in the current pagetables for the workarea */
+	pgtable_area_len += sme_pgtable_calc(execute_len + pgtable_area_len);
+
+	/*
+	 * The total workarea includes the executable encryption area and
+	 * the pagetable area.
+	 */
+	workarea_len = execute_len + pgtable_area_len;
+	workarea_end = workarea_start + workarea_len;
+
+	/*
+	 * Set the address to the start of where newly created pagetable
+	 * structures (PGDs, PUDs and PMDs) will be allocated. New pagetable
+	 * structures are created when the workarea is added to the current
+	 * pagetables and when the new encrypted and decrypted kernel
+	 * mappings are populated.
+	 */
+	pgtable_area = (void *)execute_end;
+
+	/*
+	 * Make sure the current pagetable structure has entries for
+	 * addressing the workarea.
+	 */
+	pgd = (pgd_t *)native_read_cr3_pa();
+	paddr = workarea_start;
+	while (paddr < workarea_end) {
+		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
+						paddr,
+						paddr + PMD_FLAGS);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/* Flush the TLB - no globals so cr3 is enough */
+	native_write_cr3(__native_read_cr3());
+
+	/*
+	 * A new pagetable structure is being built to allow for the kernel
+	 * to be encrypted. It starts with an empty PGD that will then be
+	 * populated with new PUDs and PMDs as the encrypted and decrypted
+	 * kernel mappings are created.
+	 */
+	pgd = pgtable_area;
+	memset(pgd, 0, sizeof(*pgd) * PTRS_PER_PGD);
+	pgtable_area += sizeof(*pgd) * PTRS_PER_PGD;
+
+	/* Add encrypted kernel (identity) mappings */
+	pmd_flags = PMD_FLAGS | _PAGE_ENC;
+	paddr = kernel_start;
+	while (paddr < kernel_end) {
+		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
+						paddr,
+						paddr + pmd_flags);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/*
+	 * A different PGD index/entry must be used to get different
+	 * pagetable entries for the decrypted mapping. Choose the next
+	 * PGD index and convert it to a virtual address to be used as
+	 * the base of the mapping.
+	 */
+	decrypted_base = (pgd_index(workarea_end) + 1) & (PTRS_PER_PGD - 1);
+	decrypted_base <<= PGDIR_SHIFT;
+
+	/* Add decrypted, write-protected kernel (non-identity) mappings */
+	pmd_flags = (PMD_FLAGS & ~_PAGE_CACHE_MASK) | (_PAGE_PAT | _PAGE_PWT);
+	paddr = kernel_start;
+	while (paddr < kernel_end) {
+		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
+						paddr + decrypted_base,
+						paddr + pmd_flags);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/* Add decrypted workarea mappings to both kernel mappings */
+	paddr = workarea_start;
+	while (paddr < workarea_end) {
+		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
+						paddr,
+						paddr + PMD_FLAGS);
+
+		pgtable_area = sme_populate_pgd(pgd, pgtable_area,
+						paddr + decrypted_base,
+						paddr + PMD_FLAGS);
+
+		paddr += PMD_PAGE_SIZE;
+	}
+
+	/* Perform the encryption */
+	sme_encrypt_execute(kernel_start, kernel_start + decrypted_base,
+			    kernel_len, workarea_start, (unsigned long)pgd);
+
+	/*
+	 * At this point we are running encrypted. Remove the mappings for
+	 * the decrypted areas - all that is needed for this is to remove
+	 * the PGD entry/entries.
+	 */
+	sme_clear_pgd(pgd, kernel_start + decrypted_base,
+		      kernel_end + decrypted_base);
+
+	sme_clear_pgd(pgd, workarea_start + decrypted_base,
+		      workarea_end + decrypted_base);
+
+	/* Flush the TLB - no globals so cr3 is enough */
+	native_write_cr3(__native_read_cr3());
 }
 
 void __init sme_enable(void)
diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
new file mode 100644
index 0000000..b327e04
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_boot.S
@@ -0,0 +1,149 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+	.text
+	.code64
+ENTRY(sme_encrypt_execute)
+
+	/*
+	 * Entry parameters:
+	 *   RDI - virtual address for the encrypted kernel mapping
+	 *   RSI - virtual address for the decrypted kernel mapping
+	 *   RDX - length of kernel
+	 *   RCX - virtual address of the encryption workarea, including:
+	 *     - stack page (PAGE_SIZE)
+	 *     - encryption routine page (PAGE_SIZE)
+	 *     - intermediate copy buffer (PMD_PAGE_SIZE)
+	 *    R8 - physical address of the pagetables to use for encryption
+	 */
+
+	FRAME_BEGIN			/* RBP now has original stack pointer */
+
+	/* Set up a one page stack in the non-encrypted memory area */
+	movq	%rcx, %rax		/* Workarea stack page */
+	leaq	PAGE_SIZE(%rax), %rsp	/* Set new stack pointer */
+	addq	$PAGE_SIZE, %rax	/* Workarea encryption routine */
+
+	push	%r12
+	movq	%rdi, %r10		/* Encrypted kernel */
+	movq	%rsi, %r11		/* Decrypted kernel */
+	movq	%rdx, %r12		/* Kernel length */
+
+	/* Copy encryption routine into the workarea */
+	movq	%rax, %rdi				/* Workarea encryption routine */
+	leaq	__enc_copy(%rip), %rsi			/* Encryption routine */
+	movq	$(.L__enc_copy_end - __enc_copy), %rcx	/* Encryption routine length */
+	rep	movsb
+
+	/* Setup registers for call */
+	movq	%r10, %rdi		/* Encrypted kernel */
+	movq	%r11, %rsi		/* Decrypted kernel */
+	movq	%r8, %rdx		/* Pagetables used for encryption */
+	movq	%r12, %rcx		/* Kernel length */
+	movq	%rax, %r8		/* Workarea encryption routine */
+	addq	$PAGE_SIZE, %r8		/* Workarea intermediate copy buffer */
+
+	call	*%rax			/* Call the encryption routine */
+
+	pop	%r12
+
+	movq	%rbp, %rsp		/* Restore original stack pointer */
+	FRAME_END
+
+	ret
+ENDPROC(sme_encrypt_execute)
+
+ENTRY(__enc_copy)
+/*
+ * Routine used to encrypt kernel.
+ *   This routine must be run outside of the kernel proper since
+ *   the kernel will be encrypted during the process. So this
+ *   routine is defined here and then copied to an area outside
+ *   of the kernel where it will remain and run decrypted
+ *   during execution.
+ *
+ *   On entry the registers must be:
+ *     RDI - virtual address for the encrypted kernel mapping
+ *     RSI - virtual address for the decrypted kernel mapping
+ *     RDX - address of the pagetables to use for encryption
+ *     RCX - length of kernel
+ *      R8 - intermediate copy buffer
+ *
+ *     RAX - points to this routine
+ *
+ * The kernel will be encrypted by copying from the non-encrypted
+ * kernel space to an intermediate buffer and then copying from the
+ * intermediate buffer back to the encrypted kernel space. The physical
+ * addresses of the two kernel space mappings are the same which
+ * results in the kernel being encrypted "in place".
+ */
+	/* Enable the new page tables */
+	mov	%rdx, %cr3
+
+	/* Flush any global TLBs */
+	mov	%cr4, %rdx
+	andq	$~X86_CR4_PGE, %rdx
+	mov	%rdx, %cr4
+	orq	$X86_CR4_PGE, %rdx
+	mov	%rdx, %cr4
+
+	/* Set the PAT register PA5 entry to write-protect */
+	push	%rcx
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	push	%rdx			/* Save original PAT value */
+	andl	$0xffff00ff, %edx	/* Clear PA5 */
+	orl	$0x00000500, %edx	/* Set PA5 to WP */
+	wrmsr
+	pop	%rdx			/* RDX contains original PAT value */
+	pop	%rcx
+
+	movq	%rcx, %r9		/* Save kernel length */
+	movq	%rdi, %r10		/* Save encrypted kernel address */
+	movq	%rsi, %r11		/* Save decrypted kernel address */
+
+	wbinvd				/* Invalidate any cache entries */
+
+	/* Copy/encrypt 2MB at a time */
+1:
+	movq	%r11, %rsi		/* Source - decrypted kernel */
+	movq	%r8, %rdi		/* Dest - intermediate copy buffer */
+	movq	$PMD_PAGE_SIZE, %rcx	/* 2MB length */
+	rep	movsb
+
+	movq	%r8, %rsi		/* Source - intermediate copy buffer */
+	movq	%r10, %rdi		/* Dest - encrypted kernel */
+	movq	$PMD_PAGE_SIZE, %rcx	/* 2MB length */
+	rep	movsb
+
+	addq	$PMD_PAGE_SIZE, %r11
+	addq	$PMD_PAGE_SIZE, %r10
+	subq	$PMD_PAGE_SIZE, %r9	/* Kernel length decrement */
+	jnz	1b			/* Kernel length not zero? */
+
+	/* Restore PAT register */
+	push	%rdx			/* Save original PAT value */
+	movl	$MSR_IA32_CR_PAT, %ecx
+	rdmsr
+	pop	%rdx			/* Restore original PAT value */
+	wrmsr
+
+	ret
+.L__enc_copy_end:
+ENDPROC(__enc_copy)
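
Editor's note, not part of the patch: the __enc_copy loop above boils down to
"read a 2MB chunk through the decrypted mapping, stage it in a buffer that
lives outside the kernel image, write it back to the same offset through the
encrypted mapping". The short userspace sketch below models only that staging
pattern. It is hypothetical and assumes nothing from the kernel sources; the
XOR in write_encrypted() merely stands in for the encryption the memory
controller applies transparently when the C-bit is set, and the region and
chunk sizes are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK_SIZE	(2UL * 1024 * 1024)	/* mirrors the 2MB (PMD-sized) copies */
#define REGION_SIZE	(8UL * 1024 * 1024)	/* stand-in "kernel" size, chunk-aligned */

/* Stand-in for the hardware: a write through the "encrypted" mapping. */
static void write_encrypted(unsigned char *dst, const unsigned char *src, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		dst[i] = src[i] ^ 0x5a;	/* fake "encryption", illustration only */
}

int main(void)
{
	unsigned char *region = malloc(REGION_SIZE);	/* region transformed in place */
	unsigned char *workarea = malloc(CHUNK_SIZE);	/* intermediate copy buffer */
	size_t off;

	if (!region || !workarea)
		return 1;
	memset(region, 0x11, REGION_SIZE);

	/*
	 * Walk the region one chunk at a time, the way __enc_copy walks the
	 * kernel image: read the plaintext out (the "decrypted" view), then
	 * write it back to the same offset (the "encrypted" view).
	 */
	for (off = 0; off < REGION_SIZE; off += CHUNK_SIZE) {
		memcpy(workarea, region + off, CHUNK_SIZE);
		write_encrypted(region + off, workarea, CHUNK_SIZE);
	}

	printf("first byte after in-place pass: 0x%02x\n", region[0]);

	free(workarea);
	free(region);
	return 0;
}

Staging each chunk in a buffer outside the region being rewritten is what makes
the in-place transformation safe: the routine never reads data back through a
mapping it has already rewritten, which is why the patch reserves the
intermediate copy buffer (and the page holding __enc_copy itself) outside the
kernel image.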