From patchwork Thu Mar 28 11:56:51 2024
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 13608432
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook, Mark Rutland, Pasha Tatashin
Subject: [PATCH 1/4] arm64: relocate: Let __relocate_new_kernel_start align on SZ_4K
Date: Thu, 28 Mar 2024 19:56:51 +0800
Message-ID: <20240328115656.24090-2-piliu@redhat.com>
In-Reply-To: <20240328115656.24090-1-piliu@redhat.com>
References: <20240328115656.24090-1-piliu@redhat.com>

For the upcoming C implementation, let this section align on SZ_4K so that
potential 'adrp, add' instruction pairs work.

Signed-off-by: Pingfan Liu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Kees Cook
Cc: Mark Rutland
Cc: Pasha Tatashin
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/kernel/vmlinux.lds.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 3cd7e76cc562..51eb382ab3a4 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -103,7 +103,7 @@ jiffies = jiffies_64;
 
 #ifdef CONFIG_KEXEC_CORE
 #define KEXEC_TEXT					\
-	ALIGN_FUNCTION();				\
+	. = ALIGN(SZ_4K);				\
 	__relocate_new_kernel_start = .;		\
 	*(.kexec_relocate.text)				\
 	__relocate_new_kernel_end = .;
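
A note on the motivation: an 'adrp, add' pair resolves a symbol as a 4 KiB
page base plus its low 12 bits, so the relocation blob has to keep the same
offset within a 4 KiB page after it is copied to the page-aligned control
page; aligning the section start on SZ_4K guarantees that. A minimal sketch
of the addressing pattern, assuming the __arm64_relocate_new_kernel symbol
introduced later in this series (the helper itself is illustrative only):

	/* Illustrative only: how position-independent code reaches a symbol. */
	static void __noreturn jump_to_reloc(void)
	{
		asm volatile(
			"adrp	x2, __arm64_relocate_new_kernel\n"	      /* 4 KiB page base of the symbol */
			"add	x2, x2, #:lo12:__arm64_relocate_new_kernel\n" /* offset within that page */
			"br	x2\n");
		unreachable();
	}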
From patchwork Thu Mar 28 11:56:52 2024
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 13608431
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook, Mark Rutland, Pasha Tatashin
Subject: [PATCH 2/4] arm64: mm: Provide prot param in trans_pgd_idmap_page()'s prototype
Date: Thu, 28 Mar 2024 19:56:52 +0800
Message-ID: <20240328115656.24090-3-piliu@redhat.com>
In-Reply-To: <20240328115656.24090-1-piliu@redhat.com>
References: <20240328115656.24090-1-piliu@redhat.com>

Since the relocate_kernel code will build its stack at the rear of the page,
it requires a 'wx' mapping of that page. Adapt the prototype of
trans_pgd_idmap_page() so that the caller can pass the desired protection.

trans_pgd_idmap_page() could be enhanced further to support multiple pages,
but since this change already meets the requirement, that enhancement is left
for the time being.
Signed-off-by: Pingfan Liu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Kees Cook
Cc: Mark Rutland
Cc: Pasha Tatashin
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/trans_pgd.h | 2 +-
 arch/arm64/kernel/hibernate.c      | 3 ++-
 arch/arm64/kernel/machine_kexec.c  | 4 ++--
 arch/arm64/mm/trans_pgd.c          | 4 ++--
 4 files changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index 033d400a4ea4..c55a8a5670a8 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -31,7 +31,7 @@ int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd,
 			  unsigned long start, unsigned long end);
 
 int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
-			 unsigned long *t0sz, void *page);
+			 unsigned long *t0sz, void *page, pgprot_t prot);
 
 int trans_pgd_copy_el2_vectors(struct trans_pgd_info *info,
 			       phys_addr_t *el2_vectors);

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 02870beb271e..0c5ce99b7acf 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -203,7 +203,8 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	memcpy(page, src_start, length);
 	caches_clean_inval_pou((unsigned long)page, (unsigned long)page + length);
-	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
+	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page,
+				  PAGE_KERNEL_ROX);
 	if (rc)
 		return rc;

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index b38aae5b488d..de4e9e0ad682 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -141,8 +141,8 @@ int machine_kexec_post_load(struct kimage *kimage)
 	reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
 	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
 	kimage->arch.kern_reloc = __pa(reloc_code);
-	rc = trans_pgd_idmap_page(&info, &kimage->arch.ttbr0,
-				  &kimage->arch.t0sz, reloc_code);
+	rc = trans_pgd_idmap_page(&info, &kimage->arch.ttbr0, &kimage->arch.t0sz,
+				  reloc_code, PAGE_KERNEL_EXEC);
 	if (rc)
 		return rc;
 	kimage->arch.phys_offset = virt_to_phys(kimage) - (long)kimage;

diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 7b14df3c6477..4dfe6a9f9a8b 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -230,7 +230,7 @@ int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **dst_pgdp,
  * maximum T0SZ for this page.
  */
 int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
-			 unsigned long *t0sz, void *page)
+			 unsigned long *t0sz, void *page, pgprot_t prot)
 {
 	phys_addr_t dst_addr = virt_to_phys(page);
 	unsigned long pfn = __phys_to_pfn(dst_addr);
@@ -240,7 +240,7 @@ int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
 	int this_level, index, level_lsb, level_msb;
 
 	dst_addr &= PAGE_MASK;
-	prev_level_entry = pte_val(pfn_pte(pfn, PAGE_KERNEL_ROX));
+	prev_level_entry = pte_val(pfn_pte(pfn, prot));
 
 	for (this_level = 3; this_level >= 0; this_level--) {
 		levels[this_level] = trans_alloc(info);
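
For reference, the two call sites touched above end up choosing the protection
like this (both calls are taken from the hunks in this patch; only the
comments are added here):

	/* Hibernate only executes from the safe page: read-only + executable. */
	rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page,
				  PAGE_KERNEL_ROX);

	/* kexec also builds a stack at the rear of the page: writable + executable. */
	rc = trans_pgd_idmap_page(&info, &kimage->arch.ttbr0, &kimage->arch.t0sz,
				  reloc_code, PAGE_KERNEL_EXEC);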
From patchwork Thu Mar 28 11:56:53 2024
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 13608433
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook, Mark Rutland, Pasha Tatashin
Subject: [PATCH 3/4] arm64: kexec: Introduce d_size to carry cacheline size information
Date: Thu, 28 Mar 2024 19:56:53 +0800
Message-ID: <20240328115656.24090-4-piliu@redhat.com>
In-Reply-To: <20240328115656.24090-1-piliu@redhat.com>
References: <20240328115656.24090-1-piliu@redhat.com>

Introduce kimage_arch.d_size to carry the data cache line size, so that the
relocate_kernel C routine can be implemented more simply. The cache line size
will be used in the next patch.

Signed-off-by: Pingfan Liu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Kees Cook
Cc: Mark Rutland
Cc: Pasha Tatashin
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/kexec.h    | 1 +
 arch/arm64/kernel/machine_kexec.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 9ac9572a3bbe..882d00786f92 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -116,6 +116,7 @@ struct kimage_arch {
 	phys_addr_t zero_page;
 	unsigned long phys_offset;
 	unsigned long t0sz;
+	unsigned long d_size;
 };
 
 #ifdef CONFIG_KEXEC_FILE

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index de4e9e0ad682..b4ae24dcac8c 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -146,6 +146,9 @@ int machine_kexec_post_load(struct kimage *kimage)
 	if (rc)
 		return rc;
 	kimage->arch.phys_offset = virt_to_phys(kimage) - (long)kimage;
+	kimage->arch.d_size = 4 << cpuid_feature_extract_unsigned_field(
+					arm64_ftr_reg_ctrel0.sys_val,
+					CTR_EL0_DminLine_SHIFT);
 
 	/* Flush the reloc_code in preparation for its execution. */
 	dcache_clean_inval_poc((unsigned long)reloc_code,
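
A note on the computation: CTR_EL0.DminLine holds log2 of the number of 4-byte
words in the smallest data cache line, so the line size in bytes is
4 << DminLine. The assignment above, restated with that spelled out (the
example value in the comment is only illustrative):

	/*
	 * CTR_EL0.DminLine = log2(words per line), one word = 4 bytes,
	 * e.g. DminLine == 4  ->  4 << 4 == 64-byte data cache lines.
	 */
	kimage->arch.d_size = 4 << cpuid_feature_extract_unsigned_field(
					arm64_ftr_reg_ctrel0.sys_val,
					CTR_EL0_DminLine_SHIFT);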
From patchwork Thu Mar 28 11:56:54 2024
X-Patchwork-Submitter: Pingfan Liu
X-Patchwork-Id: 13608434
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook, Mark Rutland, Pasha Tatashin
Subject: [PATCH 4/4] arm64: kexec: Change relocate_kernel to C code
Date: Thu, 28 Mar 2024 19:56:54 +0800
Message-ID: <20240328115656.24090-5-piliu@redhat.com>
In-Reply-To: <20240328115656.24090-1-piliu@redhat.com>
References: <20240328115656.24090-1-piliu@redhat.com>

kexec_relocate.o is a self-contained section and should be position
independent (PIE). Besides that, a C function call requires a stack, which is
built on the idmap of the rear of kimage->control_code_page.

Signed-off-by: Pingfan Liu
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Cc: Kees Cook
Cc: Mark Rutland
Cc: Pasha Tatashin
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/kernel/Makefile          |   1 +
 arch/arm64/kernel/asm-offsets.c     |  10 --
 arch/arm64/kernel/machine_kexec.c   |   9 +-
 arch/arm64/kernel/relocate_kernel.S | 100 --------------
 arch/arm64/kernel/relocate_kernel.c | 197 ++++++++++++++++++++++++++++
 arch/arm64/kernel/vmlinux.lds.S     |   1 +
 6 files changed, 206 insertions(+), 112 deletions(-)
 delete mode 100644 arch/arm64/kernel/relocate_kernel.S
 create mode 100644 arch/arm64/kernel/relocate_kernel.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 467cb7117273..5fc539c6d094 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -13,6 +13,7 @@ CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE)
 # checks due to randomize_kstack_offset.
 CFLAGS_REMOVE_syscall.o	 = -fstack-protector -fstack-protector-strong
 CFLAGS_syscall.o	+= -fno-stack-protector
+CFLAGS_relocate_kernel.o += -fPIE
 
 # When KASAN is enabled, a stack trace is recorded for every alloc/free, which
 # can significantly impact performance. Avoid instrumenting the stack trace
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 5a7dbbe0ce63..ce3f3bed76a4 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -186,16 +186,6 @@ int main(void)
 #endif
   BLANK();
 #endif
-#ifdef CONFIG_KEXEC_CORE
-  DEFINE(KIMAGE_ARCH_DTB_MEM,		offsetof(struct kimage, arch.dtb_mem));
-  DEFINE(KIMAGE_ARCH_EL2_VECTORS,	offsetof(struct kimage, arch.el2_vectors));
-  DEFINE(KIMAGE_ARCH_ZERO_PAGE,		offsetof(struct kimage, arch.zero_page));
-  DEFINE(KIMAGE_ARCH_PHYS_OFFSET,	offsetof(struct kimage, arch.phys_offset));
-  DEFINE(KIMAGE_ARCH_TTBR1,		offsetof(struct kimage, arch.ttbr1));
-  DEFINE(KIMAGE_HEAD,			offsetof(struct kimage, head));
-  DEFINE(KIMAGE_START,			offsetof(struct kimage, start));
-  BLANK();
-#endif
 #ifdef CONFIG_FUNCTION_TRACER
   DEFINE(FTRACE_OPS_FUNC,		offsetof(struct ftrace_ops, func));
 #endif

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index b4ae24dcac8c..31d96655664b 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -198,13 +198,18 @@ void machine_kexec(struct kimage *kimage)
 		restart(is_hyp_nvhe(), kimage->start, kimage->arch.dtb_mem,
 			0, 0);
 	} else {
-		void (*kernel_reloc)(struct kimage *kimage);
+		void (*kernel_reloc)(struct kimage *kimage, unsigned long sp);
+		u64 new_sp = (u64)(page_to_pfn(kimage->control_code_page) << PAGE_SHIFT)
+				+ KEXEC_CONTROL_PAGE_SIZE;
 
 		if (is_hyp_nvhe())
 			__hyp_set_vectors(kimage->arch.el2_vectors);
 		cpu_install_ttbr0(kimage->arch.ttbr0, kimage->arch.t0sz);
 		kernel_reloc = (void *)kimage->arch.kern_reloc;
-		kernel_reloc(kimage);
+		pr_info("jump to relocation at: 0x%llx, with sp:0x%llx\n",
+			(u64)kernel_reloc, new_sp);
+		/* new_sp is accessible through idmap */
+		kernel_reloc(kimage, new_sp);
 	}
 
 	BUG(); /* Should never get here. */
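
The new_sp value deserves a remark: the stack top is placed at the physical
end of the control page area, i.e. inside the same idmapped page that already
holds the copied relocation code (the 'wx' page from patch 2/4), so it stays
usable once TTBR0 points at the idmap. A rough sketch of the layout this
assumes (the names come from the hunk above; the layout comment itself is only
illustrative):

	/*
	 * control_code_page (idmapped, writable + executable):
	 *
	 *   base ........ base + reloc_size            copied relocation code
	 *   ...                                         unused gap
	 *   base + KEXEC_CONTROL_PAGE_SIZE  <- new_sp   stack, grows downwards
	 */
	u64 base   = (u64)page_to_pfn(kimage->control_code_page) << PAGE_SHIFT;
	u64 new_sp = base + KEXEC_CONTROL_PAGE_SIZE;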
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
deleted file mode 100644
index 413f899e4ac6..000000000000
--- a/arch/arm64/kernel/relocate_kernel.S
+++ /dev/null
@@ -1,100 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * kexec for arm64
- *
- * Copyright (C) Linaro.
- * Copyright (C) Huawei Futurewei Technologies.
- * Copyright (C) 2021, Microsoft Corporation.
- * Pasha Tatashin
- */
-
-#include
-#include
-
-#include
-#include
-#include
-#include
-#include
-
-.macro turn_off_mmu tmp1, tmp2
-	mov_q	\tmp1, INIT_SCTLR_EL1_MMU_OFF
-	pre_disable_mmu_workaround
-	msr	sctlr_el1, \tmp1
-	isb
-.endm
-
-.section ".kexec_relocate.text", "ax"
-/*
- * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
- *
- * The memory that the old kernel occupies may be overwritten when copying the
- * new image to its final location. To assure that the
- * arm64_relocate_new_kernel routine which does that copy is not overwritten,
- * all code and data needed by arm64_relocate_new_kernel must be between the
- * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The
- * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
- * safe memory that has been set up to be preserved during the copy operation.
- */
-SYM_CODE_START(arm64_relocate_new_kernel)
-	/*
-	 * The kimage structure isn't allocated specially and may be clobbered
-	 * during relocation. We must load any values we need from it prior to
-	 * any relocation occurring.
-	 */
-	ldr	x28, [x0, #KIMAGE_START]
-	ldr	x27, [x0, #KIMAGE_ARCH_EL2_VECTORS]
-	ldr	x26, [x0, #KIMAGE_ARCH_DTB_MEM]
-
-	/* Setup the list loop variables. */
-	ldr	x18, [x0, #KIMAGE_ARCH_ZERO_PAGE]	/* x18 = zero page for BBM */
-	ldr	x17, [x0, #KIMAGE_ARCH_TTBR1]		/* x17 = linear map copy */
-	ldr	x16, [x0, #KIMAGE_HEAD]			/* x16 = kimage_head */
-	ldr	x22, [x0, #KIMAGE_ARCH_PHYS_OFFSET]	/* x22 phys_offset */
-	raw_dcache_line_size x15, x1			/* x15 = dcache line size */
-	break_before_make_ttbr_switch	x18, x17, x1, x2 /* set linear map */
-.Lloop:
-	and	x12, x16, PAGE_MASK		/* x12 = addr */
-	sub	x12, x12, x22			/* Convert x12 to virt */
-	/* Test the entry flags. */
-.Ltest_source:
-	tbz	x16, IND_SOURCE_BIT, .Ltest_indirection
-
-	/* Invalidate dest page to PoC. */
-	mov	x19, x13
-	copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8
-	add	x1, x19, #PAGE_SIZE
-	dcache_by_myline_op civac, sy, x19, x1, x15, x20
-	b	.Lnext
-.Ltest_indirection:
-	tbz	x16, IND_INDIRECTION_BIT, .Ltest_destination
-	mov	x14, x12			/* ptr = addr */
-	b	.Lnext
-.Ltest_destination:
-	tbz	x16, IND_DESTINATION_BIT, .Lnext
-	mov	x13, x12			/* dest = addr */
-.Lnext:
-	ldr	x16, [x14], #8			/* entry = *ptr++ */
-	tbz	x16, IND_DONE_BIT, .Lloop	/* while (!(entry & DONE)) */
-	/* wait for writes from copy_page to finish */
-	dsb	nsh
-	ic	iallu
-	dsb	nsh
-	isb
-	turn_off_mmu x12, x13
-
-	/* Start new image. */
-	cbz	x27, .Lel1
-	mov	x1, x28				/* kernel entry point */
-	mov	x2, x26				/* dtb address */
-	mov	x3, xzr
-	mov	x4, xzr
-	mov	x0, #HVC_SOFT_RESTART
-	hvc	#0				/* Jumps from el2 */
-.Lel1:
-	mov	x0, x26				/* dtb address */
-	mov	x1, xzr
-	mov	x2, xzr
-	mov	x3, xzr
-	br	x28				/* Jumps from el1 */
-SYM_CODE_END(arm64_relocate_new_kernel)
diff --git a/arch/arm64/kernel/relocate_kernel.c b/arch/arm64/kernel/relocate_kernel.c
new file mode 100644
index 000000000000..348515a0f497
--- /dev/null
+++ b/arch/arm64/kernel/relocate_kernel.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * kexec for arm64
+ *
+ * Copyright (C) Linaro.
+ * Copyright (C) Huawei Futurewei Technologies.
+ * Copyright (C) 2021, Microsoft Corporation.
+ * Pasha Tatashin
+ * Copyright (C) 2024, Red Hat, Inc.
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#define __kexec_section		__noinstr_section(".kexec_relocate.text")
+#define __kexec_entry_section	__noinstr_section(".kexec_relocate.entry.text")
+
+static u64 __kexec_section offset_ttbr1(u64 ttbr)
+{
+#ifdef CONFIG_ARM64_VA_BITS_52
+	u64 tmp;
+
+	tmp = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);
+	tmp &= (0xf << ID_AA64MMFR2_EL1_VARange_SHIFT);
+	if (!tmp)
+		ttbr |= TTBR1_BADDR_4852_OFFSET;
+#endif
+	return ttbr;
+}
+
+void __kexec_section make_ttbr1_switch(phys_addr_t zero_page,
+		phys_addr_t pgtable)
+{
+	unsigned long zero_ttbr;
+	unsigned long pgtable_ttbr;
+
+	zero_ttbr = phys_to_ttbr(zero_page);
+	pgtable_ttbr = phys_to_ttbr(pgtable);
+	pgtable_ttbr = offset_ttbr1(pgtable_ttbr);
+
+	write_sysreg(zero_ttbr, ttbr1_el1);
+	isb();
+	__tlbi(vmalle1);
+	dsb(nsh);
+
+	write_sysreg(pgtable_ttbr, ttbr1_el1);
+	isb();
+}
+
+static void __kexec_section sync(void)
+{
+	dsb(nsh);
+	asm volatile("ic iallu");
+	dsb(nsh);
+	isb();
+}
+
+static void __kexec_section turn_mmu_off(void)
+{
+	u64 tmp = INIT_SCTLR_EL1_MMU_OFF;
+
+	/* pre_disable_mmu_workaround */
+#ifdef CONFIG_QCOM_FALKOR_ERRATUM_E1041
+	isb();
+#endif
+	write_sysreg(tmp, sctlr_el1);
+	isb();
+}
+
+/* The parameter lays out according to the hvc call */
+static void __kexec_section hvc_call(unsigned long vector, unsigned long entry,
+		unsigned long dtb, unsigned long x3, unsigned long x4)
+{
+	asm volatile("hvc #0");
+}
+
+typedef void (*kernel_entry)(u64 dtb, u64 x1, u64 x2, u64 x3);
+
+static __always_inline void relocate_copy_page(void *dst, void *src)
+{
+	int i = PAGE_SIZE >> 3;
+	unsigned long *s, *d;
+
+	s = (unsigned long *)src;
+	d = (unsigned long *)dst;
+	for (int j = 0; j < i; j++, d++, s++)
+		*d = *s;
+}
+
+/* Borrowed from clean_dcache_range_nopatch() in arch/arm64/kernel/alternative.c */
+static __always_inline void clean_dcache_range(u64 d_size, u64 start, u64 end)
+{
+	u64 cur;
+
+	cur = start & ~(d_size - 1);
+	do {
+		/*
+		 * We must clean+invalidate to the PoC in order to avoid
+		 * Cortex-A53 errata 826319, 827319, 824069 and 819472
+		 * (this corresponds to ARM64_WORKAROUND_CLEAN_CACHE)
+		 */
+		asm volatile("dc civac, %0" : : "r" (cur) : "memory");
+	} while (cur += d_size, cur < end);
+}
+
+void __kexec_section __arm64_relocate_new_kernel(struct kimage *kimage)
+{
+	phys_addr_t dtb, el2_vectors, zero_page, ttbr1;
+	u64 start, phys_offset, ctr_el0, d_size;
+	kimage_entry_t *ptr, entry;
+	char *src, *dst;
+
+	zero_page = kimage->arch.zero_page;
+	ttbr1 = kimage->arch.ttbr1;
+	start = kimage->start;
+	dtb = kimage->arch.dtb_mem;
+	el2_vectors = kimage->arch.el2_vectors;
+	phys_offset = kimage->arch.phys_offset;
+	d_size = kimage->arch.d_size;
+
+	make_ttbr1_switch(zero_page, ttbr1);
+
+	/* kimage->head is fetched once */
+	for (ptr = &kimage->head; (entry = *ptr) && !(entry & IND_DONE);
+	     ptr = (entry & IND_INDIRECTION) ?
+		   (void *)((entry & PAGE_MASK) - phys_offset) : ptr + 1) {
+
+		if (entry & IND_INDIRECTION)
+			continue;
+		else if (entry & IND_DESTINATION)
+			dst = (char *)((entry & PAGE_MASK) - phys_offset);
+		else if (entry & IND_SOURCE) {
+			src = (char *)((entry & PAGE_MASK) - phys_offset);
+			relocate_copy_page(dst, src);
+			/* Force all cache line in page to PoC */
+			clean_dcache_range(d_size, (u64)dst, (u64)dst + PAGE_SIZE);
+			dst += PAGE_SIZE;
+		}
+
+	}
+	/* wait for writes from copy_page to finish */
+	sync();
+	turn_mmu_off();
+
+	if (!el2_vectors) {
+		kernel_entry entry = (kernel_entry)start;
+
+		entry(dtb, 0, 0, 0);
+	} else {
+		/* Jumps from el2 */
+		hvc_call(HVC_SOFT_RESTART, start, dtb, 0, 0);
+	}
+
+}
+
+extern void __arm64_relocate_new_kernel(struct kimage *image);
+
+/*
+ * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
+ *
+ * The memory that the old kernel occupies may be overwritten when copying the
+ * new image to its final location. To assure that the
+ * arm64_relocate_new_kernel routine which does that copy is not overwritten,
+ * all code and data needed by arm64_relocate_new_kernel must be between the
+ * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The
+ * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
+ * safe memory that has been set up to be preserved during the copy operation.
+ *
+ * Come here through ttbr0, and ttbr1 still takes effect.
+ */
+void __kexec_entry_section arm64_relocate_new_kernel(
+		struct kimage *kimage, unsigned long new_sp)
+{
+	/*
+	 * From now on, no local variable so the new sp can be safely prepared.
+	 * The new stack should be on the control page which is safe during copying
+	 */
+	asm volatile(
+		"mov	sp, %0;"
+		"mov	x0, %1;"
+		"adrp	x2, __arm64_relocate_new_kernel;"
+		"add	x2, x2, #:lo12:__arm64_relocate_new_kernel;"
+		"br	x2;"
+		:
+		: "r" (new_sp), "r" (kimage)
+		:
+	);
+	/* never return */
+}

diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 51eb382ab3a4..b6781667783c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -105,6 +105,7 @@ jiffies = jiffies_64;
 #define KEXEC_TEXT					\
 	. = ALIGN(SZ_4K);				\
 	__relocate_new_kernel_start = .;		\
+	*(.kexec_relocate.entry.text)			\
 	*(.kexec_relocate.text)				\
 	__relocate_new_kernel_end = .;
 #else