From patchwork Thu Sep 16 23:13:21 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12500507
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org,
    james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com,
    linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com,
    rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com,
    tyhicks@linux.microsoft.com, kernelfans@gmail.com,
    akpm@linux-foundation.org, madvenka@linux.microsoft.com
Subject: [PATCH v17 11/15] arm64: kexec: install a copy of the linear-map
Date: Thu, 16 Sep 2021 19:13:21 -0400
Message-Id: <20210916231325.125533-12-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210916231325.125533-1-pasha.tatashin@soleen.com>
References: <20210916231325.125533-1-pasha.tatashin@soleen.com>

To perform the kexec relocation with the MMU enabled, we need a copy
of the linear map.

Create one, and install it from the relocation code. This has to be
done from the assembly code as it will be idmapped with TTBR0. The
kernel runs in TTBR1, so can't use the break-before-make sequence on
the mapping it is executing from.

This makes no difference yet as the relocation code runs with the MMU
disabled.
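For orientation, the flow this patch adds to machine_kexec_post_load()
boils down to the sketch below. It is a condensed, illustrative excerpt,
not the literal change: the helper name linear_map_copy_sketch() is
hypothetical, and error unwinding plus the existing el2_vectors handling
are left out; see the machine_kexec.c hunk further down for the real code.

/*
 * Condensed sketch: allocate a top-level table, copy the kernel's linear
 * map into it, and record the physical addresses that the relocation
 * assembly will pick up through asm-offsets.
 */
static int linear_map_copy_sketch(struct kimage *kimage,
				  struct trans_pgd_info *info)
{
	pgd_t *trans_pgd;
	int rc;

	/* Page backing the top-level table of the linear-map copy. */
	trans_pgd = kexec_page_alloc(kimage);
	if (!trans_pgd)
		return -ENOMEM;

	/* Duplicate the linear map range [PAGE_OFFSET, PAGE_END). */
	rc = trans_pgd_create_copy(info, &trans_pgd, PAGE_OFFSET, PAGE_END);
	if (rc)
		return rc;

	/* Physical addresses consumed by arm64_relocate_new_kernel(). */
	kimage->arch.ttbr1 = __pa(trans_pgd);
	kimage->arch.zero_page = __pa(empty_zero_page);
	return 0;
}

The relocation assembly then loads these two physical addresses through
the new KIMAGE_ARCH_TTBR1 and KIMAGE_ARCH_ZERO_PAGE asm-offsets and
performs the break-before-make switch to the copied tables before
entering its copy loop.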
Suggested-by: James Morse
Signed-off-by: Pasha Tatashin
---
 arch/arm64/include/asm/assembler.h  | 19 +++++++++++++++++++
 arch/arm64/include/asm/kexec.h      |  2 ++
 arch/arm64/kernel/asm-offsets.c     |  2 ++
 arch/arm64/kernel/hibernate-asm.S   | 20 --------------------
 arch/arm64/kernel/machine_kexec.c   | 16 ++++++++++++++--
 arch/arm64/kernel/relocate_kernel.S |  3 +++
 6 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 71999a325055..4289c4e1c2a3 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -483,6 +483,25 @@ alternative_endif
 	_cond_extable .Licache_op\@, \fixup
 	.endm
 
+/*
+ * To prevent the possibility of old and new partial table walks being visible
+ * in the tlb, switch the ttbr to a zero page when we invalidate the old
+ * records. D4.7.1 'General TLB maintenance requirements' in ARM DDI 0487A.i
+ * Even switching to our copied tables will cause a changed output address at
+ * each stage of the walk.
+ */
+	.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
+	phys_to_ttbr \tmp, \zero_page
+	msr	ttbr1_el1, \tmp
+	isb
+	tlbi	vmalle1
+	dsb	nsh
+	phys_to_ttbr \tmp, \page_table
+	offset_ttbr1 \tmp, \tmp2
+	msr	ttbr1_el1, \tmp
+	isb
+	.endm
+
 /*
  * reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
  */
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 753a1c398898..d678f0ceb7ee 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -97,6 +97,8 @@ struct kimage_arch {
 	phys_addr_t dtb_mem;
 	phys_addr_t kern_reloc;
 	phys_addr_t el2_vectors;
+	phys_addr_t ttbr1;
+	phys_addr_t zero_page;
 };
 
 #ifdef CONFIG_KEXEC_FILE
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 6a2b8b1a4872..1f565224dafd 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -175,6 +175,8 @@ int main(void)
 #ifdef CONFIG_KEXEC_CORE
   DEFINE(KIMAGE_ARCH_DTB_MEM,		offsetof(struct kimage, arch.dtb_mem));
   DEFINE(KIMAGE_ARCH_EL2_VECTORS,	offsetof(struct kimage, arch.el2_vectors));
+  DEFINE(KIMAGE_ARCH_ZERO_PAGE,		offsetof(struct kimage, arch.zero_page));
+  DEFINE(KIMAGE_ARCH_TTBR1,		offsetof(struct kimage, arch.ttbr1));
   DEFINE(KIMAGE_HEAD,			offsetof(struct kimage, head));
   DEFINE(KIMAGE_START,			offsetof(struct kimage, start));
   BLANK();
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index a30a2c3f905e..0e1d9c3c6a93 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -15,26 +15,6 @@
 #include 
 #include 
 
-/*
- * To prevent the possibility of old and new partial table walks being visible
- * in the tlb, switch the ttbr to a zero page when we invalidate the old
- * records. D4.7.1 'General TLB maintenance requirements' in ARM DDI 0487A.i
- * Even switching to our copied tables will cause a changed output address at
- * each stage of the walk.
- */
-.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
-	phys_to_ttbr \tmp, \zero_page
-	msr	ttbr1_el1, \tmp
-	isb
-	tlbi	vmalle1
-	dsb	nsh
-	phys_to_ttbr \tmp, \page_table
-	offset_ttbr1 \tmp, \tmp2
-	msr	ttbr1_el1, \tmp
-	isb
-.endm
-
-
 /*
  * Resume from hibernate
  *
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 83da6045cd45..50bc0a265c86 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -159,6 +159,8 @@ static void *kexec_page_alloc(void *arg)
 
 int machine_kexec_post_load(struct kimage *kimage)
 {
+	int rc;
+	pgd_t *trans_pgd;
 	void *reloc_code = page_to_virt(kimage->control_code_page);
 	long reloc_size;
 	struct trans_pgd_info info = {
@@ -175,12 +177,22 @@ int machine_kexec_post_load(struct kimage *kimage)
 
 	kimage->arch.el2_vectors = 0;
 	if (is_hyp_nvhe()) {
-		int rc = trans_pgd_copy_el2_vectors(&info,
-						    &kimage->arch.el2_vectors);
+		rc = trans_pgd_copy_el2_vectors(&info,
+						&kimage->arch.el2_vectors);
 		if (rc)
 			return rc;
 	}
 
+	/* Create a copy of the linear map */
+	trans_pgd = kexec_page_alloc(kimage);
+	if (!trans_pgd)
+		return -ENOMEM;
+	rc = trans_pgd_create_copy(&info, &trans_pgd, PAGE_OFFSET, PAGE_END);
+	if (rc)
+		return rc;
+	kimage->arch.ttbr1 = __pa(trans_pgd);
+	kimage->arch.zero_page = __pa(empty_zero_page);
+
 	reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
 	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
 	kimage->arch.kern_reloc = __pa(reloc_code);
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 9d2400855ee4..a07b737533c3 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -29,10 +29,13 @@
  */
 SYM_CODE_START(arm64_relocate_new_kernel)
 	/* Setup the list loop variables. */
+	ldr	x18, [x0, #KIMAGE_ARCH_ZERO_PAGE] /* x18 = zero page for BBM */
+	ldr	x17, [x0, #KIMAGE_ARCH_TTBR1]	/* x17 = linear map copy */
 	ldr	x16, [x0, #KIMAGE_HEAD]		/* x16 = kimage_head */
 	mov	x14, xzr			/* x14 = entry ptr */
 	mov	x13, xzr			/* x13 = copy dest */
 	raw_dcache_line_size x15, x1		/* x15 = dcache line size */
+	break_before_make_ttbr_switch	x18, x17, x1, x2 /* set linear map */
 .Lloop:
 	and	x12, x16, PAGE_MASK		/* x12 = addr */