From patchwork Wed Jun 9 00:44:12 2021
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 12308521
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
	ebiederm@xmission.com, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
	will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org,
	james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com,
	linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com,
	rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com,
	tyhicks@linux.microsoft.com, kernelfans@gmail.com,
	akpm@linux-foundation.org, madvenka@linux.microsoft.com
Subject: [PATCH v15 08/15] arm64: kexec: configure EL2 vectors for kexec
Date: Tue, 8 Jun 2021 20:44:12 -0400
Message-Id: <20210609004419.936873-9-pasha.tatashin@soleen.com>
In-Reply-To: <20210609004419.936873-1-pasha.tatashin@soleen.com>
References: <20210609004419.936873-1-pasha.tatashin@soleen.com>

If we have an EL2 mode without VHE, the EL2 vectors are needed in order
to switch to EL2 and jump to the new world with hypervisor privileges.
In preparation for MMU-enabled relocation, configure our EL2 vector
table now.

Kexec uses #HVC_SOFT_RESTART to branch to the new world, so extend the
el1_sync vector that is provided by trans_pgd_copy_el2_vectors() to
support this case.
Signed-off-by: Pavel Tatashin
---
 arch/arm64/Kconfig                | 2 +-
 arch/arm64/include/asm/kexec.h    | 1 +
 arch/arm64/kernel/asm-offsets.c   | 1 +
 arch/arm64/kernel/machine_kexec.c | 31 +++++++++++++++++++++++++++++++
 arch/arm64/mm/trans_pgd-asm.S     | 9 ++++++++-
 5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f1d8566bbf9..e775e6fb03c5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1148,7 +1148,7 @@ config CRASH_DUMP
 
 config TRANS_TABLE
 	def_bool y
-	depends on HIBERNATION
+	depends on HIBERNATION || KEXEC_CORE
 
 config XEN_DOM0
 	def_bool y
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 00dbcc71aeb2..753a1c398898 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -96,6 +96,7 @@ struct kimage_arch {
 	void *dtb;
 	phys_addr_t dtb_mem;
 	phys_addr_t kern_reloc;
+	phys_addr_t el2_vectors;
 };
 
 #ifdef CONFIG_KEXEC_FILE
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 92c2ed846730..ee37d0e0fb6d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -159,6 +159,7 @@ int main(void)
 #endif
 #ifdef CONFIG_KEXEC_CORE
   DEFINE(KIMAGE_ARCH_DTB_MEM, offsetof(struct kimage, arch.dtb_mem));
+  DEFINE(KIMAGE_ARCH_EL2_VECTORS, offsetof(struct kimage, arch.el2_vectors));
   DEFINE(KIMAGE_HEAD, offsetof(struct kimage, head));
   DEFINE(KIMAGE_START, offsetof(struct kimage, start));
   BLANK();
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index e88d5cfbef40..e0ef88cc57d2 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 
 #include "cpu-reset.h"
 
@@ -42,7 +43,9 @@ static void _kexec_image_info(const char *func, int line,
 	pr_debug(" start: %lx\n", kimage->start);
 	pr_debug(" head: %lx\n", kimage->head);
 	pr_debug(" nr_segments: %lu\n", kimage->nr_segments);
+	pr_debug(" dtb_mem: %pa\n", &kimage->arch.dtb_mem);
 	pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc);
+	pr_debug(" el2_vectors: %pa\n", &kimage->arch.el2_vectors);
 
 	for (i = 0; i < kimage->nr_segments; i++) {
 		pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n",
@@ -137,9 +140,27 @@ static void kexec_segment_flush(const struct kimage *kimage)
 	}
 }
 
+/* Allocates pages for kexec page table */
+static void *kexec_page_alloc(void *arg)
+{
+	struct kimage *kimage = (struct kimage *)arg;
+	struct page *page = kimage_alloc_control_pages(kimage, 0);
+
+	if (!page)
+		return NULL;
+
+	memset(page_address(page), 0, PAGE_SIZE);
+
+	return page_address(page);
+}
+
 int machine_kexec_post_load(struct kimage *kimage)
 {
 	void *reloc_code = page_to_virt(kimage->control_code_page);
+	struct trans_pgd_info info = {
+		.trans_alloc_page	= kexec_page_alloc,
+		.trans_alloc_arg	= kimage,
+	};
 
 	/* If in place, relocation is not used, only flush next kernel */
 	if (kimage->head & IND_DONE) {
@@ -148,6 +169,14 @@ int machine_kexec_post_load(struct kimage *kimage)
 		return 0;
 	}
 
+	kimage->arch.el2_vectors = 0;
+	if (is_hyp_nvhe()) {
+		int rc = trans_pgd_copy_el2_vectors(&info,
+				&kimage->arch.el2_vectors);
+		if (rc)
+			return rc;
+	}
+
 	memcpy(reloc_code, arm64_relocate_new_kernel,
 	       arm64_relocate_new_kernel_size);
 	kimage->arch.kern_reloc = __pa(reloc_code);
@@ -200,6 +229,8 @@ void machine_kexec(struct kimage *kimage)
 		restart(is_hyp_nvhe(), kimage->start, kimage->arch.dtb_mem,
 			0, 0);
 	} else {
+		if (is_hyp_nvhe())
+			__hyp_set_vectors(kimage->arch.el2_vectors);
 		cpu_soft_restart(kimage->arch.kern_reloc,
 				 virt_to_phys(kimage), 0, 0);
 	}
diff --git a/arch/arm64/mm/trans_pgd-asm.S b/arch/arm64/mm/trans_pgd-asm.S
index 831d6369494e..c1f2ed1be6de 100644
--- a/arch/arm64/mm/trans_pgd-asm.S
+++ b/arch/arm64/mm/trans_pgd-asm.S
@@ -24,7 +24,14 @@ SYM_CODE_START_LOCAL(el1_sync)
 	msr	vbar_el2, x1
 	mov	x0, xzr
 	eret
-1:	/* Unexpected argument, set an error */
+1:	cmp	x0, #HVC_SOFT_RESTART	/* Called from kexec */
+	b.ne	2f
+	mov	x0, x2
+	mov	x2, x4
+	mov	x4, x1
+	mov	x1, x3
+	br	x4
+2:	/* Unexpected argument, set an error */
 	mov_q	x0, HVC_STUB_ERR
 	eret
 SYM_CODE_END(el1_sync)
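
For reference, the HVC_SOFT_RESTART convention that the extended el1_sync
handler implements is the hyp-stub ABI described in
arch/arm64/include/asm/virt.h: the caller traps to EL2 with
x0 = HVC_SOFT_RESTART, x1 = the address to restart at, and x2-x4 as the
three arguments that the restart address receives in x0-x2. The register
moves in the hunk above perform exactly that shuffle before branching to
the saved x1 at EL2. Below is a minimal C sketch of the equivalent
behaviour, for illustration only; it is not part of this patch and the
function/parameter names are made up.

	/*
	 * Illustration only: what the el1_sync HVC_SOFT_RESTART path does,
	 * expressed in C. The real implementation is the assembly above.
	 */
	typedef void (*restart_fn_t)(unsigned long arg0, unsigned long arg1,
				     unsigned long arg2);

	static void hyp_soft_restart_sketch(unsigned long entry,
					    unsigned long arg0,
					    unsigned long arg1,
					    unsigned long arg2)
	{
		/* x1 -> branch target; x2..x4 become x0..x2 of the new world */
		((restart_fn_t)entry)(arg0, arg1, arg2);
		/* Does not return: execution continues in the relocation code at EL2. */
	}

For kexec, the entry point is the relocation code at
kimage->arch.kern_reloc and the first argument is the physical address of
the kimage, matching the cpu_soft_restart() call in machine_kexec() above.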